A public group for people interested in working through more humane indicators of excellence for the humanities and social sciences. For more about the project, please visit our website at humetricshss.org.

Files List

  • The Use of Bibliometrics for Assessing Research: Possibilities, Limitations and Adverse Effects  
    Uploaded by Nicky Agate on 7 October 2017.

    Stefanie Haustein and Vincent Larivière
    Abstract Researchers are used to being evaluated: publications, hiring, tenure and funding decisions are
    all based on the evaluation of research. Traditionally, this evaluation relied on the judgement of peers,
    but in light of limited resources and the increased bureaucratization of science, peer review is
    increasingly being replaced or complemented by bibliometric methods. Central to the introduction of bibliometrics in
    research evaluation was the creation of the Science Citation Index (SCI) in the 1960s, a citation database
    initially developed for the retrieval of scientific information. Embedded in this database was the Impact
    Factor, first used as a tool for the selection of journals to cover in the SCI, which then became a synonym
    for journal quality and academic prestige. Over the last ten years, this indicator became powerful enough to
    influence researchers’ publication patterns, insofar as it became one of the most important criteria for
    selecting a publication venue. Despite its many flaws as a journal metric and its inadequacy as a predictor of
    citations at the paper level, it became the go-to indicator of research quality and was used and misused by
    authors, editors, publishers and research policy makers alike. The h-index, introduced as an indicator of
    both output and impact combined in one simple number, has experienced a similar fate, mainly due to its
    simplicity and availability. Despite their widespread use, these measures are too simple to capture the
    complexity and multiple dimensions of research output and impact. This chapter provides an overview of
    bibliometric methods, from the development of citation indexing as a tool for information retrieval to its
    application in research evaluation, and discusses their misuse and effects on researchers’ scholarly
    communication behavior.

  • Generous Thinking: Introduction  
    In category: HuMetricsHSS W1 Readings.
    Uploaded by Nicky Agate on 7 October 2017.

    Blog post by Kathleen Fitzpatrick

  • The Invisible Labor of Minority Professors  
    In category: HuMetricsHSS W1 Readings.
    Uploaded by Nicky Agate on 7 October 2017.

    Author: Audrey Williams June

  • “Excellence R Us”: university research and the fetishisation of excellence  
    In category: HuMetricsHSS W1 Readings.
    Uploaded by Nicky Agate on 7 October 2017.

    The rhetoric of “excellence” is pervasive across the academy. It is used to refer to research outputs as well as researchers, theory and education, individuals and organizations, from art history to zoology. But does “excellence” actually mean anything? Does this pervasive narrative of “excellence” do any good? Drawing on a range of sources, we interrogate “excellence” as a concept and find that it has no intrinsic meaning in academia. Rather, it functions as a linguistic interchange mechanism. To investigate whether this linguistic function is useful, we examine how the rhetoric of excellence combines with narratives of scarcity and competition to show that the hyper-competition that arises from the performance of “excellence” is completely at odds with the qualities of good research. We trace the roots of issues in reproducibility, fraud, and homophily to this rhetoric. But we also show that this rhetoric is an internal, and not primarily an external, imposition. We conclude by proposing an alternative rhetoric based on soundness and capacity-building. In the final analysis, it turns out that “excellence” is not excellent. Used in its current unqualified form, it is a pernicious and dangerous rhetoric that undermines the very foundations of good research and scholarship. This article is published as part of a collection on the future of research assessment.