Stefanie Haustein and Vincent Larivière
Abstract Researchers are used to being evaluated: publications, hiring, tenure and funding decisions are all based on the evaluation of research. Traditionally, this evaluation relied on the judgement of peers but, in light of limited resources and the increasing bureaucratization of science, peer review is more and more being replaced or complemented by bibliometric methods. Central to the introduction of bibliometrics in research evaluation was the creation of the Science Citation Index (SCI) in the 1960s, a citation database initially developed for the retrieval of scientific information. Embedded in this database was the Impact Factor, first used as a tool for selecting the journals to be covered by the SCI, which then became synonymous with journal quality and academic prestige. Over the last 10 years, this indicator has become powerful enough to influence researchers' publication patterns, insofar as it is now one of the most important criteria for selecting a publication venue. Despite its many flaws as a journal metric and its inadequacy as a predictor of citations at the paper level, it became the go-to indicator of research quality and has been used and misused by authors, editors, publishers and research policy makers alike. The h-index, introduced as an indicator combining output and impact in one simple number, has experienced a similar fate, mainly owing to its simplicity and availability. Despite their massive use, these measures are too simple to capture the complexity and the multiple dimensions of research output and impact. This chapter provides an overview of bibliometric methods, from the development of citation indexing as a tool for information retrieval to its application in research evaluation, and discusses their misuse and their effects on researchers' scholarly communication behavior.