
What's the problem with the Journal Impact Factor?


Formal Metadata

Title
What's the problem with the Journal Impact Factor?
Title of Series
Number of Parts
19
Author
Contributors
License
CC Attribution 3.0 Germany:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Identifiers
Publisher
Release Date
Language
Other Version
Production Year: 2024
Production Place: Hanover

Content Metadata

Subject Area
Genre
Abstract
The short video focuses on the Journal Impact Factor, its origins, and the issues that can arise when scientific research is evaluated solely based on quantitative indicators such as the JIF. The video was created as part of the BMBF-funded project open-access.network.
Keywords
Transcript: English (auto-generated)
What's the problem with the impact factor? Researchers are regularly evaluated by their supervisors or when applying for jobs and third-party funding. But how can we measure their performance and the relevance of their work? This is often done by using the Journal Impact Factor.
The impact factor is a metric used to measure the impact, and thus the relevance, of a scientific journal. It is calculated by dividing the number of citations a journal receives in a given year to articles published in the previous two years by the number of articles published in those two years. This means that if a journal's articles are cited frequently, its impact factor will increase.
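The calculation just described can be sketched in a few lines of Python. All figures below are made up for illustration; they are not data for any real journal.

```python
# Hypothetical example: Journal Impact Factor for 2024.
# JIF(2024) = citations received in 2024 to items published in 2022-2023,
#             divided by the number of citable items published in 2022-2023.

citations_2024_to_recent = 600   # citations in 2024 to 2022-2023 articles (invented)
items_2022 = 120                 # citable items published in 2022 (invented)
items_2023 = 130                 # citable items published in 2023 (invented)

jif_2024 = citations_2024_to_recent / (items_2022 + items_2023)
print(round(jif_2024, 2))  # 2.4
```

Note that the same citation total spread over more published items yields a lower impact factor, which is why journal size and field-specific publication culture shape the number so strongly.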
Originally, the impact factor was intended as an indicator to help librarians choose journals for acquisition. Despite its apparent simplicity, however, the metric is difficult to apply meaningfully.
Some prestigious journals boast an impact factor of 40, while equally influential ones have an impact factor of 4. The impact factor heavily relies on other factors like the size of the scientific community and the publication culture.
It is also easy to manipulate, which led to questionable practices. For instance, authors can form so-called citation cartels, referencing each other's work to inflate citations and hence the impact factor. Authors can also misrepresent the scientific significance of their work by excessively citing their own papers.
Additionally, new journals are disadvantaged by the multi-year evaluation window. Journals published in languages other than English, or journals covering novel topics that take longer to enter the community's discussion, face challenges too. Rather than rewarding transparency, innovative research, and good scientific practice, the focus on the impact factor may incentivize behavior that does not advance scientific debate, such as competing for citation counts or sidelining innovative research that attracts fewer immediate citations. It is therefore important to note that the impact factor is unsuitable for assessing the quality of an individual article.
It is a metric for journals, not for specific papers. Moreover, it cannot capture the diversity of outputs, practices, and activities that maximize the quality and impact of research. To evaluate all aspects of good scientific practice fairly, assessments should therefore rely primarily on qualitative judgment, with quantitative indicators such as the impact factor serving as supplementary rather than sole criteria.