Bibliometrics are all the rage in academia, attempting to quantify the quality and impact of publications. As Paul Jump notes, getting impact down to a number gives “at least the impression of objectivity.” But what are some of the drawbacks?
Jump reviews a new report, The Metric Tide: Report of the Independent Review of the Role of Metrics in Research Assessment and Management, and pulls out some dangers of bibliometrics.
First, research managers can become “over-reliant on indicators that are widely felt to be problematic or not properly understood”. Every metric has strengths and weaknesses, but this is not always understood institutionally.
Second, these metrics can distort research priorities, pushing early-career researchers to focus on publishing the “right” things in venues with the highest Impact Factor instead of making the most useful contributions to research.
Third, bibliometrics have a gender bias. Research shows that men are reluctant to cite women.
Altmetrics, which would quantify impact outside of the normal journal citation calculations, show some promise. These would look at mentions in blogs, media, and other digital publications. But altmetrics are even more sensitive to context than traditional bibliometrics.
And, we’d add, what is the chance they’d avoid the gender bias and incessant gaming already on display in traditional bibliometrics?
Source: Impressions of Objectivity