All about metrics: definitions, how-to, and tools.
Using numbers to evaluate research and researchers is a tricky business, and work is under way to ensure these metrics are used responsibly.
- In Leaving the Gold Standard, Cameron Neylon argues that citations, and all the metrics built upon them, shouldn't be the foundation of research evaluation. A 2017 paper in PLOS ONE shows that authorship and citation behavior can be manipulated. This editorial offers a researcher's perspective from the Global South.
- HuMetricsHSS is an attempt to bring values into metrics, aimed specifically at humanities and social sciences researchers. Supporting blog posts are here and here.
- The Leiden Manifesto and the San Francisco Declaration on Research Assessment (DORA) both encourage best practices in the use of bibliometrics when assessing researchers.
- In this 2018 Nature Comment, an associate professor argues that the conversation around metrics gives early-career researchers too little guidance on how to move forward.
- Snowball Metrics provides 'recipes' for standardized metrics used by research universities around the world.
- In a 2017 blog post, a publisher estimates the reliability of different metrics tools.
- A paper on bioRxiv presents evidence that the Journal Impact Factor (JIF) cannot be used to evaluate an individual article or author; the JIF is meaningful only as a journal-level measure.
- Karin Wulf, professor of history at the College of William & Mary, warns about metrics in this post, pointing to a 2015 HEFCE report titled The Metric Tide: Report of the Independent Review of the Role of Metrics in Research Assessment and Management.
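To make concrete why the JIF is a journal-level measure, here is a minimal sketch of how it is conventionally computed: citations received in a given year to items the journal published in the two preceding years, divided by the number of citable items from those two years. The function name and the figures below are illustrative, not real journal data.

```python
def journal_impact_factor(citations_to_prior_two_years: int,
                          citable_items_prior_two_years: int) -> float:
    """JIF for year Y: citations received in Y to items published in
    Y-1 and Y-2, divided by the count of citable items published in
    Y-1 and Y-2. A journal-level average only."""
    return citations_to_prior_two_years / citable_items_prior_two_years

# Hypothetical example: 210 citations in 2024 to articles from
# 2022-2023, which together contained 70 citable items.
print(journal_impact_factor(210, 70))  # 3.0
```

Because this is an average over a highly skewed citation distribution, a few heavily cited articles can dominate it, which is why it says little about any single article or author in the journal.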