This month’s post in our series of guest academic librarian bloggers is by Bonnie Swoger, Science and Technology Librarian at the State University of New York (SUNY) Geneseo. She blogs at The Undergraduate Science Librarian.
Last week I taught an information literacy class to a group of senior Chemistry students. We didn’t talk about databases or indexes; we talked about numbers. We talked about impact factors and h-indexes and alternative metrics, and the students loved it. Librarians have used these metrics for years in collection development, and have looked them up to help faculty with tenure and promotion packets. But many librarians don’t know where the numbers come from, or what some of the criticisms are.
The students in this class needed to select a research topic, and the professor was tired of reading about obscure and “uninteresting” topics. He wanted his students to be able to find out what’s “hot” right now in chemical research.
At this level, the students are just starting to develop a sense of the nature of chemical research. It is hard for them to look at a journal article and know whether that item is “hot” (or not). Librarians are often in the same boat. But there are some strategies for helping non-specialists do this. One is to look at science news sites such as C&E News and the news sections of Science and Nature.
Another strategy is to make use of the metrics used to quantitatively assess journals, authors and articles.
We started the class by talking about the Journal Impact Factor (JIF) developed by Eugene Garfield and Irving Sher almost 50 years ago (see this article for the history of the JIF). It is a simple calculation:
JIF = (citations received this year to items the journal published in the previous two years) / (number of citable items published in those two years)
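To make the arithmetic concrete, here is a minimal Python sketch of that two-year calculation. All of the counts are invented for illustration; the real figures come from the Journal Citation Reports.

```python
# Minimal sketch of the two-year Journal Impact Factor calculation.
# All counts below are invented for illustration.

# Citations received in 2011 to items the journal published in 2009-2010:
citations = 350

# Citable items (articles and reviews) the journal published in 2009-2010:
citable_items = 140

jif = citations / citable_items
print(f"2011 JIF: {jif:.2f}")  # 2011 JIF: 2.50
```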
I had asked the students to read a brief commentary prior to class discussing the use (and abuse) of this metric, and in class we discussed some of the criticisms of the number:
- The numerator and denominator count different things (commentary articles are included in the numerator but not the denominator, so a journal can get an extra boost if commentary-type articles are cited)
- The publication of review articles can quickly increase the impact factor because they are more likely to be cited.
These students were particularly interested in how the JIF could be manipulated, and intrigued to learn the story of how a single article increased the impact factor of Acta Crystallographica Section A from 2 to 50 in a single year.
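To see how a single paper can do that, consider some round numbers (invented for illustration, not the journal’s actual figures): if a journal published 100 citable items over the two-year window, and those items picked up 200 citations in the count year, its JIF would be 200/100 = 2. If just one of those items then attracts 4,800 additional citations, the JIF jumps to (200 + 4,800)/100 = 50, even though nothing else about the journal has changed.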
Importantly, we talked about how the impact factor was never meant to assess individual articles or authors.
So we explored alternatives.
The h-index was first suggested by physicist Jorge Hirsch, and is now sometimes used to assess the influence of particular authors.
It works like this: Let’s say that professor Jane Smith has published 5 articles. Each article has been cited a different number of times:
| Article | Citations |
| --- | --- |
| Article 1 | 9 |
| Article 2 | 10 |
| Article 3 | 4 |
| Article 4 | 2 |
| Article 5 | 1 |
The h-index is the largest number h that fills in the phrase “h of my articles have been cited at least h times.” In this case, 3 of Jane’s papers have been cited at least 3 times (but she doesn’t have 4 papers cited at least 4 times), so she has an h-index of 3. The major citation indexes (Scopus, Web of Knowledge) can calculate this number easily.
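For anyone who wants to see the definition in action, here is a short Python sketch (the function name and setup are mine, for illustration): sort the citation counts from highest to lowest and find the last rank where the count is at least as large as the rank.

```python
def h_index(citation_counts):
    """Return the largest h such that h papers have at least h citations."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, count in enumerate(counts, start=1):
        if count >= rank:
            h = rank  # this many papers have at least this many citations
        else:
            break
    return h

# Jane Smith's citation counts from the table above:
print(h_index([9, 10, 4, 2, 1]))  # 3
```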
Like all other measures, the h-index isn’t perfect. It never decreases, even as a researcher’s influence in their field wanes. It favors fields where papers tend to have many authors (like high energy physics), and it can easily be manipulated by citing your own papers (or those of your friends and relatives). But it does provide a way to sort out the authors who just write a lot from those who write a lot of good stuff.
We then turned to a brief discussion about some of the alternative metrics now being proposed by various journals and publishers. Some of the simplest measures in this category are the number of on-site views of an article and the number of times a PDF has been downloaded. Other tools include article ratings, comments, and how many times an article has been bookmarked. I think these developments are exciting, and it will be interesting to see how scholars react as more publishers offer these services.
Of course, none of these numbers are useful without context. Is an impact factor of 12 in organic chemistry considered good or bad? What about an h-index of 7 for a cancer researcher? And when an article is downloaded 457 times, what does that actually mean?
At the end of the class, I gave the students an article citation and asked them to determine whether the research topic (and the article) was “hot” or not. They were asked to find some of the relevant metrics and to provide a bit of background to put their numbers in context. They had fun exploring the numbers, and I think our in-class discussion left them more confident in their ability to judge how important or buzz-worthy their prospective research topics might be.
The numbers without context aren’t very helpful. But if you can find the numbers and place them in context, they can help non-specialists gain a sense of perspective about particular journals, authors and articles.