Thinking Tenure Thoughts

Last week Meredith Farkas wrote a thoughtful post on her blog, Information Wants to Be Free, about tenure status for academic librarians. Spirited discussion ensued in Meredith’s blog comments and on libraryland Twitter (much of which Meredith Storified), and it has continued through today. The conversation has included many varied perspectives on the advantages and disadvantages of tenure for academic librarians, including preparation for research and scholarship in graduate library programs, perceptions of status and equality between academic librarians and faculty in other departments, salary parity, academic freedom, and the usefulness and rigor of the library literature.

I support tenure for academic librarians as I do for faculty in other departments primarily because I believe that tenure ensures academic freedom, which is as important in the library as it is in other disciplines. I also have concerns about the tenure system more generally, concerns that many academics in libraries and other departments also voice. One of my big concerns is that the pressure to publish can result in quantity over quality.

This conundrum was raised during the Twitter discussion of Meredith’s post and had me nodding vigorously as I read. I am absolutely in agreement that the tenure system as it currently stands has encouraged the publication of large amounts of scholarship that ranges from the excellent and thought-provoking, to the interesting if somewhat obvious, to the just not very good, to the occasionally completely wrong. Of course, this is a problem not just in academic librarianship but in other disciplines as well. The avalanche of scholarship resulting from the pressures to publish to gain tenure affects libraries and the broader academic enterprise in a variety of ways.

It takes time to write and publish, and time spent on that is time not spent doing research or reading the research that others have published, research that might be useful in our jobs as well as in our own scholarship. You might remember the article in the Guardian late last year in which Nobel Prize-winning physicist Peter Higgs suggested that he’d be unlikely to get tenure in today’s academic climate because he hasn’t published enough. I try to stay current with what’s being published in a handful of library journals, but like many of us my interests are interdisciplinary, and there is no way I can read even a fraction of what’s relevant to my scholarly interests. And the more that’s published, the harder it becomes to find the good stuff: something we see when we teach students to evaluate sources, but something that can stymie more experienced researchers as well.

There’s also a direct connection between the ever-increasing publication for tenure needs and academic library budgets. Those articles need to go somewhere, and journal publishers have been more than willing to create new journals to fill up with reports of academic research and sell back to libraries. Publishing in open access journals can help, as others including Barbara Fister have suggested.

But I think academic librarians with tenure can make an impact on the quality versus quantity problem, both in the library literature and in scholarly communication more widely. I’m coming up for tenure in the fall, and while I’ve published my research open access, it’s also true that I’ve submitted most of my work to peer-reviewed journals, primarily because that’s what “counts” most. I don’t know that I’ve written anything in the past six years that I wouldn’t have written otherwise. But as Meredith and others noted in the Twitter conversation, without worries about what counts I probably wouldn’t have felt as much pressure to write for peer-reviewed journals, and I might have spread my efforts more evenly across blogging and other forms of publication. I’ve also felt torn spending time on work that I know isn’t as highly regarded as traditional scholarly publishing: conference organizing, article reviewing, and blogging, for example.

I’m looking forward to coming up for tenure in part because I’d like to help work toward expanding the definition of scholarly productivity to include alternatives to peer-reviewed publication in journals, and to focus on quality over quantity. Some of this is work that librarians are already doing — work in promoting open access, for example, among faculty in other departments who may not realize that there are peer-reviewed, highly-regarded OA journals. As academic librarians we have a view of the scholarly publishing landscape that other faculty may not share, and I hope we can use this position to advocate for tenure requirements that take into account more of the possibilities for contributing to the creation and propagation of knowledge than peer review and impact factor alone.

Evaluating Research By the Numbers

This month’s post in our series of guest academic librarian bloggers is by Bonnie Swoger, Science and Technology Librarian at the State University of New York (SUNY) Geneseo. She blogs at The Undergraduate Science Librarian.

Last week I taught an information literacy class to a group of senior Chemistry students. We didn’t talk about databases or indexes; we talked about numbers. We talked about impact factors, h-indexes, and alternative metrics, and the students loved it. Librarians have used these metrics for years in collection development and have looked them up to help faculty with tenure and promotion packets. But many librarians don’t know where the numbers come from or what some of the criticisms are.

The students in this class needed to select a research topic, and the professor was tired of reading about obscure and “uninteresting” topics. He wanted his students to be able to find out what’s “hot” right now in chemical research.

At this level, the students are just starting to develop a sense of the nature of chemical research. It is hard for them to look at a journal article and know whether that item is “hot” (or not). Librarians are often in the same boat. But there are some strategies to help non-specialists do this. One is to look at science news sites such as C&E News and the news sections of Science and Nature.

Another strategy is to make use of the metrics used to quantitatively assess journals, authors and articles.

We started the class by talking about the Journal Impact Factor (JIF), developed by Eugene Garfield and Irving Sher almost 50 years ago (see this article for the history of the JIF). It is a simple calculation: a journal’s impact factor for a given year is

JIF = (citations received that year to items published in the previous two years) / (number of citable articles published in those two years)
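
To make the arithmetic concrete, here is a minimal sketch in Python; the citation counts are made up for illustration, not taken from any real journal:

```python
# Simplified JIF calculation for a hypothetical journal's 2013 impact factor.
# Numerator: citations received in 2013 to items published in 2011-2012.
# Denominator: citable articles the journal published in 2011-2012.

citations_2013_to_2011_2012 = 210  # hypothetical count
citable_items_2011_2012 = 70       # hypothetical count

jif = citations_2013_to_2011_2012 / citable_items_2011_2012
print(f"2013 impact factor: {jif:.1f}")  # -> 3.0
```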

I had asked the students to read a brief commentary prior to class discussing the use (and abuse) of this metric, and in class we discussed some of the criticisms of the number:

  • The numerator and denominator count different things: commentary articles are included in the numerator but not the denominator, so a journal can get an extra boost when commentary-type articles are cited (see the sketch after this list).
  • The publication of review articles can quickly increase the impact factor, because review articles are more likely to be cited than research articles.
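
Extending the sketch above, a few lines of arithmetic (again with hypothetical numbers) show how the numerator/denominator mismatch inflates the figure:

```python
# Citations to commentary pieces count in the numerator, but the
# commentary pieces themselves are excluded from the denominator.
article_citations = 180     # hypothetical citations to research articles
commentary_citations = 30   # hypothetical citations to commentary pieces
citable_items = 70          # research articles only

print(f"Articles only: {article_citations / citable_items:.1f}")        # -> 2.6
print(f"With commentary: "
      f"{(article_citations + commentary_citations) / citable_items:.1f}")  # -> 3.0
```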

These students were particularly interested in how the JIF could be manipulated, and they were intrigued to learn how a single article increased the impact factor of Acta Crystallographica Section A from 2 to 50 in a single year.

Importantly, we talked about how the impact factor was never meant to assess individual articles or authors.

So we explored alternatives.

The h-index was first suggested by physicist Jorge Hirsch in 2005, and it is now sometimes used to assess the influence of particular authors.

It works like this: let’s say that Professor Jane Smith has published five articles, and each article has been cited a different number of times:

Article     Citations
---------   ---------
Article 1   9
Article 2   10
Article 3   4
Article 4   2
Article 5   1

The h-index is the largest number h for which you can say “h of this author’s articles have each been cited at least h times.” In this case, three of Jane’s papers have been cited at least three times (but we cannot say that four have been cited at least four times), so she has an h-index of 3. The major citation indexes (Scopus, Web of Knowledge) can calculate this number easily.
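
For anyone who wants to see the mechanics, here is a short sketch of the calculation (the function is mine, for illustration, not taken from any citation index):

```python
def h_index(citation_counts):
    """Return the largest h such that h papers have at least h citations each."""
    h = 0
    for rank, citations in enumerate(sorted(citation_counts, reverse=True), start=1):
        if citations >= rank:
            h = rank  # at least `rank` papers have >= `rank` citations
        else:
            break
    return h

print(h_index([9, 10, 4, 2, 1]))  # Jane's five papers -> 3
```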

Like all other measures, the h-index isn’t perfect. It never decreases, even as a researcher’s influence in their field wanes. It favors fields that tend to have larger numbers of authors on each paper (like high-energy physics), and it can easily be manipulated by citing your own papers (or those of your friends and relatives). But it does provide a way to sort out the authors who just write a lot from the authors who write a lot of good stuff.

We then turned to a brief discussion about some of the alternative metrics now being proposed by various journals and publishers. Some of the simplest measures in this category are the number of on-site views of an article and the number of times a PDF has been downloaded. Other tools include article ratings, comments, and how many times an article has been bookmarked. I think these developments are exciting, and it will be interesting to see how scholars react as more publishers offer these services.
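
As a concrete (and entirely hypothetical) illustration, article-level metrics of this kind are easy to picture as a simple record per article:

```python
# Hypothetical article-level metrics of the kind some publishers now report.
from dataclasses import dataclass

@dataclass
class ArticleMetrics:
    html_views: int     # on-site views of the article page
    pdf_downloads: int  # times the PDF was downloaded
    bookmarks: int      # times readers saved or bookmarked the article
    comments: int       # reader comments on the publisher's site

paper = ArticleMetrics(html_views=1892, pdf_downloads=457, bookmarks=23, comments=4)
print(paper)
```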

Of course, none of these numbers are useful without context. Is an impact factor of 12 in organic chemistry considered good or bad? What about an h-index of 7 for a cancer researcher? And when an article is downloaded 457 times, what does that actually mean?
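
One way to get a feel for why context matters: the same value can sit at very different points in different fields’ distributions. Here is a toy comparison (the sample h-indexes below are invented, not real data):

```python
# Toy illustration: the same h-index of 7 compared against invented
# samples from two hypothetical fields.
def percentile(value, sample):
    """Percent of the sample at or below the given value."""
    return 100 * sum(1 for x in sample if x <= value) / len(sample)

small_theory_field = [3, 5, 6, 7, 9, 12]  # invented h-indexes
large_biomed_field = [7, 15, 22, 30, 41]  # invented h-indexes

print(f"{percentile(7, small_theory_field):.0f}th percentile")  # ~67th: respectable here
print(f"{percentile(7, large_biomed_field):.0f}th percentile")  # 20th: low here
```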

At the end of the class, I gave the students an article citation and asked them to determine whether the research topic (and the article) was “hot” or not. They were asked to find some of the relevant metrics and to provide a bit of background to give their numbers some context. They had fun exploring the numbers, and I think our in-class discussion left them more confident in their ability to judge how important or buzz-worthy their prospective research topics might be.

The numbers without context aren’t very helpful. But if you can find the numbers and establish some context for them, they can give non-specialists a sense of perspective on particular journals, authors, and articles.