Early last week I opened the New York Times and was surprised to see a front-page article about sham academic publishers and conferences. The article discussed something we in the library world have been aware of for some time: open access publishers with low (or no) standards for peer review and acceptance, sometimes even with fictional editorial boards. The publications are financed by authors’ fees, which may not be clear from their submission guidelines, and, with the relatively low cost of hosting an online-only journal, are presumably making quite a bit of money. The article included an interview with and photo of University of Colorado Denver librarian Jeffrey Beall, compiler of the useful Beall’s List guide to potentially predatory open access scholarly journals and publishers.
I’ve long been an admirer of Jeffrey Beall’s work and I’m glad to see him getting recognition outside of the library world. But the frankly alarmist tone of the Times article was disappointing to say the least, as was its seeming equation of open access with less-than-aboveboard publishing, which of course is not the case. As biologist Michael Eisen notes, there are plenty of toll-access scholarly journals (and conferences) of suspect quality. Given the unbelievably high profits of scholarly publishing, it’s not surprising that journals have proliferated and that not all of them are of the best quality. And there are many legitimate, highly regarded journals — both open access and toll-access — that charge authors’ fees, especially in the sciences.
As I’ve bounced these thoughts around my brain for the past week, I keep coming back to one thing: the importance of evaluating information. Evaluating sources is something that faculty and librarians teach students, and students are required to use high-quality sources in their work. How do we teach students to get at source quality? Research! Dig into the source: find out more about the author or organization, and read the text to see whether it’s comprehensible, typo-free, etc. Metrics like Journal Impact Factor can help inform these determinations, but they’re far from the only aspects of a work to examine. In addition to Beall’s List, Gavia Libraria has a great post from last year detailing specific steps to take and criteria to consider when evaluating a scholarly journal. I like to go by the classic TANSTAAFL: there ain’t no such thing as a free lunch. Got an out-of-the-blue email inviting you to contribute to a journal or conference? It’s probably not the cream of the crop.
So if faculty and librarians teach our students to evaluate sources, why do we sometimes forget (or decline?) to do so ourselves? I’d guess that the seemingly ever-increasing need for publications and presentations to support tenure and promotion plays into it, especially as the number of full-time faculty and librarian positions continues to decrease. I appreciate reasoned calls for quality over quantity, but I wonder whether slowing down the academic publishing arms race will end the proliferation of low-quality journals.
The Times article notes that one danger of the increasing number of fraudulent journals is that “nonexperts doing online research will have trouble distinguishing credible research from junk.” This isn’t the fault of the open access movement at all; if anything, open access can help determine the legitimacy of a journal, since shining a light on these sham operations makes them easier than ever to identify. It’s up to us, both faculty and librarians: if the research and scholarship we do is work we should be proud of, prestigious work that’s worth publishing, then it stands to reason that we should share that work and prestige only with and via publications that are worth it.
One thought on “Evaluating Information: The Light Side of Open Access”
My issue with Beall is his neocolonialism. I conducted some initial research that may indicate that 65% of Beall’s predatory publishers are actually based in the US.