Evaluating Information: The Light Side of Open Access

Early last week I opened the New York Times and was surprised to see a front-page article about sham academic publishers and conferences. The article discussed something we in the library world have been aware of for some time: open access publishers with low (or no) standards for peer review and acceptance, sometimes even with fictional editorial boards. These publications are financed by authors’ fees, which may not be made clear in their submission guidelines, and, given the relatively low cost of hosting an online-only journal, their publishers are presumably making quite a bit of money. The article included an interview with and photo of University of Colorado Denver librarian Jeffrey Beall, compiler of the useful Beall’s List guide to potentially predatory open access scholarly journals and publishers.

I’ve long been an admirer of Jeffrey Beall’s work, and I’m glad to see him getting recognition outside of the library world. But the frankly alarmist tone of the Times article was disappointing to say the least, as was its seeming equation of open access with less-than-aboveboard publishing, an equation that of course doesn’t hold. As biologist Michael Eisen notes, there are plenty of toll-access scholarly journals (and conferences) of suspicious quality. With the unbelievably high profits of scholarly publishing, it’s not surprising that journals have proliferated and that not all of them are of the best quality. And there are many legitimate, highly regarded journals, both open access and toll-access, that charge authors’ fees, especially in the sciences.

As I’ve bounced these thoughts around my brain for the past week, I keep coming back to one thing: the importance of evaluating information. Evaluating sources is something that faculty and librarians teach students, and students are required to use high-quality sources in their work. How do we teach students to get at source quality? Research! Dig into the source: find out more about the author or organization, and read the text to see whether it’s comprehensible, typo-free, etc. Metrics like the Journal Impact Factor can help make these determinations, but they’re far from the only aspects of a work to examine. In addition to Beall’s List, Gavia Libraria has a great post from last year detailing some specific steps to take and criteria to consider when evaluating a scholarly journal. I like to go by the classic TANSTAAFL: there ain’t no such thing as a free lunch. Get an out-of-the-blue email inviting you to contribute to a journal or conference? It’s probably not the cream of the crop.
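Since the Journal Impact Factor comes up so often in these conversations, it’s worth spelling out what the number actually measures. The standard two-year formula (the notation here is mine, not anything from the Times article) is a simple ratio:

\[
\mathrm{JIF}_{Y} = \frac{C_{Y-1} + C_{Y-2}}{P_{Y-1} + P_{Y-2}}
\]

where \(C_{Y-k}\) is the number of citations received during year \(Y\) by items the journal published in year \(Y-k\), and \(P_{Y-k}\) is the number of citable items it published in year \(Y-k\). So if a journal published 100 citable items across 2010 and 2011, and those items were cited 200 times during 2012, its 2012 JIF is 2.0. Note what the metric doesn’t capture: it says nothing about the rigor of peer review, which is exactly what’s in question with predatory journals, and that’s one more reason it shouldn’t be the only criterion.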

So if faculty and librarians teach our students to evaluate sources, why do we sometimes forget (or neglect?) to do so ourselves? I’d guess that the seemingly ever-increasing need for publications and presentations to support tenure and promotion plays into it, especially as the number of full-time faculty and librarian positions continues to decrease. I appreciate reasoned calls for quality over quantity, but I wonder whether slowing down the academic publishing arms race will end the proliferation of low-quality journals.

The Times article last week notes that one danger of increasing numbers of fraudulent journals is that “nonexperts doing online research will have trouble distinguishing credible research from junk.” This isn’t the fault of the open access movement at all; if anything, open access can help determine the legitimacy of a journal. Shining a light on these sham journals makes it easier than ever to identify them. It’s up to us, both faculty and librarians: if the research and scholarship we do is work we should be proud of, prestigious work that’s worth publishing, then it stands to reason that we should share that work and prestige only with and via publications that are worth it.

Unpacking Assessment

ACRLog welcomes a guest post from Lisa Horowitz, Assessment Librarian at MIT Libraries.

As an assessment librarian, I am always looking for different ways to think about assessment. Most librarians aren’t statisticians, and for some, even the word itself, assessment, is daunting because its meaning is unclear. Additionally, it’s such a broad topic that many of us are interested in only specific angles: learning outcomes, collection assessment, return on investment, the Value of Academic Libraries, and so on.

So what is assessment, when you come right down to it? Some librarians where I work find that the terms assessment, evaluation, statistics and data seem to be used interchangeably. The most meaningful way for me to approach the topic is to think of assessment as quality control. It is a way to look at your services, your workflows, your teaching — whatever — to determine what works and what can be improved. In that sense, yes, it is also evaluation. I’ve seen explanations that differentiate between assessment and evaluation, but I tend to just use the term assessment.

Statistics gathered for whatever reason (for ARL or ACRL, for accreditation, or for other purposes) are actually gathered to assess something. Sometimes they are separated from that assessment, because often those who gather the statistics are not the ones who do the assessing. About a dozen years ago, I was on a team assessing our reference services while a different team analyzed our reference-statistics-gathering procedures, until we all realized that the procedures for gathering statistics depended on what we were trying to learn about our services. In other words, we needed to know what we were trying to assess in order to determine which statistics would be useful. Statistics should be inextricably tied to what you are assessing.

The use of the word “data” in libraries can be equally confusing. In the case of assessment, data are the actual numbers, or even anecdotes, that are used to assess. The data themselves are not assessment, but the use of those data is. Sometimes collections librarians see their data-gathering as separate from assessment. Sometimes instruction librarians see their evaluations as unrelated to assessment of library services as a whole. Sometimes librarians from different areas will collect different data to represent the same thing (e.g., the number of items in a collection), but because they use different sources, they come up with different numbers. All of this relates to assessment, and ideally it should all support library planning, resource allocation, and project development.

Assessment, if done well, shows how services, workflows, collections, etc., can be improved. At the same time, it should also contribute to the library’s planning efforts. Let’s say a library’s collection assessment shows that a particular collection needs to be developed because of a new area of research among the faculty. At the same time, instruction assessment has shown that students’ learning outcomes could improve if information literacy training efforts were doubled, while assessment of workflows at the service desks shows that books are getting to the stacks more efficiently but interlibrary loans are taking longer than users expect. The point of assessment is not only to use these results to improve those particular areas; the results should also inform decisions made by senior management about resource allocation and strategic directions. In other words, assessment should help determine priorities by comparing the needs it uncovers with strategic goals, and by advocating for resources not only where they are most needed but where they advance the strategic goals of the library.

If you are new to assessment, there are a few articles that you may want to look at.
• Tina E. Chrzastowski (2008): “Assessment 101 for Librarians: A Guidebook,” Science & Technology Libraries 28:1-2, 155-176.
• Lisa R. Horowitz (2009): “Assessing Library Services: A Practical Guide for the Nonexpert,” Library Leadership & Management 23:4, 193-203.

Both of these include bibliographies that may be helpful, as well as links to useful tools, blogs, and organizations.

What does assessment mean to you? What tools do you use? What have you done that helps staff at your library be more comfortable with assessing library services?

Citations Needed

Yesterday there was a fascinating article on Inside Higher Ed about a presentation at the recent Conference on College Composition and Communication. The presentation reported on research by composition faculty members Rebecca Moore Howard and Sandra Jamieson in their Citation Project, which aims to understand how students approach research writing so that instructors can better help them avoid plagiarism. Their research team reviewed 160 introductory English Composition papers from 16 diverse colleges and universities and found that the papers were full of “patchwriting” (the term they use for improper paraphrasing that amounts to inadvertent plagiarism) and very short on true summarizing.

While the study’s primary focus was on how students incorporate sources into their writing, the researchers also examined students’ understanding of sources. Here the evidence is equally bleak: students relied heavily on documents of fewer than five pages, and most of the material they cited came from the beginning of the source, within the first few pages. The Citation Project team found little evidence that students were engaging deeply and thoughtfully with their research sources; rather, they were, as the IHE article’s title puts it, skimming the surface.

As many librarians commented when the article link made the rounds on Twitter yesterday, this hardly comes as a shock to us; many of our encounters with students at the reference desk and during instruction sessions corroborate these findings. Still, I admit to a tiny bit of surprise that librarians were, it seems, only barely mentioned at the conference presentation:

“Whatever else the Internet has done,” Jamieson continued, “it has made it easier to find sources and harder to tell what’s junk.”

Some in the audience said the findings point to the need to place greater emphasis on teaching students how to select proper sources. “It’s probably not far off to say that their sources are the first hits on Google,” one audience member observed.

Another commenter was not prepared to give up on the 20th-century expectations of student research and citation. “There’s some value to reminding students about the authority on certain subjects that are not in a digital archive,” she said. “What we’ve forgotten is that libraries were the repositories where people made judicious claims about what sources are worth reading.”

What does this mean for academic librarians? While I’m glad we were mentioned tangentially, it hurts a bit to see a faculty discussion about how awful students’ research sources are that doesn’t include librarians. At the recent ACRL Conference I heard lots about our relationships with faculty, which many of us still find unsatisfyingly one-sided. There are a variety of strategies we can try (and are trying), but everyone’s local conditions are different, and there doesn’t seem to be a single silver bullet.

Two other relevant readings I came across yesterday might help. Kim Leeder on In the Library with the Lead Pipe shares practical advice in her post outlining five steps for collaborating with faculty. And Bobbi Newman lets us know about the Great Librarian Write-Out, in which Patrick Sweeney is awarding $250 to a librarian who writes an article about libraries that gets published in a non-library publication.

What other strategies could we try to collaborate with faculty to increase student engagement with research sources? Are there any strategies that have worked well for you?

Do We Need a Bigger Carrot?

I coordinate the instruction program at my library, and I spend an enormous amount of time contemplating ACRL Information Literacy Standard 3: “The information literate student evaluates information and its sources critically and incorporates selected information into his or her knowledge base and value system.” I feel that it’s one of the most critical standards for our students to learn; it’s important for their work in college, their careers, and their everyday lives.

I have two primary opportunities to work with students on evaluating information: our English Comp I one-shots and our 3-credit information literacy course. And the two could not be more different. In the one-shots I can devote maybe 15-20 minutes, tops, to research on the internet, during which we usually discuss and evaluate the sources students have found while searching during our session. In the credit-bearing course I spend two entire classes just on evaluation, after having spent several weeks discussing the production and distribution of information, during which we’ve touched on issues of quality and credibility.

Despite the increased focus in our course on evaluating information, many students still gravitate to Google and other search engines. They’re most comfortable searching the internet, and they rightly claim that using Google is faster: just type in your search terms and bingo, millions of results. It’s the aftermath of that Google search that’s sometimes still a sticking point. We’ve talked a lot in class about the research process; I emphasize that research takes time: time to figure out a search strategy and time to iterate, because no one finds exactly what they need on the first try. But it can still be really difficult to convince students to move away from that first page or two of websites, to dig deeper to find expert sources, to try library resources when they need scholarly information.

One reason for this might be the perceived benefit of finding high-quality information relative to the time it takes to find it. If a student uses a “bad” source in his assignment, what are the consequences? Even when faculty do subtract points from an assignment for poor-quality sources, how much of the student’s grade can realistically be pegged to the sources used? In a 5-page research paper that requires 5 sources, if a student uses 1 or 2 mediocre sources from one of the limitless content farms on the internet, how many points will she lose? The paper’s content, clarity of writing, grammar, mechanics, in-text citations, the list of references: all are factors in the assignment’s grade, too.
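To put rough numbers on that intuition (the rubric weights here are hypothetical, not drawn from any actual syllabus): suppose source quality counts for 10 points of a 100-point paper, spread evenly across the 5 required sources. Then even a full deduction for 2 mediocre sources costs

\[
\frac{2}{5} \times 10 \text{ points} = 4 \text{ points},
\]

a 4% swing that may not even change the letter grade. Unless a rubric weights sources far more heavily than seems typical, a time-pressed student has little grade-based incentive to dig deeper.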

And if the grade isn’t compelling enough to convince students that it’s worthwhile to make the effort to find the best information out there on their research topics, what will?