Let’s Not (Just) Do the Numbers

Meredith Farkas has a thoughtful post at Information Wants to be Free on our love of numbers and how little they tell us without context. Less traffic at the reference desk: what does that mean? It could mean that students don’t find the help they get there useful, or that your redesigned website or new signage has solved problems that used to require human intervention. More instruction sessions? Maybe more faculty attended conferences and needed a babysitter.

Meredith’s post made me think about the statistics I recently compiled for our annual report. Many of them are things we count in order to share that information with others through national surveys. We dutifully count how much microfiche and microfilm we have added to the collection (seriously?) and how many print periodicals we have (fewer all the time, while our growing access to electronic full text is virtually impossible to measure; does a title with a 12-month embargo count?). We haven’t used this report to share how much use our databases are getting, which journals in those databases are downloaded most often, or what Google Analytics tells us about which web pages attract the most attention. We use that information for decision-making, but it doesn’t become part of the record, because the time series we report on was started back when the earth’s crust was still cooling. (Guess what: acquisition of papyrus scrolls, clay tablets, and wax cylinders is way down.)

In the end, I’m not all that interested in the numbers. The really interesting data is usually the hardest to gather. How do students decide which sources to use, and does their ability to make good choices improve over time? When they read a news item that someone has posted to Facebook, are they better prepared after our sessions to determine whether it’s accurate? Do students who figured out how to use their college library transfer those skills to unfamiliar settings after they graduate? Do students grow in their ability to reason based on evidence? Have they developed a respect for arguments that arrive at conclusions with information that isn’t cherry-picked or taken out of context? Can they make decisions quickly without neglecting to check the facts? The kind of literacy we’re hoping to foster goes far beyond being able to write a term paper. And knowing how many microfiche we own doesn’t have anything to do with it.

Now I have a question for our readers. Are there ways you regularly assess the kinds of deep learning that we hope to encourage? What measures of learning, direct and indirect, do you use at your library? Have you conducted studies that have had an impact on your programs? Are you gathering statistics that seem particularly pointless? Should we start an Awful Library Statistics blog? The floor is open for comments.


Author: Barbara Fister

I'm an academic librarian at Gustavus Adolphus College in St. Peter, Minnesota. Like all librarians at our small, liberal arts institution I am involved in reference, collection development, and shared management of the library. My area of specialization is instruction, with research interests also in media literacy, popular literacy, publishing, and assessment.

4 thoughts on “Let’s Not (Just) Do the Numbers”

  1. The type of evaluation you have in mind is quite difficult and time-consuming. It is much easier to gather input/output data and then use it to compare ourselves to each other – which is what our parent institutions do as well. My colleagues and I wanted to know if the time we invest in creating course-specific LibGuides helps improve student research. We’ve been working on it for two years – a year to get the methodology in order and a year to execute the study – and now we are looking at another year to analyze the data and write it up. Just imagine us trying to figure out the ways in which a library contributes to the institution’s graduation rate – how long would that take? I’m not saying we shouldn’t, but we need to figure out ways to do this sort of assessment that are manageable. That’s what I’m thinking about.

  2. This is exactly why I am increasingly convinced that academic library assessment is meaningless unless it is integrated with assessment of student learning outcomes and evaluation of instructors’ performance. (Which are tricky enough, especially at an institution that has lifelong learning as an expected outcome for its students.)

    “The really interesting data is usually the hardest to gather”–yes, and I want to be part of a profession that considers that to make for interesting challenges.

  3. I agree – plus it’s more fun. Another issue is that we can’t just settle for evaluating the library’s impact without being willing to share the credit with others. If we try to isolate the library’s impact from everything else that leads to student learning, we’ll miss the most important learning. Then what we discover isn’t just validating the library; it’s validating the library as part of a web of influences on student learning. Which works for me.

  4. Yes… Sharing the credit is an inherently decent and truthful thing to do. It’s also politically wise for libraries, since we seem to spend a lot of time worrying about our relevance. Nothing works for relevance better than establishing that “web of influences” as a shared institutional narrative.
