
On Being Valuable: Point-Counterpoint

The POINT: Amy Fry

On Tuesday, September 14, ACRL released Value of Academic Libraries: A Comprehensive Research Review and Report by Dr. Megan Oakleaf. The report lays out the current landscape of academic library assessment and seeks to provide strategies for libraries to demonstrate and quantify their value within the context of institutional missions and goals.

Oakleaf states that internal measures of value, such as use statistics, user satisfaction, and service quality, while interesting to librarians, are less compelling for external stakeholders such as administrators and trustees (11). Instead, she suggests determining externally-focused measures of value such as “library impact” (best measured by observing what users are doing and producing as a result of using the library) and “competing alternatives” (which focuses on defining what users want and how libraries, rather than our competitors, can help them achieve it) (21-22). She suggests ten key areas libraries should try to address in such assessment: enrollment, retention/graduation, student success, student achievement, student learning, student engagement, faculty research productivity, faculty grants, faculty teaching, and institutional reputation (17). Oakleaf also offers strategies for approaching assessment related to each area.

Oakleaf claims that “use-based definitions of value are not compelling to many institutional decision makers and external stakeholders. Furthermore, use is not meaningful, unless that use can be connected to institutional outcomes” (20). In a brief section about e-resources, she explains that usage counts don’t show why a library resource was used or the user’s satisfaction with it (50); she therefore suggests that, rather than collecting and reporting usage data for electronic resources, libraries try to collect qualitative data, like the purpose of the use (using the ARL MINES protocol). She also suggests examining successful grant applications to “examine the degree to which citations impact whether or not faculty are awarded grants.”

The question of how to use e-resources statistics to draw qualitative conclusions about users’ information literacy levels and the effectiveness of electronic collections (or even about the library’s impact on faculty research or student recruitment and retention) is of special interest to me now, as I have just agreed to examine (and hopefully overhaul) my institution’s management of e-resources statistics. However, such questions are overshadowed for me (and for most libraries) by how to effectively gather, merge, and analyze the statistics themselves, what to do with resources that don’t offer statistics at all or don’t offer them in COUNTER format, and when and how to communicate them internally for collection decisions. It is difficult to see how libraries will arrive at higher-level methods of assessment that overlay complex demographic data, research output data, cost data, collection data, and use data to tell compelling stories about library use and impact when even the most basic systems for managing inputs and outputs have not been implemented.
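To make that plumbing problem concrete, here is a minimal sketch, in Python, of the kind of merging step involved: it sums per-title totals across a folder of vendor CSV exports. The folder name and the column headers (“Title”, “Online ISSN”, “Reporting Period Total”) are assumptions standing in for whatever each platform actually delivers, and real COUNTER reports need more careful parsing than this.

```python
# A minimal sketch, not an actual workflow: sum per-title usage totals across
# a folder of vendor CSV exports. The folder name and the column headers
# ("Title", "Online ISSN", "Reporting Period Total") are assumptions; real
# COUNTER reports have header rows, monthly columns, and subtotal lines that
# need more careful handling.
import csv
from collections import defaultdict
from pathlib import Path

def merge_usage_reports(report_dir: str) -> dict:
    """Return {(title, issn): summed 'Reporting Period Total'} across all CSVs."""
    totals = defaultdict(int)
    for path in Path(report_dir).glob("*.csv"):
        with open(path, newline="", encoding="utf-8-sig") as f:
            for row in csv.DictReader(f):
                key = (row.get("Title", "").strip(), row.get("Online ISSN", "").strip())
                try:
                    totals[key] += int(row.get("Reporting Period Total") or 0)
                except ValueError:
                    continue  # skip subtotal or blank rows with non-numeric totals
    return dict(totals)

if __name__ == "__main__":
    combined = merge_usage_reports("usage_reports")  # hypothetical folder of exports
    top = sorted(combined.items(), key=lambda kv: kv[1], reverse=True)[:20]
    for (title, issn), total in top:
        print(f"{total:>8}  {title}  ({issn})")
```

Even a toy script like this assumes every vendor supplies the same columns, which is exactly the assumption that breaks down in practice.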

I understand and even agree with Oakleaf’s characterization of the shortcomings of “use-based definitions of value,” but am not sure that surveying users about the purpose of their information use or linking library collections to successful grant applications truly gives a more compelling picture of the value of electronic resources collections, nor one that is more complete. For example, assessing value by linking library collections to grants funded or patents produced seems like it would discount libraries’ value to humanities research, because humanities scholarship will never approach the sciences in the amount of dollars coming in.

It is true that libraries currently “do not track data that would provide evidence that students who engage in more library instruction are more likely to graduate on time, that faculty who use library services are more likely to be tenured, or that student affairs professionals that integrate library services into their work activities are more likely to be promoted” (13). But that stuff just really seems like no-brainers to me. If we spend a lot of time and energy collecting the data and putting it together to get the numbers that will allow us to make these claims – then what? What’s the payoff for the library? Administrators who don’t think libraries are just black holes for funding? A way to prove to students that they should use the library? If administrators and trustees are not inclined to fund libraries because their backgrounds did not include library use, or students are not inclined to use libraries because they are focused on graduation and employment instead of research, I don’t know that any such externally focused assessment will result in what seems to be, ultimately, the desired outcome – a reassertion of libraries’ relevance to our core constituents. It will, however, be a drain on library staff time and expertise – time and expertise that could be spent on core activities, like collection building, collection access, and public service.

Oakleaf concludes that our focus should be not to prove but to increase value (141). We should not ask, “Are libraries valuable?” but “How valuable are libraries?” she says. What about “How are libraries valuable?” But this is semantics. No matter what our approach to assessment, I’m afraid the answer will still depend less on what data we present than on whom we ask.

The COUNTERPOINT: Steven Bell

What’s the payoff for the library? That’s an important question when it comes to assessment and efforts to demonstrate the academic library’s value to its own institution and to higher education. Amy Fry makes a good point that we could invest considerable time and energy to collect and analyze the data needed to determine our value in any or all of the ten key areas recommended in the ACRL Value Report – but why bother? She states that questions such as whether library instruction sessions can be connected to better grades or to students graduating on time are “no-brainer” territory.

But can we in fact assume that, just because a student attends an instruction session – or because faculty have access to research databases – they are indeed achieving institutional outcomes? If, as a profession, we thought that was no-brainer territory, why are there hundreds of research articles in our literature attempting to prove that students who sit through library instruction sessions are better off than the ones who don’t? We clearly aren’t just assuming they are; we want to prove it – and in doing so prove why we make a difference to our students’ education and learning process.

As Barbara Fister points out in her response to the Report, provosts already acknowledge, anecdotally, that they value their libraries and librarians. And we also know that the library is the heart of the institution, and that libraries are like Mom and apple pie; everyone likes the library. You probably couldn’t find an academic administrator who would go on record trashing the academic library (well, maybe this one). But none of that may stop administrators, when push comes to shove, from taking drastic measures with library services to resolve a budget crisis. Being the heart of the institution didn’t stop Arizona’s Coconino Community College from performing radical heart surgery by outsourcing its library operations to Northern Arizona University’s (NAU) Cline Library. Admittedly, that’s a rare occurrence, and I can’t say for sure that even the best set of library value data could have prevented it from happening. Yet one can’t help but imagine that if Coconino’s librarians had had some rock-solid assessment data on hand to confirm their value to administrators – be it how the library helps retain students or helps them achieve higher GPAs – they’d still have their jobs and be delivering services to their students at their own library (which was largely chopped up and pieced out to other academic units).

And better assessment and demonstration of library value can indeed result in a financial payoff for the institution through government grants and the indirect costs associated with conducting research. Those indirect costs, typically a percentage rate negotiated between the institution and the federal government, can make a huge difference in institutional funding for research. Given the size of some grants, just a slight increase – perhaps a percentage point or two – can make a real impact over time. Amy mentions the ARL MINES protocol, which is a process for making a concrete connection between researchers working on grant projects and their use of library resources to conduct that research. Often the contribution of the library is drastically understated, and therefore it is barely reflected in the calculation of the ICR (indirect cost recovery). My own institution is currently conducting a survey similar to MINES so that our “bean counters” (as Barbara likes to refer to them) can more accurately connect the expenditures for library electronic resources to research productivity – and the government’s own bean counters have very rigid rules for calculating increases to the ICR. It can’t be based on anecdotal evidence or on simply having researchers state that they use library resources for their research. In this case, asking the users if we provide value doesn’t mean squat. Providing convincing evidence might mean an increase to our ICR of one or two percent – which over time could add up to significant amounts of funding to support research. That is a real payoff, but make no mistake that we have invested considerable time and expense in setting up the survey process.
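To show why a point or two matters, here is a back-of-the-envelope sketch. The dollar figures and rates are hypothetical, not my institution’s or anyone else’s actual numbers, and real indirect cost recovery is applied to a modified total direct cost base with exclusions rather than to every direct dollar.

```python
# Back-of-the-envelope illustration only: every figure here is hypothetical,
# and real ICR is calculated on a modified total direct cost base with
# exclusions, not on the full award amount.
def indirect_cost_recovery(direct_costs: float, icr_rate: float) -> float:
    """Indirect dollars recovered on a year's direct research costs at a given rate."""
    return direct_costs * icr_rate

annual_direct_costs = 40_000_000  # hypothetical sponsored-research base per year
years = 5

for rate in (0.50, 0.51, 0.52):  # a current rate vs. one- and two-point increases
    total = indirect_cost_recovery(annual_direct_costs, rate) * years
    print(f"ICR at {rate:.0%}: ${total:,.0f} recovered over {years} years")
```

On that hypothetical base, a single percentage point is two million dollars over five years, which is the sense in which “one or two percent” is far from trivial.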

For many academic librarians, it may be better to, as Amy suggests, focus on the core activities such as collection building and traditional services (reference, instruction, etc.) – and to keep improving on or expanding in those areas. But I like to think that what drives real advancement in our academic libraries is confronting new mysteries that will force us to seek out new answers that could lead to improvements in fundamental library operations. What happens when we fail to seek out new mysteries to explore is that we simply continue to exploit the same existing answers over and over again until we drive them and ourselves into obsolescence (for more on “knowledge funnel” theory read here).

Lately I’ve been advocating that the new mystery for academic librarians should focus on assessment. We need to get much better at answering a simple question that represents this mystery: How can we tell that we are making a difference – and how will we gather the data to quantitatively prove it? From this perspective the question would be neither “How valuable are academic libraries?” nor “How are libraries valuable?” but “How are academic libraries making a real difference and how do we prove it?” Perhaps it remains a case of semantics, but any way we approach this new mystery, the road should lead to a better grasp of the value we provide and new ways to communicate it to our communities. Whatever you may think about assessment and the value question, take some time to review the ACRL Value Study. I’ll be at the Library Assessment Conference in DC at the end of October. I’m looking forward to learning more about how academic librarians are approaching the new mystery of assessment, and how we can all do a better job of quantifying and communicating our value proposition.

Learning From The Alumni

I came across an interesting piece of news about how some IHEs are just asking their alumni questions – and listening to the answers. The calls are not about hitting the alums up for contributions. The folks in charge of alumni offices are realizing that they need to learn much more about their institution’s graduates. There is particular attention to new, younger alumni because of concerns that they have no interest in becoming active alumni. And no doubt, there’s always that nagging uncertainty about the potential young alumni have as future donors to the institution:

After hour-long phone conversations, alumni interviewers like Wong hope to be able to tell the college something about what makes graduates tick. They’ll have a pretty good idea of what alumni’s interests are, how they feel about the college and what might potentially motivate them to contribute. What the interviewers won’t ask for is a check.

I like this idea – just contacting the alumni to learn more about what they are doing and how they feel about the institution and their education. Academic libraries clearly have a different mission – and different resources for this sort of thing – than the alumni office, but I feel there is much that academic librarians could learn from conversations with alumni. There are plenty of potential questions to ask about their use (or not) of the library. Did anything they heard in an instruction session stay with them, and did they learn it well enough for it to affect their research behavior? It might be helpful simply to learn whether they do professional research on a regular basis or just use search engines for personal, lifestyle research. Would they be interested in continuing to have access to the library databases they used as students (or not)?

As our profession becomes increasingly focused on assessment and documenting our contributions to student learning, it seems inevitable that we would need to engage our alumni in conversations about their library experience. It’s one thing to say the academic library contributes to lifelong learning, but only by connecting with alumni and asking them the right questions can we learn how well we succeed at our goals. If the development officers are taking the institutional lead in connecting with alumni, perhaps that is the starting point. Let’s learn more about what our colleagues in the alumni office are doing when they listen to our ex-students, and whether there is an opportunity for the academic librarian to ask a few questions as well.

Let’s Not (Just) Do the Numbers

Meredith Farkas has a thoughtful post at Information Wants to be Free on our love of numbers and how little they tell us without context. Less traffic at the reference desk: what does that mean? It could mean that students don’t find the help they get there useful, or that your redesigned website or new signage has solved problems that used to require human intervention. More instruction sessions? Maybe more faculty attended conferences and needed a babysitter.

Meredith’s post made me think about the statistics I recently compiled for our annual report. Many of them are things we count in order to share that information with others through national surveys. We dutifully count how much microfiche and microfilm we have added to the collection (seriously?) and how many print periodicals we have (fewer all the time, but our growing access to electronic full text is virtually impossible to measure; does a title that has a 12-month embargo count?). We haven’t used this report to share how much use our databases are getting and which journals in those databases are getting downloaded most often, or what Google Analytics tells us about which web pages attract the most attention. We use that information for decision-making, but it doesn’t become part of the record because the time series we use was started back when the earth’s crust was still cooling. (Guess what: acquisition of papyrus scrolls, clay tablets and wax cylinders is way down.)

In the end, I’m not all that interested in the numbers. The really interesting data is usually the hardest to gather. How do students decide which sources to use, and does their ability to make good choices improve over time? When they read a news item that someone has posted to Facebook, are they better prepared after our sessions to determine whether it’s accurate? Do students who figured out how to use their college library transfer those skills to unfamiliar settings after they graduate? Do students grow in their ability to reason based on evidence? Have they developed a respect for arguments that arrive at conclusions with information that isn’t cherry-picked or taken out of context? Can they make decisions quickly without neglecting to check the facts? The kind of literacy we’re hoping to foster goes far beyond being able to write a term paper. And knowing how many microfiche we own doesn’t have anything to do with it.

Now I have a question for our readers. Are there ways you regularly assess the kinds of deep learning that we hope to encourage? What measures of learning, direct and indirect, do you use at your library? Have you conducted studies that have had an impact on your programs? Are you gathering statistics that seem particularly pointless? Should we start an Awful Library Statistics blog? The floor is open for comments.

photo courtesy of Leo Reynolds.

Assessment is the New Black

I’m teaching a course this semester for the Graduate School of Library & Information Science at Illinois called “Libraries, Information, and Society.” Like similar courses, it presents an introduction to a number of core concepts for future information professionals, as well as an introduction to professional skills, values, and employment environments. This week, we heard an excellent presentation from my colleague, Tina Chrzastowski, author of “Assessment 101 for Librarians,” an essay that appeared in Science & Technology Libraries in 2008. The point of the presentation, and the message that I hope my students took from it, is that the ability to design an assessment program and to use its results in planning and decision making is a critical skill set for any information professional. Assessment is the new black – it goes with whatever job you have, and it is relevant to every library environment.

Assessment may also be the new instruction, though – a critical skill set for academic librarians that is not clearly and appropriately addressed in LIS programs. It is no coincidence that instruction librarians have been among the early leaders in assessment activities (I’m looking at you, Deb Gilchrist!): this reflects their connection to broader campus efforts to identify student learning outcomes, but also their experience in having to learn critical skills on the job that were not a focus of their professional education. The list of studies showing that teaching skills are required for a wide variety of academic library positions is almost as long as the list of studies showing that few LIS programs have ever made this a focus of their coursework or their faculty hiring (a shout-out to those who break that mold, including the University of Washington and Syracuse University). I imagine that a similar list of studies will find its way into the literature regarding the importance of assessment and evidence-based library and information practice for librarians of all types, and the need for greater attention to those skills across the LIS curriculum. Given that we remain concerned about the attention paid to instruction in LIS programs some 30 years after those first studies started to come out, it may take a while to see real change. Of course, it may be that assessment is really the new knowledge management, in which case the courses will be available much more quickly!

As Chrzastowski’s article points out, there are many resources available to librarians interested in continuing professional education in assessment. The Association of Research Libraries has held two successful conferences on this topic, and there is an international evidence-based practice movement that supports a journal and conference programs. As with instruction, there are “lighthouse” LIS programs, too; in this case the University of North Carolina, which offers a course on “Evidence Based Practices in the Library and Information Sciences.”

What can ACRL do? If assessment is the new instruction, should we see more attention to assessment across the association, and to fostering the development of a corps of academic librarians (beyond assessment coordinators) who see this as a critical area of personal expertise? Since assessment skills are critical not only to public services and collections librarians but also to technical services and information technology specialists, is this an area of functional specialty that could broaden our appeal across the academic library enterprise, or an initiative on which we can fruitfully collaborate across ALA divisions?

I don’t have the answers, but I know you all look good in black!