Unpacking Assessment

ACRLog welcomes a guest post from Lisa Horowitz, Assessment Librarian at MIT Libraries.

As an assessment librarian, I am always looking for different ways to think about assessment. Most librarians aren’t statisticians, and for some, even the word itself, assessment, is daunting because its meaning is unclear. Additionally, it’s such a broad topic that many of us are interested in only specific angles: learning outcomes, collection assessment, return on investment, the Value of Academic Libraries, and so on.

So what is assessment, when you come right down to it? Some librarians where I work find that the terms assessment, evaluation, statistics and data seem to be used interchangeably. The most meaningful way for me to approach the topic is to think of assessment as quality control. It is a way to look at your services, your workflows, your teaching — whatever — to determine what works and what can be improved. In that sense, yes, it is also evaluation. I’ve seen explanations that differentiate between assessment and evaluation, but I tend to just use the term assessment.

Statistics gathered for whatever reason, whether for ARL or ACRL, for accreditation, or for other purposes, are actually gathered to assess something. Sometimes they become separated from that assessment because those who gather the statistics are often not the ones who do the assessing. About a dozen years ago, I was on a team assessing our reference services while a different team analyzed our reference-statistics-gathering procedures. Eventually we all realized that the procedures we used to gather statistics depended on what we were trying to learn about our services; in other words, we needed to know what we were trying to assess in order to determine which statistics would be useful. Statistics should be inextricably tied to what you are assessing.

The use of the word “data” in libraries can be equally confusing. In the case of assessment, data are the actual numbers, or even anecdotes, that are used to assess. The data themselves are not assessment, but the use of those data is. Sometimes collections librarians see their data-gathering as separate from assessment. Sometimes instruction librarians see their evaluations as unrelated to assessment of library services as a whole. Sometimes librarians from different areas will collect different data to represent the same thing (e.g., the number of items in a collection), but because they use different sources, they come up with different numbers. All of this relates to assessment, and ideally, it should all support library planning, resource allocation and project development.

Assessment, if done well, shows how services, workflows, collections, etc., can be improved. At the same time, it should also contribute to the library’s planning efforts. Let’s say that a library has done a collection assessment which shows that a particular collection needs to be developed because of a new area of research among the faculty. At the same time, the instruction assessment has shown that students’ learning outcomes could be improved if information literacy training efforts were doubled, while assessment of the workflows at the service desks shows that books are getting to the stacks more efficiently but interlibrary loans are taking longer than users expect. The point of assessment is not only to use these results to improve those particular areas, but also to inform decisions made by senior management about resource allocation and strategic directions. In other words, assessment should help determine priorities by comparing the needs it uncovers with strategic goals, and by advocating for resources not only where they are most needed but where they advance the strategic goals of the library.

If you are new to assessment, there are a few articles that you may want to look at.
• Tina E. Chrzastowski (2008): “Assessment 101 for Librarians: A Guidebook,” Science & Technology Libraries 28:1-2, 155-176.
• Lisa R. Horowitz (2009): “Assessing Library Services: A Practical Guide for the Nonexpert,” Library Leadership & Management 23:4, 193-203.

Both include bibliographies that may be helpful, as well as links to useful tools, blogs, and organizations.

What does assessment mean to you? What tools do you use? What have you done that helps staff at your library be more comfortable with assessing library services?

Considering Conferences

This semester I went to two academic conferences that weren’t library conferences. While I’ve attended conferences outside of librarianship in the past, both before I was a librarian and more recently, this is the first time in my library career that I’ve intentionally gone to non-library conferences. At both conferences I was giving a presentation, which of course was a major factor in my decision to attend. But I thoroughly enjoyed them both, and was pleased to find much of relevance to my interests in librarianship as well as in higher education and the disciplines.

The first conference I attended this semester, the MobilityShifts conference at the New School (about which I wrote a brief wrap-up here on ACRLog), broadly addressed issues in teaching and learning, and specifically focused on mobility and education. This was a busy conference that spanned multiple days, and though it meant a breakneck schedule, I was able to see lots of great sessions. While there were presentations by and for librarians, I was most interested in the sessions that addressed bigger pedagogical questions. In our day-to-day work it’s easy to think only of the library — after all, that’s the physical and mental space in which we likely spend most of our time. But I found it incredibly valuable to have the opportunity to step back and consider the library as it relates to the whole of the college while I listened to presentations by classroom faculty, researchers, students, and more.

I also went to a discipline-specific conference this fall, the American Anthropological Association Annual Meetings, where I was part of a session on library ethnographies. Unfortunately I didn’t have as much time to spend at the AAAs as I had at MobilityShifts, but I was able to catch a few other sessions and had the chance to browse the exhibits, where most of the exhibitors were scholarly publishers. I work at a college library so I spend much of my time considering student use of the library, and it was interesting to see the ways that researchers embedded in their disciplines consider issues of interest to libraries, like academic publishing, open access, and digital scholarship.

In the future I’d like to continue attending non-library conferences on occasion. Of course, a major factor that affects our ability to go to conferences in any discipline is cost. As travel budgets are often slashed along with other belt-tightening measures at colleges and universities, it may not be feasible to attend both library and non-library conferences. But if it is possible, I highly recommend it as a way to keep up with academia beyond reading the higher ed news and blogs. If you’ve gone to academic conferences outside of librarianship, what are some of the benefits you’ve found? Would you ever substitute a non-library conference for one that caters solely to our profession?

The Limits of Mobility

Some interesting articles about mobile technology caught my eye last week as I was finishing up the leftover turkey. Apple has come under fire for the reported inability of Siri, the voice recognition application on the new iPhone 4S, to find abortion clinics. As reported by CNN, quoting the American Civil Liberties Union:

“Although it isn’t clear that Apple is intentionally trying to promote an anti-choice agenda, it is distressing that Siri can point you to Viagra, but not the Pill, or help you find an escort, but not an abortion clinic,” the group wrote in a blog post Wednesday.

A spokesperson for Apple responded quickly:

“These are not intentional omissions meant to offend anyone. It simply means that as we bring Siri from beta to a final product, we find places where we can do better and we will in the coming weeks.”

This is but one example of problematic access and information issues with our mobile devices, a topic that was explored in more detail last week by Harvard professor Jonathan Zittrain in MIT’s Technology Review in his provocatively titled article “The Personal Computer is Dead.” Zittrain begins by asserting that:

Rising numbers of mobile, lightweight, cloud-centric devices don’t merely represent a change in form factor. Rather, we’re seeing an unprecedented shift of power from end users and software developers on the one hand, to operating system vendors on the other—and even those who keep their PCs are being swept along. This is a little for the better, and much for the worse.

Zittrain continues with an analysis of the state of mobile software development for Apple and Android devices, and the restrictions that development operates within. In Apple’s case, users are limited to the software available in the company’s commercial space: the App Store (unless the device is jailbroken). Android apps are potentially available outside of the Android Market, though I wonder whether many users go to the extra effort to locate and download those apps. In both cases developers are tied to the operating system of the device, which dictates the parameters of the software. Perhaps most distressingly, there are hints that a similar environment for software development may soon be prevalent even on the PC: Apple has already introduced the Mac App Store.

How does this aspect of mobile computing affect us as academic librarians? While, on average, a sizable number of students on our campuses still don’t have smartphones,* there’s no question that smartphone and tablet usage is on the rise overall. What challenges will accompany this increasing reliance on mobile devices? Certainly library database vendors are rushing to develop apps for these devices — how will we promote these apps to our users and integrate their use with the library website and other existing services? And while many libraries are also developing apps, that strategy may not be feasible for smaller libraries that already feel stretched by their efforts to provide digital library services.

Access to information — an aspect of information literacy — may also be affected by these restrictions around mobile devices. We’ve already read about the possibility of a filter bubble that impacts Google search results. With the increasing move to an app-driven environment, could an internet search provider’s app restrict or shape search results even further?

What should academic libraries be considering as we adapt to an information landscape that’s increasingly mediated by mobile technologies? How can we help our students, faculty, and other library patrons with their information needs while ensuring that they’re aware of the strengths and limitations that these technologies have to offer?

* The latest survey results from the Pew Internet Project show that the vast majority of undergrads have a cellphone (94-96%), and that about 44% of 18- to 24-year-olds own smartphones.