Category Archives: Assessment

Digging Into Institutional Data

I have both a professional and scholarly interest in how the students at the college where I work do their academic work, and (of course) whether and how they use the library. In my own research I’m much more likely to use qualitative than quantitative methods. I prefer interviews and other qualitative methods because they offer so much more depth and detail than surveys, though of course that comes at the expense of breadth of respondents. Still, I appreciate learning more about our students’ lives; these compelling narratives can be used to augment what we learn from surveys and other broad but shallow methods of data collection.

Not *that* kind of survey

But even though I love a good interview, I can also be a part-time numbers nerd: I admit to enjoying browsing through survey results occasionally. Recently I was working on a presentation for a symposium on teaching and technology at one of the other colleges in my university system and found myself hunting around the university’s Office of Institutional Research and Assessment website for some survey data to help contextualize students’ use of technology. My university runs a student experience survey every 2 years, and until last week I hadn’t realized that the data collected this past Spring had just been released.

Reader, I nearly missed dinnertime as I fell down the rabbit hole of the survey results. It’s a fascinating look at student data points at the 19 undergraduate institutions that make up the university. There’s the usual info you’d expect from the institutional research folks — how many students are enrolled at each college, part-time vs. full-time students, race and ethnicity, and age, to name a few examples. But this survey asks students lots of other questions, too. How long is their commute? Are they the first in their family to attend college? How many people live in their household? Do they work at a job and, if so, how many hours per week? How often do they use campus computer labs? Do they have access to broadband wifi off-campus? If they transferred to their current college, why? How do they prefer to communicate with faculty and administrators?

My university isn’t the only one that collects this data, of course. I imagine there are homegrown and locally-administered surveys at many colleges and universities. There’s also the National Survey of Student Engagement, abbreviated NSSE (pronounced “Nessie” like the mythical water beast), which collects data from 1,500+ American and Canadian colleges and universities. The NSSE website offers access to the data via a query tool, as well as annual reports that summarize notable findings (fair warning: the NSSE website can be another rabbit hole for the numbers nerds among us). There’s also the very local data that my own college’s Office of Assessment and Institutional Research collects. This includes the number of students enrolled in each of the college’s degree programs, as well as changes through time. Retention and graduation rates are there for browsing on our college website, too.

What does all of this student data collected by offices of institutional research have to do with academic libraries? Plenty! We might use the number of students enrolled in a particular major to help us plan how to work with faculty in that department around information literacy instruction, for example. The 2012 annual NSSE report revealed that students often don’t buy their course textbooks because of the expense (as other studies have also found), findings that librarians might use to justify programs for faculty to create or curate open educational resources, as librarians at Temple University and the University of Massachusetts Amherst have done. And at my library we’re using data on how and where students do their academic work outside of the library, both the university-collected survey results and qualitative data collected by me and my colleagues, to consider changes to our physical layout that would better support that work.

Have you ever found yourself captivated by institutional research data? How have you used college or university-wide survey results in your own library practice? Let us know in the comments.

Photo by Farrukh.

If At First You Don’t Assess, Try, Try Again

ACRLog welcomes a guest post from Katelyn Tucker & Alyssa Archer, Instruction Librarians at Radford University.

Instruction librarians are always looking for new & flashy ways to engage our students in the classroom. New teaching methods are exciting, but how do we know if they’re working? Here at Radford University, we’ve been flipping and using games for one-shot instruction sessions for a while, and our Assessment Librarian wasn’t going to accept anecdotal evidence of success any longer. We decided that the best way to see if our flipped and gamified lessons were accomplishing our goals was to evaluate the students’ completed assignments. We tried to think of every possible issue in designing the study. Our results, however, revealed problems that, in hindsight, we could have prevented. We want you to learn from our mistakes so you are not doomed to repeat them.

Our process

Identifying classes to include in this assessment of flipped versus gamified lessons was a no-brainer for us. A cohort of four sections of the same course that use identical assignment descriptions, assignment sheets, and grading rubrics meant that we had an optimal sample population. All students in the four sections created annotated bibliographies based on these same syllabi and assignment instructions. We randomly assigned two classes to receive flipped information literacy instruction and two to play a library game. After final grades had been submitted for the semester, the teaching faculty members of each section stripped identifying information from their students’ annotated bibliographies and sent them to us. We assigned each bibliography a number and then assigned two librarian coders to each paper. We felt confident that we had a failsafe study design.

Using a basic rubric (see image below), librarians coded each bibliography for three outcomes on a binary scale. Since our curriculum lists APA documentation style, scholarly source evaluation, and search strategy as outcomes for the program, we coded for competency in these three areas. This process took about two months to complete, as coding student work is time-consuming.

Assessment rubric chart

The challenges

After two librarians independently coded each bibliography, our assessment librarian ran inter-rater reliability statistics, and… we failed. We had previously used rubrics to code annotated bibliographies for another assessment project, so we didn’t spend any time reviewing the process with our experienced coders. Since we hit only around 30% agreement between coders, it’s obvious that we should have done a better job with training.
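
For anyone curious what that agreement check might look like in practice, here is a minimal sketch (not our actual analysis, and the ratings are invented for illustration) that computes simple percent agreement and Cohen’s kappa for two coders’ binary ratings on a single rubric outcome:

```python
# Hypothetical binary ratings (1 = competent, 0 = not yet competent) from two
# coders on one rubric outcome, e.g. APA documentation style. Illustration only.
coder_a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 0]
coder_b = [1, 0, 1, 1, 0, 0, 1, 1, 0, 0]

n = len(coder_a)

# Simple percent agreement: share of papers where both coders gave the same rating.
observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n

# Agreement expected by chance, from each coder's marginal rate of scoring "1".
p_a1 = sum(coder_a) / n
p_b1 = sum(coder_b) / n
expected = p_a1 * p_b1 + (1 - p_a1) * (1 - p_b1)

# Cohen's kappa corrects the observed agreement for chance agreement.
kappa = (observed - expected) / (1 - expected)

print(f"Percent agreement: {observed:.0%}")
print(f"Cohen's kappa: {kappa:.2f}")
```

Kappa is worth the extra step because two coders rating mostly “competent” can look like they agree quite a bit even when much of that agreement is chance.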

Because we had such low agreement between coders, we weren’t confident in our success with each outcome. When we compared the flipped sections to the gamified ones, we didn’t find any significant differences in any of our outcomes. Students who played the game did just as well as those who were part of the flipped sections. However, our low inter-rater reliability threw a wrench in those results.
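
To give a sense of the kind of comparison we mean (with made-up counts, not our study’s data), a chi-square test on pass/fail tallies for a single outcome is one straightforward way to check for a difference between two instruction conditions:

```python
from scipy.stats import chi2_contingency

# Hypothetical pass/fail counts for one outcome (e.g. source evaluation),
# flipped sections vs. gamified sections. Illustration only.
#                 passed  did not pass
table = [[34, 16],   # flipped sections
         [31, 19]]   # gamified sections

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p_value:.2f}")
# A large p-value means no detectable difference between the two lesson designs
# on this outcome -- though low inter-rater reliability, as we found, undercuts
# confidence in the coded counts themselves.
```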

What we’ve learned

We came to understand the importance of norming: discussing as a group what the rubric means and building meaningful conversations about how to interpret assessment data into the process. Our inter-rater reliability issues could have been avoided with detailed training and discussion. Even though our earlier coding projects made us think we were safe on this one, the length of time between assessments created some large inconsistencies.

We haven’t given up on norming: including multiple coders may be time-intensive, but when done well, it gives our team confidence in the results. The same applies to qualitative methodologies. As a side part of this project, one librarian looked at research narratives written by some participants and decided to bravely go it alone coding the students’ text in Dedoose. It was an interesting experiment, but the key lesson was to bring in more coders! Qualitative software can help identify patterns, but it’s nothing compared to a partner looking at the same data and discussing it as a team.

We also still believe in assessing output. As librarians, we don’t get too many opportunities to see how students use their information literacy skills in their written work. By assessing student output, we can actually track competency in our learning outcomes. We believe that students’ papers provide the best evidence of success or failure in the library classroom, and we feel lucky that our teaching faculty partners have given us access to graded work for our assessment projects.

Digital Badges for Library Research?

The world of higher education has been abuzz this past year with the idea of digital badges. Many see digital badges as an alternative to higher education’s system of transcripts and post-secondary degrees, which are constantly being critically scrutinized for their value and ability to demonstrate that students are ready for a competitive workforce. There have been several articles from the Chronicle of Higher Education discussing this educational trend. One such article is Kevin Carey’s “A Future Full of Badges,” published back in April. In it, Carey describes how UC Davis, a national leader in agriculture, is pioneering a digital open badge program.

UC Davis’s badge system was created specifically for undergraduate students majoring in Sustainable Agriculture and Food Systems. Their innovative system was one of the winners of the Digital Media and Learning Competition (sponsored by Mozilla and the MacArthur Foundation). According to Carey,

Instead of being built around major requirements and grades in standard three-credit courses, the Davis badge system is based on the sustainable-agriculture program’s core competencies—”systems thinking,” for example. It is designed to organize evidence of both formal and informal learning, from within traditional higher education and without.

As opposed to a university transcript, digital badges could provide a well-rounded view of a student’s accomplishments because they could take into account things like conferences attended and specific skills learned. Clearly, we’re not talking about Girl Scout badges.

Carey seems confident that digital badges aren’t simply a higher education fad. He believes that, with time, these types of systems will grow and be recognized by employers. But I’m still a bit skeptical about whether this movement will gain enough momentum to last.

But just for a moment, let’s assume that this open badge system proves to be a fixture in the future of higher education. Does this mean someday a student could get a badge in various areas of library research, such as searching Lexis/Nexis, locating a book by its call number, or correctly citing a source within a paper? Many college and university librarians struggle with getting information competency skills inserted into the curriculum in terms of learning outcomes or core competencies. And even if they are in the curriculum, librarians often struggle when it comes to working with teaching faculty and students to ensure that these skills are effectively being taught and graded. Perhaps badges could be a way for librarians to play a significant role in the development and assessment of student information competency skills.

Would potential employers or graduate school admissions departments be impressed with a set of library research badges on someone’s application? I have no idea. But I do know that as the amount of content available via the Internet continues to grow exponentially, it becomes ever more important that students possess the critical thinking skills necessary to search for, find, assess, and use information. If digital badges do indeed flourish within higher education, I hope that library research will be a vital part of the badge sash.

Unpacking Assessment

ACRLog welcomes a guest post from Lisa Horowitz, Assessment Librarian at MIT Libraries.

As an assessment librarian, I am always looking for different ways to think about assessment. Most librarians aren’t statisticians, and for some, even the word itself, assessment, is daunting in that its meaning is unclear. Additionally, it’s such a broad topic that many of us are interested in only specific angles: learning outcomes, collection assessment, return on investment, the Value of Academic Libraries, and so on.

So what is assessment, when you come right down to it? Some librarians where I work find that the terms assessment, evaluation, statistics and data seem to be used interchangeably. The most meaningful way for me to approach the topic is to think of assessment as quality control. It is a way to look at your services, your workflows, your teaching — whatever — to determine what works and what can be improved. In that sense, yes, it is also evaluation. I’ve seen explanations that differentiate between assessment and evaluation, but I tend to just use the term assessment.

Statistics that are gathered for whatever reason, for ARL or ACRL, or for accreditation or other purposes, are actually gathered to assess something. Sometimes they are separated from that assessment because often those who gather these statistics are not the ones who do the assessment. About a dozen years ago, I was on a team that was involved in assessing our reference services while a different team was analyzing our reference-statistics-gathering procedures, until we all realized that the procedures we used to gather statistics would really depend on what we were trying to learn about our services; in other words, we needed to know what we were trying to assess in order to determine what statistics would be useful. Statistics should be inextricably tied to what you are assessing.

The use of the word “data” in libraries can be equally confusing. In the case of assessment, data are the actual numbers, or anecdotes even, that are used to assess. The data themselves are not assessment, but the use of those data is. Sometimes collections librarians see their data-gathering as separate from assessment. Sometimes instruction librarians see their evaluations as unrelated to assessment of library services as a whole. Sometimes librarians from different areas will collect different data to represent something (e.g., the number of items in a collection), but because they use different sources, they come up with different numbers. All of this relates to assessment, and ideally, it should all support library planning, resource allocation and project development.

Assessment, if done well, shows how services, workflows, collections, etc., can be improved. At the same time, it also should contribute to the library’s planning efforts. Let’s say that a library has done collection assessment which shows that a particular collection needs to be developed because of a new area of research among the faculty. At the same time, the instruction assessment has shown that students’ learning outcomes could be improved if information literacy training efforts were doubled, while assessment of the workflows at the service desks shows that books are getting to the stacks more efficiently but interlibrary loans are taking longer than users expect. The point of assessment is not only to use these results to improve those particular areas, but also to inform decisions made by senior management about resource allocation and strategic directions. In other words, assessment should help determine priorities by comparing needs uncovered by assessment with strategic goals, and by advocating for resources not only where they are most needed but where they advance the strategic goals of the library.

If you are new to assessment, there are a few articles that you may want to look at.
• Tina E. Chrzastowski (2008): “Assessment 101 for Librarians: A Guidebook,” Science & Technology Libraries 28:1-2, 155-176.
• Lisa R. Horowitz (2009): “Assessing Library Services: A Practical Guide for the Nonexpert,” Library Leadership & Management 23:4, 193-203.

Both include bibliographies that may be helpful, as well as links to useful tools, blogs, and organizations.

What does assessment mean to you? What tools do you use? What have you done that helps staff at your library be more comfortable with assessing library services?

Clickers, or Does Technology Really Cure What Ails You?

ACRLog welcomes a guest post from Cori Strickler, Information Literacy Librarian at Bridgewater College.

During idle times at the reference desk, or when the students are gone for a break, I find myself creating instruction “wish lists” of tools or gadgets that I’d love to have for my sessions. One item that has been on my list for a few years now is clickers, or student response systems as they are officially called. In academic classrooms they are used for taking attendance, giving quizzes, or other more informal assessments. I saw clickers as a way to solve one of my most basic and frustrating problems: getting students to be engaged during sessions. Students have little desire to participate in library sessions, and trying to get them to comment on their library experience is like pulling teeth, except that the process is a lot more painful for me than it is for the students.

For those of you who haven’t heard of clickers before, they are little remote-control-like devices that allow students to answer multiple-choice questions by sending their responses to the computer for real-time analysis. They are sort of like the devices they use on Who Wants to Be a Millionaire to poll the audience.

My library doesn’t have the budget for clickers, but this semester through a chance discussion with the director of the health services department, I learned that the college received a grant for 100 TurningPoint clickers and the necessary software. The director rarely needed all of the clickers at the same time, so she offered about fifty for me to use during my instruction sessions.

So I now have access to a tool I had coveted for years, but that turned out to be the easy part. I still have to figure out how to meaningfully integrate this technology into my sessions.

My overall goals are relatively simple. I want to encourage student involvement in any way possible so that I don’t have to lecture for fifty minutes straight. My voice just can’t handle the pressure. To be successful, though, I need to be purposeful about how I include the clickers. I can’t just stick a clicker quiz at the beginning of a session and assume that the students will suddenly be overwhelmed with a desire to learn everything there is to know about the library. Most faculty who schedule a library instruction session have a particular purpose in mind, so I also need to be sure that I fulfill their expectations.

After much consideration, I decided not to add the clickers to all my sessions. Instead, I focused on first-year students, who hopefully aren’t quite as jaded as the upperclassmen and haven’t already decided that they know everything about research.

For my first clicker experiment, I used them with a quiz to help me gauge the class’s knowledge of the library. I also decided to use them as an alternative way to administer our session evaluation survey. Ultimately, I had mixed results with the clickers. The students did respond better than before, but I did not get full participation. While that wasn’t a big issue for the quiz, the lack of participation was a problem when students were asked to complete the evaluation survey. For most survey questions I was missing responses from five or six students, a larger number than with the paper surveys, which could potentially affect my results.

Their lack of participation could be due to a number of reasons. The students claimed they were familiar with the clickers, but they did not seem to be as adept as they claimed. Also, due to my inexperience with the clickers there might have been a malfunction with the devices themselves. Or, maybe the students just didn’t want to engage, especially since there was still no incentive to participate. When I looked back through the survey results, they did not seem to indicate any greater amount of satisfaction regarding the sessions.

This first experience with the clickers left me a bit skeptical, but I decided to try them again. This time, I created brief quizzes on brainstorming keywords and on types of plagiarism. My second class was smaller than the first, and engagement seemed better. The clickers also seemed to let students be more honest on the surveys: they seemed more comfortable indicating their disinterest in the information presented, though the results also indicated that they saw its overall value.

I have used the clickers in about twelve sessions this semester, and overall they were well received by the students. However, I am not completely sure that they add significantly to engagement. I also have not seen any indication in the surveys that my sessions are better or worse with their inclusion. I have discovered, though, that some sessions and topics are better suited to clickers than others. Upper-level classes where I am trying to show specific resources do not lend themselves as readily to clickers, and the time may be better spent on other activities or instruction.

I am still in the process of learning how clickers will fit into my classes, but I would generally call them a success, if only because they make the survey process easier. They aren’t, though, the panacea for student engagement I had hoped for. Activity type and student familiarity are essential variables that appear to affect clicker success.

Unfortunately, the overall nature of one shot instruction seems to be the greatest contributor to student disengagement. Student and faculty buy-in is the necessary component for library instruction success, whether it includes clickers or not.