Category Archives: Assessment

Facilitating student learning and engagement with formative assessment

Information literacy instruction is a big part of my job. For a little context, I teach somewhere in the range of 35-45 classes per semester at my small liberal arts college. While a few of the sessions might sometimes be repeats for a course with multiple sections, they’re mostly unique classes running 75 minutes each. I’ve been teaching for some time now, and while I’m a better teacher than I was ten years ago, five years ago, or even last year, there’s always plenty of room for improvement, of course. A few months ago, I wrote a post about reflection on and in my teaching, about integrating “more direct discussion of process and purpose into my classes […] to lay bare for students the practice, reflection, and progression that complicates [information literacy] work, but also connects the gaps, that brings them closer to crossing the threshold.” Each year, I’ve been devoting more attention to trying to do just that: integrate process and purpose into my classes to improve student learning and engagement.

It didn’t start out as anything momentous, just a little bit at a time. Initially, it was only a small activity here or there to break things up, to give students a chance to apply and test the concept or resource under discussion, and to scaffold to the next concept or resource. I would demo a search strategy or introduce a new database and then ask students to try it out for their own research topic. I would circulate through the class and consult with students individually as needed. After a few minutes of individual exploration, we would come back together to address questions or comments and then move on to the next resource, strategy, or concept. This appeared to be working well enough. Students seemed to be on board and making progress. Breaking a class into more discrete chunks and moderating the pace a bit gave students more of a chance to process and develop along the way. Spacing out the hands-on work kept students engaged all class long, too.

For some time, I’ve started classes by reviewing the assignment at hand to define and interpret related information needs, sometimes highlighting possible areas of confusion students might encounter. Students expressed appreciation for this kind of outlining and the shape and structure it gave them. I felt a shift, though, when I started asking students, rather than telling them, about their questions and goals at the outset of a class. Less “Here are the kinds of information sources we’ll need to talk about today” and more “What kinds of information do you think you need to know how to access for this assignment? What do you hope that information will do for you? What have been sticky spots in your past research experiences that you want to clarify?” I wanted students to acknowledge their stake in our class goals, and this conversation modeled setting a scope for learning and information needs. We then used our collective brainstorm as a guiding plan for our class. More often than not, students offered the same needs, questions, and problems that I had anticipated and used to plan the session, but it felt more dynamic and collaboratively constructed this way. (Of course, I filled in the most glaring gaps when needed.)

So why not, I finally realized one day, extend the reach of this approach into the entire class? While scaffolding instruction with small activities had helped students process, develop, and engage, I was still leading the charge at the pace I set. But what if we turned things around? What if, essentially, they experimented on their own in order to determine something that worked for them (and why!) and shared their thoughts with the class? What if we constructed the class together? Rather than telling them what to do at the outset of each concept chunk, I could first ask them to investigate. Instead of demonstrating, for example, recommended search strategies and directing students to apply them to their own research, I could ask students to experiment first with multiple search strategies in a recommended database for a common topic in order to share with the class the strategies they found valuable. The same goes for navigating, filtering, and refining search results, for evaluating sources and selecting the most relevant, or for any concept or resource, for that matter. Why not, I thought, ask students to take a first pass and experiment? We could then share ideas as a class, demonstrating and discussing the strengths and weaknesses of their tactics along the way, collaboratively building a list of best practices. Students could then revisit their work, applying those best practices where needed.

This kind of experiment-first-then-build-together-then-revise approach is simple enough, but its advantages feel rather significant to me. It makes every class exciting, because it’s—in part, at least—unique and responsive to precisely those students’ needs. Of course I have a structure and goals in mind, prepared notes in hand, but it’s a flexible approach. While it’s not appropriate for every class, the low stakes/low prep makeup is readily applicable to different scenarios and content areas. The students and I are actively involved in constructing the work of the class together. Everyone has a chance to contribute and learn from each other. In particular, more experienced students get to share their knowledge while less experienced students learn from their peers. The expectation to contribute helps students pay attention to the work and to each other. Its scaffolded and iterative design helps students digest and apply information. Its reflective nature reveals for students practice and process, too; it models the metacognitive mindset behind how to learn, how to do research. I don’t mean to get too ebullient here. It’s not a panacea. But it has made a difference. It’s probably no surprise that this kind of teaching has required a degree of comfort, a different kind of classroom leadership, and a different kind of instinct that would have been much, much harder to conjure in my earlier teaching.

While I wasn’t aware of it initially and didn’t set out to make it so, I now recognize this as formative assessment. Not only do these small activities increase opportunities for engagement and learning, they serve as authentic assessment of students’ knowledge and abilities in the moment. They provide evidence of student learning and opportunities for action immediately. With that immediate input, I can adjust the nature and depth of instruction appropriately at the point of need. All in a way that’s authentic to and integrated in the work of the class.

The informality of this approach is part of what makes it flexible, low prep, and engaging. It’s such a rich site for documentation and evaluation of student learning, though. I want to capture the richness of this knowledge, demonstrate the impact of instruction, document students’ learning. But I’m struggling with this. I haven’t yet figured out how to do this effectively and systematically. Some formative assessments result in student work artifacts that can illustrate learning or continuing areas of difficulty, but the shape my implementation has so far taken results in less tangible products. At the ACRL 2015 conference a few weeks ago, I attended a great session led by Mary Snyder Broussard, Carrie Donovan, Michelle Dunaway, and Teague Orblych: “Learning Diagnostics: Using Formative Assessment to Sustainably Improve Teaching & Learning.” When I posed this question in the session, Mary suggested using a “teacher journal” to record my qualitative reflections and takeaways after each class and to notice trends over time. I’m interested in experimenting with this idea, but I’m still searching for something that might better capture student learning, rather than only my perception of it. I’m curious to read Mary’s book Snapshots of Reality: A Practical Guide to Formative Assessment in Library Instruction, as well as Michelle and Teague’s article “Formative Assessment: Transforming Information Literacy Instruction” to see if I might be able to grab onto or adapt any other documentation practices.

Do you use formative assessment in your teaching? How do you document this kind of informal evidence of student learning? I’d love to hear your thoughts in the comments.

Digging Into Institutional Data

I have both a professional and scholarly interest in how the students at the college where I work do their academic work, and (of course) whether and how they use the library. In my own research I’m much more likely to use qualitative than quantitative methods. I prefer interviews and other qualitative methods because they offer so much more depth and detail than surveys, though of course that comes at the expense of breadth of respondents. Still, I appreciate learning more about our students’ lives; these compelling narratives can be used to augment what we learn from surveys and other broad but shallow methods of data collection.

Not *that* kind of survey

But even though I love a good interview, I can also be a part-time numbers nerd: I admit to enjoying browsing through survey results occasionally. Recently I was working on a presentation for a symposium on teaching and technology at one of the other colleges in my university system and found myself hunting around the university’s Office of Institutional Research and Assessment website for some survey data to help contextualize students’ use of technology. My university runs a student experience survey every 2 years, and until last week I hadn’t realized that the data collected this past Spring had just been released.

Reader, I nearly missed dinnertime as I fell down the rabbit hole of the survey results. It’s a fascinating look at student data points at the 19 undergraduate institutions that make up the university. There’s the usual info you’d expect from the institutional research folks — how many students are enrolled at each college, part-time vs. full-time students, race and ethnicity, and age, to name a few examples. But this survey asks students lots of other questions, too. How long is their commute? Are they the first in their family to attend college? How many people live in their household? Do they work at a job and, if so, how many hours per week? How often do they use campus computer labs? Do they have access to broadband wifi off-campus? If they transferred to their current college, why? How do they prefer to communicate with faculty and administrators?

My university isn’t the only one that collects this data, of course. I imagine there are homegrown and locally-administered surveys at many colleges and universities. There’s also the National Survey of Student Engagement, abbreviated NSSE (pronounced “Nessie” like the mythical water beast), which collects data from 1,500+ American and Canadian colleges and universities. The NSSE website offers access to the data via a query tool, as well as annual reports that summarize notable findings (fair warning: the NSSE website can be another rabbit hole for the numbers nerds among us). There’s also the very local data that my own college’s Office of Assessment and Institutional Research collects. This includes the number of students enrolled in each of the college’s degree programs, as well as changes through time. Retention and graduation rates are there for browsing on our college website, too.

What does all of this student data collected by offices of institutional research have to do with academic libraries? Plenty! We might use the number of students enrolled in a particular major to help us plan how to work with faculty in that department around information literacy instruction, for example. The 2012 annual NSSE report revealed that students often don’t buy their course textbooks because of the expense (as have other studies), findings that librarians might use to justify programs for faculty to create or curate open educational resources, as librarians at Temple University and the University of Massachusetts Amherst have done. And at my library we’re using data on how and where students do their academic work outside of the library, both the university-collected survey results as well as qualitative data collected by me and my colleagues, to consider changes to the physical layout to better support students doing their academic work.

Have you ever found yourself captivated by institutional research data? How have you used college or university-wide survey results in your own library practice? Let us know in the comments.

Photo by Farrukh.

If At First You Don’t Assess, Try, Try Again

ACRLog welcomes a guest post from Katelyn Tucker & Alyssa Archer, Instruction Librarians at Radford University.

Instruction librarians are always looking for new & flashy ways to engage our students in the classroom. New teaching methods are exciting, but how do we know if they’re working? Here at Radford University, we’ve been flipping and using games for one-shot instruction sessions for a while, and our Assessment Librarian wasn’t going to accept anecdotal evidence of success any longer. We decided that the best way to see if our flipped and gamified lessons were accomplishing our goals was to evaluate the students’ completed assignments. We tried to think of every possible issue in designing the study. In hindsight, however, our results had issues that could have been prevented. We want you to learn from our mistakes so you are not doomed to repeat them.

Our process

Identifying classes to include in this assessment of flipped versus gamified lessons was a no-brainer for us. A cohort of four sections of the same course that use identical assignment descriptions, assignment sheets, and grading rubrics meant that we had an optimal sample population. All students in the four sections created annotated bibliographies based on these same syllabi and assignment instructions. We randomly assigned two classes to receive flipped information literacy instruction and two to play a library game. After final grades had been submitted for the semester, the teaching faculty members of each section stripped identifying information from their students’ annotated bibliographies and sent them to us. We assigned each bibliography a number and then assigned two librarian coders to each paper. We felt confident that we had a failsafe study design.

Using a basic rubric (see the chart below), librarians coded each bibliography for three outcomes using a binary scale. Since our curriculum lists APA documentation style, scholarly source evaluation, and search strategy as outcomes for the program, we coded for competency in these three areas. This process took about two months to complete, as coding student work is a time-consuming process.

Assessment rubric chart

The challenges

After two librarians independently coded each bibliography, our assessment librarian ran inter-rater reliability statistics, and… we failed. We had previously used rubrics to code annotated bibliographies for another assessment project, so we didn’t spend any time reviewing the process with our experienced coders. As we hit only around 30% agreement between coders, it is obvious that we should have done a better job with training.
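For anyone curious what that check involves, here is a minimal sketch, with invented scores rather than our study data, of how percent agreement and Cohen’s kappa might be calculated for two coders’ binary rubric codes on a single outcome. It simply makes the calculation concrete: kappa discounts the agreement you would expect from chance alone, which is why raw agreement can look more flattering than the reliability statistic.

```python
# Illustrative sketch: percent agreement and Cohen's kappa for two coders'
# binary rubric codes (1 = competent, 0 = not) on one outcome.
# The score lists below are invented for demonstration purposes.

def percent_agreement(coder_a, coder_b):
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return matches / len(coder_a)

def cohens_kappa(coder_a, coder_b):
    n = len(coder_a)
    p_observed = percent_agreement(coder_a, coder_b)
    # Chance agreement: probability both code 1 plus probability both code 0.
    p_a1 = sum(coder_a) / n
    p_b1 = sum(coder_b) / n
    p_chance = p_a1 * p_b1 + (1 - p_a1) * (1 - p_b1)
    return (p_observed - p_chance) / (1 - p_chance)

coder_a = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]  # hypothetical codes, coder 1
coder_b = [1, 1, 0, 1, 0, 0, 1, 0, 0, 1]  # hypothetical codes, coder 2

print(f"Agreement: {percent_agreement(coder_a, coder_b):.0%}")
print(f"Cohen's kappa: {cohens_kappa(coder_a, coder_b):.2f}")
```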

Because we had such low agreement between coders, we weren’t confident in our success with each outcome. When we compared the flipped sections to the gamified ones, we didn’t find any significant differences in any of our outcomes. Students who played the game did just as well as those who were part of the flipped sections. However, our low inter-rater reliability threw a wrench in those results.
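To show the kind of comparison we mean, here is a small sketch, again with made-up counts rather than our data, of how one might test whether the proportion of bibliographies coded competent on an outcome differs between flipped and gamified sections, assuming SciPy is available.

```python
# Illustrative sketch with hypothetical counts: does the share of students
# coded "competent" on an outcome differ between flipped and gamified sections?
from scipy.stats import chi2_contingency

#                 competent  not competent
observed = [[34, 16],   # hypothetical flipped sections
            [31, 19]]   # hypothetical gamified sections

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.2f}, p = {p_value:.3f}")
# A p-value above 0.05 would be consistent with what we saw: no significant
# difference between the two teaching approaches on this outcome.
```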

What we’ve learned

We came to understand the importance of norming: discussing among coders what the rubric means and building meaningful conversations about how to interpret the assessment data into the process. Our inter-rater reliability issues could have been avoided with detailed training and discussion. Even though our earlier coding projects made us think we were safe on this one, the length of time between assessments created some large inconsistencies.

We haven’t given up on norming: including multiple coders may be time-intensive, but, when done well, it gives our team confidence in the results. The same applies to qualitative methodologies. As a side part of this project, one librarian looked at research narratives written by some participants and decided to bravely go it alone on coding the students’ text using Dedoose. While it was an interesting experiment, the key lesson learned was to bring in more coders! Qualitative software can help identify patterns, but it’s nothing compared to a partner looking at the same data and discussing it as a team.

We also still believe in assessing output. As librarians, we don’t get too many opportunities to see how students use their information literacy skills in their written work. By assessing student output, we can actually track competency in our learning outcomes. We believe that students’ papers provide the best evidence of success or failure in the library classroom, and we feel lucky that our teaching faculty partners have given us access to graded work for our assessment projects.

Digital Badges for Library Research?

The world of higher education has been abuzz this past year with the idea of digital badges. Many see digital badges as an alternative to higher education’s system of transcripts and post-secondary degrees, which are constantly being critically scrutinized for their value and their ability to demonstrate that students are ready for a competitive workforce. There have been several articles from the Chronicle of Higher Education discussing this educational trend. One such article is Kevin Carey’s “A Future Full of Badges,” published back in April. In it, Carey describes how UC Davis, a national leader in agriculture, is pioneering a digital open badge program.

UC Davis’s badge system was created specifically for undergraduate students majoring in Sustainable Agriculture and Food Systems. Their innovative system was one of the winners of the Digital Media and Learning Competition (sponsored by Mozilla and the MacArthur Foundation). According to Carey,

Instead of being built around major requirements and grades in standard three-credit courses, the Davis badge system is based on the sustainable-agriculture program’s core competencies—”systems thinking,” for example. It is designed to organize evidence of both formal and informal learning, from within traditional higher education and without.

As opposed to a university transcript, digital badges could provide a well-rounded view of a student’s accomplishments because they could take into account things like conferences attended and specific skills learned. Clearly, we’re not talking about Girl Scout badges.

Carey seems confident that digital badges aren’t simply a higher education fad. He believes that with time, these types of systems will grow and be recognized by employers. But I’m still a bit skeptical about whether this movement will gain enough momentum to last.

But just for a moment, let’s assume that this open badge system proves to be a fixture in the future of higher education. Does this mean someday a student could get a badge in various areas of library research, such as searching Lexis/Nexis, locating a book by its call number, or correctly citing a source within a paper? Many college and university librarians struggle with getting information competency skills inserted into the curriculum in terms of learning outcomes or core competencies. And even if they are in the curriculum, librarians often struggle when it comes to working with teaching faculty and students to ensure that these skills are effectively being taught and graded. Perhaps badges could be a way for librarians to play a significant role in the development and assessment of student information competency skills.

Would potential employers or graduate school admissions departments be impressed with a set of library research badges on someone’s application? I have no idea. But I do know that as the amount of content available via the Internet continues to grow exponentially, it becomes ever more important that students possess the critical thinking skills necessary to search, find, assess, and use information. If digital badges do indeed flourish within higher education, I hope that library research will be a vital part of the badge sash.

Unpacking Assessment

ACRLog welcomes a guest post from Lisa Horowitz, Assessment Librarian at MIT Libraries.

As an assessment librarian, I am always looking for different ways to think about assessment. Most librarians aren’t statisticians, and for some, even the word itself, assessment, is daunting in that its meaning is unclear. Additionally, it’s such a broad topic that many of us are interested in only specific angles: learning outcomes, collection assessment, return on investment, the Value of Academic Libraries, and so on.

So what is assessment, when you come right down to it? Some librarians where I work find that the terms assessment, evaluation, statistics and data seem to be used interchangeably. The most meaningful way for me to approach the topic is to think of assessment as quality control. It is a way to look at your services, your workflows, your teaching — whatever — to determine what works and what can be improved. In that sense, yes, it is also evaluation. I’ve seen explanations that differentiate between assessment and evaluation, but I tend to just use the term assessment.

Statistics that are gathered for whatever reason, for ARL or ACRL, or for accreditation or other purposes, are actually gathered to assess something. Sometimes they are separated from that assessment because often those who gather these statistics are not the ones who do the assessment. About a dozen years ago, I was on a team that was involved in assessing our reference services while a different team was analyzing our reference-statistics-gathering procedures, until we all realized that the procedures we used to gather statistics would really depend on what we were trying to learn about our services; in other words, we needed to know what we were trying to assess in order to determine what statistics would be useful. Statistics should be inextricably tied to what you are assessing.

The use of the word “data” in libraries can be equally confusing. In the case of assessment, data are the actual numbers, or even anecdotes, that are used to assess. The data themselves are not assessment, but the use of those data is. Sometimes collections librarians see their data-gathering as separate from assessment. Sometimes instruction librarians see their evaluations as unrelated to assessment of library services as a whole. Sometimes librarians from different areas will collect different data to represent the same thing (e.g., the number of items in a collection), but because they use different sources, they come up with different numbers. All of this relates to assessment, and ideally, it should all support library planning, resource allocation, and project development.

Assessment, if done well, shows how services, workflows, collections, etc., can be improved. At the same time, it also should contribute to the library’s planning efforts. Let’s say that a library has done collection assessment which shows that a particular collection needs to be developed because of a new area of research among the faculty. At the same time, the instruction assessment has shown that students’ learning outcomes could be improved if information literacy training efforts were doubled, while assessment of the workflows at the service desks shows that books are getting to the stacks more efficiently but interlibrary loans are taking longer than users expect. The point of assessment is not only to use these results to determine how to improve those particular areas; the results should also contribute to decisions made by senior management about resource allocation and strategic directions. In other words, assessment should help determine priorities by comparing needs uncovered by assessment with strategic goals, and by advocating for resources not only where they are most needed but where they advance the strategic goals of the library.

If you are new to assessment, there are a few articles that you may want to look at.
• Tina E. Chrzastowski (2008): “Assessment 101 for Librarians: A Guidebook,” Science & Technology Libraries 28:1-2, 155-176.
• Lisa R. Horowitz (2009): “Assessing Library Services: A Practical Guide for the Nonexpert,” Library Leadership & Management 23:4, 193-203.

Both of these have bibliographies that may be helpful, as well as links to tools, blogs, and organizations that may be useful.

What does assessment mean to you? What tools do you use? What have you done that helps staff at your library be more comfortable with assessing library services?