Category Archives: Assessment

Following the road of assessment

This Fall semester has been taking off like a rocket. It’s been a little less than a month, but library instruction has been taking up a good chunk of my time. At my institution, American University, we have a program called College Writing. This program requires all incoming freshmen to take at least one section of College Writing.

Every faculty member who teaches College Writing is paired with a librarian. At least one library instruction session is required, and it’s up to us to shape the lesson so that it’s relevant to the students’ current assignment.

This semester is a bit different. I have a total of 18 sections of College Writing, compared to the nine sections I had last Fall. I was prepared for a busy semester. Oh boy, has it been busy, and it’s only been two weeks!

I could be as detailed as I want about my routine, but it’s basically a chain of communication. I ask the faculty member about learning outcomes, what they want out of this library instruction day, what skill level their students are at, and whether the students are quiet or participate. Details like these help me out a lot, since I will only see the students in the classroom once or twice in the semester.

As I scheduled classes, reserved rooms, and worked on my class outlines, I struggled with how I would incorporate assessment into my lessons. Assessment is a topic I have been thinking about for a while. To be honest, this was a subject that I had been avoiding because it was something that made me uneasy. I have always told myself “I’ll do it next semester” or “I’ll find more information about it later.”

However, it’s been a year since I started my job at American, and I decided that this semester it was time to incorporate assessment into my library instruction. When I think of assessment, I tend to think of a ton of data, a desk full of papers everywhere, and an endless amount of work (OK, I like to exaggerate). Now, I do have some forms of assessment in my classes, but it’s in the form of the questions I ask the students in order to evaluate their familiarity not only with the library, but also with the resources that we are using in class.

Assessment comes in many forms, but I specifically had one method in mind. Over the summer, I worked with another colleague on library instruction for the Summer Transition Enrichment Program (STEP). This program prepares incoming freshmen for academic success. STEP is a seven-week residential program that helps students with the transition from high school to college. They have a class that is very similar to a College Writing class, meaning they have a research paper due by the end of the program. One of the components of that class is a library instruction day. As my colleague and I started preparing to co-teach one of the classes, she asked what form of assessment I do for my College Writing classes.

Immediately, I felt ashamed. All that time I had put assessment off, and this was the moment when I finally had to own up to it. However, I have awesome colleagues who don’t poke (too much) fun at me. She talked about the post-class questionnaire that she usually did with her students. Together, we came up with a couple of questions for the students in the STEP class. It was not a long process whatsoever, and I came to see that there is actually nothing scary about it, contrary to what I had thought.

There are many different types of assessment, many of them more complicated and time-consuming than my little questionnaire. However, I wanted to start small and with something I was comfortable with. My library instruction classes only started last week, but I remember getting back the questionnaires and leaving them on my desk for a couple of hours. I was afraid to look at them. What if the students did not learn anything? What if they hated me? What if I was the worst librarian ever?

After a couple of hours, I needed to log my classes into our stats. I counted the questionnaires and looked through them. To my surprise, the students did well. Now, this is an assessment to help me analyze what the students had trouble comprehending and also the areas where I need to do better.

And guess what happened? I found one area that I needed to explain better and spend a little more time on. It’s only the beginning of the semester and I have already found ways to improve, and this is what it’s really about. To me, assessment is an opportunity to learn about your teaching and improve as you go along.

As someone who is new to this, I want to continue to learn about assessment. There are a few resources that one can turn to:

-Look at your own institution to see if they offer any workshops on assessment. What resources do they offer to help their staff or faculty?

-Research other institutions to see if they have assessment in place or an assessment toolkit.

-Research the literature on instruction and assessment to see how other institutions go about it.

Finally, your colleagues will be your most valuable resources. What assessment do they do? Take them out for coffee and ask them!

I still have a couple more College Writing classes this semester, but I am going to make it my goal to incorporate even more assessment into next semester’s classes. In other words, I am going to hold myself accountable. Next semester, I will write another post on how I plan to incorporate more assessment into my teaching, but I also want to hear from our readers: what assessment do you do for library instruction? Stay tuned!

Dear diary: Using a reflective teaching journal for improvement and assessment?

A few months ago, I posted about how I’ve shifted to using more constructivist activities and formative assessments in classes. I wrote about how I think these pedagogical frameworks have helped me to strengthen student learning and engagement. I said things about how–by developing opportunities for students to experiment in classes with tools, strategies, and concepts in order to construct their understanding, at least in part–they can deepen and expand their learning. And I wrote, too, about how these activities serve as informal assessments of students’ knowledge, such that I can adjust instruction in real time to better meet students where they are. I’m still feeling rather enthusiastic about all of this. I’m sure there are a million ways for me to do this better still, but in every instance so far this has been an invaluable shift in my thinking and teaching, not to mention a welcome revitalization for my frame of mind.

The data I’m informally gathering have helped me learn a lot about my students and my teaching. About where they’re coming from and how they approach and interpret concepts and strategies. About what I assume or where we don’t connect. I worry, though, that I’m not maximizing the data. I want to grab hold of it a little more and put it to more use. The activities, approaches, and assessments I’ve been doing, though, are largely informal and the data sometimes feel fleeting and anecdotal. Without tangible artifacts of student work (such as worksheets, write-ups, polls, quizzes, or papers) to ground my analysis, I’ve been struggling with how to do that. Couldn’t I somehow compile it across classes for broader understanding of student learning? If I could analyze it more rigorously, could I better gauge the effectiveness of my pedagogy? I want to use it more thematically and systematically to inform improvements I can make in the classroom, assess and document students’ learning, and (hopefully!) demonstrate the impact of instruction. So how do I effectively turn this into recordable data for documentation, analysis, and reflection?

At a session at the ACRL conference this past spring, it was suggested to me that I try using a reflective teaching journal. If you’re like me, the skeptical (or even cynical) voice in your head just kicked in. A reflective teaching journal? Maybe it sounds a little hokey. I admit that it did to me. But then I started thinking about the intensively qualitative nature of the data I’m interested in. I started thinking about how productive reflection often is for me. And then I read Elizabeth Tompkins’ article, recommended to me by a colleague, which opened my eyes a bit to what shape(s) a teaching journal might take.

In “A reflective teaching journal: An instructional improvement tool for academic librarians,” Tompkins reviewed relevant literature and described her own experience keeping a journal to document and reflect on instruction. A reflective teaching journal isn’t the same as a diary or a log, Tompkins noted. A journal brings together the “personal reflections” of a diary with the “empirical descriptions” of a log in order to “examine experiences, and to pose questions and solutions for reflection and improvement.” Tompkins reviewed a variety of journaling methods, as described in the literature:

  • Hobson (1996) used a double-entry format to “separate out descriptive writing from reflections. For example, an author would describe an experience on the left side of the journal while placing his or her reflections on the right.”
  • Shepherd (2006) used guiding questions to “make sense of complex situations.” For example:
    • “How do I feel about this?”
    • “What do I think about this?”
    • “What have I learned from this?”
    • “What action will I take as a result of my lessons learned?”
    • “What have I learned from what I’ve done?”
    • “What have I done with what I learned?”
  • Gorman (1998) concentrated on “concrete issues that were problematic in his classroom.” The journal also “served as a record keeper, capturing his students’ progress before and after he instituted new instruction techniques.”
  • Jay and Johnson (2002) classified three levels of reflection: descriptive, comparative, and critical.
    • “Central to the descriptive phase is asking questions about what is taking place. […] It is crucial to find significance in the problem under consideration. It is important to separate out the relevant facts with sufficient detail to avoid jumping to conclusions.”
    • “Comparative reflection involves looking at the area of concern from a variety of viewpoints. […] Examining a situation from the outlook of others may result in uncovering implications that may otherwise have been missed.”
    • “Employ critical reflection to search for the deeper meaning of a situation. […] Contains an element of judgment, allowing the practitioner to look for the most beneficial method of resolving a problem. Ideally, critical reflection will lead the educator to develop a repertoire of best practices. […] Not the ‘last step,’ but rather ‘the constant returning to one’s own understanding of the problem at hand.’”

Still not convinced? If this seems cheesy or prescriptive, I feel you. Or maybe it seems like nothing special. Tompkins cited one critic who “dismisses reflection as a trendy buzzword for merely thinking about what one is doing.” What’s the big deal, right? To me it’s partly about intentionality. As E.M. Forster wrote, “How can I tell what I think till I see what I say?” I want to increase and focus my attention and devote more time and mental space to processing. Time and mental space are always in short supply, it seems, so the structure of a journal feels like it might force my hand. It’s also about data collection. I want to try to move from the instance and the anecdotal to the bigger picture and the systematic. In her article, Tompkins concentrates on using journals for instructional improvements, and therefore the instructor’s perspective. Students are inherent therein, but I hope to spotlight the student perspective and learning more.

So I’m going to give it a shot. I’m not yet committed to any single approach, other than the doing of it. So far, I seem to tend toward models of guiding questions with descriptive, comparative, and critical lenses. I plan to experiment with different structures, though, as described by Tompkins and others–or make it up as I go–and see what works, as long as I can work toward the goals I have in mind:

  • Document what I’m doing and learning so that it’s less transitory
  • Direct and heighten my attention to what I care about in the classroom, what works and doesn’t, what helps students
  • Facilitate my thoughts on how to teach better
  • Capture evidence of student learning in individual classes and across classes
  • Consider how this work demonstrates the value that the library and librarians contribute to student learning
  • Generally try to connect some dots

Your thoughts? How do you grab hold of your daily teaching and learning experiences and make meaning of them? I’d love to hear your ideas in the comments.

Facilitating student learning and engagement with formative assessment

Information literacy instruction is a big part of my job. For a little context, I teach somewhere in the range of 35-45 classes per semester at my small liberal arts college. While a few of the sessions might sometimes be repeats for a course with multiple sections, they’re mostly unique classes running 75 minutes each. I’ve been teaching for some time now and while I’m a better teacher than I was ten or five years ago or even last year, there’s always plenty of room for improvement of course. A few months ago, I wrote a post about reflection on and in my teaching, about integrating “more direct discussion of process and purpose into my classes […] to lay bare for students the practice, reflection, and progression that complicates [information literacy] work, but also connects the gaps, that brings them closer to crossing the threshold.” Each year, I’ve been devoting more attention to trying to do just that: integrate process and purpose into my classes to improve student learning and engagement.

It didn’t start out as anything momentous, just a little bit all the time. Initially, it was only a small activity here or there to break things up, to give students a chance to apply and test the concept or resource under discussion, and to scaffold to the next concept or resource. I would demo a search strategy or introduce a new database and then ask students to try it out for their own research topic. I would circulate through the class and consult individually as needed. After a few minutes of individual exploration, we would come back together to address questions or comments and then move on to the next resource, strategy, or concept. This appeared to be working well enough. Students seemed to be on board and making progress. Breaking a class into more discrete chunks and taking a more measured pace gave students more of a chance to process and develop along the way. Spacing out the hands-on work kept students engaged all class long, too.

For some time, I’ve started classes by reviewing the assignment at hand to define and interpret related information needs, sometimes highlighting possible areas of confusion students might encounter. Students expressed appreciation for this kind of outlining and the shape and structure it gave them. I felt a shift, though, when I started asking students, rather than telling them, about their questions and goals at the outset of a class. Less Here are the kinds of information sources we’ll need to talk about today and more What kinds of information do you think you need to know how to access for this assignment? What do you hope that information will do for you? What have been sticky spots in your past research experiences that you want to clarify? I wanted students to acknowledge their stake in our class goals and this conversation modeled setting a scope for learning and information needs. We then used our collective brainstorm as a guiding plan for our class. More often than not, students offered the same needs, questions, and problems that I had anticipated and used to plan the session, but it felt more dynamic and collaboratively constructed this way. (Of course, I filled in the most glaring gaps when needed.)

So why not, I finally realized one day, extend the reach of this approach into the entire class? While scaffolding instruction with small activities had helped students process, develop, and engage, I was still leading the charge at the pace I set. But what if we turned things around? What if, essentially, they experimented on their own in order to determine something that worked for them (and why!) and shared their thoughts with the class? What if we constructed the class together? Rather than telling them what to do at the outset of each concept chunk, I could first ask them to investigate. Instead of demonstrating, for example, recommended search strategies and directing students to apply them to their own research, I could ask students to experiment first with multiple search strategies in a recommended database for a common topic in order to share with the class the strategies they found valuable. The same goes for navigating, filtering, and refining search results, or for evaluating sources and selecting the most relevant, or for any concept or resource, for that matter. Why not, I thought, ask students to take a first pass and experiment? We could then share ideas as a class, demonstrating and discussing the strengths and weaknesses of their tactics along the way, collaboratively building a list of best-practice strategies. Students could then revisit their work, applying those best practices where needed.

This kind of experiment-first-then-build-together-then-revise approach is simple enough, but its advantages feel rather significant to me. It makes every class exciting, because it’s—in part, at least—unique and responsive to precisely those students’ needs. Of course I have a structure and goals in mind, prepared notes in hand, but it’s a flexible approach. While it’s not appropriate for every class, the low stakes/low prep makeup is readily applicable to different scenarios and content areas. The students and I are actively involved in constructing the work of the class together. Everyone has a chance to contribute and learn from each other. In particular, more experienced students get to share their knowledge while less experienced students learn from their peers. The expectation to contribute helps students pay attention to the work and to each other. Its scaffolded and iterative design helps students digest and apply information. Its reflective nature reveals for students practice and process, too; it models the metacognitive mindset behind how to learn, how to do research. I don’t mean to get too ebullient here. It’s not a panacea. But it has made a difference. It’s probably no surprise that this kind of teaching has required a degree of comfort, a different kind of classroom leadership, and a different kind of instinct that would have been much, much harder to conjure in my earlier teaching.

While I wasn’t aware of it initially and didn’t set out to make it so, I now recognize this as formative assessment. Not only do these small activities increase opportunities for engagement and learning, they serve as authentic assessment of students’ knowledge and abilities in the moment. They provide evidence of student learning and opportunities for action immediately. With that immediate input, I can adjust the nature and depth of instruction appropriately at the point of need. All in a way that’s authentic to and integrated in the work of the class.

The informality of this approach is part of what makes it flexible, low prep, and engaging. It’s such a rich site for documentation and evaluation of student learning, though. I want to capture the richness of this knowledge, demonstrate the impact of instruction, document students’ learning. But I’m struggling with this. I haven’t yet figured out how to do this effectively and systematically. Some formative assessments result in student work artifacts that can illustrate learning or continuing areas of difficulty, but the shape my implementation has so far taken results in less tangible products. At the ACRL 2015 conference a few weeks ago, I attended a great session led by Mary Snyder Broussard, Carrie Donovan, Michelle Dunaway, and Teague Orblych: “Learning Diagnostics: Using Formative Assessment to Sustainably Improve Teaching & Learning.” When I posed this question in the session, Mary suggested using a “teacher journal” to record my qualitative reflections and takeaways after each class and to notice trends over time. I’m interested in experimenting with this idea, but I’m still searching for something that might better capture student learning, rather than only my perception of it. I’m curious to read Mary’s book Snapshots of Reality: A Practical Guide to Formative Assessment in Library Instruction, as well as Michelle and Teague’s article “Formative Assessment: Transforming Information Literacy Instruction” to see if I might be able to grab onto or adapt any other documentation practices.

Do you use formative assessment in your teaching? How do you document this kind of informal evidence of student learning? I’d love to hear your thoughts in the comments.

Digging Into Institutional Data

I have both a professional and scholarly interest in how the students at the college where I work do their academic work, and (of course) whether and how they use the library. In my own research I’m much more likely to use qualitative than quantitative methods. I prefer interviews and other qualitative methods because they offer so much more depth and detail than surveys, though of course that comes at the expense of breadth of respondents. Still, I appreciate learning more about our students’ lives; these compelling narratives can be used to augment what we learn from surveys and other broad but shallow methods of data collection.

Not *that* kind of survey

But even though I love a good interview, I can also be a part-time numbers nerd: I admit to enjoying browsing through survey results occasionally. Recently I was working on a presentation for a symposium on teaching and technology at one of the other colleges in my university system and found myself hunting around the university’s Office of Institutional Research and Assessment website for some survey data to help contextualize students’ use of technology. My university runs a student experience survey every two years, and until last week I hadn’t realized that the data collected this past Spring had just been released.

Reader, I nearly missed dinnertime as I fell down the rabbit hole of the survey results. It’s a fascinating look at student data points at the 19 undergraduate institutions that make up the university. There’s the usual info you’d expect from the institutional research folks — how many students are enrolled at each college, part-time vs. full-time students, race and ethnicity, and age, to name a few examples. But this survey asks students lots of other questions, too. How long is their commute? Are they the first in their family to attend college? How many people live in their household? Do they work at a job and, if so, how many hours per week? How often do they use campus computer labs? Do they have access to broadband wifi off-campus? If they transferred to their current college, why? How do they prefer to communicate with faculty and administrators?

My university isn’t the only one that collects this data, of course. I imagine there are homegrown and locally-administered surveys at many colleges and universities. There’s also the National Survey of Student Engagement, abbreviated NSSE (pronounced “Nessie” like the mythical water beast), which collects data from 1,500+ American and Canadian colleges and universities. The NSSE website offers access to the data via a query tool, as well as annual reports that summarize notable findings (fair warning: the NSSE website can be another rabbit hole for the numbers nerds among us). There’s also the very local data that my own college’s Office of Assessment and Institutional Research collects. This includes the number of students enrolled in each of the college’s degree programs, as well as changes through time. Retention and graduation rates are there for browsing on our college website, too.

What does all of this student data collected by offices of institutional research have to do with academic libraries? Plenty! We might use the number of students enrolled in a particular major to help us plan how to work with faculty in that department around information literacy instruction, for example. The 2012 annual NSSE report revealed that students often don’t buy their course textbooks because of the expense (as other studies have found, too), findings that librarians might use to justify programs for faculty to create or curate open educational resources, as librarians at Temple University and the University of Massachusetts Amherst have done. And at my library we’re using data on how and where students do their academic work outside of the library, both the university-collected survey results as well as qualitative data collected by me and my colleagues, to consider changes to the physical layout to better support students doing their academic work.

Have you ever found yourself captivated by institutional research data? How have you used college or university-wide survey results in your own library practice? Let us know in the comments.

Photo by Farrukh.

If At First You Don’t Assess, Try, Try Again

ACRLog welcomes a guest post from Katelyn Tucker & Alyssa Archer, Instruction Librarians at Radford University.

Instruction librarians are always looking for new and flashy ways to engage our students in the classroom. New teaching methods are exciting, but how do we know if they’re working? Here at Radford University, we’ve been flipping and using games for one-shot instruction sessions for a while, and our Assessment Librarian wasn’t going to accept anecdotal evidence of success any longer. We decided that the best way to see if our flipped and gamified lessons were accomplishing our goals was to evaluate the students’ completed assignments. We tried to think of every possible issue in designing the study. Our results, however, had issues that, in hindsight, could have been prevented. We want you to learn from our mistakes so you are not doomed to repeat them.

Our process

Identifying classes to include in this assessment of flipped versus gamified lessons was a no-brainer for us. A cohort of four sections of the same course that use identical assignment descriptions, assignment sheets, and grading rubrics meant that we had an optimal sample population. All students in the four sections created annotated bibliographies based on these same syllabi and assignment instructions. We randomly assigned two classes to receive flipped information literacy instruction and two to play a library game. After final grades had been submitted for the semester, the teaching faculty members of each section stripped identifying information from their students’ annotated bibliographies and sent them to us. We assigned each bibliography a number and then assigned two librarian coders to each paper. We felt confident that we had a failsafe study design.

Using a basic rubric, librarians coded each bibliography for three outcomes using a binary scale. Since our curriculum lists APA documentation style, scholarly source evaluation, and search strategy as outcomes for the program, we coded for competency in these three areas. This process took about two months to complete, as coding student work is a time-consuming process.


The challenges

After two librarians independently coded each bibliography, our assessment librarian ran inter-rater reliability statistics, and… we failed. We had previously used rubrics to code annotated bibliographies for another assessment project, so we didn’t spend any time reviewing the process with our experienced coders. Since we only hit around 30% agreement between coders, it is obvious that we should have done a better job with training.
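
To make the numbers concrete, here is a minimal sketch of how percent agreement and Cohen’s kappa can be computed for two coders’ binary rubric scores. The outcome and scores in the sketch are hypothetical, and this is an illustration of the general idea rather than the statistics our assessment librarian actually ran.

```python
# A rough illustration only: percent agreement and Cohen's kappa for two
# coders assigning binary (1 = competent, 0 = not competent) rubric scores.
# The scores below are hypothetical, not our study data.

def percent_agreement(coder_a, coder_b):
    """Share of bibliographies on which the two coders gave the same code."""
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return matches / len(coder_a)

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa: agreement between two coders, corrected for chance."""
    n = len(coder_a)
    p_observed = percent_agreement(coder_a, coder_b)
    p_a1 = sum(coder_a) / n                      # coder A's rate of 1s
    p_b1 = sum(coder_b) / n                      # coder B's rate of 1s
    p_chance = p_a1 * p_b1 + (1 - p_a1) * (1 - p_b1)
    return (p_observed - p_chance) / (1 - p_chance)

# Hypothetical codes for one outcome (e.g., APA documentation style),
# one entry per bibliography.
coder_a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 0]
coder_b = [1, 0, 0, 1, 1, 1, 0, 0, 1, 1]

print(f"Percent agreement: {percent_agreement(coder_a, coder_b):.0%}")
print(f"Cohen's kappa: {cohens_kappa(coder_a, coder_b):.2f}")
```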

Because we had such low agreement between coders, we weren’t confident in our success with each outcome. When we compared the flipped sections to the gamified ones, we didn’t find any significant differences in any of our outcomes. Students who played the game did just as well as those who were part of the flipped sections. However, our low inter-rater reliability threw a wrench in those results.
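
For readers curious about the mechanics of that comparison, here is a minimal sketch of one common way to test for a difference between two groups’ competency rates, a chi-square test on a 2x2 table. The counts are hypothetical, and this is not necessarily the exact test we used.

```python
# A rough illustration only: comparing competency rates on one outcome
# between flipped and gamified sections with a chi-square test.
# The counts below are hypothetical, not our study data.
from scipy.stats import chi2_contingency

# Rows: flipped, gamified; columns: coded competent, coded not competent.
table = [
    [18, 12],   # flipped sections
    [17, 13],   # gamified sections
]

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p_value:.3f}")
# A large p-value means no detectable difference between the two
# teaching approaches on this outcome at this sample size.
```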

What we’ve learned

We came to understand the importance of norming: discussing among coders what the rubric means and incorporating meaningful conversations about how to interpret assessment data into the norming process. Our inter-rater reliability issues could have been avoided with detailed training and discussion. Even though we thought we were safe on this project because of earlier coding projects, the length of time between assessments created some large inconsistencies.

We haven’t given up on norming: including multiple coders may be time-intensive, but, when done well, it gives our team confidence in the results. The same applies to qualitative methodologies. As a side part of this project, one librarian looked at research narratives written by some participants and decided to bravely go it alone on coding the students’ text using Dedoose. While it was an interesting experiment, the key lesson was to bring in more coders! Qualitative software can help identify patterns, but it’s nothing compared to a partner looking at the same data and discussing it as a team.

We also still believe in assessing output. As librarians, we don’t get too many opportunities to see how students use their information literacy skills in their written work. By assessing student output, we can actually track competency in our learning outcomes. We believe that students’ papers provide the best evidence of success or failure in the library classroom, and we feel lucky that our teaching faculty partners have given us access to graded work for our assessment projects.