Tag Archives: library instruction

Flipping Out: Preflip Planning

One of my current professional goals is to experiment with new ways to improve my library instruction sessions and grow as an instructor. So when our residency librarian decided to lead a group of instruction librarians to test the “flipped classroom” in library instruction, I welcomed the opportunity to discover how “flipping” might transform my classes. Given the previous interest in “flipping” here at ACRLog, I’ve also decided to share a bit of my planning, implementation, and reflection to continue the discussion about “flipping out” in the library world.

At first, re-envisioning my instruction sessions was a bit overwhelming – although I am still a newbie library instructor, I spent a great amount of time last semester crafting lessons and developing my own teaching style. I can only imagine how daunting this may seem to more experienced instructors who have honed their own lessons and style over several years of teaching!

Although I’ve used different lesson planning methods during graduate school classes and in my first semester of teaching (e.g., Backward Design and Madeline Hunter’s model), I had trouble using these methods to plan my flip. Pretty soon, I found myself falling back on the “5 W’s” - Who, What, Where, When, and Why - to organize my thoughts. My considerations for each question are below.

Photo: By Ted Hood (Courtesy of State Library of New South Wales)

WHO: Who are the students in my flipped class? Who is the professor? Which class will lead to the most successful flipped experience?

If considering only learning outcomes and session materials, nearly any of my instruction sessions could be flipped. However, since the professor for my assigned freshman seminar class is equally interested in trying out new instruction techniques, I decided his class would be a good match for the trial flipped sessions. Due to his support and investment in the process, I feel confident he will actually distribute pre-class materials to students and will motivate students to complete the assigned pre-class work. (As an added bonus, I also have three 75-minute instruction sessions with this class, which leaves a cushion to “catch up” if for some reason the entire flipped experience falls apart.)

WHAT: What are the student learning outcomes? What will students learn through pre-class materials? What activities will students complete during class to cement learning?

Answering these questions has been the most difficult part of planning my flipped classroom. During my “regular” classes, I already try to involve students with hands-on, active learning experiences whenever possible. The challenge with the “flip” has been to make those activities more complex, pushing students to deeper levels of learning, as well as to identify what types of pre-class background students need to successfully complete those activities. Our residency librarian presented this as “What are the basics students should come to class knowing? What are the complexities that in-class sessions will address?”

Like many of the librarians in our “flipping” group, I am using the library’s existing collection of online tutorials as the basis of my flipped materials. I decided to give students 2-3 short videos to watch before class to cover basic skills, like the “click-by-click” mechanics of searching a database and the beginnings of constructing a search. Then, in-class activities will challenge them to apply those skills to their group research project at increasing levels of difficulty.

WHERE: Where will flipped materials live, and how will they be delivered to students?

I’m already a big fan of using Google Forms to collect student feedback at the end of instruction sessions. Since I wanted to pair the pre-class videos with a measure of how many students completed the activities and how well they understood the material, Google Forms once again turned out to be an easy solution. For each flipped session, I created a Google Form with links to the videos along with quiz questions, and the course professor will distribute the form to students before our session.

WHEN: When should students complete pre-class activities?

The week before our in-class session, students will have access to the pre-class materials. Any earlier and I worry the connection between pre-class videos and in-class activities would be lost. This decision was fairly easy to nail down, and getting the date on my calendar is a good reminder to finish materials with enough time to review the plan with the professor, distribute them to students, etc.

WHY: Why is “flipping” a method I want to try for library instruction?

Although “flipping” is one way I’m fulfilling my goal to explore new instructional techniques, the deeper I dig into planning, the more I think it’s a model that can be useful in library instruction. Most of the librarians I work with or have observed are already moving away from lectures and database demonstrations. But it’s hard to jump into more complex applications and exploratory activities during a traditional 50- or 60-minute class if students don’t have a basic foundation on which to build advanced skills. Off-loading the procedural instructions, like how to navigate the library’s website or basic catalog searching, to pre-class activities can free up in-class time for librarians to help students work through more complex activities.

My flipped experiment is also allowing me to carve out a chunk of in-class time to address additional material, including brainstorming and concept mapping. Last semester, I noticed students in the seminar struggling to craft a manageable research question, which later affected their ability to construct effective searches and to evaluate information for its relevance to their topic. This semester, since I’m providing some of the procedural instruction outside of class, I can incorporate more hands-on experiences into the class and set students up for better guided learning.

Ready, Set, Go!

The first round of pre-class materials is going out to students this week, and our first in-class session is next week! I am excited for student responses to the pre-class material to start coming in and to dive into the full flipped experience. I’m planning to report back in March with my thoughts about how the flip unfolds!

Do you have experience with the flipped classroom? What considerations do you think are vital when planning “the flip?”

Not as simple as “click-by-click”

One of the projects I inherited as emerging technologies librarian is managing our library’s collection of “help guides.” The online learning objects in this collection are designed to provide asynchronous guidance to students when completing research-related tasks. Over the last few months, my focus has been on updating existing guides to reflect website and database interface changes, as well as ensuring compliance with federal accessibility standards. With those updates nearly complete, the next order of business is to work with our committee of research and instruction librarians to create new content. The most requested guide at the top of our list? How to use the library’s discovery service rolled out during the Fall 2012 semester.

Like many other libraries, we hope the discovery service will allow users to find more materials across the library’s collections and beyond. Previously, our library’s website featured a “Books” search box to search the catalog, as well as an “Articles” search box to search one of our interdisciplinary databases. To ease the transition to the discovery system, we opted to keep the “Books” and “Articles” search boxes, in addition to adding the “one search box to rule them all”; however, these format search boxes now search the discovery tool using the appropriate document type tag. Without going into the nitty gritty details, this method has created certain “quirks” in the system that can lead to sub-optimal search results.
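To make the setup concrete, here is a purely hypothetical sketch of what the format search boxes are doing behind the scenes: each box takes the user’s keywords and appends a pre-set document-type limiter before handing the query to the discovery tool. (The endpoint URL and parameter names below are invented for illustration; our actual discovery service uses its own, different parameters.)

```python
from urllib.parse import urlencode

# Hypothetical discovery endpoint, for illustration only.
DISCOVERY_URL = "https://discovery.example.edu/search"

def build_search_url(keywords, doc_type=None):
    """Build a discovery search URL, optionally limited by document type."""
    params = {"q": keywords}
    if doc_type:
        # The "Books" and "Articles" boxes simply pre-fill this limiter.
        params["doc_type"] = doc_type
    return DISCOVERY_URL + "?" + urlencode(params)

# The "one search box to rule them all": no limiter.
print(build_search_url("climate change"))
# The "Books" box: the same search, pre-tagged as books.
print(build_search_url("climate change", doc_type="book"))
```

The “quirks” come from the fact that the limiter is applied silently: a student who types into the “Books” box never sees that their search has been restricted, and has no obvious way to broaden it.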

This back-story leads to my current question about creating instructional guides for our discovery system – how do we design screencasts to demonstrate simple searches by format?

So far, this has boiled down to two options:

  1. Address the way students are most likely to interact with our system. We know users are drawn to cues with high information scent to help them find what they need; if I’m looking for a book, I’m more likely to be drawn to anything explicitly labeled “Books.” We also know students “satisfice” when completing research tasks, and many are unfortunately unlikely to care if their searches do not retrieve all possible results. Additionally, whatever we put front-and-center on our homepage is, I think, a decision we need to support within our instructional objects.
  2. Provide instruction demonstrating the way the discovery system was designed to be used. If we know our system is set up in a less-than-optimal way, it’s better to steer students away from the more tempting path. In this case, that means searching the discovery system as a whole and demonstrating how to use the “Format” limiters to find specific types of materials. While this option requires ignoring the additional search options on our website, it will also allow us to eventually phase out the “Books” and “Articles” search boxes on the website without significant updates to our screencasts.

While debating these options with my colleagues, it’s been interesting to consider how this decision reflects the complexities of creating standalone digital learning objects. The challenge is that these materials are often designed without necessarily knowing how, when, or why they will be used; our job is to create objects that meet students at a variety of point-of-need moments. Given that objects like screencasts should be kept short and to-the-point, it’s also difficult to add context that explains why the viewer should complete activities as-shown. And library instruction is not usually designed to make our students “mini-librarians.” Our advanced training and interest in information systems means it is our job to be the experts, but our students do not necessarily need this same level of knowledge to be successful information consumers and creators.

Does this mean we also engage in a bit of “satisficing” to create instructional guides that are “good enough” but not, perhaps, what we know to be “best?” Or do we provide just enough context to help students follow us as we guide them click-by-click from point A to point B, while lacking the complete “big picture” required to understand why this is the best path? Do either of these options fulfill our goals toward helping students develop their own critical information skills?

No instruction interaction is ever perfect. In person or online, synchronous or asynchronous, we’re always making compromises to balance idealism with reality. And in the case of creating and managing a large collection of online learning objects, it’s been interesting to have conversations which demonstrate why good digital learning objects are not synonymous with “click-by-click” instructions. How do we extend what we know about good pedagogy to create better online learning guides?


Waiting on Wikipedia

Recently, while I was teaching a class, the instructor asked me whether I thought that Wikipedia would ever come to be considered a generally trustworthy, credible source. I always talk about Wikipedia in my one-shot instruction sessions, especially with first year students, but this was the first time I’d ever gotten a question along those lines. And I’ve been thinking about it ever since.

In my classes I point out to students that most of us — students, faculty, librarians, everyone — use Wikipedia all the time. My usual strategy for talking about Wikipedia in library instruction is likely similar to many librarians: I show students how to use it for brainstorming and background information, suggest that they mine the references, and point out the View history link to show them how the entry has changed. I end by noting that Wikipedia is a great place to start but that students shouldn’t cite it in their assignments because it’s much too general, just as they wouldn’t cite a general print encyclopedia. Instead, they should use Wikipedia to point them to other resources that are more appropriate for use in college work.

But I do wonder when Wikipedia will cross the line into acceptable-for-use-as-a-cited-source territory. Will it ever? Has it already?

Full disclosure: I cited Wikipedia in a scholarly journal article I wrote last year. I had what I thought were (and still think are) good reasons. I was writing about using games in information literacy instruction, and I used Wikipedia to define several specific genres of videogames. I felt that the Wikipedia definitions for those types of games were more current and accurate than definitions I found in other published sources. In this case the fluidity and impermanence of Wikipedia were assets. Genres and micro-genres can evolve and change quickly, and I think that most Wikipedia entries on popular culture (in which I’d include videogames) are probably written and edited by fans of those topics. There’s an argument to be made that those fans are the subject experts, so it’s the information they’ve put together that I was most confident in citing. While one of the peer reviewers did note the Wikipedia citations, the journal editor and I discussed it and agreed to keep them.

Of course, Wikipedia won’t always be the best source. Right now I’m working on writing up the results of a project and needed to find the construction dates for campus buildings at one of my research sites. After scouring the college’s website with no luck, I stumbled upon the information in Wikipedia, only to come up against a dilemma I’m sure our students face all the time: the information seems true and isn’t blatantly, obviously false, but there’s no citation for it. In this case I didn’t feel comfortable citing Wikipedia, so I emailed the college archivist for more information, which she quickly and graciously provided. But what do our students do in a situation like this? There won’t always be a readily identifiable person or source to check with for more information.

According to this recent article in the Atlantic, Wikipedia seems to be moving into a more mature phase. The rate at which Wikipedia articles are edited is decreasing, as is the rate for adding new articles. What does this slowdown mean for Wikipedia? Is it really “nearing completion,” as the article suggests? And when Wikipedia is finished, will it then become a citable source?

Clickers, or Does Technology Really Cure What Ails You?

ACRLog welcomes a guest post from Cori Strickler, Information Literacy Librarian at Bridgewater College.

During idle times at the reference desk, or when the students are gone for a break, I find myself creating instruction “wish lists” of tools or gadgets that I’d love to have for my sessions. One item that has been on my list for a few years now is clickers, or student response systems as they are officially called. In academic classrooms they are used for attendance, quiz taking, or other more informal assessments. I saw clickers as a way to solve one of my most basic and frustrating problems: getting students to be engaged during the sessions. Students have little desire to participate in library sessions, and trying to get them to comment on their library experience is like pulling teeth, except that the process is a lot more painful for me than it is for the students.

For those of you who haven’t heard of clickers before, they are small remote-control-like devices that allow students to answer multiple choice questions by sending their responses to a computer for real-time analysis. They are sort of like the devices used on Who Wants to Be a Millionaire to poll the audience.

My library doesn’t have the budget for clickers, but this semester through a chance discussion with the director of the health services department, I learned that the college received a grant for 100 TurningPoint clickers and the necessary software. The director rarely needed all of the clickers at the same time, so she offered about fifty for me to use during my instruction sessions.

So, I now have access to a tool that I had coveted for many years, but that was only the easy part. I still have to figure out how to meaningfully integrate this technology into my sessions.

My overall goals are relatively simple. I want to encourage student involvement in any way possible so that I don’t have to lecture for fifty minutes straight. My voice just can’t handle the pressure. To be successful, though, I need to be purposeful with my inclusion of the clickers. I can’t just stick a clicker quiz at the beginning of a session and assume that the students will suddenly be overwhelmed with a desire to learn everything there is to know about the library. Most faculty who schedule a library instruction session have a particular purpose in mind, so I need to be sure that I fulfill their expectations as well.

After much consideration, I decided not to add the clickers to all my sessions. Instead, I decided to focus on first year students, who hopefully aren’t quite as jaded as the upperclassmen, and haven’t already decided that they know everything about research.

For my first clicker experiment, I used them with a quiz to help me gauge the class’s knowledge of the library. I also decided to use them as an alternative way to administer our session evaluation survey. Ultimately, I had mixed results with the clickers. The students did respond better than before, but I did not get full participation. While this isn’t a big issue with the quiz, the lack of participation was a problem when they were asked to complete the evaluation survey. For most survey questions I lacked responses from five or six students, which was a larger number than when I used paper surveys and could potentially affect my survey results.

Their lack of participation could be due to a number of reasons. The students claimed they were familiar with the clickers, but they did not seem to be as adept as they claimed. Also, due to my inexperience with the clickers there might have been a malfunction with the devices themselves. Or, maybe the students just didn’t want to engage, especially since there was still no incentive to participate. When I looked back through the survey results, they did not seem to indicate any greater amount of satisfaction regarding the sessions.

This first experience with the clickers left me a bit skeptical, but I decided to try them again. This time, I created brief quizzes related to brainstorming keywords and types of plagiarism. My second class was smaller than the first, and engagement seemed better. The clickers also seemed to allow students to be more honest with the surveys; they seemed more comfortable indicating their disinterest in the information presented, though the results also indicated that they saw the overall value in the information.

I have used the clickers in about twelve sessions this semester, and overall they were well received by the students. However, I am not completely sure that they add significantly to engagement. I also have not seen any indication in the surveys that my sessions are better or worse with their inclusion. I have discovered, though, that there may be some sessions, and topics, that are better suited for clickers than others. Upper level classes where I am trying to show specific resources do not lend themselves to clickers, and the time may be better spent with other activities or instruction.

I am still in the process of learning how clickers will fit into my classes, but I would generally call them a success, if only for the fact that they make the survey process easier. Still, they aren’t the panacea for student engagement I had hoped for. Activity type and student familiarity are essential variables that appear to affect clicker success.

Unfortunately, the overall nature of one shot instruction seems to be the greatest contributor to student disengagement. Student and faculty buy-in is the necessary component for library instruction success, whether it includes clickers or not.

A Tale of Two Sessions

Not long ago I taught two library sessions for two introductory composition classes with the same professor and the same assignment on the same day. I love it when the schedule serendipitously works out to make that happen, in part because it gives me the chance to informally evaluate my teaching: both what I tend to cover and how I structure those sessions.

Like many librarians, I’ve struggled over the past few years to move away from me standing at the front of the class talking talking talking, so I can increase the amount of time for students to work on their own research during the library session. Students are supposed to come to the session having already selected a topic for their research assignment (though not all of them do, of course). I try to spend no more than 10-15 minutes each discussing and demonstrating internet research, the library catalog, and article databases, interspersed with 10-15 minute chunks of time for students to search on their own while I circulate to answer questions and offer suggestions.

Our class sessions are 75 minutes long — this is a lot to do in 75 minutes. I’ve tried to work around those constraints by seriously abbreviating my demo and looking for ways to interject more information while students search on their own. For example, I won’t mention that spelling counts or talk about the difference between keywords and subject headings in a catalog search, but when a student asks me how to revise a search when she hasn’t retrieved any results, I’ll answer her question so the whole class can hear.

Sometimes, though, the class is quiet and the students don’t ask many questions. In these cases I always feel somewhat strange: I walk around the room a bit, but I don’t want to pace back and forth like an old-fashioned school marm monitoring an exam. I check in with the students who look like they’re lost (or Facebooking), but that can be hard to do with students who don’t seem interested in my help, and some of them are genuinely, quietly doing their work. Sometimes I stand in front of the class fiddling with the computer or looking at my notes. This is what happened in the second class I taught last week, and it feels awkward.

But sometimes the less talk more search strategy works really well, which also happened last week. In the first class students were talkative and interested, volunteering answers to my questions during the demos and spending time on their own searches in between. However, there was a wide range of student preparation for the assignment in this class, with some students still working to narrow down a topic and others ready to go. Additionally, several students came to the session with obvious prior experience searching for sources for academic work. In this case I was able to give each student a small amount of personalized attention, which let me suggest topic narrowing strategies to some and advanced search strategies to others.

I chatted with the course professor after both classes who mentioned that in her experience the afternoon class is just a quieter group of students overall (I’d originally suspected post-lunch digestive sleepiness). But it’s still a challenge — what’s the right balance of talking and search time? Will I ever be able to shake that weird, conspicuous feeling while students search and I just stand there? What are some other ways that I can encourage students to open up and ask the questions that I suspect they have?