Tag Archives: library instruction

Not as simple as “click-by-click”

One of the projects I inherited as emerging technologies librarian is managing our library’s collection of “help guides.” The online learning objects in this collection are designed to provide asynchronous guidance to students when completing research-related tasks. Over the last few months, my focus has been on updating existing guides to reflect website and database interface changes, as well as ensuring compliance with federal accessibility standards. With those updates nearly complete, the next order of business is to work with our committee of research and instruction librarians to create new content. The most requested guide at the top of our list? How to use the library’s discovery service rolled out during the Fall 2012 semester.

Like many other libraries, we hope the discovery service will allow users to find more materials across the library’s collections and beyond. Previously, our library’s website featured a “Books” search box to search the catalog, as well as an “Articles” search box to search one of our interdisciplinary databases. To ease the transition to the discovery system, we opted to keep the “Books” and “Articles” search boxes, in addition to adding the “one search box to rule them all”; however, these format search boxes now search the discovery tool using the appropriate document type tag. Without going into the nitty gritty details, this method has created certain “quirks” in the system that can lead to sub-optimal search results.
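To make the setup concrete, here is a minimal sketch of how a format-specific search box might hand its query to a discovery service by tacking on a document-type limiter. The base URL and the parameter names (`q`, `doc_type`) are hypothetical placeholders; the actual discovery product’s query syntax is not described in this post.

```python
# Hypothetical sketch: a "Books" search box that searches the whole
# discovery index but silently appends a document-type limiter.
# BASE_URL and the parameter names are assumptions for illustration only.
from urllib.parse import urlencode

BASE_URL = "https://discovery.example.edu/search"  # placeholder host

def build_search_url(query, doc_type=None):
    """Build a discovery-service URL, optionally limited to one format."""
    params = {"q": query}
    if doc_type:  # e.g. "Book" or "Article"
        params["doc_type"] = doc_type
    return BASE_URL + "?" + urlencode(params)

# The plain search box omits the limiter; the "Books" box adds it:
print(build_search_url("climate change"))
# https://discovery.example.edu/search?q=climate+change
print(build_search_url("climate change", doc_type="Book"))
# https://discovery.example.edu/search?q=climate+change&doc_type=Book
```

The “quirk” risk lives in that hidden limiter: the user sees a familiar “Books” box, but the results come from a pre-filtered discovery search rather than the catalog itself.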

This back-story leads to my current question about creating instructional guides for our discovery system – how do we design screencasts to demonstrate simple searches by format?

So far, this has boiled down to two options:

  1. Address the way students are most likely to interact with our system. We know users are drawn to cues with high information scent to help them find what they need; if I’m looking for a book, I’m more likely to be drawn to anything explicitly labeled “Books.” We also know students “satisfice” when completing research tasks, and many are unfortunately unlikely to care if their searches do not retrieve all possible results. Additionally, I think whatever we put front-and-center on our homepage is a decision we need to support within our instructional objects.
  2. Provide instruction demonstrating the way the discovery system was designed to be used. If we know our system is set up in a less-than-optimal way, it’s better to steer students away from the more tempting path. In this case, that means searching the discovery system as a whole and demonstrating how to use the “Format” limiters to find specific types of materials. While this option requires ignoring the additional search options on our website, it will also allow us to eventually phase out the “Books” and “Articles” search boxes on the website without significant updates to our screencasts.

While debating these options with my colleagues, it’s been interesting to consider how this decision reflects the complexities of creating standalone digital learning objects. The challenge is that these materials are often designed without necessarily knowing how, when, or why they will be used; our job is to create objects that meet students at a variety of point-of-need moments. Given that objects like screencasts should be kept short and to-the-point, it’s also difficult to add context that explains why the viewer should complete activities as-shown. And library instruction is not usually designed to make our students “mini-librarians.” Our advanced training and interest in information systems means it is our job to be the experts, but our students do not necessarily need to obtain this same level of knowledge to be successful information consumers and creators.

Does this mean we also engage in a bit of “satisficing” to create instructional guides that are “good enough” but not, perhaps, what we know to be “best?” Or do we provide just enough context to help students follow us as we guide them click-by-click from point A to point B, while lacking the complete “big picture” required to understand why this is the best path? Do either of these options fulfill our goals toward helping students develop their own critical information skills?

No instruction interaction is ever perfect. In person or online, synchronous or asynchronous, we’re always making compromises to balance idealism with reality. And in the case of creating and managing a large collection of online learning objects, it’s been interesting to have conversations which demonstrate why good digital learning objects are not synonymous with “click-by-click” instructions. How do we extend what we know about good pedagogy to create better online learning guides?

 

Waiting on Wikipedia

Recently, while I was teaching a class, the instructor asked me whether I thought that Wikipedia would ever come to be considered a generally trustworthy, credible source. I always talk about Wikipedia in my one-shot instruction sessions, especially with first year students, but this was the first time I’d ever gotten a question along those lines. And I’ve been thinking about it ever since.

In my classes I point out to students that most of us — students, faculty, librarians, everyone — use Wikipedia all the time. My usual strategy for talking about Wikipedia in library instruction is likely similar to many librarians: I show students how to use it for brainstorming and background information, suggest that they mine the references, and point out the View history link to show them how the entry has changed. I end by noting that Wikipedia is a great place to start but that students shouldn’t cite it in their assignments because it’s much too general, just as they wouldn’t cite a general print encyclopedia. Instead, they should use Wikipedia to point them to other resources that are more appropriate for use in college work.

But I do wonder when Wikipedia will cross the line into acceptable-for-use-as-a-cited-source territory. Will it ever? Has it already?

Full disclosure: I cited Wikipedia in a scholarly journal article I wrote last year. I had what I thought were (and still think are) good reasons. I was writing about using games in information literacy instruction, and I used Wikipedia to define several specific genres of videogames. I felt that the Wikipedia definitions for those types of games were more current and accurate than definitions I found in other published sources. In this case the fluidity and impermanence of Wikipedia were assets. Genres and micro-genres can evolve and change quickly, and I think that most Wikipedia entries on popular culture (in which I’d include videogames) are probably written and edited by fans of those topics. There’s an argument to be made that those fans are the subject experts, so it’s the information they’ve put together that I was most confident in citing. While one of the peer reviewers did note the Wikipedia citations, the journal editor and I discussed it and agreed to keep them.

Of course, Wikipedia won’t always be the best source. Right now I’m working on writing up the results of a project and needed to find the construction dates for campus buildings at one of my research sites. After scouring the college’s website with no luck, I stumbled upon the information in Wikipedia only to come up against a dilemma I’m sure our students face all the time: the information seems true, and it’s not blatantly, obviously false, but there’s no citation for it. In this case I didn’t feel comfortable citing Wikipedia, so I emailed the college archivist for more information, which she quickly and graciously provided. But what do our students do in a situation like this? There won’t always be a readily identifiable person or source to check with for more information.

According to this recent article in the Atlantic, Wikipedia seems to be moving into a more mature phase. The rate at which Wikipedia articles are edited is decreasing, as is the rate for adding new articles. What does this slowdown mean for Wikipedia? Is it really “nearing completion,” as the article suggests? And when Wikipedia is finished, will it then become a citable source?

Clickers, or Does Technology Really Cure What Ails You?

ACRLog welcomes a guest post from Cori Strickler, Information Literacy Librarian at Bridgewater College.

During idle times at the reference desk, or when the students are gone for a break, I find myself creating instruction “wish lists” of tools or gadgets that I’d love to have for my sessions. One item that has been on my list for a few years now is clickers, or student response systems as they are officially called. In academic classrooms they are used for attendance, quiz taking, and other more informal assessments. I saw clickers as a way to solve one of my most basic and frustrating problems: getting students to be engaged during the sessions. Students have little desire to participate in library sessions, and trying to get them to comment on their library experience is like pulling teeth, except that the process is a lot more painful for me than it is for the students.

For those of you who haven’t heard of clickers before, they are little remote-control-like devices that allow students to answer multiple choice questions by sending their responses to the computer for real-time analysis. They are sort of like the devices used on Who Wants to Be a Millionaire to poll the audience.

My library doesn’t have the budget for clickers, but this semester through a chance discussion with the director of the health services department, I learned that the college received a grant for 100 TurningPoint clickers and the necessary software. The director rarely needed all of the clickers at the same time, so she offered about fifty for me to use during my instruction sessions.

So, I now have access to a tool that I had coveted for many years, but that was only the easy part. I still have to figure out how to meaningfully integrate this technology into my sessions.

My overall goals are relatively simple. I want to encourage student involvement in any way possible so that I don’t have to lecture for fifty minutes straight. My voice just can’t handle the pressure. To be successful, though, I need to be purposeful with my inclusion of the technology. I can’t just stick a clicker quiz at the beginning of a session and assume that the students will suddenly be overwhelmed with a desire to learn everything there is to know about the library. Most faculty who schedule a library instruction session have a particular purpose in mind, so I need to be sure that I fulfill their expectations as well.

After much consideration, I decided not to add the clickers to all my sessions. Instead, I decided to focus on first year students, who hopefully aren’t quite as jaded as the upperclassmen, and haven’t already decided that they know everything about research.

For my first clicker experiment, I used them with a quiz to help me gauge the class’s knowledge of the library. I also decided to use them as an alternative way to administer our session evaluation survey. Ultimately, I had mixed results with the clickers. The students did respond better than before, but I did not get full participation. While this isn’t a big issue with the quiz, the lack of participation was a problem when they were asked to complete the evaluation survey. For most survey questions I lacked responses from five or six students, a larger number than when I used the paper surveys, which could potentially affect my survey results.

Their lack of participation could be due to a number of reasons. The students claimed they were familiar with the clickers, but they did not seem to be as adept as they claimed. Also, due to my inexperience with the clickers there might have been a malfunction with the devices themselves. Or, maybe the students just didn’t want to engage, especially since there was still no incentive to participate. When I looked back through the survey results, they did not seem to indicate any greater amount of satisfaction regarding the sessions.

This first experience with the clickers left me a bit skeptical, but I decided to try them again. This time, I created brief quizzes related to brainstorming keywords and types of plagiarism. My second class was smaller than the first, and engagement seemed better. The clickers also seemed to allow the students to be more honest with the surveys, and they seemed more comfortable indicating their disinterest in the information presented, though the results also indicated that they saw the overall value in the information.

I have used the clickers in about twelve sessions this semester, and overall they were well received by the students. However, I am not completely sure that they add significantly to the engagement. I also have not seen any indication in the surveys that my sessions are better or worse with their inclusion. I have discovered, though, that there may be some sessions, and topics, that are better suited for clickers than others. Upper level classes where I am trying to show specific resources do not lend themselves initially to clickers, and the time may be better spent with other activities or instruction.

I am still in the process of learning how clickers will fit into my classes, but I would generally call them a success, if only for the fact that they make the survey process easier. They aren’t, though, the panacea for student engagement for which I had hoped. Activity type and student familiarity are essential variables that appear to affect clicker success.

Unfortunately, the overall nature of one shot instruction seems to be the greatest contributor to student disengagement. Student and faculty buy-in is the necessary component for library instruction success, whether it includes clickers or not.

A Tale of Two Sessions

Not long ago I taught two library sessions for two introductory composition classes with the same professor and the same assignment on the same day. I love it when the schedule serendipitously works out to make that happen, in part because it gives me the chance to informally evaluate my teaching: both what I tend to cover and how I structure those sessions.

Like many librarians, I’ve struggled over the past few years to move away from me standing at the front of the class talking talking talking, so I can increase the amount of time for students to work on their own research during the library session. Students are supposed to come to the session having already selected a topic for their research assignment (though not all of them do, of course). I try to spend no more than 10-15 minutes each discussing and demonstrating internet research, the library catalog, and article databases, interspersed with 10-15 minute chunks of time for students to search on their own while I circulate to answer questions and offer suggestions.

Our class sessions are 75 minutes long — this is a lot to do in 75 minutes. I’ve tried to work around those constraints by seriously abbreviating my demo and looking for ways to interject more information while students search on their own. For example, I won’t mention that spelling counts or talk about the difference between keywords and subject headings in a catalog search, but when a student asks me how to revise a search when she hasn’t retrieved any results, I’ll answer her question so the whole class can hear.

Sometimes, though, the class is quiet and the students don’t ask many questions. In these cases I always feel somewhat strange: I walk around the room a bit, but I don’t want to pace back and forth like an old-fashioned school marm monitoring an exam. I check in with the students who look like they’re lost (or Facebooking), but that can be hard to do with students who don’t seem interested in my help, and some of them are genuinely, quietly doing their work. Sometimes I stand in front of the class fiddling with the computer or looking at my notes. This is what happened in the second class I taught last week, and it feels awkward.

But sometimes the less talk more search strategy works really well, which also happened last week. In the first class students were talkative and interested, volunteering answers to my questions during the demos and spending time on their own searches in between. However, there was a wide range of student preparation for the assignment in this class, with some students still working to narrow down a topic and others ready to go. Additionally, several students came to the session with obvious prior experience searching for sources for academic work. In this case I was able to give each student a small amount of personalized attention, which let me suggest topic narrowing strategies to some and advanced search strategies to others.

I chatted with the course professor after both classes who mentioned that in her experience the afternoon class is just a quieter group of students overall (I’d originally suspected post-lunch digestive sleepiness). But it’s still a challenge — what’s the right balance of talking and search time? Will I ever be able to shake that weird, conspicuous feeling while students search and I just stand there? What are some other ways that I can encourage students to open up and ask the questions that I suspect they have?

Context Matters

This month’s post in our series of guest academic librarian bloggers is by Catherine Pellegrino, Reference Librarian and Instruction Coordinator at Saint Mary’s College. She blogs at Spurious Tuples.

Ever since I went to ACRL’s Institute for Information Literacy Immersion program in the summer of 2009, I’ve been fascinated by the idea of the library instruction session with no demonstrations of databases. “What?” you say, “how could that possibly work?” Well, there are lots of variations on this teaching model, but the basic idea is that students learn better by doing than by being lectured at, and many of our traditional-aged college students are very good at figuring out user interfaces. So you set them up in small groups, have them figure out the database(s) on their own, and then the small groups report back to the class as a whole.

I’ve heard anecdotal reports from other librarians that this method works very well for them, but when I tried it with the students at my small liberal arts college, it kind of flopped. In fact, our students almost seem to want to be told about things, rather than figure them out on their own. One of the comments that I get fairly regularly on post-session assessments is “I wish you had gone into more detail about [database].” So for now, I’m not doing no-demonstration classes, although I’d like to find a way to make it work for our students, on our campus. And thinking about how to make it work for our students got me thinking about larger issues of campus cultural contexts.

When Maura contacted me about writing this guest post, I had just returned from a visit to my friend Iris Jastram, who is a reference and instruction librarian at Carleton College in Minnesota. While there, I had noted some differences between Carleton’s students and the students at my own college. Those observations spawned a conversation between Iris and me, and got me thinking about those same issues of campus cultural contexts, and how they affect information literacy instruction. So that’s what I thought I’d write about here.

Iris writes, on her own blog and elsewhere, about some of the things she can do with her information literacy instruction: she can explain to students how scholars index their own literature, and how to use that internal indexing to the students’ advantage in searching efficiently and effectively. She also works with students to help them find ways to uncover the specialized vocabulary that researchers in their disciplines use — both so that they can use that vocabulary effectively when searching for scholarly literature, and also so that they can use it when entering into that scholarly conversation themselves.

In short, Iris is able to tap into a campus culture and mindset where Carleton students, regardless of their ultimate career plans, are able to conceptualize themselves as apprentice scholars, and she’s able to use that to do things in her classroom that don’t work in mine.

I work at Saint Mary’s College, a Catholic women’s liberal arts college in Notre Dame, Indiana (just outside of South Bend). On the surface, we’re very similar to Carleton: about 1400-1500 students, small liberal arts college in the Midwest. But under the surface, there are some key differences: our professional programs (education, business, social work, and nursing) account for a large number of our students, while Carleton has no professional programs. Nearly all of Saint Mary’s science majors enter with the intention of going on in health professions (about half of them keep that intention through graduation) while only a small fraction of them go on to Master’s and Ph.D. programs in the sciences.

More importantly, though — and this is what I observed on my visit to the Gould Library — Carleton College has a campus culture of intense engagement, of students who dive into their studies with gusto, of students for whom whatever is in front of them right now is the most important thing they’re working on. It’s not necessarily that they’re smarter — and my friend Marianne Reddin Aldrich’s observations about the students at her own liberal arts college helped me frame this issue — it’s just a campus culture of being really into things, whether they’re academic or otherwise.

That’s something that Saint Mary’s doesn’t precisely have, or if our students have it, it’s not visible in the classroom. (Our students are very committed to a lot of things, including a lot of service and volunteer work, and their religion and personal faith development, so perhaps those areas are where it’s visible, but those aren’t areas that I see in the library or in the classroom.) So when Iris said that when she “geeks out” over some really cool, powerful, or obscure database tool, it establishes a bond between her and her students, I had to reply that when I geek out over a similar tool, it actually distances me from my students.

And that brings me to the point that all these conversations and observations led me to: a question about how to engage these students, on this campus. What motivates them? What gets them as 100% engaged as the students at Carleton and Colorado College? What pedagogical strategies enable them to learn independently in the classroom? And I realized that I really don’t know. I know a lot about what “they” (whoever “they” are) say about “millennials,” but I’m realizing that local campus and classroom cultures also have powerful effects on students and their learning. So I’m trying to figure out how I can learn more about what drives our students: one thing I’m planning to do is engage in a semi-structured program of observing master teachers on our campus by auditing classes. But I need to find more ideas and strategies.

What engages your students? And how did you find that out?