
When is the Struggle TOO Real?

One of the advantages of having a partner who happens to be a math professor is that we can talk academic shop. A few weeks ago, over a serious dishwasher unloading, we started talking about a recurring theme manifesting itself in our college’s faculty Facebook group: toughening up college students. From debates about trigger warnings to conversations about cultivating students’ grit and comfort with failure, our colleagues are consistently inconsistent about how we should help college students succeed in academia and life. I’ll lump myself and my partner into this group, too. As a faculty we want to be sensitive to student needs and life experiences, but we also don’t want them to fall apart if they get a bad grade on an exam. We want them to make a real attempt at solving a difficult problem or tackling a challenging project on their own before asking for help, but we also recognize that many students have serious outside stressors (economic, familial, emotional, etc.) that might prevent them from giving their all to their studies.

For years librarians have been chanting that “failure is good” because it is a signal of attempted innovation, creative practice, and learning (particularly when applied to information literacy instruction). We want our students to learn from their mistakes, which means they have to make them first. Math education is no different. There’s a small but mighty push for experiential and problem-based learning within the discipline that wants students to learn from their mistakes. As my partner and I discussed this we couldn’t help but wonder:

At what point is the struggle too much?

Earlier in the day he’d met with a student who claimed she had worked on one homework problem for 4 hours. Earlier that semester I’d met with a student who spent an entire weekend looking for research in the wrong places with the wrong search terms. I’m all for giving it the old college try, but in both cases this was just plain excessive struggle for little reward. As a librarian who has been doing this job for a while, I have a good sense of when I’ve tapped my intellectual well. I know when to ask for help. My partner does, too. Most academics know when to take a step back, take another approach, or ask a colleague for suggestions. But this is a learned skill. We like to think of it as tacit knowledge – students have to experience failure to know when they are failing the right way as opposed to just struggling unnecessarily – but is it really? Does the experience alone help them gain this knowledge? Or can the struggle just be too real for some students, leading them to eventually equate math or research with pointless stress?

I think the key in the library classroom is not to focus on failure but to focus on process: Model, practice, repeat–over and over again. It’s a challenge when so much of students’ grades depend on a final product (an exam, a paper, a presentation, etc.) and often requires a shift in emphasis from the professor. By modeling a process–a step I think we (and I know I) often overlook in our attempts to make our classrooms spaces for active learning–we give students a sense of what struggle can look like. Granted, there’s no one standard process for research, and we don’t want to imply that there is one, but making our thinking and doing visible to our students can go a long way towards demystifying research. We get stuck, we back-track, we try again, we struggle, but we are never alone when we do so. It’s something I try to stress to all my students in hopes that they too feel like they never have to struggle alone.

If At First You Don’t Assess, Try, Try Again

ACRLog welcomes a guest post from Katelyn Tucker & Alyssa Archer, Instruction Librarians at Radford University.

Instruction librarians are always looking for new & flashy ways to engage our students in the classroom. New teaching methods are exciting, but how do we know if they’re working? Here at Radford University, we’ve been flipping and using games for one-shot instruction sessions for a while, and our Assessment Librarian wasn’t going to accept anecdotal evidence of success any longer. We decided that the best way to see if our flipped and gamified lessons were accomplishing our goals was to evaluate the students’ completed assignments. We tried to think of every possible issue in designing the study. Our results, however, revealed issues that, in hindsight, could have been prevented. We want you to learn from our mistakes so you are not doomed to repeat them.

Our process

Identifying classes to include in this assessment of flipped versus gamified lessons was a no-brainer for us. A cohort of four sections of the same course that use identical assignment descriptions, assignment sheets, and grading rubrics meant that we had an optimal sample population. All students in the four sections created annotated bibliographies based on these same syllabi and assignment instructions. We randomly assigned two classes to receive flipped information literacy instruction and two to play a library game. After final grades had been submitted for the semester, the teaching faculty members of each section stripped identifying information from their students’ annotated bibliographies and sent them to us. We assigned each bibliography a number and then assigned two librarian coders to each paper. We felt confident that we had a failsafe study design.

Using a basic rubric (see image below), librarians coded each bibliography for three outcomes using a binary scale. Since our curriculum lists APA documentation style, scholarly source evaluation, and search strategy as outcomes for the program, we coded for competency in these three areas. This process took about two months to complete, as coding student work is a time-consuming process.

[Image: assessment rubric used to code each bibliography for the three outcomes]

The challenges

After two librarians independently coded each bibliography, our assessment librarian ran inter-rater reliability statistics, and… we failed. We had previously used rubrics to code annotated bibliographies for another assessment project, so we didn’t spend any time reviewing the process with our experienced coders. As we only hit around 30% agreement between coders, it is obvious that we should have done a better job with training.
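For anyone curious what that failure looks like in numbers, here is a minimal sketch – our own illustration with invented ratings, not the actual study data – of how raw percent agreement can mask chance-level coding, using scikit-learn’s cohen_kappa_score:

```python
# Illustrative sketch only: compare raw percent agreement with Cohen's
# kappa, which corrects for the agreement two coders would reach by
# chance. The ratings below are invented for demonstration purposes.
from sklearn.metrics import cohen_kappa_score

# 1 = bibliography meets the outcome, 0 = it does not;
# one binary code per bibliography from each of two coders.
coder_a = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
coder_b = [1, 1, 0, 1, 0, 0, 0, 1, 1, 0]

raw = sum(a == b for a, b in zip(coder_a, coder_b)) / len(coder_a)
kappa = cohen_kappa_score(coder_a, coder_b)

print(f"Raw agreement: {raw:.0%}")    # 50% in this invented example
print(f"Cohen's kappa: {kappa:.2f}")  # 0.00 here: no better than chance
```

Numbers in that range mean the coders might as well be flipping coins once chance agreement is accounted for, which is roughly the position our roughly 30% agreement left us in.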

Because we had such low agreement between coders, we weren’t confident in our success with each outcome. When we compared the flipped sections to the gamified ones, we didn’t find any significant differences in any of our outcomes. Students who played the game did just as well as those who were part of the flipped sections. However, our low inter-rater reliability threw a wrench in those results.

What we’ve learned

We came to understand the importance of norming: discussing among coders what the rubric means, and making meaningful conversations about how to interpret student work part of the process. Our inter-rater reliability issues could have been avoided with detailed training and discussion. Even though our earlier coding projects made us think we were safe on this one, the length of time between assessments created some large inconsistencies.

We haven’t given up on norming: including multiple coders may be time-intensive, but when done well, it gives our team confidence in the results. The same applies to qualitative methodologies. As a side part of this project, one librarian looked at research narratives written by some participants and decided to bravely go it alone on coding the students’ text using Dedoose. While it was an interesting experiment, the key lesson learned was to bring in more coders! Qualitative software can help identify patterns, but it’s nothing compared to a partner looking at the same data and discussing it as a team.

We also still believe in assessing output. As librarians, we don’t get too many opportunities to see how students use their information literacy skills in their written work. By assessing student output, we can actually track competency in our learning outcomes. We believe that students’ papers provide the best evidence of success or failure in the library classroom, and we feel lucky that our teaching faculty partners have given us access to graded work for our assessment projects.

Incorporating Failure Into Library Instruction

Failure is getting a fair amount of attention right now, especially when the conversation turns to learning. I wouldn’t necessarily describe it as a growing consensus, but I’m hearing and reading more about the importance of allowing students to learn through authentic practice, what some call experiential learning, that puts them into situations where they can succeed or fail – and learn by doing so themselves or from the experiences of their fellow students. Educators have known for many years that students have better learning experiences when there is a hands-on component which enables them to learn through their own mistakes and by coming to their own conclusions; what they need is less lecturing and demonstration. Think back to the days when the vast majority of trades were learned through apprenticeships. It was all about authentic practice, and learning from one’s own mistakes.

One good example that promotes the value of failure for learning is a TED Talk by Diana Laufenberg on the topic of “How to Learn? From Mistakes.” In this talk Laufenberg, a teacher at a progressive school in Philadelphia, describes how she creates projects that promote constructivism in the classroom. Traditional education, as she describes it, is focused entirely on getting things right – and never being wrong. How do you get an A grade? You always give the right answers on tests. The problem with test taking is that it rarely results in real learning (a permanent change in behavior/thinking). I really like her point that traditional methods are based on a world of information scarcity, when you had to sit in a classroom so an expert could pour information into your head. In a world of information abundance, the answers and possibilities are all around contemporary students. They know how to find them. What they need are learning activities that enable them to hone their thinking skills so they can sort, synthesize, evaluate, and create from the information they find (sounds familiar, right?). What Laufenberg discovered through her learning projects was how much more effectively students learned when there were no hard and fast rules and they learned through experience and the making of mistakes.

And when you move outside the world of education into business, there is growing evidence of a “there’s value in failure” movement. Again, nothing particularly new when you consider that in the 1999 Deep Dive video the IDEO group emphasizes the importance of trying lots of different ways to accomplish a task, knowing that many will fail, but that out of the failure will come learning and eventual success. More recently I came across a column titled “The Role of Failure in Learning” – you can’t get much more direct than that – which provides a corporate perspective. The author writes:

society tends to reward performers, rather than learners. All through school and life, it is not the person who learned the most who is rewarded but rather the person who came in first — the person who scored the highest. High performance is what is valued, not high learning. The downside to this is that high performers, without balancing high learning, will ultimately quit trying when they aren’t successful. They may leave avenues with an obstacle to success unexplored. Learners see the obstacle for what it is — a momentary blip to be dealt with. Failure is never failure; for the learner, it is simply an opportunity to learn.

I could point to other examples, but you get the message. It’s better perhaps to remind ourselves that there are different levels of failure: some are good for learning, while others are just…bad failure. What do I mean? You don’t want the engineer who designs your car’s steering system, or the factory worker who installs the parts, to fail. They might learn something from the catastrophic failure, but at what cost? In this post about innovation, Michael Schrage makes the point that the best kind of failure for learning and innovation is some sort of partial failure – where you fail enough for things to go wrong, but not so destructively that there’s nothing to learn from at all. That kind of total failure is the type likely to result in quitting. I think what he’s saying is that we need the type of failure that takes us from version one to version two. Our libraries already have enough examples of systems so poorly designed that they lead to the type of failure that makes students and faculty just want to give up on them. We don’t need more of that.

With so much discussion about the importance of failure for learning and innovation, how are academic library educators incorporating these lessons into research instruction? Are we still spending more time on making sure that students get things right, rather than on designing an experiential learning situation that allows them to make mistakes and learn from them? Admittedly, this is difficult to achieve in one course session. Laufenberg’s examples are week- or month-long projects that allow students to learn through experience at a more reasonable pace, as is often the case in the real world. The literature of information literacy contains good examples of problem-based learning where students are put into more realistic situations that require them to locate and use information to help solve a problem. That does get more to the point of experiential learning, but I’m not sure how the making of mistakes is capitalized upon to develop a more intense learning experience. My quick and dirty search of the library literature finds a few articles on the importance of learning from failure, but these are mostly geared to the profession – encouraging risk taking. What I don’t find are articles providing good examples of instruction designed with some intentional failure component that is there to ultimately aid students in learning how to think for themselves when they are dealing with information overload. If employers are looking for the type of people who are comfortable with failure as long as they learn from it, and we’re about lifelong learning, perhaps that’s a skill we need to help our students develop.

In our rush to develop new “learning from failure” methods, let’s remember to recognize that as great as all this constructivist learning from mistakes sounds, there are times when a good old behaviorist approach might be better suited to the learning task at hand. If I have a room full of students who need to learn how to use the MediaMark (MRI) Plus database for a marketing assignment, I need to get them to learn the 15 steps they’ll need to get to the right data for their project. It would be pointless to get them to the first screen and then tell them to “experience” the interface and expect them to learn from the many, many mistakes they would no doubt make. This is a case where the reward for getting it done right is a successfully completed assignment.

In 2011 I hope to see better examples of risk taking in the classroom, and models of library learning where an intentional element of “learning through failing” is designed into the instruction. Despite my limited instruction opportunities, I’ll be giving more thought to this myself. Speaking of 2011, this may very well be the last ACRLog post of the year. If so, allow me to say thanks for your continued readership of ACRLog. It is greatly appreciated. On behalf of the blogging team, I hope you’ve found ACRLog worthwhile and have enjoyed the presence of new bloggers and guest bloggers throughout the year. As always, ACRLog is open to ideas for guest posts. If you have something to share about academic librarianship, get in touch with us in 2011.