Narrative as Evidence

This past week I attended the MLGSCA & NCNMLG Joint Meeting in Scottsdale, AZ. What do all these letters mean, you ask? They stand for the Medical Library Group of Southern California and Arizona (MLGSCA) and the Northern California and Nevada Medical Library Group (NCNMLG). So basically it was a western regional meeting of medical librarians. I attended sessions covering survey design, information literacy assessment, National Library of Medicine updates, using Python to navigate e-mail reference, and systematic reviews, and I browsed so many engaging posters! Of course, it was also an excellent opportunity to network with others and learn what different institutions are doing.

The survey design course was especially informative. As we know, surveys are a critical tool for librarians. I learned how certain question types (ranking, for example) can be misleading, how to avoid asking double-barreled questions, and how not to ask a leading question (e.g., “Do you really, really love the library?!?”). Of course, these survey design practices reduce bias and help produce more accurate results. The instructor, Deborah Charbonneau, reiterated that you can only do the best you can with surveys. And while this seems obvious, I feel that librarians can be a little perfectionistic. But let’s be real: it’s hard to know exactly what everyone thinks and wants through a survey. So yes, you can only do the best you can.

The posters and presentations about systematic reviews covered evidence-based medicine. As I discussed in my previous post, the evidence-based pyramid prioritizes research that reduces bias. Sackett, Rosenberg, Gray, Haynes, and Richardson (1996) helped conceptualize the three-legged stool of evidence-based practice: clinical decisions should integrate (1) the best research evidence, (2) clinical expertise, and (3) patient values and preferences. As medical librarians, we generally focus on strategies for finding the best research evidence. Simple enough, right? Overall, the conference was informative, social, and not overwhelming – three things I enjoy.

On my flight home, my center shifted from medical librarianship to Joan Didion’s Slouching Towards Bethlehem. The only essay in the collection I had previously read was “On Keeping a Notebook”, which had been assigned in a memoir writing class I took a few years ago. (I promise this is going somewhere.) In the essay, Didion discusses how she has kept a form of notebook, not a diary, since she was a child. The notebooks held random notes about people or things she saw or heard, sometimes with a time or location. These tidbits couldn’t possibly mean anything to anyone but her. And that was the point. The pieces of information she jotted down over the years reminded her of who she was at that time. How she felt.

I took this memoir class in 2015 at Story Studio Chicago, a lofty spot in the Ravenswood neighborhood of Chicago. It was trendy and up and coming. At the time, I had just gotten divorced, my dad had died two years prior, and I had discovered my passion for writing at the age of 33. So I was certainly feeling quite up and coming (and hopefully I was also trendy). Didion’s essay was powerful and resonated with me (as it has for so many others). After I started library school, I slowed down with my personal writing and focused on working and getting my degree, which allowed me to land a fantastic job at UCLA! Now that I’m mostly settled into all the newness, I have renewed my commitment to writing and reading memoir and creative non-fiction. I feel up and coming once again after all these changes in my life.

As my plane ascended, I opened the book and saw that I had left off right at this essay. I found myself quietly verbalizing “Wow” and “Yeah” multiple times during my flight. I was grateful that the hum of the plane drowned out my voice, but I also didn’t care if anyone heard me. Because if they did, I would tell them why. I would say that the memories we have are really defined by who we were at that time. I would add that memory recall is actually not that reliable. Ultimately, our personal narrative is based upon the scatterplot of our lives: our actual past, present, and future; our imagined past, present, and future; our fantasized past, present, and future. As Didion (2000) states:

I think we are well advised to keep on nodding terms with the people we used to be, whether we find them attractive company or not. Otherwise they turn up unannounced and surprise us, come hammering on the mind’s door at 4 a.m. of a bad night and demand to know who deserted them, who betrayed them, who is going to make amends. We forget all too soon the things we thought we could never forget. We forget the loves and the betrayals alike, forget what we whispered and what we screamed, forget who we were. (p. 124)

What does this have to do with evidence-based medicine? Well, leaving a medical library conference and floating into this essay felt like polar opposites. But were they? While re-reading the essay, I found myself considering how research evidence, with its focus on reducing bias (or increasing perspectives), and personal narrative can be connected. They may not seem related, but they are really part of a larger scholarly conversation. While medical librarians focus upon the research leg of the three-legged stool, we cannot forget that clinical expertise (based upon personal experience) and patient perspective (also based upon personal experience) provide the remaining foundation for the stool.

I also wonder about how our experiences are reflected. Are we remembering who we were when we decided to become librarians? What were our goals? Hopes? Dreams? Look back at that essay you wrote when you applied to school. Look back at a picture of yourself from that time. Who were you? What did you want? Who was annoying you? What were you really yearning to purchase at the time? Did Netflix or Amazon Prime even exist?? Keeping on “nodding terms” with these people means these former selves don’t “turn up unannounced”. It allows us to ground ourselves and remember where we came from and how we came to be. And it is a good reminder that our narratives are our personal evidence, and they affect how we perceive and deliver “unbiased” information. I believe that the library is never neutral, so I am always wary of claiming a lack of bias in research, no matter what. I prefer to be transparent about both the strengths and the pitfalls of evidence-based research.

A couple of creative ways I have seen this reflected in medicine are narrative medicine, JAMA’s Poetry and Medicine section, and expert opinion pieces in journals (the bottom of the evidence-based pyramid). Yes, these are biased. But I think it’s critical that we not forget that medicine ultimately heals the human body, which comprises the human experience. Greenhalgh and Hurwitz (1999) propose:

At its most arid, modern medicine lacks a metric for existential qualities such as the inner hurt, despair, hope, grief, and moral pain that frequently accompany, and often indeed constitute, the illnesses from which people suffer. The relentless substitution during the course of medical training of skills deemed “scientific”—those that are eminently measurable but unavoidably reductionist—for those that are fundamentally linguistic, empathic, and interpretive should be seen as anything but a successful feature of the modern curriculum. (p. 50)

Medical librarians are not doctors. But librarians are purveyors of stories, so I do think we have a stake in more than one leg of this evidence-based stool. I would encourage librarians of all types to seek out these outside perspectives and ground themselves in the everyday stories of healthcare professionals, of patients, and of ourselves.


References

  1. Didion, J. (2000). Slouching towards Bethlehem. New York: Modern Library.
  2. Greenhalgh, T., & Hurwitz, B. (1999). Why study narrative? BMJ: British Medical Journal, 318(7175), 48–50.
  3. Sackett, D. L., Rosenberg, W. M., Gray, J. A., Haynes, R. B., & Richardson, W. S. (1996). Evidence based medicine: What it is and what it isn’t. BMJ: British Medical Journal, 312(7023), 71–72. doi:10.1136/bmj.312.7023.71


Information Literacy and Fake News

ACRLog welcomes a guest post from Candice Benjes-Small, Head of Information Literacy and Faculty Outreach, and Scott Dunn, Associate Professor of Communication, at Radford University.

One day in September, a relative emailed me a link and asked, “Should I share this on Facebook?” I took a look at the linked article, which had a headline full of loaded language and made some brutal accusations about one of the presidential candidates. I didn’t recognize the news source hosting the article, and none of the more mainstream news sites mentioned the story. I visited my go-to fact checkers, like PolitiFact and Snopes, but found nothing about the article’s topic or the site. I told my relative that I couldn’t verify anything in the story or the site, so I recommended she not share it further on social media.

I didn’t know it at the time, but this was my first real engagement with what came to be called “fake news.” Since the election, much has been written about this phenomenon, with PolitiFact calling it the 2016 Lie of the Year. Librarians have pointed out that acceptance of fake news reveals a weakness in information literacy skills, and they have published suggestions on how libraries can counteract fake news here and here (to name just a few). The Stanford study added fuel to the discussion, suggesting university students have very weak evaluation skills.

Of course, as just about any instruction librarian will tell you, source evaluation is a complex skill. As Mike Caulfield so eloquently argues in his piece, Yes, Digital Literacy. But Which One?,  an information seeker needs a certain amount of subject expertise to truly judge whether a source on the topic is credible. And in this NSFW article, Chuck Wendig explores some of the problems of convincing people to read an article that goes against their worldview with an open mind.

But as an instruction librarian, I’m not ready to throw in the towel. Our students are going to read fake news, and I think we can encourage them to approach sources critically. As I posted to the ILI-Listserv in September 2016:

We have a solid lesson plan for evaluating Web sites, but I’m looking for one that focuses on news sites. For example, there were a lot of conflicting reports about what actually happened during Trump’s visit to Flint last week. How could the average person figure out which story to trust? What can we teach in a one-shot that would help students to evaluate the media?… My ideal lesson plan could be taught to freshmen in a 50-minute workshop, would be very hands-on, and would not leave them thinking, “All media are biased, therefore you can’t trust any of them.”

I discussed my quest with a few colleagues. My conversation with Dr. Scott Dunn, professor of communication, was the one that gave me the most traction. Scott’s research interests include politics and mass media, so he had been watching the fake news about the presidential election with interest. He understood my concerns that common suggestions for evaluating sources often centered on superficial characteristics, such as whether the site looked professional, or used criteria which were not as appropriate for news sites, like the URL’s top-level domain name (.com, .edu, .org). I proposed that readers needed to analyze the content of the stories themselves and look for hallmarks of quality, but I wasn’t sure what those might be, or what would be realistic to expect from your average, non-expert reader.

We first grappled with a definition for “fake news.” While it initially seemed to mean hyperpartisan stories, did it also include intentionally fake ones, like the satirical Onion? What about stories that turned out to be false, such as The Washington Post’s (now corrected) story about Russians hacking into the electric grid?  More recently, people have begun using the phrase “fake news” whenever a story doesn’t fit their world view. As Margaret Sullivan wrote in her piece, It’s time to retire the tainted term fake news, “Faster than you could say ‘Pizzagate,’ the label has been co-opted to mean any number of completely different things: Liberal claptrap. Or opinion from left-of-center. Or simply anything in the realm of news that the observer doesn’t like to hear.”

Rather than focus on identifying fake news, then, we decided it made more sense to teach students how to recognize good journalism. This dovetailed well with my initial instinct to focus on the quality of the content. Scott and I, with some help from Stony Brook University’s Center for News Literacy, developed these tips:

  1. Avoid judgments based solely on the source. Immediately following the election, there were numerous attempts to quantify which sites were trustworthy, such as Melissa Zimdars’ False, Misleading, Clickbait-y, and/or Satirical “News” Sources and infographics that attempted to showcase media outlets’ biases. The methodology used to classify sources is often opaque, and it’s impossible for anyone to keep up with all the websites purporting to be news. A single site’s credibility can also vary: BuzzFeed has published some strong political pieces, but it also pushes listicles and silly quizzes, making it hard to say it’s always an authoritative source.
  2. Refer to the Society of Professional Journalists’ Code of Ethics. While it is written for journalists, many of the principles are ones a reader can identify in a story, such as whether the author seemed to verify facts; took care not to oversimplify or sensationalize a story, even in its headline; and explained why anonymous sources needed to be unnamed.
  3. Differentiate between perspective and bias. Having and writing from a point of view is not the same as cherry picking your facts and twisting a story unfairly. We should be able to read something that doesn’t fit our own world view with an open mind, and not automatically reject it as “biased.” We should also help learners understand the difference between editorials and commentaries, which are intended to be argumentative and express strong opinions, and news stories, which should not. Good news journalism will not mix the two.
  4. Find the original source of the story. Many sites will harvest news stories and then repackage them without any additional research or reporting. Like a game of telephone, the farther away you get from the original report, the more mangled and corrupted the story becomes. Often the original story will be linked, so you can just click to access it.  Encourage students to read this story, rather than relying on the secondary telling.
  5. Check your passion. If a story incites you, it may be too good or too outrageous to be true. For example, the pope did not endorse Trump OR Bernie Sanders. These stories can be created by satirical sites and then picked up by other outlets, which treat them as straight news; or they can emerge from the darker Web, feeding conspiracy theories like Pizzagate. Fact checking is essential for readers of these stories, using all of the above best practices.

Now how could I put all of this into a one-shot? In addition to my online research, I talked through my (somewhat stream-of-consciousness) thoughts with the other members of the library instruction team, who provided strong feedback and guidance. I collaborated with my colleague, Alyssa Archer, who brought her experience with critical pedagogy to the final lesson plan. All that was left was for us to try teaching it! I’m pleased to report that Alyssa and I taught the class multiple times in the fall, and we have shared the resulting lesson plan, Evaluating news sites: Credible or clickbait?, on Project CORA. We weren’t able to include all of the tips, but we continue to discuss how to incorporate them in future workshops.

I feel like the “fake news” phenomenon is one that just keeps morphing and growing. I could probably write a whole lot more about this, but I’m more interested in hearing what you think. How do you think information literacy can counteract post-fact narratives, if it can at all? What tools and techniques do you recommend?