Information Literacy and Fake News

ACRLog welcomes a guest post from Candice Benjes-Small, Head of Information Literacy and Faculty Outreach, and Scott Dunn, Associate Professor of Communication, at Radford University.

One day in September, a relative emailed me a link and asked, "Should I share this on Facebook?" I took a look at the linked article, which ran under a headline full of loaded language and made some brutal accusations about one of the presidential candidates. I didn't recognize the news source hosting the article, and none of the more mainstream news sites mentioned the story. I visited my go-to fact checkers, like PolitiFact and Snopes, but found nothing about the article's topic or the site. I told my relative that I couldn't verify anything in the story or about the site, so I recommended she not share it further through social media.

I didn't know it at the time, but this was my first real engagement with what came to be called "fake news." Since the election, much has been written about this phenomenon, with PolitiFact calling it the 2016 Lie of the Year. Librarians have pointed out that acceptance of fake news reveals a weakness in information literacy skills, and have published suggestions on how libraries can counteract fake news here and here (to name just a few). The Stanford study of students' online reasoning has added fuel to the discussion, suggesting university students have very weak evaluation skills.

Of course, as just about any instruction librarian will tell you, source evaluation is a complex skill. As Mike Caulfield so eloquently argues in his piece, Yes, Digital Literacy. But Which One?, an information seeker needs a certain amount of subject expertise to truly judge whether a source on the topic is credible. And in this NSFW article, Chuck Wendig explores some of the problems of convincing people to approach an article that goes against their worldview with an open mind.

But as an instruction librarian, I’m not ready to throw in the towel. Our students are going to read fake news, and I think we can encourage them to approach sources critically. As I posted to the ILI-Listserv in September 2016:

We have a solid lesson plan for evaluating Web sites, but I'm looking for one that focuses on news sites. For example, there were a lot of conflicting reports about what actually happened during Trump's visit to Flint last week. How could the average person figure out which story to trust? What can we teach in a one-shot that would help students to evaluate the media?… My ideal lesson plan could be taught to freshmen in a 50-minute workshop, would be very hands-on, and would not leave them thinking, "All media are biased, therefore you can't trust any of them."

I discussed my quest with a few colleagues. My conversation with Dr. Scott Dunn, professor of communication, was the one that gave me the most traction. Scott's research interests include politics and mass media, so he had been watching the fake news about the presidential election with interest. He understood my concerns that common suggestions for evaluating sources often centered on superficial characteristics, such as whether the site looked professional, or used criteria that were not appropriate for news sites, like the URL's top-level domain (.com, .edu, .org). I proposed that readers needed to analyze the content of the stories themselves and look for hallmarks of quality, but I wasn't sure what those might be, or what would be realistic to expect from the average, non-expert reader.

We first grappled with a definition of "fake news." While it initially seemed to mean hyperpartisan stories, did it also include intentionally fabricated ones, like those from the satirical Onion? What about stories that turned out to be false, such as The Washington Post's (now corrected) story about Russians hacking into the electric grid? More recently, people have begun using the phrase "fake news" whenever a story doesn't fit their worldview. As Margaret Sullivan wrote in her piece, It's time to retire the tainted term fake news, "Faster than you could say 'Pizzagate,' the label has been co-opted to mean any number of completely different things: Liberal claptrap. Or opinion from left-of-center. Or simply anything in the realm of news that the observer doesn't like to hear."

Rather than focus on identifying fake news, then, we decided it made more sense to teach students how to recognize good journalism. This dovetailed well with my initial instinct to focus on the quality of the content. Scott and I, with some help from Stony Brook University's Center for News Literacy, developed these tips:

  1. Avoid judgments based solely on the source. Immediately following the election, there were numerous attempts to quantify which sites were trustworthy, such as Melissa Zimdars' False, Misleading, Clickbait-y, and/or Satirical "News" Sources and infographics that attempted to showcase media outlets' biases. The methodology used to classify sources is often opaque, and it's impossible for anyone to keep up with all the Web sites purporting to be news. A single site can also span a range of credibility. Buzzfeed has published some strong political pieces, but it also pushes listicles and silly quizzes, making it hard to say it's always an authoritative source.
  2. Refer to the Society of Professional Journalists’ Code of Ethics. While it is written for journalists, many of the principles are ones a reader can identify in a story, such as whether the author seemed to verify facts; took care not to oversimplify or sensationalize a story, even in its headline; and explained why anonymous sources needed to be unnamed.
  3. Differentiate between perspective and bias. Having and writing from a point of view is not the same as cherry-picking facts and twisting a story unfairly. We should be able to read something that doesn't fit our own worldview with an open mind, and not automatically reject it as "biased." We should also help learners understand the difference between editorials and commentaries, which are intended to be argumentative and express strong opinions, and news stories, which should do neither. Good news journalism will not mix the two.
  4. Find the original source of the story. Many sites will harvest news stories and then repackage them without any additional research or reporting. Like a game of telephone, the farther away you get from the original report, the more mangled and corrupted the story becomes. Often the original story will be linked, so you can just click to access it.  Encourage students to read this story, rather than relying on the secondary telling.
  5. Check your passion. If a story provokes a strong reaction, it may be too good (or too outrageous) to be true. For example, the pope did not endorse Trump OR Bernie Sanders. These stories can be created by satirical sites and then picked up by other outlets that treat them as straight news, or they can emerge from darker corners of the Web, feeding conspiracy theories like Pizzagate. Fact checking is essential for readers of such stories, using all of the best practices above.

Now how could I put all of this into a one-shot? In addition to my online research, I talked through my (somewhat stream-of-consciousness) thoughts with the other members of the library instruction team, who provided strong feedback and guidance. I collaborated with my colleague, Alyssa Archer, who brought her experience with critical pedagogy to the final lesson plan. All that was left was for us to try teaching it! I'm pleased to report that Alyssa and I taught the class multiple times in the fall, and have shared the resulting lesson plan, Evaluating news sites: Credible or clickbait?, on Project CORA. We weren't able to include all of the tips, but we continue to discuss how to incorporate them in future workshops.

I feel like the "fake news" phenomenon is one that just keeps morphing and growing. I could probably write a whole lot more about this, but I'm more interested in hearing what you think. How do you think information literacy can counteract the post-fact narratives, if it can at all? What tools and techniques do you recommend?

Information is Power – Even When it’s Wrong

Here is a guest post from Amy Fry, a San Diego-based librarian with whom I've done some research on aggregated databases. She was struck by the way a sloppy mistake in handling information led to a plunge in a company's stock price – and what the implications might be for information literacy. If you're low on energy and thinking a cup of strong coffee might wake you up – hang on; this post might just do the trick.

-----
On September 8, a reporter for Income Securities Advisors, using Google, found a 2002 article from the South Florida Sun-Sentinel about United Airlines' bankruptcy. The article was undated in the paper's archive, but the site used a header displaying the current date, so the Google News crawler, indexing the site Saturday night, applied the date of September 6, 2008 to the story. Mistakenly identifying the article as current, the reporter summarized it and sent it to her editor, who posted it to the ISA wire service. The headline was aggregated by Bloomberg (ISA is independent of Bloomberg News) and seen by Wall Street traders, and even though the company caught the mistake and removed the headline within 13 minutes (and Bloomberg itself posted a correction), a trading frenzy had already taken hold, causing United to lose 75% of its stock value in under an hour.
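To make the failure mode concrete, here is a minimal sketch in Python. The page markup and the date heuristic are hypothetical (the actual Google News crawler logic is not public); the point is simply that when an article carries no date of its own, a crawler that guesses from surrounding page text will pick up whatever the site's shared header happens to say.

```python
import re

# Hypothetical page markup: the article body carries no publication date,
# but the site's shared header template prints whatever "today" is when
# the page is served -- the situation described above.
PAGE = """
<div id="site-header">Sun-Sentinel.com - Saturday, September 6, 2008</div>
<div id="article">
  <h1>UAL Files for Bankruptcy</h1>
  <p>United Airlines filed for Chapter 11 protection ...</p>
</div>
"""

MONTHS = ("January|February|March|April|May|June|July|August|"
          "September|October|November|December")

def infer_date(html):
    """A deliberately naive crawler heuristic: take the first thing that
    looks like a date anywhere on the page. With no per-article metadata,
    the only match is the header's serving date, not the story's real
    2002 dateline."""
    match = re.search(rf"(?:{MONTHS}) \d{{1,2}}, \d{{4}}", html)
    return match.group(0) if match else None

print(infer_date(PAGE))  # -> "September 6, 2008": a 2002 story dated 2008
```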

This story contains powerful lessons about information literacy.

One: Proper metadata is important.

Metadata experts have been trying for years to promote universal standards for describing and applying information about content objects, online and elsewhere, and this is why. Why was this article undated when other articles from the same news archive were dated, and how could a header date be mistaken for the date of unrelated content? The answer is: improper application and use of metadata. One reason we teach students to use library resources is that we believe properly indexed information, with standard subject headings and descriptive metadata that is uniformly formatted and properly mapped, helps the user find and evaluate information. As this story shows, such indexing can also help information seekers avoid costly mistakes. Arriving at universal metadata standards is a complicated problem, but the hard work we do as information scientists toward solving it is not wasted.
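By way of contrast with the sketch above, here is a hypothetical example of the remedy: a publication date declared in explicit, machine-readable metadata that an indexer can trust instead of guessing from page furniture. The "article:published_time" property is a real convention (from the Open Graph protocol); the markup and the 2002 date are illustrative, not taken from the actual Sun-Sentinel page.

```python
from html.parser import HTMLParser

# A sketch of the remedy: the publication date declared in explicit,
# machine-readable metadata. Markup and date are illustrative only.
PAGE = """
<head>
  <meta property="article:published_time" content="2002-12-10">
</head>
<div id="site-header">Sun-Sentinel.com - Saturday, September 6, 2008</div>
<h1>UAL Files for Bankruptcy</h1>
"""

class DateExtractor(HTMLParser):
    """Prefer declared metadata; if it is absent, report nothing rather
    than falling back to a guess from a header date."""
    def __init__(self):
        super().__init__()
        self.published = None

    def handle_starttag(self, tag, attrs):
        attr = dict(attrs)
        if tag == "meta" and attr.get("property") == "article:published_time":
            self.published = attr.get("content")

parser = DateExtractor()
parser.feed(PAGE)
# The story keeps its true date no matter when it is crawled.
print(parser.published or "undated -- do not apply a date")  # -> 2002-12-10
```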

Two: There is no substitute for critical thinking about sources.

The reporter and her editor did not think critically about where the information was coming from and why it might require a second glance. Even if she didn't have the background to know that United had declared and emerged from bankruptcy within the previous ten years, critical thinking about sources should have prompted her to ask why this story was reaching her first through Google News from a South Florida Tribune affiliate instead of the Wall Street Journal or another primary source of financial news. We teach students to examine a variety of points to determine the authority of an information source: an identifiable author, author affiliation, publisher and publisher affiliation, traceable references, and external peer review. All of these can help them ascertain whether the sources they find are reliable, even if they do not have extensive prior exposure to the subject of their research question. This story proves that there are no shortcuts to determining the authority of sources, and no substitute for critical thinking.

Three: Sometimes aggregators are misleading.

Aggregators play a valuable, but complicated, role in information provision. Bloomberg not only provides information to its subscribers; it also aggregates information from other services and packages that information with its own. The "more is more" and "bigger is better" philosophy has become commonplace in the world of information aggregation, and librarians tend to agree (see Fister, Gilbert and Fry in the July 2008 issue of portal). But, as this story shows, it comes with certain pitfalls. Aggregators have neither the means nor the desire to vet every item of information they provide in their products, yet the distinction between their role as aggregator and their role as authoritative information provider is blurred. Their own status often lends authority to the information they package, authority they disclaim as unintended when that information proves faulty. As this story demonstrates, more oversight of aggregators and by aggregators, and a demand for quality over quantity, should be a priority for librarians, especially in this age of information overload.

Four: Google is more powerful than we even realized.

If any one of you has been underestimating the role of Google in the information food chain, STOP. Google has enormous power to direct culture through the control of information. While the company sticks to its mantra of "Don't Be Evil," this story proves that high-stakes, real-world consequences can unfold in moments through Google, without Google's knowledge or intervention and even without intentional sabotage. Google has changed the way we find, use, and even produce information, but with great power comes even greater responsibility. Librarians have raised important points about the ethical dimensions of private information ownership in conjunction with the Google Books digitization project. We have warned students to be careful when using Google as a research tool. A private company is not required to act in the public interest; academic librarians, as educators, are. As more and more information is accessed through and archived by private companies (for example, EEBO is still a proprietary resource, despite the age of its content), librarians must take on greater responsibilities as watchdogs for the public interest. Even if our roles are changing, our mission must not.