In Google They Trust

An interesting article swam through my Twitterstream recently, one that's a perfect complement to the Project Information Literacy report that Barbara mentioned last week. It's a recent piece of research from the Web Use Project, led by Eszter Hargittai, a professor of Communication Studies at Northwestern University. The article, Trust Online: Young Adults' Evaluation of Web Content, appears in the latest issue of the International Journal of Communication (which is open access, hooray!) and reports on the information-seeking behavior of college freshmen at the University of Illinois at Chicago. Specifically, the researchers examine how students search for, locate, and evaluate information on the web.

Surveys were administered to 1,060 students, and a subset of 102 students was then observed and interviewed as they searched for information on the internet. The survey asked students to rate the criteria they use for evaluating websites and how often they apply those criteria when doing research for their coursework. Students rated several criteria as important to consider when searching for information for school assignments, including currency/timeliness, checking additional sources to verify the information, distinguishing opinion from fact, and identifying the author of the website.

However, while the students surveyed and interviewed knew that they should assess the credibility of information sources they find on the web, in practice this didn't always hold true. When researchers observed students searching for information, the students rarely assessed the credibility of websites using what faculty and librarians would consider appropriate criteria, such as examining author credentials or checking references. Instead, they placed much of their trust in familiar brands: Google, Yahoo!, SparkNotes, MapQuest, and Microsoft, among others.

Students also invested their trust in search engines to provide them with the “best” results for their research needs. While some acknowledged that search engine results are not ranked by credibility or accuracy, they asserted that in their experience the top results returned by search engines were usually the most relevant for them. Adding to the confusion, some students went right to the sponsored links on the search engine results page, which are not organic results at all but paid advertising.

Some of the students interviewed were able to differentiate between the types of information usually found on websites based on the domain name, remarking that websites with .edu and .gov addresses are the most trustworthy. But students were less clear on the differences between .org and .com. Many regarded .org websites as more trustworthy, probably because that domain was originally reserved for non-profit organizations, a restriction that no longer exists.

I highly recommend giving this article a read, as it's full of additional data and details that I'm sure will resonate with academic librarians. For me, reading this article was like stepping into one of my English Comp instruction sessions. I always devote a portion of the class to discussing how to do research on the internet, often ask students these same questions, and (usually) get the same responses. It's great to see published data on these issues, and I hope the article is widely read throughout higher ed. My one wish is that there were a way to comment directly on the article and remind faculty that librarians can collaborate with them to strengthen their students' website evaluation skills.