Category Archives: Information Literacy

Not as simple as “click-by-click”

One of the projects I inherited as emerging technologies librarian is managing our library’s collection of “help guides.” The online learning objects in this collection are designed to provide asynchronous guidance to students completing research-related tasks. Over the last few months, my focus has been on updating existing guides to reflect website and database interface changes, as well as ensuring compliance with federal accessibility standards. With those updates nearly complete, the next order of business is to work with our committee of research and instruction librarians to create new content. The most requested guide at the top of our list? How to use the library’s discovery service rolled out during the Fall 2012 semester.

Like many other libraries, we hope the discovery service will allow users to find more materials across the library’s collections and beyond. Previously, our library’s website featured a “Books” search box to search the catalog, as well as an “Articles” search box to search one of our interdisciplinary databases. To ease the transition to the discovery system, we opted to keep the “Books” and “Articles” search boxes, in addition to adding the “one search box to rule them all”; however, these format search boxes now search the discovery tool using the appropriate document type tag. Without going into the nitty-gritty details, this method has created certain “quirks” in the system that can lead to sub-optimal search results.
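To make the quirk concrete, here is a rough sketch of how the three search boxes might map onto discovery queries. The URL and parameter names below are invented for illustration; they are not any vendor’s actual syntax:

    Single search box:  https://discovery.example.edu/search?q=jazz+history
    "Books" box:        https://discovery.example.edu/search?q=jazz+history&type=Book
    "Articles" box:     https://discovery.example.edu/search?q=jazz+history&type=Article

Because the format boxes quietly pre-apply a document type tag, their results are only as good as the tagging in the underlying records, which is exactly where the quirks creep in.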

This back-story leads to my current question about creating instructional guides for our discovery system – how do we design screencasts to demonstrate simple searches by format?

So far, this has boiled down to two options:

  1. Address the way students are most likely to interact with our system. We know users are drawn to cues with high information scent to help them find what they need; if I’m looking for a book, I’m more likely to be drawn to anything explicitly labeled “Books.” We also know students “satisfice” when completing research tasks, and many are unfortunately unlikely to care if their searches do not retrieve all possible results. Additionally, whatever we put front-and-center on our homepage is a design decision that, I think, our instructional objects need to support.
  2. Provide instruction demonstrating the way the discovery system was designed to be used. If we know our system is set up in a less-than-optimal way, it’s better to steer students away from the more tempting path. In this case, that means searching the discovery system as a whole and demonstrating how to use the “Format” limiters to find specific types of materials. While this option requires ignoring the additional search options on our website, it will also allow us to eventually phase out the “Books” and “Articles” search boxes on the website without significant updates to our screencasts.

While debating these options with my colleagues, it’s been interesting to consider how this decision reflects the complexities of creating standalone digital learning objects. The challenge is that these materials are often designed without our knowing how, when, or why they will be used; our job is to create objects that meet students at a variety of point-of-need moments. Given that objects like screencasts should be kept short and to-the-point, it’s also difficult to add context that explains why the viewer should complete activities as shown. And library instruction is not usually designed to make our students “mini-librarians.” Our advanced training and interest in information systems mean it is our job to be the experts, but our students do not necessarily need this same level of knowledge to be successful information consumers and creators.

Does this mean we also engage in a bit of “satisficing” to create instructional guides that are “good enough” but not, perhaps, what we know to be “best?” Or do we provide just enough context to help students follow us as we guide them click-by-click from point A to point B, while lacking the complete “big picture” required to understand why this is the best path? Do either of these options fulfill our goals toward helping students develop their own critical information skills?

No instruction interaction is ever perfect. In person or online, synchronous or asynchronous, we’re always making compromises to balance idealism with reality. And in the case of creating and managing a large collection of online learning objects, it’s been interesting to have conversations which demonstrate why good digital learning objects are not synonymous with “click-by-click” instructions. How do we extend what we know about good pedagogy to create better online learning guides?

 

Building a Pedagogy

Lately I’ve been thinking a lot about pedagogy. To tell you the truth, throughout graduate school I thought very infrequently about pedagogy, assuming that even as an instruction librarian, something as theoretical as pedagogy would be outside my professional bounds. Though the instruction course offered at my university did touch on aspects of designing an information literacy curriculum, it was a far cry from being a course in pedagogy. In fact, as librarians, we often become so overworked in our day-to-day tasks of making sure our resources and services are accessible that we can forget that first and foremost, we are educators. And like any highly skilled educators, we need a strong grounding in pedagogy to do our jobs.

Pedagogy is, simply, the art of education. It is how we teach, how we connect students to the curriculum, and how we position students to be learners. Pedagogy is the beating heart of the teaching professions. I come from a strong social science background, particularly one poised to challenge and investigate systems of the status quo. I spent all of my undergraduate years studying the prison industrial complex from a gender perspective, and my favorite courses in library school were on the politics of classification and knowledge production. Not surprisingly, then, I tend to frame my own practice as a librarian in terms of social progress, and I have only recently begun to consider how to bring this framework into library instruction. Yes, I want my students to be skilled in information seeking, but I also want them to be willing and able to think critically about information and the politics through which it’s produced. I take my pedagogy cues from the likes of Freire, hooks, and Zinn – in other words, I want my students to be rabble rousers.

I am extremely lucky to be part of an institution with which I share these strong social convictions. My university’s commitment to social justice and radical learning is at the core of all it does, including its library instruction. I, along with the library director, have recently begun developing a comprehensive information literacy curriculum for the library. How can we reframe the ACRL Information Literacy Standards from a more critical perspective? We have always had, and will continue to have, one-shot in-class library workshops, but we are starting to envision strategically what skills and concepts we want to deliver consistently. In addition to the traditional keyword-forming, full-text-finding skills, how can we give students the skills to think critically about the information they both find and can’t find? How can I open the discussion about the problematic nature of academic publishing? Where is the room for this agenda? It’s a lot to fit into a 50-minute one-shot.

I am in no way the first person to think about this. Many, many books have been published on this topic and continue to be published. And, indeed, many of the student-centered, critical strategies involve very few bells and whistles. A few ideas that have left me inspired:

  • Include critical reading skills in every workshop. As simple as that! It is as important as knowing how to properly cite a resource or construct a search term.
  • Have students search for articles on a purposefully controversial topic, like the link between autism and vaccines. Have them note what information is in the peer-reviewed literature, what stance it tends to take, the methodologies it tends to employ, and where alternatives may exist.
  • Show students how to find and use open-access journals and repositories. The few times I’ve done this, I’ve vetted these sources to ensure they are of high quality and repute (and explained that I’ve done so, and which criteria I used).
  • Change the way I organize my lessons. Instead of PowerPoints, I try to structure the lesson according to student suggestions and examples.
  • Leave the more traditional information literacy skills to LibGuides and other digital learning objects. I’d rather spend my precious face-to-face time on the more nuanced aspects of information seeking and point students to videos and other online resources for the more mundane tasks, like finding full text.

Where do you draw your pedagogical inspiration? Does your library have a comprehensive information literacy curriculum? Share your thoughts, resources, and inspirations in the comments section, or tweet me @beccakatharine.

“Power Searching” with Google

Google, common “frenemy” of academic librarians everywhere, has put together a short online class called Power Searching. The course is designed to teach you how to find good, quality information more quickly and easily while searching Google. When I first heard about this course, my immediate thought was “Ah, Google is stealing my job!” After I calmed down a bit, I read over the description for the course and decided to enroll. I wanted to check out our potential competition, and I hoped I might be inspired by new ideas and tools to incorporate into my teaching.

The course is divided into six classes, and each class is further broken down into short videos. Each class totals approximately 50 minutes of video content. Following each short video there is an optional opportunity to test the skills demonstrated by Daniel Russell, a senior research scientist at Google, through an activity or quiz. The course contains pre-, mid-, and post-class assessments. After successfully passing both the mid- and post-class assessments, you receive an official certificate of completion. To supplement the concepts taught in the classes, Google search experts also offer forums and Google Hangouts. When I took the course, it lasted about two weeks, and a new class was released every three days or so. The classes could be completed any time prior to the specific due date.

The classes themselves definitely hit on topics that we usually cover in our library workshops, such as choosing good keywords and thinking critically about the source of the information. But for the most part, it was more about clicking this and then clicking that…similar to a typical electronic resource demonstration. I did get bored a few times and skipped some of the activities. Also, I never had the motivation or desire to participate in any of the forums or Hangouts, but that was mainly due to my busy schedule. Despite all of this, I’m not too proud to admit that I also learned a few things – specifically, how to use certain search operators and how to do an image search.
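For readers wondering what those operators look like, here are a few examples of standard, documented Google syntax. The sample queries are mine, not the course’s:

    "information literacy" site:edu       restricts results to .edu websites
    library instruction filetype:pdf      returns only PDF documents
    dubstep -skrillex                     excludes a term from the results

Quotation marks force an exact-phrase match, site: limits results to a domain, and filetype: limits them to a file format; these are the kinds of operators the course covers.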

So, is Google stealing our jobs? No. (At least not right now.) What academic librarians do that Google cannot is work with researchers on the gray, messy stuff, like choosing a research topic, determining what types of information are needed, and figuring out the best way to use information. If more first-year and non-traditional students took the initiative to enroll in Google’s Power Searching class, I think it would help me as a librarian to focus more on those gray areas and less on the logistics of doing a simple search. While from a pedagogical standpoint I didn’t have any “Aha!” moments, I may incorporate some of their search examples into my future library sessions.

I think it would be awesome if Google collaborated with a college or university library and did this same type of class on using Google Scholar effectively for research. (If you’re reading this, Google – I’m available!)

Have any other librarians taken Google’s Power Searching class? I’d love to hear what you think of the course and its content.

Active Learning and Teaching the Teacher

Ever since I attended ACRL’s Immersion Teacher Track about a year ago, I’ve been trying to incorporate more active learning strategies into my classes—and surprisingly, it’s been a lot of fun! One unintended benefit of these activities has been the opportunity to get inside the minds of students by watching and hearing how they reason their way through this crazy journey we call research.

A couple weeks ago, I did a workshop for juniors and seniors enrolled in a music management course that requires students to write a large research paper. One problem the professor and I encountered in prior semesters was that students struggled with assessing and using their sources properly. For example, they sometimes have problems discerning when a source is heavily biased.

In an effort to get students thinking critically about assessing their sources, I came up with a group activity. Each group had a different research topic with three sources. The sources ranged from peer-reviewed articles to a Twitter search for “Skrillex” and “dubstep.” I asked students to apply the CRAAP test (the brainchild of the brilliant librarians at CSU Chico), which stands for currency, relevancy, authority, accuracy, and purpose. After applying the CRAAP test, students were instructed to decide (1) whether they would use each source for the paper and (2) how they would use it. Following the activity, each group presented their topic and three sources to the rest of the class.

I’m hopeful that this activity will eventually prove to have an effect on this group’s ability to assess and use their sources. We shall see. Nevertheless, I can definitely say that it taught me a lot about students’ perceptions of sources of information. Here are a few of my notable observations:

  1. Autobiographies and interviews: While students were able to recognize the value of these as primary sources, they didn’t seem to understand that a musician’s statements regarding his or her own success can’t be completely trusted.
  2. Blog posts: Students were really suspicious of blog posts—and they should be! But they didn’t immediately see the utility of a blog post as evidence of public opinion.
  3. Twitter: Musicians use Twitter to connect with their fans, but students didn’t recognize the potential for using it to monitor trends in music genres or musicians.

This activity made me realize how I subconsciously make assumptions about how students think. For whatever reason, I thought students would be better at discerning how to effectively use unconventional sources. I also wonder to what extent their responses were informed by what they thought I (the librarian) wanted to hear. Regardless, I am pleasantly surprised to discover how active learning activities can be used to teach the teacher.

Leaves of Graph

ACRLog welcomes a guest post from Pete Coco, the Humanities Liaison at Wheaton College in Norton, MA, and Managing Editor at Each Moment a Mountain.

Note: This post makes heavy use of web content from Google Search and Knowledge Graph. Because this content can vary by user and is subject to change at any time, this essay uses screenshots instead of linking to live web pages in certain cases. As of the completion of this post, these images continue to match their live counterparts for a user from Providence, RI not logged in to Google services.

This That, Not That That

Early this July, Google unveiled its Knowledge Graph, a semantic reference tool nestled into the top right corner of its search results pages. Google’s video announcing the product runs no risk of understating Knowledge Graph’s potential, but there is a very real innovation behind this tool, and it is twofold. For one, Knowledge Graph can distinguish between homonyms and connect related topics. For a clear illustration of this function, consider the distinction one might make between bear and bears. Though the search results page for either query includes content related to both grizzlies and quarterbacks, Knowledge Graph knows the difference.

Second, Knowledge Graph purports to contain over 500 million articles. This puts it solidly ahead of Wikipedia, which reports having about 400 million, and light years ahead of professionally produced reference tools like Encyclopaedia Britannica Online, which comprises an apparently piddling 120,000 articles. Combine that almost incomprehensible scope with integration into Google Search, and without much fanfare the world suddenly has its broadest and most prominently placed reference tool.

For years, Google’s search algorithm has been making countless, under-examined choices on behalf of its users about the types of results they should be served. But at its essence, Knowledge Graph represents a big symbolic shift away from (mostly) matching a query to web content — content that, per extrinsic indicators, the search algorithm serves up and ranks for relevance — toward openly interpreting the meaning of a search query and making decisions based on that interpretation. Google’s past deviations from the relevance model, when made public, have generally been motivated by legal requirements (such as those surrounding hate speech in Europe or dissent in China) and, more recently, the dictates of profit. Each of these moves has met with controversy.

And yet in the two months since its launch, Knowledge Graph has not been a subject of much commentary at all. This is despite the fact that the shift it represents has big implications that users must account for in their thinking, and can be understood as part of larger shifts the information giant has been making to leverage the reputation earned with Search toward other products.

Librarians and others teaching about internet media have a duty to articulate and problematize these developments. Being in many ways a traditional reference tool, Knowledge Graph presents a unique pedagogic opportunity. Just as it is critical to understand the decisions Google makes on our behalf when we use it to search the web, we must be critically aware of the claim to a newly authoritative, editorial role Google is quietly staking with Knowledge Graph — whether it means to be claiming that role or not.

Perhaps especially if it does not mean to. With interpretation comes great responsibility.

Some Questions

The value of the Knowledge Graph is in its ability to authoritatively parse semantics in a way that provides the user with “knowledge.” Users will use it on the assumption that it can do this reliably, or they will not use it at all.

Does Knowledge Graph authoritatively parse semantics?

What is Knowledge Graph’s editorial standard for reliability? What constitutes “knowledge” by this tool’s standard? “Authority”?

What are the consequences for users if the answer to these questions is unclear, unsatisfactory, or both?

What is Google’s responsibility in such a scenario?

He Sings the Body Electric

Consider an example: Walt Whitman. As of this writing, the poet’s entry in Knowledge Graph looks like this:

[Screenshot: the Knowledge Graph panel for Walt Whitman]

You might notice the most unlikely claim that Whitman recorded an album called This is the Day. Follow the link and you are brought to a straight, vanilla Google search for this supposed album’s title. The first link in that results list will bring you to a music video on YouTube:

[Screenshot: the music video in question]

Parsing this mistake might bring one to a second search: “This is the Day Walt Whitman.” The results list generated by that search yields another YouTube video at the top, resolving the confusion: a second, comparably flamboyant Walt Whitman, a choir director from Chicago, has recorded a song by that title.

Note the perfect storm of semantic confusion. The string “Walt Whitman” can refer to either a canonical poet or a contemporary gospel choir director while, at the same time, “This is the Day” can refer either to a song by The The or to one by that second, lesser-known Walt Whitman.

Further, “This is the Day” is in both cases a song, not an album.

Knowledge Graph, designed to clarify exactly this sort of semantic confusion, here manages to create and potentially entrench three such confusions at once about a prominent public figure.

Could there be a better band than one called The The to play a role in this story?

Well Yeah

This particular mistake was first noted in mid-July. More than a month later, it still stands.

At this new scale for reference information, we have no way of knowing how many mistakes like this one are contained within Knowledge Graph. Of course it’s fair to assume this is an unusual case, and to Google’s credit, they address this sort of error in the only feasible way they could: with a feedback mechanism that allows users to suggest corrections. (No doubt bringing this mistake to the attention of ACRLog’s readers means Walt Whitman’s days as a time-traveling new wave act are numbered.)

Is Knowledge Graph’s mechanism for correcting mistakes adequate? Appropriate?

How many mistakes like this do there need to be to make a critical understanding of Knowledge Graph’s gaps and limitations crucial to even casual use?

Interpreting the Gaps

Many Google searches sampled for this piece do not yield a Knowledge Graph result. Consider an instructive example: “Obama birth certificate.” Surely, there would be no intellectually serious challenge to a Knowledge Graph stub reflecting the evidence-based consensus on this matter. Then again, there might be a very loud one.

Similarly unavailable in Knowledge Graph are stubs on “evolution” or “homosexuality.” In each case, it should be noted that Google’s top-ranked search results are reliably “reality-based.” Each is happy to defer to Wikipedia.

In other instances, the stubs for topics that seem to reach some threshold of complexity and/or controversy defer to “related” stubs rather than making nuanced editorial decisions. Consider the entries for “climate change” and the “Vietnam War,” here presented in their entirety.

[Screenshots: the complete Knowledge Graph entries for “climate change” and the “Vietnam War”]

In moments such as these, is it unreasonable to assume that Knowledge Graph is shying away from controversy and nuance? More charitably, we might say that this tool is simply unequipped to deal with controversy and nuance. But given the controversial, nuanced nature of “knowledge,” is this second framing really so charitable?

What responsibility does a reference tool have to engage, explicate or resolve political controversy?

What can a user infer when such a tool refuses to engage with controversy?

What of the users who will not think to make such an inference?

To what extent is ethical editorial judgment reconcilable with the interests of a singularly massive, publicly traded corporation with wide-ranging interests cutting across daily life?

One might answer some version of the above questions with the suggestion that Knowledge Graph avoids controversy because it is programmed only to feature information that meets some high standard of machine-readable verification and/or cross-referencing. The limitation is perhaps logistical, baked into the cake of Knowledge Graph’s methodology, and it doesn’t necessarily limit the tool’s usefulness for certain purposes so long as the user is aware of the boundaries of that usefulness. Perhaps in that way this could be framed as a very familiar sort of challenge, not so different from the one we face with other media, whether it’s cable news or pop-science journalism.

This is all true, so far as it goes. Still, consider an example like the stub for HIV:

[Screenshot: the Knowledge Graph stub for HIV]

There are countless reasons to be uncomfortable with a definition of HIV implicitly bounded by Ryan White on one end and Magic Johnson on the other. So many important aspects of the virus are omitted here — the science of it, for one, but even if Knowledge Graph is primarily focused on biography, there are still important female, queer or non-American experiences of HIV that merit inclusion in any presentation of this topic. This is the sort of stub in Knowledge Graph that probably deserves to be controversial.

What portion of useful knowledge cannot — and never will — bend to a machine-readable standard or methodology?

Ironically, it is Wikipedia that, for all the controversy it has generated over the years, provides a rigorous, deeply satisfactory answer to the same problem: a transparent governance structure guided in specific instances by ethical principle and human judgment. This has more or less been the traditional mechanism for reference tools, and it works pretty well (at least up to a certain scale). Even more fundamentally, length constraints on Wikipedia are forgiving, and articles regularly plumb nuance and controversy. Similarly, a semantic engine like Wolfram Alpha successfully negotiates this problem by focusing on the sorts of quantitative information that aren’t likely to generate much political controversy. The demographics of its user base probably help too.

Of course, Google’s problem here is that it searches everything for every purpose. People use it every day to arbitrate contested facts. Many users assume that Google is programmatically neutral on questions of content itself, intervening only to organize results for their relevance to our questions; on this view, Google has no responsibility for the content itself. This assumption is itself complicated and, in many ways, was problematic even before the debut of Knowledge Graph. All the same, it is a “brand” that Knowledge Graph will no doubt leverage in a new direction. Many users will intuitively trust this tool and the boundaries of “knowledge” enforced by its limitations and the prerogatives of Google and its corporate actors.

So:

Consider the college freshman faced with all these ambiguities. Let’s assume that she knows not to trust everything she reads on the internet. She has perhaps even learned this lesson too well, forfeiting contextual, critical judgment of individual sources in favor of a general avoidance of internet sources. Understandably, she might be stubbornly loyal to the internet sources that she does trust.

Trading on the reputation and cultural primacy of Google Search, Knowledge Graph could quickly become a trusted source for this student and others like her. We must use our classrooms to provide this student with the critical engagement of her professors, librarians, and peers on tools like this one, and on the ways in which we can use them to critically examine the gaps so common in conventional wisdom. Of course Knowledge Graph has a tremendous amount of potential value, much of which can only proceed from a critical understanding of its limitations.

How would this student answer any of the above questions?

Without pedagogical intervention, would she even think to ask them?