
Failing Forward, Supporting Students

My son starts middle school in a week, so I’ve been more susceptible than usual to headlines about how parents can help their kids succeed academically. A couple of recent articles in the New York Times caught my eye. First was an opinion piece by psychologist Madeline Levine called Raising Successful Children. Levine is the author of Teach Your Children Well: Parenting for Authentic Success, and she encourages parents to stand back and let children make mistakes (within reasonable safety parameters, of course) rather than jump in to fix problems that kids should learn how to solve themselves. More recently I read a review of a new book called How Children Succeed by journalist Paul Tough. He echoes many of Levine’s points about giving kids the space to try, fail, and try again, but cautions that unless children are supported in their efforts it will be difficult for them to pick themselves up and keep going. The reviewer refers to this as a “character-building combination of support and autonomy.”

It’s easy to think of strategies for encouraging students to try, fail, and try again in a college course, as there’s time over the semester for students to work on problems and concepts that may initially elude them. I’m interested in games-based learning, and this is a familiar theme in all good games; noted education scholar James Paul Gee calls it “failing forward.” In a videogame, for example, I usually don’t finish the boss level on my first try, but I learn its attributes and weaknesses so that I can apply what I’ve learned in my next attempt (and repeat until victorious).

In academic libraries we don’t usually have the semester-length relationship with students that classroom faculty have. How can academic librarians allow — or even encourage — students to fail, but be there to support and encourage them when they do?

  • As an instruction librarian, the obvious strategy that leaps to mind for me is giving students the space to practice their research and library skills during our instruction sessions and workshops. I still struggle with my tendency to want to tell students every single thing about the library, but I’m getting better about keeping my presentation short and preserving time for students to search on their own while I make myself available to answer their questions (and watch closely so I can offer help to students who don’t explicitly ask). And if I happen to fail when demonstrating a search to students, so much the better.
  • At the Reference Desk, we can let students “drive” their search for information by turning the computer keyboard over to them so they can type their own search queries. We can support them as they sort through their results and suggest strategies for revising their searches to produce better results. This can be tricky at busy times, of course, so we may not always be able to use this approach. We can also think of roving reference as an opportunity to help students fail forward: librarians can roam the study areas in search of students who look like they may have a question or need assistance.
  • On our websites, we can embed instructional text, tutorials, and “ask a librarian” links within our electronic resources and services, or on the web pages that link to them. Ideally students will try to use these research tools themselves, but if they run into trouble or don’t find what they need, they can easily reach out and ask for our help. One caveat is that it may be difficult to determine whether students are taking advantage of the support offered rather than just failing and moving on, though usability studies and web analytics could be employed to gather usage information (a rough sketch of one approach follows this list).
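
To make that measurement idea concrete, here is a minimal sketch of what click tracking on an embedded help link could look like. Everything in it is an illustrative assumption rather than a description of any particular library platform: the “ask-a-librarian” element id, the /analytics/events collection endpoint, and the event fields are all placeholders to adapt to whatever analytics tool your site already uses.

```typescript
// Hypothetical sketch: record clicks on an embedded "Ask a Librarian" link so
// that web analytics can show whether students who hit a dead end actually
// reach out for help. The element id and endpoint below are placeholders.
const askLink = document.getElementById("ask-a-librarian");

askLink?.addEventListener("click", () => {
  // sendBeacon posts a small payload without delaying the navigation
  // triggered by the click.
  navigator.sendBeacon(
    "/analytics/events", // placeholder collection endpoint
    JSON.stringify({
      event: "ask-a-librarian-click",
      page: window.location.pathname,
      timestamp: Date.now(),
    })
  );
});
```

Paired with usability testing, even a simple count like this can hint at whether students are finding the embedded support or simply failing and moving on.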

I’m sure there are lots of other ways that academic librarians can help students try, fail, and try again — I’d be interested to hear about them. And what about students who won’t or can’t seek encouragement? How can we support them when they try and fail?

Leaves of Graph

ACRLog welcomes a guest post from Pete Coco, the Humanities Liaison at Wheaton College in Norton, MA, and Managing Editor at Each Moment a Mountain.

Note: This post makes heavy use of web content from Google Search and Knowledge Graph. Because this content can vary by user and is subject to change at any time, this essay uses screenshots instead of linking to live web pages in certain cases. As of the completion of this post, these images continue to match their live counterparts for a user from Providence, RI who is not logged in to Google services.

This That, Not That That

Early this July, Google unveiled its Knowledge Graph, a semantic reference tool nestled into the top right corner of its search results pages. Google’s video announcing the product runs no risk of understating Knowledge Graph’s potential, but there is a very real innovation behind this tool, and it is twofold. First, Knowledge Graph can distinguish between homonyms and connect related topics. For a clear illustration of this function, consider the distinction one might make between bear and bears. Though the search results page for either query includes content related to both grizzlies and quarterbacks, Knowledge Graph knows the difference.

Second, Knowledge Graph purports to contain over 500 million articles. This puts it solidly ahead of Wikipedia, which reports having about 400 million, and light-years ahead of professionally produced reference tools like Encyclopaedia Britannica Online, which comprises an apparently piddling 120,000 articles. Combine that almost incomprehensible scope with integration into Google Search, and suddenly, without much fanfare, the world has its broadest and most prominently placed reference tool.

For years, Google’s search algorithm has been making countless, under-examined choices on behalf of its users about the types of results they should be served. But at its essence, Knowledge Graph represents a big symbolic shift away from (mostly) matching a search query to web content — content that the algorithm serves up and ranks for relevance according to extrinsic indicators — toward openly interpreting the meaning of a query and making decisions based on that interpretation. Google’s past deviations from the relevance model, when made public, have generally been motivated by legal requirements (such as those surrounding hate speech in Europe or dissent in China) and, more recently, the dictates of profit. Each of these moves has met with controversy.

And yet in the two months since its launch, Knowledge Graph has not been a subject of much commentary at all. This is despite the fact that the shift it represents has big implications that users must account for in their thinking, and can be understood as part of larger shifts the information giant has been making to leverage the reputation earned with Search toward other products.

Librarians and others teaching about internet media have a duty to articulate and problematize these developments. Being in many ways a traditional reference tool, Knowledge Graph presents a unique pedagogic opportunity. Just as it is critical to understand the decisions Google makes on our behalf when we use it to search the web, we must be critically aware of the claim to a newly authoritative, editorial role Google is quietly staking with Knowledge Graph — whether it means to be claiming that role or not.

Perhaps especially if it does not mean to. With interpretation comes great responsibility.

Some Questions

The value of Knowledge Graph lies in its ability to authoritatively parse semantics in a way that provides the user with “knowledge.” Users will either use it on the assumption that it can do this reliably, or they will not use it at all.

Does Knowledge Graph authoritatively parse semantics?

What is Knowledge Graph’s editorial standard for reliability? What constitutes “knowledge” by this tool’s standard? “Authority”?

What are the consequences for users if the answer to these questions is unclear, unsatisfactory, or both?

What is Google’s responsibility in such a scenario?

He Sings the Body Electric

Consider an example: Walt Whitman. As of this writing, the poet’s entry in Knowledge Graph looks like this:

You might notice the most unlikely claim that Whitman recorded an album called This is the Day. Follow the link and you are brought to a straight, vanilla Google search for this supposed album’s title. The first link in that result list will bring you to a music video on YouTube:

Parsing this mistake might bring one to a second search: “This is the Day Walt Whitman.” The results list generated by that search yields another YouTube video at the top, resolving the confusion: a second, comparably flamboyant Walt Whitman, a choir director from Chicago, has recorded a song by that title.

Note the perfect storm of semantic confusion. The string “Walt Whitman” can refer to either a canonical poet or a contemporary gospel choir director, while, at the same time, “This is the Day” can refer either to a song by The The or to a different song recorded by that second, lesser-known Walt Whitman.

Further, “This is the Day” is in both cases a song, not an album.

Knowledge Graph, designed to clarify exactly this sort of semantic confusion, here manages to create and potentially entrench three such confusions at once about a prominent public figure.

Could there be a better band than one called The The to play a role in this story?

Well Yeah

This particular mistake was first noted in mid-July. More than a month later, it still stands.

At this new scale for reference information, we have no way of knowing how many mistakes like this one are contained within Knowledge Graph. Of course it’s fair to assume this is an unusual case, and to Google’s credit, the company addresses this sort of error in the only feasible way it could: with a feedback mechanism that allows users to suggest corrections. (No doubt bringing this mistake to the attention of ACRLog’s readers means Walt Whitman’s days as a time-traveling new wave act are numbered.)

Is Knowledge Graph’s mechanism for correcting mistakes adequate? Appropriate?

How many mistakes like this do there need to be to make a critical understanding of Knowledge Graph’s gaps and limitations crucial to even casual use?

Interpreting the Gaps

Many Google searches sampled for this piece do not yield a Knowledge Graph result. Consider an instructive example: “Obama birth certificate.” Surely, there would be no intellectually serious challenge to a Knowledge Graph stub reflecting the evidence-based consensus on this matter. Then again, there might be a very loud one.

Similarly unavailable in Knowledge Graph are stubs on “evolution” or “homosexuality.” In each case, it should be noted that Google’s top-ranked search results are reliably “reality-based.” Each is happy to defer to Wikipedia.

In other instances, the stubs for topics that seem to reach some threshold of complexity and/or controversy defer to “related” stubs rather than making nuanced editorial decisions. Consider the entries for “climate change” and the “Vietnam War,” here presented in their entirety.

In moments such as these, is it unreasonable to assume that Knowledge Graph is shying away from controversy and nuance? More charitably, we might say that this tool is simply unequipped to deal with controversy and nuance. But given the controversial, nuanced nature of “knowledge,” is this second framing really so charitable?

What responsibility does a reference tool have to engage, explicate or resolve political controversy?

What can a user infer when such a tool refuses to engage with controversy?

What of the users who will not think to make such an inference?

To what extent is ethical editorial judgment reconcilable with the interests of a singularly massive, publicly traded corporation with wide-ranging interests cutting across daily life?

One might answer some version of the above questions with the suggestion that Knowledge Graph avoids controversy because it is programmed only to feature information that meets some high standard of machine-readable verification and/or cross-referencing. The limitation is perhaps logistical, baked into the cake of Knowledge Graph’s methodology, and it doesn’t necessarily limit the tool’s usefulness for certain purposes so long as the user is aware of the boundaries of that usefulness. Perhaps in that way this could be framed as a very familiar sort of challenge, not so different from the one we face with other media, whether it’s cable news or pop-science journalism.

This is all true, so far as it goes. Still, consider an example like the stub for HIV:

There are countless reasons to be uncomfortable with a definition of HIV implicitly bounded by Ryan White on one end and Magic Johnson on the other. So many important aspects of the virus are omitted here — the science of it, for one, but even if Knowledge Graph is primarily focused on biography, there are still important female, queer or non-American experiences of HIV that merit inclusion in any presentation of this topic. This is the sort of stub in Knowledge Graph that probably deserves to be controversial.

What portion of useful knowledge cannot — and never will — bend to a machine-readable standard or methodology?

Ironically, it is Wikipedia that, for all the controversy it has generated over the years, provides a rigorous, deeply satisfying answer to the same problem: a transparent governance structure guided in specific instances by ethical principle and human judgment. This has more or less been the traditional mechanism for reference tools, and it works pretty well (at least up to a certain scale). More fundamentally, length constraints on Wikipedia are forgiving, and articles regularly plumb nuance and controversy. Similarly, a semantic engine like Wolfram Alpha successfully negotiates this problem by focusing on the sorts of quantitative information that aren’t likely to generate much political controversy. The demographics of its user base probably help too.

Of course, Google’s problem here is that it searches everything for every purpose. People use it every day to arbitrate contested facts. Many users assume that Google is programmatically neutral on questions of content, intervening only to organize results for their relevance to our questions; on this view, Google has no responsibility for the content itself. This assumption is itself complicated and, in many ways, was problematic even before the debut of Knowledge Graph. All the same, it is a “brand” that Knowledge Graph will no doubt leverage in a new direction. Many users will intuitively trust this tool and the boundaries of “knowledge” enforced by its limitations and the prerogatives of Google and its corporate actors.

So:

Consider the college freshman faced with all these ambiguities. Let’s assume that she knows not to trust everything she reads on the internet. She has perhaps even learned this lesson too well, forfeiting contextual, critical judgment of individual sources in favor of a general avoidance of internet sources. Understandably, she might be stubbornly loyal to the internet sources that she does trust.

Trading on the reputation and cultural primacy of Google Search, Knowledge Graph could quickly become a trusted source for this student and others like her. We must use our classrooms to give this student the critical engagement of her professors, librarians, and peers with tools like this one, and with the ways we can use them to critically examine the gaps so common in conventional wisdom. Of course Knowledge Graph has a tremendous amount of potential value, much of which can be realized only through a critical understanding of its limitations.

How would this student answer any of the above questions?

Without pedagogical intervention, would she even think to ask them?

The end of the book as we know it, and I feel (mostly) fine.

I’m packing for an upcoming vacation and assembling my reading material. In addition to a backlog of unread New Yorkers, I’ll bring novels (mostly new fantasy and speculative fiction) that will keep me company in airports and at the lake. I’m trying to spend as little money as possible, and so I’m gathering Kindle books borrowed from friends, Kindle and ePub books borrowed from our local public library, and one eagerly awaited 561-page print book from my library’s collection.

As a librarian, I’m comfortable navigating the library eBook universe (or is it a minefield? asteroid belt? black hole?) for personal reading. Not all of our patrons find it easy, and not all libraries can make eBooks available to the extent that they would like. The subject inspires great pride in libraries, prejudice against publishers, common sense, and passionate sensibility –

  • In June, the Pew Internet and American Life Project reported that 12% of Americans who read (and what percentage of all Americans is that?) have borrowed ebooks from their public libraries, but half of those surveyed didn’t know that libraries offered that service. Those who do borrow ebooks from public libraries report frustration – with limited selection, long waits, and incompatible formats. If more patrons are going to use ebook lending services, we’ll have to have better relationships with publishers, and more titles and formats available.
  • New York Times financial columnist Ann Carrns describes her experience trying to save money by borrowing ebooks from her local library. She reports many of the same frustrations as the subjects in the Pew survey, but had more success when she stopped searching her library’s ebook collection for known items and instead browsed the titles that were available. (I have also found this a great way to discover new authors, if you have the patience to wade through the dross.)
  • Patrons are frustrated because, according to Barbara Fister, “large trade publishers think sharing is a bug, not a feature.” Ebook publishing models don’t value the culture of collaboration and cooperation that libraries are built upon. Academic libraries may have a slight advantage here, since we tend to work with academic and nonprofit publishers, who, like scholars, “think sharing is pretty much the point of publishing.”
  • Are we better off with ebooks or without them? Librarian in Black thinks we should break up with ebooks, because they are a bad boyfriend: “Libraries and eBooks aren’t shacking up anytime soon, not for real…not as long as publishers continue to falsely view us as a threat instead of a partner.” In contrast, Steven Harris argues that our relationship with print books is just as dysfunctional and codependent.
  • Is this the end of the book as we know it? Or do ebooks represent reading’s future? Speculative fiction has always contemplated the death of the book, according to English professor Leah Price, but “what [writers] never seem to have imagined was that the libraries housing those dying volumes might themselves disappear.” Let’s hope they’re right.

Summer Projects

Ah, summer! A time when we all get to take a deep breath and work on all those things we put off during the school year. I’ve always thought that summer at an academic library is sort of a strange time. Even though it feels more relaxed in and around campus, we’re still quite busy getting things ready before the students return. Last week when I realized that it was already August, I had to stifle a feeling of panic—the summer feels like it’s slipping away along with the time to work on all my projects.

Three projects that I’ve been working on over the summer include:

  • Reviewing the collection: Our library is doing a massive and much-needed inventory and collection review project. This has involved the efforts of practically every person in the building. For my part, I’ve been looking at each of our music and theatre arts holdings and determining what could be withdrawn (teaching faculty will get the final say). There have been endless book trucks coming in and out of my office. Nevertheless, it has been a great opportunity for me to see the strengths and weaknesses of the collection.
  • Processing opera scores: A few years ago my institution received a large donation of hundreds of music scores from the wife of a former opera professor. Most of these are opera scores. The collection has sat untouched awaiting cataloging and processing. Thankfully I was able to hire a music cataloger this summer and we are almost finished with cataloging the entire collection. Some items include incredibly rare 18th century first edition opera scores. In the future, I would like to apply for a grant to digitize some of these rare materials. But for now, I’ll just be relieved and satisfied once they officially join our collection.
  • Combining the Olympics and information literacy: While I am not a huge sports fan, whenever the Olympics roll around I find myself glued to the television practically every night—especially for gymnastics, swimming, and track and field. Lately I’ve been thinking that there must be a way for me to incorporate some sort of Olympic-themed activity or research inquiry into one of my information literacy sessions this fall. So far nothing has come to me, but I have had a lot of fun perusing the official website for the Olympics—including its photo gallery, which contains over a hundred galleries organized by year and sport. The photos go as far back as the 1896 games in Athens.

What huge projects are you working on this summer and will you actually finish them?

Calling All New Academic Librarians!

Longtime readers may remember our First Year Academic Librarian Experience series that ran a few years ago. With the Fall semester just around the corner, it’s likely that lots of folks will be stepping into a new position as an academic librarian soon, or have recently. While our ACRLog regular bloggers include both seasoned and newer academic librarians, and I’m sure each of us remembers her first library job vividly, it’s enlightening for newbies and veterans alike to read about settling in to the first job as an academic librarian from those who are currently experiencing it.

With that in mind, we’d like to restart our First Year Academic Librarian Experience series and bring two new bloggers aboard for monthly posts during the 2012-2013 academic year. If you started your first job as an academic librarian any time from July 1, 2012, onward and are interested in becoming a First Year Academic Librarian blogger for ACRLog, please get in touch! Use the ACRLog Tip Page to send us:

– a sample blog post
– a brief note describing your job and your interest in blogging at ACRLog

Applications will be accepted through Friday, September 7, 2012. Questions? Leave a comment or drop us a line on the Tip Page. We look forward to hearing from you!