Monthly Archives: August 2006

Are Web Searchers Getting Better?

Some new research coming out of Indiana University Bloomington suggests that search engine users are improving their results, as evidenced by their use of more search terms. This would seem to contradict earlier research indicating that 6 out of 10 search engine users never use more than one word in their searches. The new study was designed to determine whether a true “Googlearchy” exists. That term refers to the popular notion that engines that rank results by site popularity provide inherently unfair results: they favor the most popular sites and help them grow even more popular, which may prevent far better sites from being retrieved in search results.

So the researchers set up a study in which they examined the results obtained by two different types of searchers: those who relied solely on search engines and those who browsed without search engines, instead following links from one page to another. So what happened?

[The researchers] expected the real-world data to fall somewhere between the two extremes: targeted searching and haphazard surfing. Instead, it turned out that typical Web use — presumably a combination of searching and surfing — concentrated less on popular Web sites than either model had predicted. In other words, real-world Web searching does not fuel the Googlearchy nor does it keep less-popular sites from being found.

The researchers said the outcome appears to be based on the trend that:

more and more people are searching for more specific information. If someone submits a general query, say, “bird flu,” the results at the top of a search-engine’s results page will indeed list high-traffic websites, for example, the Centers for Disease Control site. And that site’s popularity will be reinforced. But Web searches are becoming increasingly more complex, according to Menczer. A search for “bird flu Turkey 2005” will bring up far fewer results, and lead to more obscure pages.

So I’m questioning whether searchers really are getting more sophisticated in the way they search. I still tend to see many of our students using just one word or typing in rather long, formally structured sentences (usually something taken right out of an assignment). Of course, other researchers questioned the results of the Indiana study, suggesting there were some issues with the data used and whether the searchers in the study really represented average Internet searchers.

Those issues aside, as academic librarians we should be eager to promote the gist of the research findings. As one of the researchers put it, “the message here is that as soon as you become a slightly more sophisticated searcher, then you’re breaking the spell of the Web.” In other words, when you take the time to develop a more thoughtful search strategy, you take greater control over the search results rather than just settling for the most popular sites that an engine like Google spits back. And even if we can convince students of the benefits of a more sophisticated search (i.e., more than one word), we still need to contend with earlier studies indicating that only 3% of searchers tie words together with quote marks, and a mere 1% use other advanced search techniques to get better results. Just another reason why a little user education could go a long way toward helping our user communities get better, less biased search results.
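To make the idea concrete, here is the sort of progression the researchers describe. The first query is the study’s own example; the refinements are hypothetical ones of my own, and the exact operators supported vary from engine to engine:

    bird flu
    "bird flu" Turkey 2005
    "bird flu" Turkey 2005 site:who.int

Each added term or operator, like quoting a phrase or limiting the search to a single site, pulls the results away from the generically popular pages and toward more specific, often more obscure, ones.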

The Lush and Vibrant Library

Though it likely didn’t get the readership that “The Deserted Library” did, the Chronicle’s Scott Carlson followed up with a thoughtful overview of what makes new libraries work and how different libraries conceptualize what they’re trying to accomplish. Originally published last fall, it was featured recently in the Chronicle’s e-mail alert, and it’s still a timely piece, well worth reading.

For example, the University of Chicago, in keeping with its traditions, is interested in exposing students to as many physical volumes as possible; it doesn’t want the library to be a student center. According to sociology professor Andrew Abbott, “The faculty is united in thinking that this building is supposed to be the research center of one entire wing of intellectual life at the campus, and we can’t afford to let it turn into an Internet cafe.”

Hal Shill at Penn State Harrisburg conducted a survey that yielded fascinating results.

The responses from about 180 institutions revealed surprising patterns. For example, Mr. Shill found that the location of a library on a campus made little difference in its popularity among students. Library size did not matter, nor did the number of study rooms in a building or the availability of wireless access. “The presence of a cybercafe — that was a wash,” he says. “It was not a statistically significant feature, but I would recommend it as a creature comfort.”

More basic comforts rated highly: the quality of natural lighting, the quality of work spaces, the quality of the heating and air-conditioning system, and the overall ambiance of the building. Computer and Internet access — such as the number of data ports, the quality of the telecommunication system, and the quality of the public-access workstations — were also vital to the success of a building.

I recall the collib-l list discussions when the “Deserted Library” article came out. Many academic librarians feared their presidents would read it and conclude “great, we don’t need to spend all that money on a black hole after all.” This is an article you don’t want them to miss. Send it to your president, your provost, your physical plant director, and your advancement office. Right now.

Google Jockeys For Conference Sessions

If you haven’t heard about Google Jockeys, the basic idea is that an instructor assigns a student to search Google during a class session so that the class can be alerted to material on the Internet that relates to the class content. To make this work you presumably need at least two monitors or screens in the classroom, one to show the instructor’s material and one to show the Google Jockey’s search results, though it could perhaps be done with a single screen depending on how it’s handled. Since this is a relatively new practice, there is no research yet on the impact of Google Jockeys in the classroom.

So I found it interesting to read that at the next Masie Center Learning2006 conference, many of the presentations will feature a Google Jockey. According to the latest LearningTrends newsletter from the Masie Center:

During every Keynote/General Session, you will be able to see a screen with the results of on-going real time Google searches, based on the speech or interview. For example, as I am interviewing Lucy Carter from Apple, we might talk about the role of PodCasts for Blended Learning. Our Google Team will do a real time search on key elements, display it for your interest and provide an edited search list for all participants.

Can something like this really help conference attendees, or is it a trendy gimmick? Personally, I think I’d find it distracting to have a screen spewing Google results while a presentation is going on. And even assuming the idea has merit, how do I know the Google Jockey is an effective searcher? Maybe his or her searches are missing some of the best information on the topic, and it may even be that a search engine other than Google could do a better job retrieving information on the speaker’s subject. And what’s with providing an edited list of the Google results? I could certainly do my own Google search if I were that interested in the topic. I’d much prefer that the presenters develop a resource list in advance and have it ready when I get to the presentation. I can see some merits of Google Jockeying in the classroom, but I’m just not sure it’s going to work all that well at a conference.

The Learning2006 conference is also going to offer real-time mindmap development:

You will be able to watch the development of a graphical MindMap. Every concept, metaphor and conversation thread will be captured in a linkable MindMap. References to books, links and research will be added by our MindMap team. At the end of the speech, the Thought Leader, myself and a team from our CONSORTIUM will edit and expand the MindMap to give to each participant.

Now this sounds like an interesting idea. It could be a great way to obtain a visual conceptualization of a presentation, along with relevant resources. I hope the Masie Center will make some of the mindmaps available to the public. No matter how things turn out, I have to hand it to the Masie Center folks for their innovative ideas.

Who knows, maybe we’ll see a few Google Jockeys at the 13th National ACRL conference in Baltimore. Who wants to go first?

Moving Beyond Beginner’s Level

Creating Passionate Users is a popular blog, and I came across one or two other bloggers who mentioned this post that appeared there last week. The gist of the post is that feature overload in electronic devices causes their owners to simply stick with the basic default settings (sound familiar?). It made me think about our feature-laden aggregator databases. How many academic libraries stick with the default basic search screen? Basic mode hides many good features from the searcher. The author says:

If users are stuck in permanent beginner mode, and can’t really do anything interesting or cool with a thing, they’re not likely to become passionate. They grow bored or frustrated and the “tool” turns into shelfware.

That part of the post really resonated with me because I think we tend to convince ourselves that shielding our user community from some of the complexities of our library databases somehow benefits them. But that is apparently a good strategy for encouraging apathy and a lack of intellectual curiosity. These same library resources do offer features that could support the author’s other advice, which is to “help passionate users learn to do something cool.” Okay, library databases are generally the opposite of cool, but in what ways can they open students’ eyes and get them interested, activated, and on the road to developing some passion?

Like what, for example? Well, I’ve always had good success getting undergrads to sit up and pay attention when I show them those databases that incorporate tools for creating formatted citations. They tend to think that is pretty cool because deep down no one really likes writing citations.

[Screenshot: ProQuest example]

What else don’t academic searchers like? Well, they tend to love Lexis/Nexis, but they don’t care much for retrieving loads and loads of really short, unsubstantial articles (you know the ones I mean). So they find it pretty cool when I show them how to use the “length>wordcount” command (as in length>500) to limit retrieval to articles that exceed a required number of words. It’s easy to remember, takes no great skill, you don’t have to use the word “Boolean,” and it’s easy to apply because of the FOCUS feature, which they also find to be a revelation. Sure, it would be great if these things were more intuitive. Yes, there should be a prompt that asks “would you like to remove all the articles with fewer than (insert number) words?” But the reality is that we’re not there yet. Learning something like “length>” is partially about exposure to more features, but it’s also about understanding what makes some information better than other information, and how to get it more quickly and efficiently.

[Screenshot: Lexis/Nexis example]
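If you want to try it, here are a couple of sample searches. The length>500 command is the one described above; the topic terms and word counts are hypothetical examples of my own, and the Lexis/Nexis help screens are the place to confirm the exact connector syntax:

    avian flu AND length>500
    "identity theft" AND legislation AND length>1000

The first retrieves only avian flu articles that run longer than 500 words; the second weeds out the short news briefs on identity theft legislation. And as noted above, the FOCUS feature makes it easy to apply the same command to a result set you have already retrieved.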

There are probably dozens of other ways in which we can move our users beyond the beginner’s level. But to take the first step in that direction we need to put some faith in user education. Those who claim library users don’t want to learn how to search, who advocate eliminating user education, and who will tell you it all needs to be simple are the same ones who want to keep users on automatic mode, where they’ll remain bored and devoid of passion. As the author of the post said, “it’s not that we couldn’t learn how to use anything but the automatic mode…the problem was that we didn’t know why or when to use anything else.” That strikes me as a good mission for library instructors: move beyond the how and instead focus more on the why and when of our resources’ cool features. All we’ve got to lose are bored and passionless users.

Try Avoiding The “A” Word

With growing attention being paid to higher education accountability at the national level, as evidenced by the recent final report of the U.S. Commission on Higher Education, our institutions are increasingly focused on building a variety of assessment methods into the curriculum. Of course, accomplishing effective assessment across the institution is easier said than done. There are no sure-fire or easy ways to get the job done, and the effort is often met with resistance at different levels across the institution. Academic librarians are aware of the need for assessment, and as a profession we have made some significant contributions to the assessment movement at our institutions.

But even with the many articles, programs and standards related to the assessment of library services, it is something we still find difficult to grasp. In an effort to help institutions in my neck of the woods improve their understanding of and ability to conduct assessment, a regional higher education association offered a full-day assessment workshop, which I had the good fortune to attend. A theme repeated throughout the workshop was that part of the assessment challenge is the word itself. Either people don’t get it or they are averse to being a part of the process. The experts’ advice was to avoid using the “A” word at all. Instead, frame discussions about assessment in terms of the simple question “What do you want students to be able to do?” The answers to that question can then form learning outcomes for individual courses, for the institution as a whole, or for skill-attainment areas such as information literacy. Other basic questions that can contribute to both the identification of outcomes and ways to measure them include:

  • Do we meet or exceed accreditation standards?
  • Do we compare well to others?
  • Are we meeting goals?
  • Are we getting better?
  • Are we getting the most out of our investments?
Perhaps when we replace our assessment jargon with some simple questions like these, we might actually make more progress in determining the extent to which the academic library contributes to students achieving institutional learning outcomes.

Just coincidentally, later in the week, Pace University issued an assessment report that provides some interesting ideas for assessing student learning. While it’s an institutional blueprint for assessment, it makes good reading for those who wish to learn more about higher education assessment challenges and approaches. The demand for greater accountability in higher education is only likely to grow in strength. It would benefit academic librarians to develop methods to demonstrate, both quantitatively and qualitatively, how their libraries contribute to students’ academic success. Oh, and a final benefit of attending an assessment workshop: finding out that your peers are just as challenged by it as you are.