IFLA 2006 Papers Of Interest to Academic Librarians

The International Federation of Library Associations is meeting in South Korea in August 2006. Some of the papers being presented at the conference are already available online, so even if you are not planning to attend, these papers may be of interest:

  • Directories of Institutional Repositories: Research Results and Recommendations (from USA)
  • Enabling Library and Information Skills: Foundations for Entering Students (from Australia)
  • Integrating Information Literacy in a First-Year Course: A Case Study (from Canada)

    Sudden Thoughts And Second Thoughts

    Kudos To Educause

    I’ve previously taken higher education associations to task for not inviting us to the table when it seems clear we can contribute to the discussion and action. So it’s only fair to commend the organizations that are getting it right. While reading an article about the top ten IT issues in the latest EDUCAUSE Review, I saw that Barbara Dewey, Dean of Libraries at the University of Tennessee, is the current chair of the EDUCAUSE Current Issues Committee. Not only is an academic librarian on this committee of IT experts, she’s running it. That’s impressive, and EDUCAUSE deserves credit for its forward thinking.

    Does A Google Jockey Have To Jockey Only At Google?

    While we’re talking about EDUCAUSE, their “7 Things You Should Know About…” series is something I find quite useful, not only for my own education about new instructional technologies but also for pointing our faculty to these new pedagogies. The latest in the series is on “Google Jockeying.” What is that? A Google jockey is a participant in a presentation or class who surfs the Internet for terms, ideas, Web sites, or resources mentioned by the presenter or related to the topic. The jockey’s searches are displayed simultaneously with the presentation, helping to clarify the main topic and extend learning opportunities. It’s an interesting idea, and perhaps something librarians could use in library instruction to get or keep students engaged. Just one quibble. While one passage suggests that an instructor taking the role of Google jockey could show students other search engines, it concerns me that the name “Google Jockeying” may send the message that this teaching method can only be done with Google, and that’s just not the case. Why not call it something like “Surfing Assistant” or just plain old “Web Jockeying”? It’s not that I have a problem with Google, but anything we can do to discourage Google-centricity will help students in the long run.

    Reading Across The Web

    I came across a few worthwhile articles and posts last week. The Tomorrow’s Professor Blog carried a story about “The Lecture Club,” which describes an effort by a group of faculty to encourage the peer review of teaching. Those of us who teach could probably benefit from more peer analysis of our instruction, but it’s not an easy thing to develop, and this story may provide some incentive to give it a try. A columnist for the Atlanta Journal-Constitution discusses the “post-literacy” era in which today’s students just don’t read books. What struck me was the author’s reflection on the simplicity-complexity conundrum, characterized by students being able to digest information only in tiny, fragmentary bits. The author asks if this is the price we are paying for technology and instant access to too much information. Though a bit longer, the text of a commencement speech by Tim O’Reilly does a nice job of explaining an interesting perspective on Web 2.0. He states that the real heart of Web 2.0 is harnessing collective intelligence. Libraries have been gathering the collective intelligence of civilization for a long time, but in our collections each book is its own silo; users cannot navigate between books by following link trails. Perhaps what we need to explore further is how to tap the collective intelligence of faculty and students so that users can find information not by search alone, but through the guidance of the collective researchers within our communities. There is a wealth of collective intelligence on and among our campuses, and we are perhaps just at the beginning of an era in which any individual within the community can exploit what the collective knows.

    Commencement 2006

    And speaking of commencement speeches, I listened to a few yesterday at my own son’s college graduation. Something the university president said in his remarks resonated with me. Among the points of advice he gave the graduates was “Do not be scornful of complexity.” We challenge our students too infrequently in their undergraduate education for fear that we will alienate them. I like that the president reminded the students that anything worthwhile they achieve in their lives will take hard work and devotion, and that some complexity will certainly be encountered along the way. While academic librarians should avoid making the use of their libraries unnecessarily complicated, what more can they do to challenge students and prepare them for the complexities of life after college?

    More On XC From David Lindahl

    ACRLog recently posted about an intriguing new project at the University of Rochester’s River Campus Libraries to develop a new system known as the eXtensible Catalog (XC). To learn more about the project I submitted several questions to David Lindahl, Director of Digital Library Initiatives at the River Campus Libraries and co-principal investigator of the XC project. Many thanks to David for responding to these questions:

    What was the original impetus for this new project? Mainly a desire to improve on existing cataloging systems or something else?

    Our main goals for the development project were:

  • To investigate the benefits of the Functional Requirements for Bibliographic Records (FRBR) model in the context of a user-centered design software project. We wanted to learn whether FRBR would address real end-user needs
  • To design a catalog that would deal with other metadata formats beyond MARC, and that would have an architecture that could evolve with future standards
  • To address the long list of known usability issues that exist with current web OPAC components of most major integrated library systems
  • To not build an entire ILS, but to focus on the metadata platform and the user interface. XC would interoperate with any existing ILS
  • To leverage our experience in work-practice study methodology to uncover unmet user needs related to library catalog functionality, and to address those needs.

    This year, we will be conducting a survey of related projects, reaching out to other institutions doing similar work, creating a set of software requirements, and analyzing existing user study data to inform our collaborative XC development project. There is no shortage of forward-thinking project work in this area, and we hope to leverage what has been done and set up a broad but effective collaboration to build XC.

    Can you describe, for the non-techies among us, just what exactly an extensible catalog is (or would be)?

    XC will be a single entry point into all of the resources (print and electronic) that a library has to offer. It will be easy to learn and powerful for experienced users. No training will be necessary, and the software will deliver initial results in one click.

    XC will incorporate a metadata infrastructure that supports the full range of metadata standards. This will allow libraries to connect XC with their library web pages, digital and institutional repositories, and subscription databases. As libraries move forward and offer new types of repositories, they continue to create new silos of information with separate interfaces. XC will consolidate these disparate systems behind appropriately integrated user interface(s).
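
    For the non-techies, here is a rough sketch of what consolidating metadata silos into one searchable index might look like. This is our own toy illustration, not XC code; the field names and mappings are invented and far simpler than real MARC or Dublin Core handling.

```python
# Toy illustration (not XC code): normalize records from different "silos" --
# a MARC-style catalog record and a repository record -- into one minimal
# common shape so a single search can cover both.

def normalize_catalog_record(marc_like):
    # Hypothetical field keys; a real MARC mapping is far richer.
    return {"title": marc_like.get("245a"),
            "creator": marc_like.get("100a"),
            "source": "catalog"}

def normalize_repository_record(dc_like):
    # Dublin Core-ish record from an institutional repository.
    return {"title": dc_like.get("dc:title"),
            "creator": dc_like.get("dc:creator"),
            "source": "repository"}

unified_index = [
    normalize_catalog_record({"245a": "Moby-Dick", "100a": "Melville, Herman"}),
    normalize_repository_record({"dc:title": "A Thesis on Whales",
                                 "dc:creator": "Student, A."}),
]

# One search box over everything, regardless of which silo a record came from.
query = "whales"
hits = [r for r in unified_index if query.lower() in (r["title"] or "").lower()]
print(hits)  # finds the repository record alongside catalog records
```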

    The XC will be easy to download and install in any library. We will be developing a set of software requirements over the next year.

    Is it possible this project could result in a different type of OPAC? If so, can you please describe that OPAC?

    A goal of XC is to develop a new type of OPAC. We hope that the end result will be a software system that works alongside an existing ILS/OPAC, and offers an alternative to the built-in web interface.

    We want the XC to be extensible, so that it can handle new types of metadata. We want other libraries to add functionality to the XC (it will be open source). And we want APIs into all the functionality so that libraries can experiment, create a range of user interfaces rather than just one, and then work with users and evolve with them.
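
    To illustrate why exposing functionality through APIs matters, here is a small sketch of our own (not the XC API; every name here is hypothetical) in which the same core search function sits behind two very different user interfaces.

```python
# Toy illustration (not the XC API): when core functionality is exposed as
# plain functions, many different interfaces can be layered on top of it.

RECORDS = [
    {"title": "Moby-Dick", "author": "Melville, Herman"},
    {"title": "Whale Biology", "author": "Smith, Jane"},
]

def search(query):
    """Core 'API': return matching records, independent of any interface."""
    q = query.lower()
    return [r for r in RECORDS if q in r["title"].lower()]

def web_style_interface(query):
    # One possible front end: an HTML-ish result list.
    return "<ul>" + "".join(f"<li>{r['title']}</li>" for r in search(query)) + "</ul>"

def text_interface(query):
    # Another front end over the same core function: plain text for a kiosk.
    return "\n".join(f"{r['title']} / {r['author']}" for r in search(query))

print(web_style_interface("whale"))
print(text_interface("whale"))
```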

    Libraries have invested a great deal of their resources in creating and maintaining metadata, but ILS software does not allow the average user to take advantage of that metadata. We hope that a more powerful search interface will help our patrons become more successful at finding and using library resources.

    Instead of building a search interface that burdens the user with complex language, multiple search boxes, cryptic choices, and overwhelming result sets, XC will offer an interface that anyone can use without training.

    The user interface will work with users to guide them to precise, comprehensive results. This might include a single search box that would search across resources that are today found in the catalog, digital repository, and subscription databases. The interface might check spelling, decrypt journal title abbreviations, connect expressions and manifestations with their respective works, and offer faceted browsing of result sets to interactively guide users to appropriate results.
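
    Faceted browsing is easier to picture with a toy example. The sketch below is our own illustration, not XC code; the records and field names are invented. It shows how facet counts over a result set could drive the kind of interactive narrowing described above.

```python
from collections import Counter

# Toy result set; titles, authors, and types are invented for illustration.
results = [
    {"title": "Moby-Dick", "author": "Melville, Herman", "type": "book"},
    {"title": "Moby-Dick (audio)", "author": "Melville, Herman", "type": "audiobook"},
    {"title": "Whale Biology", "author": "Smith, Jane", "type": "book"},
]

def facet_counts(records, field):
    """Count how many results fall under each value of a facet field."""
    return Counter(r[field] for r in records)

# An interface could display these counts as clickable refinements.
print(facet_counts(results, "author"))  # Counter({'Melville, Herman': 2, 'Smith, Jane': 1})
print(facet_counts(results, "type"))    # Counter({'book': 2, 'audiobook': 1})

# Choosing a facet value simply filters the result set.
books_only = [r for r in results if r["type"] == "book"]
print([r["title"] for r in books_only])  # ['Moby-Dick', 'Whale Biology']
```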

    What do you think about the criticism of the OPAC (e.g., “OPACs suck”) in recent years? Is the project in any way a response to these critiques of the OPAC?

    Yes. OPACs have not evolved. They have changed, but I think the missing piece has been the lack of a user-centered design process surrounding OPAC development. The reason for XC is the same as the reason Ex Libris is working on Primo, and the same reason that NCSU implemented Endeca for their catalog.

    How might the XC differ from what NCSU has developed with their Endeca OPAC? Is there any similarity at all or will your project take the catalog in some other direction?

    NCSU’s catalog is a huge step forward in catalog interface design and usability. NCSU is at the forefront in this area. The NCSU interface is built on top of the Endeca product.

    The XC project is similar to the Endeca product in that it will provide faceted browsing of catalog records and other types of records (on key facets like author, subject, and material type).

    The XC project is different in the following ways:

  • Available for download at no cost
  • Designed to be easily adopted, customized, and extended by any academic library
  • Guided by an open-source software model encouraging user-centered enhancements from participating libraries
  • Designed to act as a metadata repository (Endeca is more like a fast index of metadata stored elsewhere)
  • Offers support for a variety of metadata formats, and will be extensible for future formats
  • Easy to integrate into a metasearch environment
  • True FRBR data model support (for example: interface groups by work, expression, manifestation, and item level metadata; a toy sketch of this grouping appears after this list)
  • Based on user-centered design methodology and work-practice study of library users
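
    To give non-catalogers a feel for what FRBR-style grouping means in practice, here is a toy sketch of a work/expression/manifestation/item hierarchy. This is our own illustration in Python, not the XC data model; the titles and structure are invented.

```python
# Toy FRBR-style hierarchy (not the XC data model): a work groups its
# expressions, each expression its manifestations, each manifestation its items.
work = {
    "work": "Moby-Dick",
    "expressions": [
        {
            "expression": "English text",
            "manifestations": [
                {"manifestation": "Penguin Classics paperback, 2003",
                 "items": ["copy in the stacks", "copy at the branch library"]},
            ],
        },
        {
            "expression": "Spoken-word recording",
            "manifestations": [
                {"manifestation": "Audiobook on CD, 2005",
                 "items": ["CD set at the media desk"]},
            ],
        },
    ],
}

# A FRBR-aware interface can show one result per work and let the user drill
# down to a specific copy instead of wading through duplicate records.
for expr in work["expressions"]:
    for man in expr["manifestations"]:
        print(work["work"], "->", expr["expression"], "->", man["manifestation"])
```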

    At least two recent national reports on how cataloging systems need to change suggested moving away from LC subject headings. Would the extensible catalog be different in the way it describes library materials?

    XC will facilitate access to the metadata that libraries already have, including LC Subject Headings, but will also be able to incorporate changing metadata practices and standards in the future – which could include a move away from LCSH, should that occur. We also see the potential that our work on XC may inform future decisions on metadata practices and standards.

    ACRLog thanks David Lindahl for sharing with us more details about XC. We are sure that the academic library community will be hearing more about this exciting project in the near future.

    Book Soup

    Kevin Kelly has a manifesto, “Scan This Book,” in the New York Times Magazine. He suggests readers are about to enter paradise as books are digitized. Not only will the third world have access to the world’s greatest libraries (terrific; can we have reliable electricity with that?) but by swimming in a liquid sea of book soup everything written will be reinscribed, shared, modified in creative new ways, uncovered, linked, reborn. “In the clash between the conventions of the book, and the protocols of the screen, the screen will prevail.”

    There is always in these utopian dreams the assumption that books within covers are separate and unsearchable, and that readers of printed books have never shared their experiences, as if sharing and blending can only happen if the texts are digital. Libraries don’t lock books up; they put them together so they can be discovered. And readers have always shuffled, even when what they’re shuffling is on paper pages. Will discovery be easier with an electronic search engine? Certainly, if you’re looking for something specific. But a small library is sometimes better for discovery than a huge one. It’s just a different kind of discovery.

    Digitizing books does highlight the problem of using “copy” as a key concept of law in a digital age. But being able to search the content of books online won’t fundamentally change the way we read or write. We already tag, shuffle, share, and reinscribe. The only real change in how we do this will come if publishers react to the “threat” of access (or the opportunity of licensing access) by moving to a pay-per-view model. Or if the invitation to modify texts means erasing the embarrassing bits of history.

    The “revolutionary” affordances that Kelly describes of the digital library are true of the traditional library. Let’s hope we don’t lose them as publishers overreact to manifestos like these – or as people in power try to rewrite the record or sell it to the highest bidder. As Linda Kerber, President of the American Historical Association, points out in a frightening essay in the Chron, this utopian dream has a dark side. If we’re not vigilant, we could lose our national memory.

    To Improve What You Do – Study People

    Academic librarians are no strangers to asking our users “how are we doing?” Conducting user surveys, whether to measure satisfaction or service quality, is a traditional method for gauging how well the library meets the needs of its users. The results, we hope, will better inform us on how to improve library services, operations, and resources. The challenge with user surveys is that we don’t really know how accurately they measure our success. Usability studies have gained popularity more recently, but those efforts tend to focus solely on the library web site. Still, the underlying idea is the right one: learn to improve by watching what people do when they use your systems, services, or resources. ACRLog has previously reported on how librarians at the University of Rochester are using anthropological techniques to study their user community. Clearly, the popularity of such techniques is growing.

    The latest issue of PC Magazine has a lengthy article on “corporate anthropology.” It discusses how computer makers are hiring anthropologists who spend time with product users to better understand how consumers are actually using the products. From the article:

    Product development has historically been predicated on a “build it and they will come” basis. But times are changing, consumer choice is increasing and the game plan has evolved. Ethnography, a branch of anthropology, uses a variety of research methods to study people in a bid to understand human culture. Since top companies across several industries are treating ethnography as a means of designing for and connecting with potential customers, technology companies have recently begun investing significantly more research time and money into the field. At chip giant Intel, for example, the company spent approximately $5 billion on ethnographic research and development during 2004.

    The reference to “build it and they will come” should resonate with academic librarians because that is frequently how innovation occurs in our libraries. We tend to put new services or resources out there for our user communities, and then we wait to see if anyone uses them. When those new efforts flop, we lack the methods to understand why and what corrections to make. And even when new resources or services are used, without a design approach there is no formative evaluation in place to identify where improvements can be made. I see the use of anthropological techniques as fitting into a design process, a more thoughtful approach to the planning and implementation of services. But I also see a connection between “library anthropology” and “non-library professionals”: most smaller university and college libraries, with their greater resource constraints and inability to add folks like anthropologists to their staffs, will find it harder to improve their libraries using these innovative techniques.