Category Archives: Simplicity vs. Complexity

Use this category for items that relate to an ongoing discussion of finding a balance between giving users simplicity and expecting them to deal with complexity.

Managing E-Resources For Users, 100%

I returned to electronic resources librarianship – and full-time work – 16 months ago in a brand-new e-resources coordinator position at an academic library. The catch? It was in public services.

Not many e-resources librarians live among the folks in reference and instruction – link resolvers, proxy servers, A-Z lists, COUNTER compliance, and ERMs usually keep us pretty close to our colleagues in acquisitions, serials, and IT. Public services librarians, who spend their days building relationships with teaching faculty, performing classroom instruction, and juggling reference questions, don’t have time to worry about the circuitous, detailed process involved in e-resources acquisitions and maintenance. Likewise, technical services and technology staff don’t necessarily see the daily impact their work and decisions have on users. Caught in the middle, I found the transition difficult. As a public services librarian, I got to do things like teach and work reference in a way most e-resources librarians don’t. But I also had limited opportunities to connect with my colleagues on the technical side, leaving me out of the decision-making loop at crucial points.

Despite its necessary involvement in technical processing, I feel that electronic resources librarianship is actually very well suited to public services. In my previous e-resources position, at a small college, I managed e-resources from a public services position because we all did public services, and our close contact with students, faculty, and each other helped us stay focused on making decisions that we thought were good for users, even if, for collections, they were merely good enough. How did that affect my approach to e-resources management? For one, I didn’t get into our systems from the back end – I used the front end, the way our students did, and still do. I didn’t care at all how our records were constructed and linked in the ILS – in fact, most of our e-resources weren’t in the ILS at all, because that’s not how our users found them. Instead, I cared about how items were labeled and displayed so people could understand what they were and what they did. I was never preoccupied with usage statistics; I was more interested in promoting use. Those concerns were at the forefront of my mind because they were on the minds of the people I interacted with most often – other reference and instruction librarians.

Early job ads for e-resources librarians emphasized public services skills like reference and instruction (Fisher 2003: “the position title of Electronic Resources Librarian has been pre-empted by the public service sector of the profession”); over the years, these changed to emphasize more specialized technical skills – licensing, web development, and customization (Albitz & Shelburne 2007). Why the shift? My guess is that early e-resources required a lot of instruction to use, even for other librarians (I remember trying to use InfoTrac as a frustrated undergraduate in 1998 – a lot of librarian intervention was required before I got it), and public services librarians became the early adopters of many of the first online resources. But as CD-ROM databases were replaced by more and more online journals (and the platforms to search these in aggregate), we tried to mainstream them into existing workflows. These workflows, however, created to acquire print objects and hold on to them forever, have proven difficult to adapt.

At the Electronic Resources & Libraries Conference in Austin, Texas, last February, Rick Lugg of R2 Consulting talked about how models for approaching e-resources management have changed. First there was the “hub,” or expert, model, in which one person in an organization was the point person for all the specialized processes and expertise required for e-resources management. This worked for small collections, but as e-resources encompassed more and more of libraries’ content and budgets and became our most-used resources, the model’s lack of scalability demanded another approach. The next management model has tried to place e-resources into traditional workflows. This is the model most of us still try to adhere to, and it is, in my opinion, another reason most e-resources work has come to rest in technical services. As one of my colleagues explained, many librarians whose jobs previously revolved around print materials feel it is essential that they have some responsibility for electronic materials; otherwise, what would their jobs become? Thus, selection and licensing of e-resources at my institution have stayed with collection development, acquisitions has handled processing, serials has handled e-journals, and IT has worked on access issues.

Rick, however, also suggested a model for the future in which libraries push much of the technical work associated with e-resources management up the food chain to consortia and collectives, freeing local librarians to deal more with acquiring, synthesizing, and communicating information about virtual materials. Some libraries are further along this path than others: in Ohio, OhioLINK (for a long time the gold standard for library consortia, in my opinion) handles licensing, acquisition, payment, and sometimes even search-interface customization for many of our e-resources – though not all. About a third are still processed locally, which means staff and workflows for every aspect of e-resources management must still be maintained in-house. Smaller consortia can absorb more of the work: the California Digital Library, for example, focuses on just the 10 UCs, which have more in common (from programs to missions to administrative systems) than the 89 OhioLINK libraries. I am interested in seeing what models the enormous new LYRASIS will adopt – it is well positioned to fulfill Rick’s prediction for the future of e-resources management, though I imagine its challenges in doing so will prove to be as huge as the collective itself.

For someone in a public services e-resources position like mine, tracking information about e-resources and the issues that affect every stage of their lifecycles (from technology developments to budget pressures, staff changes, and trends in user behavior) was an important part of my work – perhaps the most important. This view is supported by Joan Conger & Bonnie Tijerina’s assessment of e-resources management in “Collaborative Library-wide Partnerships: Managing Electronic Resources Through Learning and Adaptation” (in Collins & Carr 2008). The dynamic process of managing e-resources “requires effective incorporation of information from a rich array of sources,” they write (97). The most important information to pursue is most often stored in experience – that of vendors, library professionals, and patrons. To get at this contextual information, they say, librarians must keep current, particularly with users. They suggest “usability tests, library advisory groups, focus groups, direct observation,” as well as informal assessment, to learn new things about user behavior (99). They also remind their readers that it is important to communicate what you learn.

Interfacing between the user experience and the information required to improve it proved to be the part of my job best suited to my location in public services, and in my first year at Bowling Green I focused on user issues. I participated in web and OPAC redesign projects, resource re-description, customization, usability testing, and training. I also made an effort to stay informed: I read (Don’t Make Me Think, Studying Students, Online Catalogs: What Users and Librarians Want), I talked to vendors, and I attended conferences and sat in on webinars. But no matter how much email I sent, how many meetings I attended, or how many blogs and wikis I used, I couldn’t seem to merge my information with that of my colleagues so that together we could make our management of e-resources more effective for users. I discovered, during this period, that it’s not enough to recognize that lots of people are involved in making e-resources available; you also need a seat at the right tables so you can advocate for these materials and their users – and, in my library at least, I was sitting at the wrong table.

After a retirement incentive program was completed last fiscal year, our technical services department found itself down five people, two of them faculty librarians. Library-wide, we discussed reorganization, and a number of staff changed locations, but I was the only one who actually changed departments: officially, my position is now split, and I am 51% technical services – no longer with reference and instruction, for the first time in my career.

I’m excited about this change – everyone involved thought it would be best for the library and its collections. Many of my new tech services colleagues started their careers in reference, so a focus on the patron is embedded in their approaches to processing, cataloging, and collection management. But I also feel a little like I’ve given up a good fight. Why did I have to move to technical services? I know the answer: because that’s where a lot of e-resources work is still located. The model we had been trying – and I am convinced it is viable; I know it worked at my previous job – wasn’t scalable for a large academic library with broadly distributed functions. Not yet. However, while my location has changed, it’s promising that my job description retains many of my public services functions. I will still work reference, teach, work on public web interfaces, and participate in usability efforts. These things may officially be only 49% of my job now, but I still want everything I do to be for users, 100%.

Must Scheduling Be Sisyphean?

I was planning to post last week about something interesting I’d read in the library or higher ed news and literature, but I haven’t kept up with my reading as much as usual recently. The task that’s been occupying my time? Scheduling our English Comp library instruction sessions. It’s not the most glamorous or fun part of my job, but it is one of the most important. Every semester the scheduling process seems to drag on and on, and I find myself thinking that there has to be a better way. But once the schedule is set, my grumpiness fades away, conveniently forgotten until the beginning of the next semester. I always intend to spend time between semesters researching scheduling alternatives, but there’s usually a project that’s so much more interesting that it elbows scheduling out of the way.

We use Google Calendar to keep track of the library’s schedule (not just instruction, but reference, meetings, etc.), and I’m reasonably satisfied with it. It’s the process of scheduling classes and librarian instructors that I think could use some tweaking. In the past I’ve waited until a few days into the semester to get the final list of classes from the English Department (sometimes sections are added or canceled at the last minute, depending on enrollment). Then I’ve taken the class list and our calendar and slotted all of the sections into our library classroom schedule. And then I’ve tentatively assigned instruction librarians to the schedule, trying to make sure that no one is responsible for too many early morning, evening, or weekend sessions. Once the instruction librarians have approved their schedules, each of us has contacted the English instructors for the library sessions we’re teaching. Occasionally there’s a bit of horse-trading when an English instructor requests a date change, but usually not too much.
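For anyone inclined to script that balancing step, here’s a minimal sketch in Python – the data format and names are hypothetical, not our actual system – of a greedy pass that drafts tentative assignments while capping off-hours sessions per librarian:

```python
from collections import defaultdict

def assign_librarians(sections, librarians, offpeak_cap=2):
    """Greedy pass: give each section to the least-loaded librarian,
    skipping anyone already at the cap for early/evening/weekend slots."""
    load = defaultdict(int)      # total sessions per librarian
    offpeak = defaultdict(int)   # off-hours sessions per librarian
    schedule = {}
    for course_id, slot, is_offpeak in sections:
        # Prefer librarians who still have off-peak capacity; if everyone
        # is at the cap, fall back to the full list.
        candidates = [lib for lib in librarians
                      if not is_offpeak or offpeak[lib] < offpeak_cap]
        pick = min(candidates or librarians, key=lambda lib: load[lib])
        schedule[(course_id, slot)] = pick
        load[pick] += 1
        if is_offpeak:
            offpeak[pick] += 1
    return schedule

# Hypothetical input: (section, time slot, is it an off-hours slot?)
sections = [("ENG101-01", "Mon 8am", True),
            ("ENG101-02", "Tue 2pm", False),
            ("ENG101-03", "Wed 7pm", True)]
print(assign_librarians(sections, ["Ana", "Ben", "Cam"]))
```

Of course, the horse-trading would still happen by email – something like this would only draft the tentative assignments.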

This semester we tried something a bit different and asked the English faculty when in the semester they’d like their library session to be scheduled, emphasizing that we’d like their students to come to the session with a research topic in hand that they can use to practice searching for library and internet resources. I got a preliminary list of classes from the English department and contacted faculty a few days before classes began, but there were still a handful that I wasn’t able to get in touch with until the second week of classes. About two-thirds of the instructors responded with their preferred dates, and I was able to give most of them their first choice (I’d asked for 3 possibilities). I put the remainder of classes in our schedule as before and contacted those instructors to let them know. We also decided we’d try asking the instruction librarians to pick the classes they’d like to teach, so each of us chose our sections once the schedule was set.

I do think that scheduling went a bit more smoothly this semester, but it’s hard to know exactly why. We have significantly fewer sections of English Comp this spring than we had in the fall (64 rather than 126), which definitely affects scheduling. But in some ways I feel like the amount of time spent scheduling hasn’t changed; it’s just been spread out more evenly: I’m fielding emails from faculty and putting sessions into the calendar in dribs and drabs over the course of two weeks rather than in a couple of big, multi-hour scheduling binges. We’ll see if this method holds up in the fall.

How does your library schedule instruction sessions? Are there any tips or tricks for streamlining the process that you can share?

Something Is Better Than Nothing

As you read and learn more about design, a basic principle appears again and again: design for simplicity. In fact, one hallmark of great design is that it makes the complex simple. That said, as Garr Reynolds put it in a recent presentation, simplicity should not be confused with the simplistic. Simplistic means dumbing things down because it is easier for us. Simplicity means creating clarity where there was previously confusion. The latter best serves the end user.

I got to thinking about this after attending a recent webcast presentation sponsored by Library Journal and Serials Solutions. The point of the webcast, Returning the Researcher to the Library, was to share ideas about how librarians could get a better return on their investment in electronic resources. With all the money we spend on electronic resources, who doesn’t want to create greater awareness of their availability and gather evidence documenting how students and faculty use the library’s e-resources for their research? The presenters shared some good ideas and research findings. One of the speakers shared her library’s experience with a recently implemented catalog overlay – you’d know it from its graphic/visual search functionality. After examining search logs, the presenter pointed out that searches that got zero results in the old catalog did get results in the new catalog. What was the difference? The simplicity of the new overlay.
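For the curious, here’s roughly how that kind of log comparison could be run – a minimal sketch assuming a simple CSV log of queries and hit counts (the format is hypothetical; real systems log searches differently):

```python
import csv

def zero_result_queries(log_path):
    """Queries whose recorded hit count was zero. Assumes a CSV log with
    'query' and 'hits' columns -- an invented format for illustration."""
    with open(log_path, newline="") as f:
        return {row["query"] for row in csv.DictReader(f)
                if int(row["hits"]) == 0}

def rescued_share(old_log, new_log):
    """Share of the old catalog's zero-hit queries that return something
    in the new catalog. Note: this counts *any* hits, not relevant ones."""
    old_zero = zero_result_queries(old_log)
    new_zero = zero_result_queries(new_log)
    return len(old_zero - new_zero) / len(old_zero) if old_zero else 0.0

# e.g. print(f"{rescued_share('old.csv', 'new.csv'):.0%} of failed searches now find something")
```

Notice that a comparison like this only counts hits – which is exactly the limitation raised next.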

A good question was asked: was there any analysis of the results from the searches in the new catalog? In other words, there were results, but were they relevant? Other than one example involving a search that looked more like something a librarian rather than an end user would concoct, the answer was no – there was no analysis of the results. All we really know is that the new, simpler interface provided some results where the old, complicated interface provided none. That led to the conclusion that, from the user’s perspective, “it’s better to find something than nothing”. Do you agree with that? Isn’t it possible that the something you find is so irrelevant or worthless that it may be worse than finding nothing? Or that the something found is only one minuscule sample from a much greater body of information that will be completely ignored? “Oh great. I found something. Now I’m done with my research”. What you miss can often be much more significant than what you find. The data only show that there were zero-result searches in the old catalog; they tell you nothing about whether the searcher tried again or went and asked for help. In some cases, finding nothing may lead the searcher to rethink the search and achieve improved results. Maybe you think I’m guilty of wishful thinking here.

I suppose what mostly had me puzzled was the suggestion that simple search interfaces, rather than instruction for research skill building, are the ultimate solution to better searching and research results. It’s true that at large research institutions it will be difficult to reach every student with instruction, and there are some strategies for tackling that problem. But here’s my issue with the assumption that simple search interfaces are the solution: I don’t care how simple the interface is – if a student lacks the ability to think critically about the search problem and construct a respectable search query, the results are still likely to be poor, no matter what sort of simple overlay you offer. Garbage in is still garbage out. That’s why library instruction still has considerable potential for improving student research in the long run.

That said, I find it difficult to argue against the potential value of catalog and database search systems that will find something that can at least get someone started on their research. These simplified systems also offer potential for resource discovery, and we certainly want students and faculty to become aware of what may now be hidden collections. Despite the shortcomings, we need to explore these systems further. At least one system I examined at ALA allows librarians to customize the relevancy ranking to continually fine-tune the quality of the search results. But let’s not proceed to dismantle library instruction just yet. We need to constantly remind ourselves that creating simplicity is not the same as making search systems simplistic. Research is an inherently complex task. Instruction can help learners master and appreciate complexity. Then, on their own, they can achieve clarity when encountering complex research problems that require the use of complicated search systems. That, I think, is what we mean when we talk about lifelong learning.
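As an aside on that tunable relevancy ranking: here’s a toy illustration – not any vendor’s actual API – of what librarian-adjustable field weights might look like in principle:

```python
# Hypothetical, librarian-tunable field weights; staff could revisit these
# over time as they review the quality of search results.
FIELD_WEIGHTS = {"title": 3.0, "subject": 2.0, "description": 1.0}

def score(record, terms):
    """Sum the weights of fields containing each search term; higher ranks earlier."""
    return sum(weight
               for field, weight in FIELD_WEIGHTS.items()
               for term in terms
               if term.lower() in record.get(field, "").lower())

records = [{"title": "Pluto and the Kuiper Belt", "subject": "planetary science"},
           {"title": "Astronomy survey", "subject": "Pluto",
            "description": "dwarf planets"}]
ranked = sorted(records, key=lambda r: score(r, ["pluto"]), reverse=True)
print([r["title"] for r in ranked])  # title match outranks subject match
```

The design point is simply that ranking can be a policy librarians revisit, rather than a black box.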

Feeling Lost In A World Of Search Zombies

Maybe I’m getting more removed from mainstream search. I know that some aspects of online searching can be complex, and depending on the uniqueness of some disciplinary databases (think about using financial screening tools in NetAdvantage or ValueLine Research Center), search can reach the extremes of complexity. But I would never have thought to associate the word “complex” with three basic search functions: formulating a search question, evaluating the results, and revising the search strategy. True, these basic skills are hardly intuitive for college students, but they certainly seem within their ability to learn – and I know that many have learned them. So I was surprised to read this in a recent Jakob Nielsen column:

How difficult is it to perform a search on Google? I’m not talking about the challenge of formulating a good query, interpreting the results, or revising your search strategy to reap better results. Those are all very complicated research skills, and few people excel at them.

Complicated research skills? If you take away those basic skills, what is left of a search? Have we created a generation of search zombies who listlessly tap away at the keyboard with no strategy at all, just hoping they’ll find some information, and then mindlessly settle for whatever their first page of Google results yields? On the positive side, this suggests to me that librarians are among the few professionals who do excel at these tasks. While it’s great to know we have an increasingly rare skill, I’d feel much better if, as a profession, we were making greater progress in helping more people develop these basic search skills, or getting more recognition for what we can do.

This leaves me with two thoughts. First, if excellence in navigating the complexity of search (and mind you, Nielsen isn’t talking about library databases – he’s just referring to search engines) is a rarefied skill, why the heck can’t we leverage our expertise to raise our profile in society? You would think that the ability to cut through the web wasteland would be a prized skill that people would seek out. Second, if everyone other than librarians lacks these skills, then the state of searching and the public’s research ability must be far worse than we might have imagined. Perhaps the “good enough” (or is it now “barely good enough”) mentality has finally turned the masses into search zombies. What’s the cure for that?

Open and Closed Questions

Another way to introduce students to the idea of complexity in the research process is through open and closed questions. In Second-hand Knowledge: An Inquiry into Cognitive Authority, Patrick Wilson describes closed questions as matters which (for now) have been settled beyond practical doubt, and open questions as those on which doubt remains.

I suggest to my students that one way to focus their research is to pay attention to clues that suggest where the open questions are and to concentrate their efforts there. Wilson points out that previously closed questions can become open when new information comes to light. In class, you can illustrate this and attempt some humor with the line, “when I was your age, Pluto was a planet!” Then proceed to explain how the planetary status of Pluto became an open question with the discovery of the trans-Neptunian objects Quaoar, Sedna, and Eris. Follow this up with an example of an open question in the subject matter of the class you are teaching.

The term “research” is ambiguous. For some it means consulting some oracle – the Internet, the Library, the encyclopedia – finding out what some authority has said on a topic and then reporting on it. Fine, sometimes that’s what research is. That kind of research can be interesting, but it can also be pretty boring. What makes higher education thrilling is discovering live controversies and trying to make progress on them. Academic libraries are not only storehouses of established wisdom; they also reflect ongoing debates on questions that are unsettled, in dispute, very open, and very much alive.