Monthly Archives: September 2013

ISO New Academic and Research Librarians!

With the onset of cardigan weather we’re settling into a new academic year, and that’s got us thinking about colleagues who are embarking on their first library jobs.

Did you just begin a position as an academic or research librarian? We’re looking for a few folks to blog about their experiences each month during the 2013-2014 academic year.

If you started in your first job as an academic or research librarian anytime after July 1, and are interested in becoming a First Year Academic Librarian blogger for ACRLog, let us know! Use the ACRLog Tip Page to send us:

- a sample blog post
- a brief note describing your job and your interest in blogging at ACRLog

Applications will be accepted through Monday, October 7th. Questions? Leave a comment or drop us a line on the Tip Page.

Library Research and the IRB: Is It Generalizable?

By Nicole Pagowsky and Maura Smale

There are generally two types of research in the LIS field: one, the rarer, is capital-R Research, typically evidence- or theory-based and generalizable; the other, more prevalent, is lowercase-r research, typically anecdotal, immediate, and written in the style of “how we did it good.” The latter has historically been a defining quality of LIS research and receives much criticism, but because librarianship is a professional field, both theory and practice require documentation. Gorman (2004) notes how value and need have contributed to a mismatch in what is published, “[leading to] a gap in the library journal literature between arid and inaccessible reports of pure research and naive ‘how we did it good’ reports.” These concerns have implications both within and outside the field: first, those within the field place less value on LIS research and might have lower confidence and higher anxiety when it comes to publishing; second, those outside the field might take LIS research and librarians less seriously as we work to attain greater equality with faculty on campus. Understanding these implications, and how human subjects research and the Institutional Review Board (IRB) fit into social sciences research, can help frame our own perceptions of what we do in LIS research.

What is the IRB? IRB regulations were developed in the wake of the revelation of Nazi experimentation on humans during WWII, as well as the U.S. government’s infamous Tuskegee study, in which black men with syphilis were allowed to go untreated so that researchers could examine the progression of the disease. All U.S. academic and research institutions that receive federal funding for research must convene an IRB to review and monitor research on human subjects and ensure that it remains ethical, with no undue risk to participants. There are three levels of IRB approval — exempt, expedited, and full; a project is assigned its level of review based on the amount of risk to the subject and the types of data collected (informational, biological, etc.) (Smale 2010). For example, a project involving the need to draw blood from participants who are under 18 would probably be assigned a full review, while one featuring an anonymous online survey asking adults about their preferences for mobile communications devices would likely be exempt. It’s worth noting that many of the guidelines for IRB review are more relevant to biomedical and behavioral science research than humanities and social science research (for more discussion of these issues, see George Mason University History professor Zachary Schrag’s fascinating Institutional Review Blog).

Practically speaking, what is the process of going through IRB approval like for LIS researchers? We’ve both been through the process — here’s what we’ve learned.

Maura’s Experience

I’ve gone through IRB approval for three projects during my time as a library faculty member at New York City College of Technology (at City University of New York). My first experience was the most complex of the three, when my research partner and I sought IRB approval for a multiyear study of the scholarly habits of undergraduates. Our project involved interviews with students and faculty at six CUNY campuses about how students do their academic work, all of which were recorded and transcribed. We also asked students to photograph and draw objects, locations, and processes related to their academic work. While we did collect personal information from our participants, we’re committed to keeping them anonymous, and the risk involved for participants in our study was deemed low. Our research was classified by the IRB as expedited, which required an application for continuing review in each year that we were actively collecting data. Once we finished with interviews and moved to analysis (and writing) only, we were able to secure an exempt approval, which lasts for three years before it must be renewed.

The other two projects I’ve sought IRB approval for — one a solo project and one with a colleague — were both survey-based. One involved a web-based survey of members of a university committee my colleague and I co-chaired, and the other a paper survey of students in several English classes in which I’d used a game for library instruction. Participation in the surveys was voluntary and respondents were anonymous. Both surveys were classified exempt by the IRB — the information we collected in both cases was participants’ opinions, and little risk was found in either study.

Comparing my experiences with IRB approval to those I’ve heard about at other colleges and universities, my impression is that my university’s approach to the IRB requirement is fairly strict. It seems that any study or project that is undertaken with the intent to publish is considered capital-R Research, and that the process of publishing the work confers on it the status of generalizable knowledge. Last year a few colleagues and I met with the Chair of the college’s IRB committee to seek clarification, and we learned that interviews and surveys of library patrons solely for the purpose of program improvement do not require IRB approval, as they’re not considered to produce generalizable knowledge. However, the IRB committee frowns on requests for retroactive IRB approval, which could put us in a bind if we ever decide that the results of a program improvement initiative might be worth publishing.

Nicole’s Experience

At the University of Arizona (UA), I am in the process of researching the impact of digital badges on student motivation for learning information literacy skills in a one-credit course offered by the library. I detailed our most recent meeting with our IRB representative on my blog: after officially filing for IRB approval and much back-and-forth over a few months, it was clarified that we did not actually need IRB approval in the first place. As mentioned above, each institution’s IRB policies and procedures are different. According to the acting director of the UA’s IRB office, our university is on the more progressive end of interpreting research and its federal definition. Previous directors were more in line with the rest of the country in being very strict, holding that if a researcher was so much as talking with a student, IRB approval should be obtained. Because their office is constantly inundated with research studies, a majority of which would be considered exempt or even little-r research, it is a misuse of their time to oversee studies where there is essentially no risk. A burgeoning trend is to develop a board composed of representatives from different departments to oversee their own exempt studies; when the acting director met with library faculty recently, she suggested we nominate two librarians to serve on this board so that we would have jurisdiction over our own exempt research, to the benefit of all parties.

Initially, because the research study I am engaging in would examine student success in the course through grades and assessments, as well as students’ own evaluation of their motivation and achievement, we had understood that to be able to publish these findings we would be required to obtain IRB approval, since we are working with human subjects. Our IRB application was approved and our study was classified as exempt. This means our study is so low-risk that it requires very little oversight. All we would need to do is follow guidelines for students to opt in to our study (not opt out), obtain consent for looking at FERPA-related and personally identifiable information, and update the Board if we modify any research instruments (surveys, assessments, communications to students about the study). We found out, however, that we did not even need to apply for IRB approval in the first place, because we are not necessarily setting out to produce generalizable knowledge. This is where “research” and “Research” come into play. We are in fact doing “research”: studying our own program (our class) for program evaluation. Because we are not claiming that our findings apply to all information literacy courses across the country, for example, we are not producing generalizable “Research.” As our rep clarified, this does not imply that our research is not real; it just means that according to the federal definition (which governs all Institutional Review Boards), we are not within their jurisdiction. Another way to look at this is to consider whether the research is replicable: because our study is specific to the UA and this specific course, if another librarian at another university attempted to replicate it, the results are not guaranteed to be the same.

With our revised status we can go more in depth in our study and do better research. What does “better” mean though? In this sense, it could be contending with fewer restrictions in looking for trends. If we are doing program evaluation in our own class, we don’t need to anonymize data, request opt-ins, or submit revised research instruments for approval before proceeding because the intent of the research is to improve/evaluate the course (which in turn improves the institution). Essentially, according to our rep, we can really do whatever we want however we want so long as it’s ethical. Although we would not be implying our research is generalizable, readers of our potentially published research would still be able to consider how this information might apply to them. The research might have implications for others’ work, but because it is so specific, it doesn’t provide replicable data that cuts across the board.

LIS Research: Revisiting Our Role

As both of our experiences suggest, the IRB requirement for human subjects research can be far from straightforward. Before the review process has even begun, most institutions require researchers to complete a training course that can take as long as 10 hours. Add in the complexity of the IRB application and the length of time that approval can take (especially when revisions are needed), and many librarians may hesitate to engage in research involving human subjects because they are reluctant to go through the IRB process. Likewise, librarians might be overzealous in applying for IRB approval when it is not even needed. Given the perceived lower respect that comes with publishing program evaluation or research skewed toward anecdotal evidence, LIS researchers might attempt big-R Research when it does not fit the actual data they are assessing.

What implications can this have for librarians, particularly on the tenure track? The expectation in LIS is to move away from little-r research and be on the same level as other faculty on campus engaging in big-R Research, but this might not be possible. If other IRB offices follow the trend of the more progressive UA, many more departments (not just the library) may not need IRB oversight, or will oversee themselves through a campus-based board reviewing exempt studies. As the acting IRB director at the UA pointed out to library faculty, publication should not be the criterion for assuming generalizability and attempting IRB approval, but rather intent: what are you trying to learn or prove? If it’s to compare and contrast your program with others, suggest improvements across the board, or make broad statements, then yes, your study would be generalizable and replicable, and it is considered human subjects research. If, on the other hand, you are improving your own library services or evaluating a library-based credit course, these results are local to your institution and will vary if replicated. Just because one does not need IRB approval for a study does not mean it is any less important; it simply does not fall under the federal definition of research. Evidence-based research should be the goal, rather than only striving for research generalizable to all, and anecdotal research has its place in exploring new ideas and experimental processes. Perhaps instead of focusing on anxiety over how our research is classified, we need to re-evaluate our understanding of the IRB and our profession’s overall self-confidence in our role as researchers.

Tl;dr — The Pros and Cons of IRB for Library Research

Pros:

- allows researchers to make generalizable statements about their findings
- bases are covered if a project moves from program evaluation to generalizable research at a later stage
- there seems to be more prestige in engaging in big-R Research
- journals might have a greater desire for big-R Research and could pressure researchers for generalizable findings

Cons:

- limits researchers’ ability to drill down into data without written consent from all subjects involved (which can be difficult with an opt-in procedure in a class)
- completing the training and paperwork required to obtain approval can be extremely time-intensive
- researchers must regularly update the IRB with any modifications to research design or measurement instruments

What Do You Think?

References

Gorman, M. (2004). Special feature: Whither library education? New Library World, 105(9), 376-380.

Smale, M. A. (2010). Demystifying the IRB: Human subjects research in academic libraries. portal: Libraries and the Academy, 10(3), 309-321.

Other Resources / Further Reading

Examples of activities that may or may not be human research (University of Texas at Austin)
Lib(rary) Performance blog
Working successfully with your institutional review board, by Robert V. Labaree

Nicole Pagowsky is an Instructional Services Librarian at the University of Arizona, and Tweets @pumpedlibrarian.

Searching For the Answers

An up-to-date website is one of the most useful tools that academic libraries have to communicate with the students, faculty, and staff we serve at our colleges and universities. Our websites offer access to information sources, provide help with research, and list our policies and basic information about the library: where we’re located, when we’re open, how to get in touch with us. It’s 2013 — libraries (and colleges) have had websites for a long time, so surely our website is the first place to look to learn more about the library, right?

Maybe, but maybe not. While I always check the website when I need more information about a library, often arriving there via the college or university website, I’m not sure that all of our patrons do. More often than not I’d guess that they use a search engine to find the library website. Assuming that Google is the search engine of choice for most of our patrons, what do they see when they search for our libraries?

(Feel free to go ahead and try a Google search with your own college or university library. I’ll wait.)

I tend to search with Google, but I must not search that often for businesses or other specific locations on Google’s web search, because it took me a while to notice that Google had added a box on the right side of the search results page populated with details about a business or location. The box includes a photo, a map (which links to Google Maps for directions), and some basic information about the place: a description from Wikipedia (if one exists), the address, phone number, and hours. There’s also a space for people to rate and review the business or location, as well as links to other review websites. It seems that the information in the box is populated automatically by Google from the original websites.

This is great news, right? This Google feature can get the information our patrons need to them without having to click through to the library website. On the other hand, what happens when the information is wrong?

At my library we first learned about the Google info box last winter. A student approached the Reference Desk to verify the library’s opening hours. It seems that she’d found the library hours on Google, and was upset to learn that we’d extended our hours the prior semester. While there’s a happy ending to this story — it’s delightful when a student wants to come to the library earlier than she thinks she can! — this experience was frustrating for both of us. Since we hadn’t realized that Google added the info box to its search results, we didn’t know to check whether the information was correct. The student naturally assumed that we were in control of the information in that box, and was angry when it seemed that we hadn’t kept it up to date.

Just a month ago we encountered another issue with the Google info box for our library. I wouldn’t expect there to be reviews of a college library on business or location review websites, but our library’s info box does have one review website listed under the Reviews heading. Following the link leads to a review that has nothing to do with the library (or the college): it’s a post criticizing the city’s police department. While a bit jarring, it only takes a minute of reading the review site to realize that the review isn’t actually about the library, just a false hit on the review website.

While there are definitely advantages to having basic information about our library available quickly for our patrons, some aspects of the Google info box are troubling from a user experience perspective. It’s unclear how often Google updates the information in that box automatically — our experience with the incorrect library hours suggests that it’s not updated frequently. Also, it’s challenging to edit some of the information in this box. There’s a link for business owners to claim and edit their profile, which does offer the opportunity to change some of the details displayed in the box. But we weren’t able to remove the erroneous review website from our listing; our only option was to use the Feedback link to request that the link be removed, and who knows how long that will take?

My biggest takeaway has been the reminder that we should periodically research our libraries as if we were patrons looking for information. Google offers search alerts, which can be helpful to learn when our libraries are being mentioned on other websites, but I don’t know that there’s any way to automatically learn what information has been added or changed in the Google info box. I’d be interested to know if anyone has figured out a quick and easy way to keep track of this sort of thing — please share your experiences in the comments!
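For anyone inclined to tinker, here is a minimal sketch of one way to watch a page for changes: fetch it, hash the content, and compare the hash against the one saved on the previous run. To be clear, this can only monitor pages you point it at — such as your own library’s hours page — not the Google info box itself, which has no public feed; the URL and file name below are placeholders, not real addresses.

```python
# A minimal change-detection sketch: hash a page's content and compare
# against the hash saved on the previous run. Placeholder URL/file name.
import hashlib
import urllib.request
from pathlib import Path

STATE_FILE = Path("page_hash.txt")  # where the last-seen hash is stored


def page_fingerprint(html: str) -> str:
    """Return a stable fingerprint (SHA-256 hex digest) of the page content."""
    return hashlib.sha256(html.encode("utf-8")).hexdigest()


def has_changed(current: str, state_file: Path = STATE_FILE) -> bool:
    """Compare the current fingerprint to the stored one, then update the store.

    Returns True only if a previous fingerprint exists and differs.
    """
    previous = state_file.read_text().strip() if state_file.exists() else None
    state_file.write_text(current)
    return previous is not None and previous != current


def check(url: str) -> bool:
    """Fetch the page at `url` and report whether it changed since last check."""
    with urllib.request.urlopen(url) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    return has_changed(page_fingerprint(html))


if __name__ == "__main__":
    # Placeholder URL -- substitute your own library's hours page.
    if check("https://library.example.edu/hours"):
        print("Page changed -- worth re-checking what Google is displaying.")
```

Run on a schedule (say, a daily cron job), something like this at least flags when your own source pages change, so you know it’s time to re-check what Google is showing for your library.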