By Nicole Pagowsky and Maura Smale
There are generally two types of research in the LIS field: one, rarer, is capital-R Research, typically evidence- or theory-based and generalizable; the other, more prevalent, is lowercase-r research, typically anecdotal, immediate, and written in the style of “how we did it good.” The latter has historically been a defining quality of LIS research and receives much criticism, but as librarianship is a professional field, both theory and practice require documentation. Gorman (2004) notes how value and need have contributed to a mismatch in what is published, “[leading to] a gap in the library journal literature between arid and inaccessible reports of pure research and naive ‘how we did it good’ reports.” These concerns have implications both within and outside of the field: first, those within the field place less value on LIS research and might have lower confidence and higher anxiety when it comes to publishing; second, those outside the field might take LIS research and librarians less seriously as we work to attain greater equality with faculty on campus. Understanding these implications, and how human subjects research and the Institutional Review Board (IRB) fit into social sciences research, can help frame our own perceptions of what we do in LIS research.
What Is the IRB?
IRB regulations were developed in the wake of the revelation of Nazi experimentation on humans during WWII, as well as the U.S. government’s infamous Tuskegee study, in which black men with syphilis were allowed to go untreated so that researchers could examine the progression of the disease. All U.S. academic and research institutions that receive federal funding for research must convene an IRB to review and monitor research on human subjects and ensure that it remains ethical, with no undue risk to participants. There are three levels of IRB approval (exempt, expedited, and full); a project is assigned its level of review based on the amount of risk to the subjects and the types of data collected (informational, biological, etc.) (Smale 2010). For example, a project involving the need to draw blood from participants who are under 18 would probably be assigned a full review, while one featuring an anonymous online survey asking adults about their preferences for mobile communications devices would likely be exempt. It’s worth noting that many of the guidelines for IRB review are more relevant to biomedical and behavioral science research than to humanities and social science research (for more discussion of these issues, see George Mason University History professor Zachary Schrag’s fascinating Institutional Review Blog).
Practically speaking, what is the process of going through IRB approval like for LIS researchers? We’ve both been through the process — here’s what we’ve learned.
Maura’s Experience
I’ve gone through IRB approval for three projects during my time as a library faculty member at New York City College of Technology (City University of New York). My first experience was the most complex of the three, when my research partner and I sought IRB approval for a multiyear study of the scholarly habits of undergraduates. Our project involved interviews with students and faculty at six CUNY campuses about how students do their academic work, all of which were recorded and transcribed. We also asked students to photograph and draw objects, locations, and processes related to their academic work. While we did collect personal information from our participants, we are committed to keeping them anonymous, and the risk involved for participants in our study was deemed low. Our research was classified by the IRB as expedited, which required an application for continuing review each year that we were actively collecting data. Once we finished the interviews and moved to analysis (and writing) only, we were able to secure an exempt approval, which lasts for three years before it must be renewed.
The other two projects I’ve sought IRB approval for (one a solo project and one with a colleague) were both survey-based. One involved a web-based survey of members of a university committee my colleague and I co-chaired, and the other a paper survey of students in several English classes in which I’d used a game for library instruction. Participation in the surveys was voluntary and respondents were anonymous. Both surveys were classified exempt by the IRB: the information we collected in both cases was participants’ opinions, and each study was found to involve little risk.
Comparing my experiences with IRB approval to those I’ve heard about at other colleges and universities, my impression is that my university’s approach to the IRB requirement is fairly strict. It seems that any study or project undertaken with the intent to publish is considered capital-R Research, and that the process of publishing the work confers on it the status of generalizable knowledge. Last year a few colleagues and I met with the chair of the college’s IRB committee to seek clarification, and we learned that interviews and surveys of library patrons conducted solely for the purpose of program improvement do not require IRB approval, as they are not considered to produce generalizable knowledge. However, the IRB committee frowns on requests for retroactive approval, which could put us in a bind if we ever decide that the results of a program improvement initiative might be worth publishing.
Nicole’s Experience
At the University of Arizona (UA), I am in the process of researching the impact of digital badges on student motivation for learning information literacy skills in a one-credit course offered by the library. I detailed the most recent meeting with our IRB representative on my blog: after officially filing for IRB approval and months of back-and-forth, it was clarified that we did not in fact need IRB approval in the first place. As mentioned above, each institution’s IRB policies and procedures are different. According to the acting director of the UA’s IRB office, our university is on the more progressive end in interpreting research and its federal definition. Previous directors were stricter and more in line with the rest of the country, expecting IRB approval even when a researcher was just talking with a student. Because the IRB office is constantly inundated with research studies, a majority of which would be considered exempt or even little-r research, it is a misuse of its time to oversee studies where there is essentially no risk. A new trend is emerging: developing a board composed of representatives from different departments to oversee their own exempt studies. When the acting director met with library faculty recently, she suggested we nominate two librarians to serve on this board so that we would have jurisdiction over our own exempt research, to the benefit of all parties.
Initially, because the research study I am engaging in would examine student success in the course through grades and assessments, as well as students’ own evaluation of their motivation and achievement, we had understood that in order to publish these findings we would be required to obtain IRB approval, since we are working with human subjects. Our IRB application was approved and our study was classified as exempt, meaning it is so low-risk that it requires very little oversight. All we would need to do is follow guidelines for students to opt in to our study (not opt out), obtain consent for looking at FERPA-related and personally identifiable information, and update the Board if we modify any research instruments (surveys, assessments, communications to students about the study). We found out, however, that we did not actually need to apply for IRB approval in the first place, because we are not necessarily setting out to produce generalizable knowledge. This is where “research” and “Research” come into play. We are in fact doing “research”: studying our own program (our class) for program evaluation. Because we are not claiming that our findings apply to all information literacy courses across the country, for example, we are not producing generalizable “Research.” As our rep clarified, this does not imply that our research is not real; it just means that, according to the federal definition (which governs all Institutional Review Boards), we are not within their jurisdiction. Another way to look at this is to consider whether the research is replicable: because our study is specific to the UA and this particular course, if another librarian at another university attempted to replicate it, the results would not be guaranteed to be the same.
With our revised status we can go into more depth in our study and do better research. What does “better” mean, though? In this sense, it could mean contending with fewer restrictions in looking for trends. If we are doing program evaluation in our own class, we don’t need to anonymize data, request opt-ins, or submit revised research instruments for approval before proceeding, because the intent of the research is to improve and evaluate the course (which in turn improves the institution). Essentially, according to our rep, we can do whatever we want however we want, so long as it’s ethical. Although we would not be implying our research is generalizable, readers of our potentially published research would still be able to consider how this information might apply to them. The research might have implications for others’ work, but because it is so specific, it doesn’t provide replicable data that cuts across the board.
LIS Research: Revisiting Our Role
As both of our experiences suggest, the IRB requirement for human subjects research can be far from straightforward. Before the review process has even begun, most institutions require researchers to complete a training course that can take as long as 10 hours. Add in the complexity of the IRB application and the length of time that approval can take (especially when revisions are needed), and many librarians may hesitate to engage in research involving human subjects because they are reluctant to go through the IRB process. Likewise, librarians might be overzealous in applying for IRB approval when it is not even needed. Given the perceived lower respect that comes with publishing program evaluation or research skewed toward anecdotal evidence, LIS researchers might attempt big-R Research when it does not fit the actual data they are assessing.
What implications does this have for librarians, particularly those on the tenure track? The expectation in LIS is to move away from little-r research and to be on the same level as other faculty on campus engaging in big-R Research, but this might not be possible. If other IRB offices follow the trend of the more progressive UA, many more departments (not just the library) may not need IRB oversight, or will oversee themselves through a campus-based board reviewing exempt studies. As the acting IRB director at the UA pointed out to library faculty, publication should not be the criterion for assuming generalizability and seeking IRB approval; intent should be: what are you trying to learn or prove? If it’s to compare and contrast your program with others, suggest improvements across the board, or make broad statements, then yes, your study would be generalizable and replicable, and it is considered human subjects research. If, on the other hand, you are improving your own library services or evaluating a library-based credit course, the results are local to your institution and will vary if the study is replicated. Just because a study does not need IRB approval does not mean it is any less important; it simply does not fall under the federal definition of research. Evidence-based research should be the goal, rather than only striving for research generalizable to all, and anecdotal research has its place in exploring new ideas and experimental processes. Perhaps instead of focusing on anxiety over how our research is classified, we need to re-evaluate our understanding of the IRB and our profession’s overall self-confidence in our role as researchers.
Tl;dr — The Pros and Cons of IRB for Library Research
Pros: allows researchers to make generalizable statements about their findings; bases are covered if researchers later move from program evaluation to generalizable research; there seems to be more prestige in engaging in big-R Research; journals might have a greater desire for big-R Research and could pressure researchers for generalizable findings
Cons: limits researchers’ ability to drill down into data without written consent from all subjects involved (which can be difficult with an opt-in procedure in a class); completing the required training and paperwork to obtain approval can be extremely time-intensive; researchers must regularly update the IRB with any modifications to research design or measurement instruments
References
Gorman, M. (2004). Special feature: Whither library education? New Library World, 105(9), 376-380.
Smale, M. A. (2010). Demystifying the IRB: Human subjects research in academic libraries. portal: Libraries and the Academy, 10(3), 309-321.
Other Resources / Further Reading
Examples of activities that may or may not be human research (University of Texas at Austin)
Lib(rary) Performance blog
Working successfully with your institutional review board, by Robert V. Labaree
Nicole Pagowsky is an Instructional Services Librarian at the University of Arizona and tweets @pumpedlibrarian.