Category Archives: Scholarly Communications

For postings related to scholarly communications issues, including open access, copyright management, and institutional repositories.

Happy Open Access Week!

The 6th annual international Open Access Week is here! This has been another banner year for open access publishing — as reported on Science Insider (a blog at Science), over half of all scholarly papers are now available open access, free of charge, no later than 24 months after they’re first published. That’s a milestone worth celebrating!

I’m looking forward to the events this week happening at my college and university, as well as living vicariously through the events happening elsewhere via Twitter and the blogosphere. I’m sure there’s loads of great stuff going on all over; here are a couple of events and thoughts that have caught my eye.

Open Access Button

This project from a group of European students and researchers seems like a great one: channel the frustration we all feel when we hit a paywall into research and action. In their own words, here’s their goal for the open access button:

This idea was a browser-based tool which tracks how often readers are denied access to academic research, where in the world they were or their profession and why they were looking for that research. The tool would aggregate this information into one place and would create a real time, worldwide, interactive picture of the problem. The integration of social media and mapping technology would allow us to make this problem visible to the world. Lastly, we want to help the person gain access to the paper they’d been denied access to in the first place. Through incentivising use and opening the barriers to knowledge, this can be really powerful.

Today, in honor of Open Access Week, they announced their beta launch date: November 18th. Sign up to be a beta tester here.

DigiNole Upload-A-Thon

Florida State University Libraries are hosting an interesting event this year — a workshop to encourage and guide faculty and researchers through the process of uploading their work to the university’s institutional repository. Called the Upload-A-Thon, they’re striving to have at least one faculty member from each department at the university upload at least one article that’s already been published. I really like this idea — in addition to the catchy name, it sets out a modest goal and aims to demystify open access for those new to the concept. I’ll be interested to hear how it goes.

What about book chapters?

I eavesdropped on an interesting conversation on Twitter over the weekend. Most folks think of journal articles when they think of open access publishing, but what about book chapters? Books tend to be less of a focus of OA activism, though as some of the folks I listened in on pointed out, interlibrary loan isn’t always possible, so maybe books should play a bigger part in OA advocacy efforts.

Lots of publishing librarians publish their work as part of a book, myself included — can we make these chapters OA post-publication, as many articles are? It’s a great question and one that likely has many answers depending on which publishers we’re working with. I have several pieces that appear in books and have let this question go unanswered for myself for far too long, so this year for OA Week I’m going to take the time to dig out those old contracts and see what I can free.

What are you doing to celebrate Open Access Week this year? Are you attending or presenting in any workshops or programs? Share your thoughts and experiences in the comments!

Library Research and the IRB: Is It Generalizable?

By Nicole Pagowsky and Maura Smale

There are generally two types of research in the LIS field: one, the rarer, is capital-R Research, typically evidence- or theory-based and generalizable; the other, more prevalent, is lowercase-r research, typically anecdotal, immediate, and written in the style of “how we did it good.” The latter has historically been a defining quality of LIS research and receives much criticism, but as librarianship is a professional field, both theory and practice require documentation. Gorman (2004) notes how value and need have contributed to a mismatch in what is published, “[leading to] a gap in the library journal literature between arid and inaccessible reports of pure research and naive ‘how we did it good’ reports.” These concerns have implications both within and outside of the field: first, those within the field place less value on LIS research and might have lower confidence and higher anxiety when it comes to publishing; second, those outside the field might take LIS research and librarians less seriously when we work to attain greater equality with faculty on campus. Understanding these implications, and how human subjects research and the Institutional Review Board (IRB) fit into social sciences research, can help frame our own perceptions of what we do in LIS research.

What is the IRB? IRB regulations were developed in the wake of the revelation of Nazi experimentation on humans during WWII, as well as the U.S. government’s infamous Tuskegee study, in which black men with syphilis were allowed to go untreated so that researchers could examine the progression of the disease. All U.S. academic and research institutions that receive federal funding for research must convene an IRB to review and monitor research on human subjects and ensure that it remains ethical, with no undue risk to participants. There are three levels of IRB review — exempt, expedited, and full; a project is assigned its level of review based on the amount of risk to the subjects and the types of data collected (informational, biological, etc.) (Smale 2010). For example, a project involving drawing blood from participants under 18 would probably be assigned a full review, while one featuring an anonymous online survey asking adults about their preferences for mobile communications devices would likely be exempt. It’s worth noting that many of the guidelines for IRB review are more relevant to biomedical and behavioral science research than to humanities and social science research (for more discussion of these issues, see George Mason University History professor Zachary Schrag’s fascinating Institutional Review Blog).

Practically speaking, what is the process of going through IRB approval like for LIS researchers? We’ve both been through the process — here’s what we’ve learned.

Maura’s Experience

I’ve gone through IRB approval for three projects during my time as a library faculty member at New York City College of Technology (at City University of New York). My first experience was the most complex of the three, when my research partner and I sought IRB approval for a multiyear study of the scholarly habits of undergraduates. Our project involved interviews with students and faculty at six CUNY campuses about how students do their academic work, all of which were recorded and transcribed. We also asked students to photograph and draw objects, locations, and processes related to their academic work. While we did collect personal information from our participants, we committed to keeping our participants anonymous, and the risk involved for participants in our study was deemed low. Our research was classified by the IRB as expedited, which required an application for continuing review each year that we were actively collecting data. Once we finished with interviews and moved to analysis (and writing) only, we were able to secure an exempt approval, which lasts for three years before it must be renewed.

The other two projects I’ve sought IRB approval for — one a solo project and one with a colleague — were both survey-based. One involved a web-based survey of members of a university committee my colleague and I co-chaired, and the other a paper survey of students in several English classes in which I’d used a game for library instruction. Participation in the surveys was voluntary and respondents were anonymous. Both surveys were classified exempt by the IRB — the information we collected in both cases was participants’ opinions, and both studies were deemed low risk.

Comparing my experiences with IRB approval to those I’ve heard about at other colleges and universities, my impression is that my university’s approach to the IRB requirement is fairly strict. It seems that any study or project undertaken with the intent to publish is considered capital-R Research, and that the process of publishing the work confers on it the status of generalizable knowledge. Last year a few colleagues and I met with the Chair of the college’s IRB committee to seek clarification, and we learned that interviews and surveys of library patrons solely for the purpose of program improvement do not require IRB approval, as they’re not considered to produce generalizable knowledge. However, the IRB committee frowns on requests for retroactive IRB approval, which could put us in a bind if we ever decide that the results of a program improvement initiative might be worth publishing.

Nicole’s Experience

At the University of Arizona (UA), I am in the process of researching the impact of digital badges on student motivation for learning information literacy skills in a one-credit course offered by the library. I detailed the most recent meeting with our IRB representative on my blog: after officially filing for IRB approval and months of back-and-forth, it was clarified that we did not actually need IRB approval in the first place. As mentioned above, each institution’s IRB policies and procedures are different. According to the acting director of the UA’s IRB office, our university is on the more progressive end of interpreting research and its federal definition. Previous directors were more in line with the rest of the country in being very strict: if a researcher was so much as talking with a student, IRB approval should be obtained. Because their office is constantly inundated with research studies, a majority of which would be considered exempt or even little-r research, it is a misuse of their time to oversee studies with essentially no risk. A new trend is emerging: developing a board composed of representatives from different departments to oversee their own exempt studies. When the acting director met with library faculty recently, she suggested we nominate two librarians to serve on this board so that we would have jurisdiction over our own exempt research, to the benefit of all parties.

Initially, because the research study I am engaging in would examine student success in the course through grades and assessments, as well as students’ own evaluation of their motivation and achievement, we had understood that to publish these findings we would be required to obtain IRB approval, since we are working with human subjects. Our IRB application was approved and we were classified as exempt. This means our study is so low-risk that we require very little oversight. All we would need to do is follow guidelines for students to opt in to our study (not opt out), obtain consent for looking at FERPA-related and personally identifiable information, and update the Board if we modify any research instruments (surveys, assessments, communications to students about the study). We found out, however, that we actually did not even need to apply for IRB approval in the first place, because we are not necessarily setting out to produce generalizable knowledge. This is where “research” and “Research” come into play. We are in fact doing “research”: studying our own program (our class) for program evaluation. Because we are not claiming that our findings apply to all information literacy courses across the country, for example, we are not producing generalizable “Research.” As our rep clarified, this does not imply that our research is not real; it just means that according to the federal definition (which governs all Institutional Review Boards), we are not within their jurisdiction. Another way to look at this is to consider whether the research is replicable; because our study is specific to the UA and this particular course, if another librarian at another university attempted to replicate the study, the results are not guaranteed to be the same.

With our revised status we can go more in depth in our study and do better research. What does “better” mean though? In this sense, it could be contending with fewer restrictions in looking for trends. If we are doing program evaluation in our own class, we don’t need to anonymize data, request opt-ins, or submit revised research instruments for approval before proceeding because the intent of the research is to improve/evaluate the course (which in turn improves the institution). Essentially, according to our rep, we can really do whatever we want however we want so long as it’s ethical. Although we would not be implying our research is generalizable, readers of our potentially published research would still be able to consider how this information might apply to them. The research might have implications for others’ work, but because it is so specific, it doesn’t provide replicable data that cuts across the board.

LIS Research: Revisiting Our Role

As both of our experiences suggest, the IRB requirement for human subjects research can be far from straightforward. Before the review process has even begun, most institutions require researchers to complete a training course that can take as long as 10 hours. Add in the complexity of the IRB application and the length of time that approval can take (especially when revisions are needed), and many librarians may hesitate to engage in research involving human subjects because they are reluctant to go through the IRB process. Likewise, librarians might be overzealous in applying for IRB approval when it is not even needed. With the perceived lower respect that comes with publishing program evaluation or research skewed toward anecdotal evidence, LIS researchers might attempt big-R Research when it does not fit the actual data they are assessing.

What implications can this have for librarians, particularly on the tenure track? The expectation in LIS is to move away from little-r research and be on the same level as other faculty on campus engaging in big-R Research, but this might not be possible. If other IRB offices follow the trend of the more progressive UA, many more departments (not just the library) may not need IRB oversight, or will oversee themselves via a campus-based board reviewing exempt studies. As the acting IRB director at the UA pointed out to library faculty, publication should not be the criterion for assuming generalizability and attempting IRB approval, but rather intent: what are you trying to learn or prove? If it’s to compare or contrast your program with others, suggest improvements across the board, or make broad statements, then yes, your study would be generalizable, replicable, and considered human subjects research. If, on the other hand, you are improving your own library services or evaluating a library-based credit course, these results are local to your institution and will vary if replicated. Just because one does not need IRB approval for a study does not mean it is any less important; it simply does not fall under the federal definition of research. Evidence-based research should be the goal, rather than only striving for research generalizable to all, and anecdotal research has its place in exploring new ideas and experimental processes. Perhaps instead of focusing on anxiety over how our research is classified, we need to re-evaluate our understanding of IRB and our profession’s self-confidence overall in our role as researchers.

TL;DR — The Pros and Cons of IRB for Library Research

Pros: allows researchers to make generalizable statements about their findings; bases are covered if moving from program evaluation to generalizable research at a later stage; seems to be more prestige in engaging in big-R research; journals might have a greater desire for big-R research and could pressure researchers for generalizable findings

Cons: limits researchers’ abilities to drill down in data without written consent from all subjects involved (can be difficult with an opt-in procedure in a class); can be extremely time-intensive to complete training and paperwork required to obtain approval; required to regularly update IRB with any modifications to research design or measurement instruments

What Do You Think?

References

Gorman, M. (2004). Special feature: Whither library education? New Library World, 105(9), 376-380.

Smale, M. A. (2010). Demystifying the IRB: Human subjects research in academic libraries. portal: Libraries and the Academy, 10(3), 309-321.

Other Resources / Further Reading

Examples of activities that may or may not be human research (University of Texas at Austin)
Lib(rary) Performance blog
Working successfully with your institutional review board, by Robert V. Labaree

Nicole Pagowsky is an Instructional Services Librarian at the University of Arizona, and Tweets @pumpedlibrarian.

Monograph Musings

As the scholarly communications landscape shifts and changes, what’s the role of traditional academic monograph publishing? That’s a question much on my mind of late for a number of reasons. About a week and a half ago was the American Association of University Press’s annual meeting, which filled my Twitter stream with the hashtag #aaup13. With the slower summer days I’ve been making time for weeding at work, considering which books should stay and which should go, and beginning to plan for purchasing new books starting in the fall. And I’m also thinking about academic books from the perspective of an author, as my research partner and I finish the draft of the book we’re writing and have sent out proposals to a couple of university presses.

Books are for reading — presumably anyone who writes a book feels that it offers useful and insightful information that they want to share widely with others. But there are lots of possibilities for sharing our work, even a piece that’s as long as a monograph (rather than short like an article). There are websites and blogs, and relatively easy-to-use tools for creating and formatting text into ereader- and print-friendly formats. Add in print on demand, and it’s easy to wonder about the role of scholarly presses. Having worked in publishing for a few years before I was a librarian, I’m familiar with the huge amount of work that goes into preparing books for publication (not to mention publishing them). Academic presses definitely add value to monographs, from copy editing to layout and beyond. Scholarly books are also often peer reviewed, which for a book manuscript is a non-trivial undertaking, much more labor-intensive than for an article. I’m a firm believer in peer review — when done well, the resulting publication is much stronger for it.

But academic publishing, especially at university presses, has become more challenging — costs keep rising, and sales (to academic libraries and others) aren’t as strong as they once were. Jennifer Howard at the Chronicle of Higher Education wrote two good overviews of the AAUP meetings, in which presses discussed strategies for ensuring their survival in a time of lean budgets while expanding into new formats and modes of publishing. Facilitated by the meetings’ active Twitter presence, Ian Bogost, professor of Media Studies at Georgia Institute of Technology, who was not actually at the meetings, tweeted a 10-point “microrant” about academic publishing. Among other things, Bogost notes that publishers might put more resources into editorial development for their authors, because scholars are not necessarily the best writers. Bogost also points out that university presses could help fill the gap between highly scholarly works and popular publications.

The relationship between academic libraries and presses is changing, too. Collaborations are on the rise, as was discussed at the AAUP meetings, which has been exciting to watch — I think there are lots of natural affinities between the two. But as the scholarly book landscape changes I can’t help but think about my library, and the college and university we belong to. There’s no university press at the large, public institution my college is part of. I’m at a technical college that offers associate and baccalaureate degrees, and there’s not a huge market at my college for many of the more traditional university press publications, the highly scholarly monographs. Not that university presses publish the works of their own faculty (though perhaps they should?), but of course we have faculty who write academic books at my college, too, as do faculty at lots of colleges that are unlikely to have presses, like community colleges.

Where does my college fit as scholarly monograph publishing evolves? I think the students I work with are a perfect audience for books that fill the gap that Bogost pointed out — academic works written without highly specialized language that are accessible to novices, something smarter and more interesting than a textbook, an overview that includes enough detail to be useful for the typical undergraduate research project. But what about getting into publishing ourselves? It’s easy to think of the differences in collections between large research university libraries and college libraries like where I work: they have more stuff (books, journals, etc.), and there are ways for us to get the stuff we don’t have if we need it. If university publishing and academic libraries become more closely tied together, where will that leave those universities and colleges without presses? And will that impact the opportunities that our faculty have for publication?

Ebooks Are not Electronic Journals

As a physical science librarian I know journals are the primary form of scholarly communication in the sciences. While the particle physicists have arXiv and some of the cool kids will tout non-traditional knowledge transfer through social media, my chemists use journals and are pretty comfortable with that. Of course, electronic journals are greatly preferred – it’s easy to print, and you can grab articles off the web and file them away for the rest of your career. No photocopying or waiting – and your graduate students can practically live in the lab.

This shouldn’t be news to any academic librarian (really, it shouldn’t be). But what might be news is that the same scientists are not nearly as interested in ebooks. Ebooks take a text, put it online, and allow scientists to access the information using a web browser. So why have I had users asking me to purchase physical copies of ebooks in our collection?

Some of the problem is platform – by which I mean Ebrary. Most scientists don’t read articles online; they download them, print them, and then read. Most of the science monographs I purchase are edited works on a topic, and each chapter is, effectively, like a journal article in terms of length and topic coverage. Ebrary presents the electronic text as a book and only allows users to download 60 pages as a PDF. This is a problem if you want a large review article or more than one chapter; then the ebook is suddenly less useful than a print book, because you can’t even copy it. When I polled my faculty earlier this year, some said they always prefer ebooks. But among those who conditionally preferred an ebook, all of them preferred chapters arranged as PDFs with unlimited downloads. The actual ebook – an electronic text meant to be viewed only on a screen – has very little support. So Ebrary is the main option I have for purchasing ebooks, but my patrons like Ebrary’s model the least.

Another problem is the viewing platform: not everyone has a dedicated electronic reader to make ebooks pleasant, and even if you have one, it may be a hassle. Ebrary for Kindles and iPads requires additional software, but hey – it’s only a 14-16 step process. Without a tablet of some sort, you’re stuck with a laptop screen that cannot comfortably display a whole page at once, or a desktop monitor that may be ill suited to reading. My real issue with the variety of experience ebooks provide is that it makes your collection decisions inherently classist – your patrons with the wealth to afford a nice tablet have a better experience than your less privileged patrons. Print books have downsides, but using them doesn’t inherently reinforce inequality.

So as beloved as electronic journals are, I just cannot say the same for ebooks. And until vendor platforms offer the ebooks my patrons want, I can’t say I’ll be buying many.

Evaluating Information: The Light Side of Open Access

Early last week I opened the New York Times and was surprised to see a front-page article about sham academic publishers and conferences. The article discussed something we in the library world have been aware of for some time: open access publishers with low (or no) standards for peer review and acceptance, sometimes even with fictional editorial boards. The publications are financed by authors’ fees, which may not be clear from their submission guidelines, and, with the relatively low cost of hosting an online-only journal, are presumably making quite a bit of money. The article included an interview with and photo of University of Colorado Denver librarian Jeffrey Beall, compiler of the useful Beall’s List guide to potentially predatory open access scholarly journals and publishers.

I’ve long been an admirer of Jeffrey Beall’s work and I’m glad to see him getting recognition outside of the library world. But the frankly alarmist tone of the Times article was disappointing to say the least, as was the seeming equation of open access with less-than-aboveboard publishers, which of course is not the case. As biologist Michael Eisen notes, there are lots of toll-access scholarly journals (and conferences) of suspicious quality. With the unbelievably high profits of scholarly publishing, it’s not surprising that the number of journals has proliferated and that not all of them are of the best quality. And there are many legitimate, highly-regarded journals — both open access and toll-access — that charge authors’ fees, especially in the sciences.

As I’ve bounced these thoughts around my brain for the past week, I keep coming back to one thing: the importance of evaluating information. Evaluating sources is something that faculty and librarians teach students, and students are required to use high quality sources in their work. How do we teach students to get at source quality? Research! Dig into the source: find out more about the author or organization, and read the text to see whether it’s comprehensible, typo-free, etc. Metrics like Journal Impact Factor can help make these determinations, but they’re far from the only criteria to consider. In addition to Beall’s List, Gavia Libraria has a great post from last year detailing some specific steps to take and criteria to consider when evaluating a scholarly journal. I like to go by the classic TANSTAAFL: there ain’t no such thing as a free lunch. Get an email out of the blue asking you to contribute to a journal or conference? It’s probably not the cream of the crop.

So if faculty and librarians teach our students to evaluate sources, why do we sometimes forget (or ignore?) to do so ourselves? I’d guess that the seemingly ever-increasing need for publications and presentations to support tenure and promotion plays into it, especially as the number of full-time faculty and librarian positions continue to decrease. I appreciate reasoned calls for quality over quantity, but I wonder whether slowing down the academic publishing arms race will end the proliferation of low quality journals.

The Times article last week notes that one danger of increasing numbers of fraudulent journals is that “nonexperts doing online research will have trouble distinguishing credible research from junk.” This isn’t the fault of the open access movement at all; if anything, open access can help determine the legitimacy of a journal. Shining a light on these sham journals makes it easier than ever to identify them. It’s up to us, both faculty and librarians: if the research and scholarship we do is work we should be proud of, prestigious work that’s worth publishing, then it stands to reason that we should share that work and prestige only with and via publications that are worth it.