Category Archives: Scholarly Communications

For postings related to scholarly communications issues, including open access, copyright management, and institutional repositories.

Library Research and the IRB: Is It Generalizable?

By Nicole Pagowsky and Maura Smale

There are generally two types of research in the LIS field. One, the rarer, is capital-R Research: typically evidence- or theory-based and generalizable. The other, more prevalent, is lowercase-r research: typically anecdotal, immediate, and written in the style of "how we did it good." The latter has historically been a defining quality of LIS research and receives much criticism, but as librarianship is a professional field, both theory and practice require documentation. Gorman (2004) notes how value and need have contributed to a mismatch in what is published, "[leading to] a gap in the library journal literature between arid and inaccessible reports of pure research and naive 'how we did it good' reports." These concerns have implications both within and outside the field: first, those within the field place less value on LIS research and might have lower confidence and higher anxiety when it comes to publishing; second, those outside the field might take LIS research and librarians less seriously as we work to attain greater equality with faculty on campus. Understanding these implications, and how human subjects research and the Institutional Review Board (IRB) fit into social sciences research, can help frame our own perceptions of what we do in LIS research.

What is the IRB? The IRB regulations were developed in the wake of the revelation of Nazi experimentation on humans during WWII, as well as the U.S. government's infamous Tuskegee study, in which black men with syphilis were allowed to go untreated so that researchers could examine the progression of the disease. All U.S. academic and research institutions that receive federal funding for research must convene an IRB to review and monitor research on human subjects and ensure that it remains ethical, with no undue risk to participants. There are three levels of IRB review — exempt, expedited, and full; a project is assigned its level based on the amount of risk to the subject and the types of data collected (informational, biological, etc.) (Smale 2010). For example, a project involving drawing blood from participants under 18 would probably be assigned a full review, while one featuring an anonymous online survey asking adults about their preferences for mobile communications devices would likely be exempt. It's worth noting that many of the guidelines for IRB review are more relevant to biomedical and behavioral science research than to humanities and social science research (for more discussion of these issues, see George Mason University History professor Zachary Schrag's fascinating Institutional Review Blog).

Practically speaking, what is the process of going through IRB approval like for LIS researchers? We’ve both been through the process — here’s what we’ve learned.

Maura’s Experience

I’ve gone through IRB approval for three projects during my time as a library faculty member at New York City College of Technology (at City University of New York). My first experience was the most complex of the three, when my research partner and I sought IRB approval for a multiyear study of the scholarly habits of undergraduates. Our project involved interviews with students and faculty at six CUNY campuses about how students do their academic work, all of which were recorded and transcribed. We also asked students to photograph and draw objects, locations, and processes related to their academic work. While we did collect personal information from our participants, we are committed to keeping them anonymous, and the risk involved for participants in our study was deemed low. Our research was classified by the IRB as expedited, which required an application for continuing review each year that we were actively collecting data. Once we finished with interviews and moved to analysis (and writing) only, we were able to secure an exempt approval, which lasts for three years before it must be renewed.

The other two projects I’ve sought IRB approval for — one a solo project and one with a colleague — were both survey-based. One involved a web-based survey of members of a university committee my colleague and I co-chaired, and the other a paper survey of students in several English classes in which I’d used a game for library instruction. Participation in the surveys was voluntary and respondents were anonymous. Both surveys were classified exempt by the IRB — the information we collected in both cases was participants’ opinions, and each study was judged to involve little risk.

Comparing my experiences with IRB approval to those I’ve heard about at other colleges and universities, my impression is that my university’s approach to the IRB requirement is fairly strict. It seems that any study or project undertaken with the intent to publish is considered capital-R Research, and that the process of publishing the work confers on it the status of generalizable knowledge. Last year a few colleagues and I met with the Chair of the college’s IRB committee to seek clarification, and we learned that interviews and surveys of library patrons solely for the purpose of program improvement do not require IRB approval, as they are not considered to produce generalizable knowledge. However, the IRB committee frowns on requests for retroactive IRB approval, which could put us in a bind if we ever decide that the results of a program improvement initiative might be worth publishing.

Nicole’s Experience

At the University of Arizona (UA), I am in the process of researching the impact of digital badges on student motivation for learning information literacy skills in a one-credit course offered by the library. I detailed the most recent meeting with our IRB representative on my blog: after officially filing for IRB approval and months of back-and-forth, it was clarified that we in fact did not need IRB approval in the first place. As mentioned above, each institution’s IRB policies and procedures are different. According to the acting director of the UA’s IRB office, our university is on the more progressive end of interpreting research and its federal definition. Previous directors were stricter, more in line with the rest of the country: even a researcher simply talking with a student was expected to obtain IRB approval. Because the IRB office is constantly inundated with research studies, a majority of which would be considered exempt or even little-r research, it is a misuse of their time to oversee studies where there is essentially no risk. A burgeoning trend is to develop a board composed of representatives from different departments to oversee their own exempt studies; when the acting director met with library faculty recently, she suggested we nominate two librarians to serve on this board so that we would have jurisdiction over our own exempt research, to the benefit of all parties.

Initially, because the research study I am engaging in would examine student success in the course through grades and assessments, as well as students’ own evaluation of their motivation and achievement, we had understood that to publish these findings we would be required to obtain IRB approval, since we are working with human subjects. Our IRB application was approved and our study was classified as exempt, meaning it is so low-risk that it requires very little oversight. All we would need to do is follow guidelines for students to opt in to our study (not opt out), obtain consent for looking at FERPA-related and personally identifiable information, and update the Board if we modify any research instruments (surveys, assessments, communications to students about the study). We found out, however, that we did not even need to apply to the IRB in the first place, because we are not necessarily setting out to produce generalizable knowledge. This is where “research” and “Research” come into play. We are in fact doing “research”: studying our own program (our class) for program evaluation. Because we are not claiming that our findings apply to all information literacy courses across the country, for example, we are not producing generalizable “Research.” As our rep clarified, this does not imply that our research is not real; it just means that, according to the federal definition (which governs all Institutional Review Boards), we are not within their jurisdiction. Another way to look at this is to consider whether the research is replicable: because our study is specific to the UA and this specific course, if another librarian at another university attempted to replicate it, the results would not be guaranteed to be the same.

With our revised status we can go more in depth in our study and do better research. What does “better” mean, though? In this sense, it could mean contending with fewer restrictions when looking for trends. If we are doing program evaluation in our own class, we don’t need to anonymize data, request opt-ins, or submit revised research instruments for approval before proceeding, because the intent of the research is to improve and evaluate the course (which in turn improves the institution). Essentially, according to our rep, we can do whatever we want however we want, so long as it’s ethical. Although we would not be claiming our research is generalizable, readers of our potentially published research would still be able to consider how this information might apply to them. The research might have implications for others’ work, but because it is so specific, it doesn’t provide replicable data that cuts across the board.

LIS Research: Revisiting Our Role

As both of our experiences suggest, the IRB requirement for human subjects research can be far from straightforward. Before the review process has even begun, most institutions require researchers to complete a training course that can take as long as 10 hours. Add in the complexity of the IRB application, and the length of time that approval can take (especially when revisions are needed), and many librarians may hesitate to engage in research involving human subjects because they are reluctant to go through the IRB process. Likewise, librarians might be overzealous in applying for IRB approval when it is not even needed. Given the perceived lower respect that comes with publishing program evaluation or research skewed toward anecdotal evidence, LIS researchers might attempt big-R Research when it does not fit the actual data they are assessing.

What implications can this have for librarians, particularly on the tenure track? The expectation in LIS is to move away from little-r research and be on the same level as other faculty on campus engaging in big-R Research, but this might not be possible. If other IRB offices follow the trend of the more progressive UA, many more departments (not just the library) may not need IRB oversight, or will oversee themselves on a campus-based board reviewing exempt studies. As the acting IRB director at the UA pointed out to library faculty, publication should not be the criterion for assuming generalizability and attempting IRB approval, but rather intent: what are you trying to learn or prove? If it’s to compare and contrast your program with others, suggest improvements across the board, or make broad statements, then yes, your study would be generalizable and replicable, and it is considered human subjects research. If, on the other hand, you are improving your own library services or evaluating a library-based credit course, those results are local to your institution and will vary if replicated. Just because a study does not need IRB approval does not mean it is any less important; it simply does not fall under the federal definition of research. Evidence-based research should be the goal rather than only striving for research generalizable to all, and anecdotal research has its place in exploring new ideas and experimental processes. Perhaps instead of focusing on anxiety over how our research is classified, we need to re-evaluate our understanding of the IRB and our profession’s self-confidence overall in our role as researchers.

Tl;dr — The Pros and Cons of IRB for Library Research

Pros: allows researchers to make generalizable statements about their findings; bases are covered if moving from program evaluation to generalizable research at a later stage; seems to be more prestige in engaging in big-R research; journals might have a greater desire for big-R research and could pressure researchers for generalizable findings

Cons: limits researchers’ abilities to drill down in data without written consent from all subjects involved (can be difficult with an opt-in procedure in a class); can be extremely time-intensive to complete training and paperwork required to obtain approval; required to regularly update IRB with any modifications to research design or measurement instruments

What Do You Think?

References

Gorman, M. (2004). Special feature: Whither library education? New Library World, 105(9), 376-380.

Smale, M. A. (2010). Demystifying the IRB: Human subjects research in academic libraries. portal: Libraries and the Academy, 10(3), 309-321.

Other Resources / Further Reading

Examples of activities that may or may not be human research (University of Texas at Austin)
Lib(rary) Performance blog
Working successfully with your institutional review board, by Robert V. Labaree

Nicole Pagowsky is an Instructional Services Librarian at the University of Arizona, and Tweets @pumpedlibrarian.

Monograph Musings

As the scholarly communications landscape shifts and changes, what’s the role of traditional academic monograph publishing? That’s a question much on my mind of late for a number of reasons. About a week and a half ago was the Association of American University Presses’ annual meeting, which filled my Twitter stream with the hashtag #aaup13. With the slower summer days I’ve been making time for weeding at work, considering which books should stay and which should go, and beginning to plan for purchasing new books starting in the fall. And I’m also thinking about academic books from the perspective of an author, as my research partner and I finish the draft of the book we’re writing and have sent out proposals to a couple of university presses.

Books are for reading — presumably anyone who writes a book feels that it offers useful and insightful information they want to share widely with others. And there are lots of possibilities for sharing our work, even a piece as long as a monograph (rather than short like an article): websites and blogs, plus relatively easy-to-use tools for creating and formatting text into ereader- and print-friendly formats. Add in print on demand, and it’s easy to wonder about the role of scholarly presses. Having worked in publishing for a few years before I was a librarian, I’m familiar with the huge amount of work that goes into preparing books for publication (not to mention publishing them). Academic presses definitely add value to monographs, from copy editing to layout and beyond. Scholarly books are also often peer reviewed, which for a book manuscript is a non-trivial undertaking, much more labor-intensive than for an article. I’m a firm believer in peer review — when done well, the resulting publication is much stronger for it.

But academic publishing, especially at university presses, has become more challenging — costs keep rising, and sales (to academic libraries and others) aren’t as strong as they once were. Jennifer Howard at the Chronicle of Higher Education wrote two good overviews of the AAUP meetings, in which presses discussed strategies for ensuring their survival in a time of lean budgets while expanding into new formats and modes of publishing. Prompted by the meetings’ active Twitter presence, Ian Bogost, professor of Media Studies at the Georgia Institute of Technology, who was not actually at the meetings, tweeted a 10-point “microrant” about academic publishing. Among other things, Bogost notes that publishers might put more resources into editorial development for their authors, because scholars are not necessarily the best writers. Bogost also points out that university presses could help fill the gap between highly scholarly works and popular publications.

The relationship between academic libraries and presses is changing, too. Collaborations are on the rise, as was discussed at the AAUP meetings, which has been exciting to watch — I think there are lots of natural affinities between the two. But as the scholarly book landscape changes I can’t help but think about my library, and the college and university we belong to. There’s no university press at the large, public institution my college is part of. I’m at a technical college that offers associate and baccalaureate degrees, and there isn’t a huge market at my college for the more traditional university press publications: highly scholarly monographs. Not that university presses publish the works of their own faculty (though perhaps they should?), but of course we have faculty who write academic books at my college, too, as do faculty at lots of colleges that are unlikely to have presses, like community colleges.

Where does my college fit as scholarly monograph publishing evolves? I think the students I work with are a perfect audience for books that fill the gap that Bogost pointed out — academic works written without highly specialized language that are accessible to novices, something smarter and more interesting than a textbook, an overview that includes enough detail to be useful for the typical undergraduate research project. But what about getting into publishing ourselves? It’s easy to think of the differences in collections between large research university libraries and college libraries like where I work: they have more stuff (books, journals, etc.), and there are ways for us to get the stuff we don’t have if we need it. If university publishing and academic libraries become more closely tied together, where will that leave those universities and colleges without presses? And will that impact the opportunities that our faculty have for publication?

Ebooks Are not Electronic Journals

As a physical science librarian I know journals are the primary form of scholarly communication in the sciences. While the particle physicists have arXiv and some of the cool kids will tout non-traditional knowledge transfer through social media, my chemists use journals and are pretty comfortable with that. Of course, electronic journals are greatly preferred – it’s easy to print, and you can grab articles off the web and file them away for the rest of your career. No photocopying or waiting – and your graduate students can practically live in the lab.

This shouldn’t be news to any academic librarian (really, it shouldn’t be). But what might be news is that the same scientists are not nearly as interested in ebooks. Ebooks take a text, put it online, and allow scientists to access the information through a web browser. So why have I had users asking me to purchase physical copies of ebooks in our collection?

Some of the problem is platform – by which I mean Ebrary. Most scientists don’t read articles online; they download them, print them, and then read. Most of the science monographs I purchase are edited works on a topic, and each chapter is, effectively, like a journal article in terms of length and topic coverage. Ebrary presents the electronic text as a book and only allows users to download 60 pages as a PDF. This is a problem if you want a long review article or more than one chapter; suddenly the ebook is less useful than a print book, which you could at least photocopy. When I polled my faculty earlier this year, some said they always prefer ebooks. But among those who conditionally preferred an ebook, all of them preferred chapters arranged as PDFs with unlimited downloads. The actual ebook – an electronic text meant to be viewed only on a screen – has very little support. So Ebrary is the main option I have for purchasing ebooks, but my patrons like Ebrary’s model the least.

Another problem is the viewing platform: not everyone has a dedicated e-reader to make ebooks pleasant, and even if you have one, it may be a hassle. Ebrary on Kindles and iPads requires additional software, but hey – it’s only a 14-16 step process. Without a tablet of some sort, you’re stuck with a laptop screen that can’t comfortably display a whole page at once or a desktop monitor that may be ill suited to reading. My real issue with the variety of experience ebooks provide is that it makes your collection decisions inherently classist – your patrons with the wealth to afford a nice tablet have a better experience than your less privileged patrons. Print books have downsides, but using them doesn’t inherently reinforce inequality.

So as beloved as electronic journals are, I just cannot say the same for the ebook. And until vendor platforms offer ebooks in a form my patrons want, I can’t say I’ll be buying many.

Evaluating Information: The Light Side of Open Access

Early last week I opened the New York Times and was surprised to see a front-page article about sham academic publishers and conferences. The article discussed something we in the library world have been aware of for some time: open access publishers with low (or no) standards for peer review and acceptance, sometimes even with fictional editorial boards. The publications are financed by authors’ fees, which may not be clear from their submission guidelines, and, given the relatively low cost of hosting an online-only journal, their publishers are presumably making quite a bit of money. The article included an interview with and photo of University of Colorado Denver librarian Jeffrey Beall, compiler of the useful Beall’s List guide to potentially predatory open access scholarly journals and publishers.

I’ve long been an admirer of Jeffrey Beall’s work and I’m glad to see him getting recognition outside of the library world. But the frankly alarmist tone of the Times article was disappointing to say the least, as was its seeming equation of open access with less-than-aboveboard publishers, which of course is not the case. As biologist Michael Eisen notes, there are lots of toll-access scholarly journals (and conferences) of suspicious quality. With the unbelievably high profits in scholarly publishing, it’s not surprising that journals have proliferated and that not all of them are of the best quality. And there are many legitimate, highly regarded journals — both open access and toll-access — that charge authors’ fees, especially in the sciences.

As I’ve bounced these thoughts around my brain for the past week, I keep coming back to one thing: the importance of evaluating information. Evaluating sources is something that faculty and librarians teach students, and students are required to use high quality sources in their work. How do we teach students to get at source quality? Research! Dig into the source: find out more about the author/organization, and read the text to see whether it’s comprehensible, typo-free, etc. Metrics like Journal Impact Factor can help make these determinations, but they’re far from the only aspects of a work to examine. In addition to Beall’s List, Gavia Libraria has a great post from last year detailing some specific steps to take and criteria to consider when evaluating a scholarly journal. I like to go by the classic TANSTAAFL: there ain’t no such thing as a free lunch. Get an email to contribute to a journal or conference out of the blue? It’s probably not the cream of the crop.

So if faculty and librarians teach our students to evaluate sources, why do we sometimes forget (or ignore?) to do so ourselves? I’d guess that the seemingly ever-increasing need for publications and presentations to support tenure and promotion plays into it, especially as the number of full-time faculty and librarian positions continues to decrease. I appreciate reasoned calls for quality over quantity, but I wonder whether slowing down the academic publishing arms race will end the proliferation of low-quality journals.

The Times article last week notes that one danger of increasing numbers of fraudulent journals is that “nonexperts doing online research will have trouble distinguishing credible research from junk.” This isn’t the fault of the open access movement at all; if anything, open access can help determine the legitimacy of a journal. Shining a light on these sham journals makes it easier than ever to identify them. It’s up to us, both faculty and librarians: if the research and scholarship we do is work we should be proud of, prestigious work that’s worth publishing, then it stands to reason that we should share that work and prestige only with and via publications that are worth it.

ACS Solutions: The Sturm und Drang

ACRLog welcomes a guest post from Sue Wiegand, Periodicals Librarian at St. Mary’s College in Notre Dame, IN.

A chemical storm recently blew up across the blogosphere, involving the American Chemical Society journals, the serials crisis of unsustainably high prices, and one brave librarian, Jenica Rogers at SUNY Potsdam, who said “Enough!” The atmospheric conditions that caused this storm: high journal prices clashing with low library budgets. Not a surprise, as these storms blow up frequently before subsiding, but the response to Jenica’s blog post thundered through the online community of librarians and scholars. Why? Because she implemented an unusual solution: she cancelled the high-priced “Big Deal” ACS package after consultation with her Chemistry Department. Others have cancelled Big Deals, but Jenica cancelled ACS journals, when ACS is also the accreditor for Chemistry. She made sure SUNY Potsdam Chemistry scholars and students would still get access to the research they needed; they would just get it in different ways. Controversy swirled like the winds of change.

Other “serials crisis” storms have come and gone over the years: in 2010, the University of California threatened to not renew Nature Publishing Group journals; in 2012, thousands of scholars and librarians signed a petition to boycott Elsevier. Going back further, decades of complaint from librarians resulted in, well, even higher prices. So, cancelling is the direct approach—the action alternative to what hasn’t worked.

As both Periodicals Librarian and liaison to the Chemistry Department, I knew that the answer at SUNY Potsdam would be different from what we could do with the resources available here at Saint Mary’s College. Our consortial arrangements are different, our mission is different—we’re a small liberal arts college, not part of a state-wide system. One suggestion from others here was to try to persuade the Chemistry Department to give up their ACS accreditation, but I didn’t want to do that. I’ve worked closely with Chemistry faculty, not only in collection development for their journals but on college-wide committees—I know they are reasonable people, and they too are shocked at unsustainably high pricing for scholarly articles. I reckoned the department and the library could work together to figure something out. The other librarians agreed: the time was right. Discussion ensued.

Some history: way back in 2002, after an interesting discussion of the new digital era for journals, a senior Chemistry professor came to me with a scenario based on what I’d told him was possible if he wanted to make a deal: cancel some Chemistry journals and use the freed-up money to get SciFinder Scholar, the indexing and abstracting database. ACS was offering a deal: a “3 for 2” split with 2 similar institutions, so we could pay 1/3 of the cost of the SciFinder index. So we worked out which journals to cancel and which to keep, and we added SciFinder, a client-server product at that time, while retaining the number of print ACS journals necessary for our accreditation. The scenario accomplished this at no cost increase because we cancelled some print titles the department didn’t want as much as they wanted the comprehensive, discipline-specific indexing.

Soon after, our state consortium offered an ACS “Big Deal” package: convert our ACS journals from print to online at the same price we were paying for print (the “historical spend”) and get many more journals for every library in the consortium. We converted. As with all Big Deals in the beginning, we marveled that we could get so many online journals at the same price we had been paying for our print subscriptions. I configured SciFinder to link our new titles, closed the catalog holdings, and shelved the print on the lower level, with signs on the Current Periodicals shelves: “This title is now online!” We added links. For Chemistry journals and indexing, at least, we were set for the brave new millennium.

Every year, the consortium negotiated small price increases, and more journals were added. Every year our budget stayed stagnant or went down, while subscription prices to other periodicals also went up. Faculty members in Chemistry were happy with the access they had to the high-quality ACS journals, and frequently told me so. When SciFinder became a web product, replacing the client-server model, it was even better (I was happy about that, too, in spite of the hassle with passwords and account creation that it entailed). But the librarians thought the cost per use was too high for our small Chemistry Department. Then came Jenica’s blog post.

At Potsdam, librarians and Chemistry faculty decided to continue the ACS Legacy Archive, use Interlibrary Loan, add journals from the Royal Society of Chemistry (the RSC Gold package), and continue both STNEasy and Elsevier’s ScienceDirect database (which we don’t have at Saint Mary’s). Our mix is slightly different—after much discussion with Chemistry faculty and my librarian colleagues, we kept only the subscription to the Journal of Chemical Education from ACS. We renewed the ACS Legacy Archive, and also kept our one Royal Society of Chemistry title (Chemical Society Reviews). The department agreed to use Interlibrary Loan when needed (as Jenica notes, ILL is not free either, but it is doable). We have post-cancellation access rights to 10 years of ACS content (next year, we must subscribe to another ACS title or pay an access fee to continue that access).

We also kept SciFinder Scholar, still the single most important element to our faculty in Chemistry—they made this very clear from the first meeting I had with them. SciFinder is the indexing piece of the puzzle—it searches the Chemistry literature as a whole, not just the ACS journals, so it’s one place for them to search, and they like that. They already get non-ACS, non-subscribed journals through ILL, and they know it works well. Like Jenica and the SUNY Potsdam librarians, we also encouraged faculty to first use their ACS membership titles for needed full text found via SciFinder, and to consider having students become members as well, since Society membership includes 25 “free” ACS articles and student memberships are inexpensive.

The other solution I explored to complete the picture for us was a document delivery service called FIZ AutoDoc, from FIZ Karlsruhe. FIZ (Fachinformationszentrum) is a not-for-profit German company that partners with the ACS, provides their document delivery, and also provides the STN databases. Implementation of the FIZ AutoDoc service required an incredible amount of mind-boggling documentation-reading, collaboration, copious emails, technical discussions, a webinar demo, a trial, and much angst. The sturm und drang was not FIZ’s fault—they were extremely easy to work with, even though based far away in Germany. We just needed to figure out what we wanted and how to configure it to work with SFX, our link resolver; our ideas about how to do this—how our workflow should go, who should do what, whether it should be mediated or unmediated, how it would look to the end user—required much discussion. Eventually, we thought we had it—mediated by ILL would be best. No, wait! Maybe there is another way… The debate raged.

Ultimately, we did go with requests mediated by ILL, with the SFX link also in SciFinder. We added an SFX note about using free ACS membership articles if possible, and provided a list of ACS titles for use by ILL student workers. The account was set up with 2 passwords so that the ILL Department can experiment with unmediated, seamless access through SFX; there is room for further improvement once the technical details are worked out. Meanwhile, requests for ACS articles are passed through to the ILL form, which is handily pre-populated by SFX wherever they originate (since some ACS titles are also indexed in Academic Search Premier). ILL takes it from there in their usual efficient way.
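For anyone curious about the plumbing behind that pre-population: SFX, like other link resolvers, receives each citation as an OpenURL (NISO standard Z39.88-2004), a query string of key-value pairs describing the article, and it can pass those same values along to fill in the ILL form. A simplified, hypothetical example (the hostname and citation details here are invented for illustration):

    http://sfx.example.edu/local?url_ver=Z39.88-2004
        &rft_val_fmt=info:ofi/fmt:kev:mtx:journal
        &rft.genre=article
        &rft.atitle=Sample+Article+Title
        &rft.jtitle=Journal+of+the+American+Chemical+Society
        &rft.issn=0002-7863
        &rft.volume=135&rft.issue=1&rft.spage=10
        &rft.aulast=Smith

Because any indexing database that generates the link (SciFinder, Academic Search Premier, or another source) sends the same kinds of keys, the ILL form can be pre-populated no matter where the request starts.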

So where do the philosophical questions come in? Is it ok for a library to purchase an article for just one person? What about sharing library resources? What about Fair Use? What about Open Access?

I have to say, I love the idea of Open Access, always have. I told the Chemistry Department that chemists everywhere should get together and start a subject repository like arXiv for Physics—this was quite humorous, apparently. In 2010, the University of Prince Edward Island’s library director, wanting to cancel Web of Science because of the high price, proposed an even more radical idea: librarians collaborating to build an index to the scholarly literature that would be free and maintained by librarians. We all know the scholarly communication story by now. No one should be kept from scholarly work by a lack of resources, wherever they are. Libraries are about sharing, at no cost to the users. Scholarly collaboration and library sharing shouldn’t have to be in competition, with large amounts of money at stake for access to published research. Yet, those devilish arguments go on.

Meanwhile, the ACS says it wants to work with researchers: “In the future… publishers will deal more directly with contributors and rely less on libraries as middlemen.” They have introduced ACS ChemWorx for research, collaboration, and reference management. In another example from a scholarly society, the Modern Language Association (MLA) is also working with researchers, but by making their author agreements more friendly to authors’ rights to self-archive, and by developing a platform for sharing: “members join the association less in order to receive its communications than to participate in them, to be part of the conversation, and to have their work circulated with the work being done in their community of practice.” They plan to emphasize their society role in “validation and credentialing”, developing new forms of peer review and scholarship in the MLA Commons.

This is the kind of action we can endorse and applaud. As librarians, let’s encourage scholarly societies to share scholarly work as the communities of practice they are at their best. Other collaborative platforms in various stages of adoption include Zotero, Mendeley, Academia.edu, and ResearchGate. There are also repositories, institutional and subject-based. The world is converging toward networking and collaborative research all in one place. I would like the library to be the free platform that brings all the others together.

Coming full circle, my vision is that when researchers want to work on their research, they will log on to the library and find all they need: a place to discover research ideas, search the literature seamlessly, access and save citations for books and articles of interest, download what they need, find research collaborators through a network of scholars all over the world with similar interests, manage their projects, write and cite their research in a seamless way, share it informally, have it peer reviewed and then formally published in an archived scholarly version of record, have it showcased and celebrated at each institution, and finally have it preserved for future scholars to discover and continue to build on. Walk in or log on, we could say to scholars and students alike—the library is the one place that has all you need to get your scholarly work done.

Let’s all, like Jenica, say enough with the old way! Let’s try some new ways and keep trying until we find or create something that works. This storm could help clear the air.