
Peer (to Peer) Review?

Gary Olson raises an interesting issue in the Chron – as more scholars put their efforts into online scholarship, how can it factor into promotion and tenure decisions? His answer: devise a system whereby scholarly societies certify sites that are submitted for peer review, maintain a registry of certified sites, and check back often to make sure they haven’t fallen off in quality or been overtaken by pre-teen hackers.

He says the peer review system for vetting books and articles works pretty well, but P&T committees (and everyone else, apparently) are at a loss when confronting a website –

. . . since no vetting mechanism for scholarly sites exists, even those that are designed by reputable scholars typically undergo no formal review. Such uncertainty disrupts the orderly intercourse of scholarly activity and plays havoc with the tenure-and-promotion system.

Clearly, the scholarly community needs to devise a way to introduce dependability into the world of electronic scholarship. We need a process to certify sites so that we all can distinguish between one that contains reliable material and one that may have been slapped together by a dilettante. We need to be able to ascertain if we can rely on a site for our own scholarship and whether we should give credit toward a colleague’s tenure and promotion for a given site.

Gee, given that we’ve been evaluating and comparing websites and deciding which to highlight as useful ones for research for a couple of decades now . . . are we really incapable of making those choices without a disciplinary stamp of approval? And is peer review really so flawless that we need to replicate it for a new genre of scholarship? And what about all those sites that aren’t scholarly projects per se but are incredibly valuable – the Avalon Project, the Oyez site, or the Pew Research Center for the People and the Press, for example? Should we doubt their value because they aren’t vetted by scholars in the manner of journal articles or university press titles?

The fact is, we work hard to develop in our students the capability of judging quality, not just relying on a peer review stamp of approval. I mean, honestly – if we told students any peer reviewed source is guaranteed to be of high quality we’d be doing them a disservice. So why do P&T committees want to have their critical work so oversimplified for them? Can’t they learn to rely on their own capacity for critical thinking, or is that just for students?

The fact is, these are two entirely separate issues. The quality of websites can be evaluated – and peers already do that. Whether academics are willing to broaden their notions of what counts as scholarship and to consider electronic projects as serious work is another matter altogether. Replicating a cumbersome print-based peer review mechanism, flaws and all, is not the solution. Doing the real work of evaluating a colleague’s scholarship – without relying on university presses and journals to do the vetting for them – is what’s called for. Oh, and a more imaginative and open-minded definition of what scholarship is.

I thought this playful version of Rodin’s The Thinker that I saw on the Washington University in St. Louis campus last week seemed somehow appropriate.

[Photo: a rabbit version of The Thinker]

ADDITION: I just bumped into This is Scholarship over at the InfoFetishist (where there’s a terrific list of things to read and think about) – an interesting response to the MLA task force report that questioned the dependence on the monograph as proof of scholarship.

So maybe, as an addendum to our efforts to encourage more responsible use of open access research opportunities, libraries could also help scholars at their institutions think more creatively about what counts as scholarship? Hey, we could at least buy them lunch and let them talk through the issues.

Web 2.0 and Open Science

Drexel University Libraries’ annual Scholarly Communication Symposium focused on web 2.0 in general and open science in particular. This is fast becoming my favorite conference: I can walk there; it’s free; it’s well organized; everyone there is smart, friendly and from diverse backgrounds; you get to eat a great lunch and it’s all over by 1:30!

Keynoter Jean-Claude Bradley (Chemistry) described UsefulChem and his mash-up of technologies for disseminating his work. I was struck by his pragmatic approach – some articles are better suited for peer-reviewed journals, some items for blog posts or wikis, some for mailing lists. He described what he calls “open science”: making your lab notebooks and all your data available to anyone who would like to look at them. He claimed the old way to evaluate information was to see if it was peer-reviewed; the new way is to make all the data available and let everyone look at it to see if they can find any problems.

A questioner in the Q and A asked about patents and giving up the power to exclude. Bradley responded, “If you are trying to get a patent, I wouldn’t recommend this approach. But if you have a project in which you don’t care about a patent, it’s a great way to find collaborators.” The costs of 2.0 may include giving up the power to exclude, but in return you often get feedback on work that previously would have gotten none, and your work gets found more easily.

Bradley struck me as an example of the kind of scholar who has figured out how to mix the new tools with the old and use them both to advance his own work and to advance his field. Whether he has any life outside his work and posting to his blogs and wikis is harder to say.

He mentioned a few tools I hadn’t heard of. He described FriendFeed, an app that informs you what your friends are doing, who finds you interesting, and who finds you boring. (At this point an old Blondie song jumped to mind – once had friends it was a gas/ soon turned out/ to be a pain in the ass…) The point was that this is how he keeps up with new information: through his social network. ChemSpider is a free database hosting some 20 million molecules; JSpecView lets people examine the fine details of your spectra, which is apparently very important in chemistry; and InChIKey is a molecular identifier designed to make chemical structures searchable in Google. The overall idea was that the web can provide detail that was formerly unavailable in the peer-reviewed journal article alone.

When asked if 2.0 is truly transformative, he said, “collaboration is not necessarily new or different, but now you can do it faster and with people all over the world. In a large enterprise like science this can make a big difference.” Well put.

Two other points. Bradley predicted open science would lead to the day when computers could do in a weekend the number of experiments that now take his students a year; and when asked if he worried about the archiving of his work, he said he tries to take care of this through (a) redundancy and (b) the expectation that in 5-10 years all his work will be obsolete anyway. (!)

The panelists and my roundtable were full of engaged people with lots to contribute. Banu Onaral, in particular, raised some provocative issues, including the idea that Asia will lead the way in the new certification of academic credentials. She also asked: what happens when another country (China?) that may not share our values buys up the formerly free hosting services (Google?) we’ve trusted with our collective genius, and decides to restrict them?

Thanks to Drexel University Libraries for another stimulating scholarly communication symposium.

ts;db

I’ve noticed that several of my favorite writers have resolved to post more frequently in 2008. Dear favorite writers: at the risk of sounding ungrateful, would you be terribly offended if I begged you not to follow through on this resolution? The odds are, I like your writing because:

  • You publish relatively infrequently. I think you’re great, which is why I read your writing, but I don’t want to know everything that’s on your mind. Generally, somewhere between once a week and once a month is fine by me.
  • Your pieces tend to take me at least five minutes to read, though ideally you’ll allow me the privilege of spending 15-50 minutes on ideas that have taken you several hours to put into words.
  • You publish almost nothing that’s off-topic, in particular almost nothing that’s both off-topic and solely about you. Once or twice a year, at most, going off-topic or writing about yourself is actually endearing. And it can be useful in our post-postmodern world if you acknowledge personal reasons for your opinions. But I’m reading your writing in order to learn about the topic of your blog. Abandon that topic too often and I’ll most likely unsubscribe from your feed.

The above criteria were the “ah ha” I got from Steve Yegge’s “Blogging Theory 201: Size Does Matter,” in which he suggests that his website, Stevey’s Blog Rants, is popular not in spite of the fact that he posts long pieces more or less monthly, but because he does.

Let’s start with the obvious. People expect blogs to be short – at least, shorter than mine. They expect that because it’s pretty much how everyone does it. Short entries, and frequent. Here’s my cat today. Doesn’t he look sooo different from yesterday? No wonder so many people hate bloggers.

When I write my long blogs, I’m bucking established social convention, so it’s natural that some people will whine that they’re too long.

Well, how far off cultural expectations am I? Doing a quick print preview in my browser shows that my last entry, formatted at about 14 words per line (typical for a printed book) weighs in at about ten pages. So it’s roughly essay-sized. I’m not talking about those toy five-paragraph essays they made you write in high school. I’m talking about real-life essays by real-life essayists. Real essays can range from three pages to 30 or more, but ten pages is not an unusual length.

If I were attempting to publish these entries as books, publishers would laugh at me. They’re way too short to be books. Sure, I could bundle them, but that’s beside the point. The fact is, two different real-world audiences have entirely incompatible views on what the proper length for my writing should be.

What interests me about Yegge’s take on online publishing convention is how the notion of length in particular (and essays in general) relates to academic librarians. Steven Bell has written recently on ACRLog and (with David Murray) in College & Research Libraries News about faculty members who publish online and the importance of our reading their work. As a new academic librarian, this is the sort of idea that is both challenging (Where will I find the time?) and welcome (Cool! More great stuff to read!). He’s also written recently about the idea of tenure for librarians, which, naturally, leads back to what tenure is really all about, on what basis it should be awarded, and whether anyone should have it. Of course, this is interesting on a theoretical level for librarians who have cleared the tenure hurdle or amassed a body of work that would allow them to do so relatively easily if they end up working at an institution where librarians have faculty standing. For those of us new to the profession, discussions about tenure elicit somewhat more practical concerns.

My reading of these discussions is that it comes down to publishing: are we giving back to the profession, and to society, by publishing valuable new ideas and discoveries? Does the protection afforded by tenure foster more valuable writing? For some, peer review is the starting point in determining value, especially for tenure committees, which are often made up of faculty from many departments. Reading standard tenure candidate portfolios is arduous enough; expecting committee members to read the contents of a web-based archive could be interpreted as asking for trouble. After all, how much value could there be in something that was posted online, for free, without the benefit of a formal review process? It doesn’t take a Ph.D. to notice the difference between the sort of entries you’re likely to find in someone’s LiveJournal and the investigations published in Nature.

Of course, if all non-peer-reviewed online writing were the academic equivalent of I Can Has Cheezburger or Alan Sokal’s parody, “Transgressing the Boundaries: Toward a Transformative Hermeneutics of Quantum Gravity,” then Steven’s posts probably would not have elicited the responses that they did. But we know better.

WordPress, Movable Type, and other software packages, by making it easy for people to publish their ideas, have helped create an Internet awash with mundane posts. But widespread use of these software packages by highly esteemed writers has also helped create not only an expectation that the best writers will make their ideas available online, but also an expectation that, with a little legwork, we’ll be able to find their work online for free.

That last part – the notion that non-digital or firewalled writing doesn’t exist – is beyond the scope of this piece. By way of extricating myself from that briar patch, I’ll invite you to imagine a world in which we could download podcasts of the “A Room of One’s Own” lectures Virginia Woolf gave at Cambridge in 1928, or subscribe to feeds of Dorothy Parker’s “Constant Reader” articles or Pauline Kael’s essays on cinema. Once you’re finished imagining that, I suggest you subscribe to book reviews by Salon’s Laura Miller and to Judith Martin’s Miss Manners column, and join me in counting the minutes until someone offers an Alice Munro feed. Certainly, given the present state of copyright and OCR technology, we may be farther from a fully Googleable world than some of our constituents would like to believe. But we’re also a lot closer than some of our colleagues seem willing to acknowledge (witness Laura Miller, Judith Martin, and hundreds or thousands of other brilliant writers making some or all of their best work available not only for free, but via feeds). I think it would be great if we as academic librarians committed to doing our part to bring a freer, more searchable online world closer and to make it better. One way to do that would be to sacrifice quantity in order to increase quality, at least in the work we’re sharing with peers.

Here’s the first point I’m trying to make: good, thoughtful prose is valuable no matter where or how it’s published. Grigori Perelman posted his groundbreaking work on the Poincaré Conjecture to the free, web-based arXiv.org in November 2002, March 2003, and July 2003 – a repository that at the time was considerably easier to post to than ACRLog is now. Even though it has since introduced an endorsement system, arXiv.org remains close to barrier free – and full of indisputably valuable work. Committee members making tenure decisions, just like scientists making arXiv.org endorsements or mathematicians awarding the Fields Medal, are cheating everyone when they take shortcuts in deciding whether someone’s work has value. Peer review plays an important role in numerous situations, but there are times it is neither necessary, as with Perelman, nor sufficient, as with Sokal’s “Transgressing the Boundaries.” At the same time, you may be cheating yourself and your readers if you reserve your best work for peer-reviewed, subscription-only journals. Eventually, people will be rewarded for publishing good work online, and not just with popularity badges.

Here’s the second point I’m trying to make: good, thoughtful prose generally takes more than a few minutes a day to write and more than a couple of hundred words to express. I don’t think it’s a bad thing when people dismiss longer pieces with tl;dr (too long, didn’t read). Certainly, when we’re writing for undergraduates or Pierre Bayard, we need to take that wholly defensible sensibility into account. But if you’re writing for me, and for many other academic librarians, please understand that we’re likely to dismiss light, quick, frequent posts with ts;db: “Too short, didn’t bother.”

Peer Review Problems In Medicine

For all the commercial publishers’ (fake) crowing about peer review, it turns out the peer review process in medicine is not working so well lately. At least that’s the conclusion one comes to after reading Robert Lee Hotz’s interesting article in today’s Wall Street Journal, “Most Science Studies Appear to Be Tainted.”

Hotz references John P. A. Ioannidis, who wrote “the most downloaded technical paper” at the journal PLoS Medicine, “Why Most Published Research Findings Are False.” Ioannidis claims that one problem is the pressure to publish new findings:

Statistically speaking, science suffers from an excess of significance. Overeager researchers often tinker too much with the statistical variables of their analysis to coax any meaningful insight from their data sets. “People are messing around with the data to find anything that seems significant, to show they have found something that is new and unusual,” Dr. Ioannidis said.

But Hotz also points out that besides statistical manipulation, the pressures of competition, and good ol’ fraud, ordinary human error is also a problem. The peers, it seems, are kind of slackin on the reviewing:

To root out mistakes, scientists rely on each other to be vigilant. Even so, findings too rarely are checked by others or independently replicated. Retractions, while more common, are still relatively infrequent. Findings that have been refuted can linger in the scientific literature for years to be cited unwittingly by other researchers, compounding the errors.

Overall, technical reviewers are hard-pressed to detect every anomaly. On average, researchers submit about 12,000 papers annually just to the weekly peer-reviewed journal Science. Last year, four papers in Science were retracted. A dozen others were corrected.

Earlier this year, informatics expert Murat Cokol and his colleagues at Columbia University sorted through 9.4 million research papers at the U.S. National Library of Medicine published from 1950 through 2004 in 4,000 journals. By raw count, just 596 had been formally retracted, Dr. Cokol reported.
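To put Cokol’s raw count in perspective, here is a back-of-the-envelope calculation of the implied formal retraction rate, using only the two figures quoted above (it deliberately ignores papers that are flawed but never retracted, which is the article’s larger point):

```python
# Figures from the Cokol study cited above: papers in the U.S. National
# Library of Medicine, 1950-2004, across ~4,000 journals.
papers_examined = 9_400_000
formal_retractions = 596

# Fraction of papers ever formally retracted.
rate = formal_retractions / papers_examined
print(f"{rate:.6%}")  # roughly 0.006%, i.e. about 6 in every 100,000 papers
```

In other words, formal retraction is vanishingly rare relative to the error rates Ioannidis and Hotz describe – which is exactly why refuted findings can linger in the literature for years.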

(Aren’t you glad you’re paying all that money for “high quality information?”)

It’s tempting to throw up one’s hands and say “don’t trust anything,” “there are no authorities,” or “evaluate everything for yourself.” But critical thinking by individuals, although important, cannot be the only solution to this problem. In an information-saturated, hyper-competitive capitalist economy, no one has the time or the expertise to evaluate everything. There has to be a system in place that saves people time and promotes trust in research. Here’s why:

Every new fact discovered through experiment represents a foothold in the unknown. In a wilderness of knowledge, it can be difficult to distinguish error from fraud, sloppiness from deception, eagerness from greed or, increasingly, scientific conviction from partisan passion. As scientific findings become fodder for political policy wars over matters from stem-cell research to global warming, even trivial errors and corrections can have larger consequences.

Hotz points to the US Office of Research Integrity and the European Science Foundation’s sponsorship of the First World Conference on Research Integrity: Fostering Responsible Research as an attempt to begin a search for solutions. Academics, a museum, and med schools are represented; it would be great if librarians got in on this conversation as well.