Peer Review Problems In Medicine

For all the commercial publishers’ (fake) crowing about peer review, it turns out the peer review process in medicine is not working so well lately. At least that’s the conclusion one comes to after reading Robert Lee Hotz’s interesting article in today’s Wall Street Journal, “Most Science Studies Appear to Be Tainted.”

Hotz references John P. A. Ioannidis, who wrote “the most downloaded technical paper” at the journal PLoS Medicine, “Why Most Published Research Findings Are False.” Ioannidis claims that one problem is the pressure to publish new findings:

Statistically speaking, science suffers from an excess of significance. Overeager researchers often tinker too much with the statistical variables of their analysis to coax any meaningful insight from their data sets. “People are messing around with the data to find anything that seems significant, to show they have found something that is new and unusual,” Dr. Ioannidis said.
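This “excess of significance” point is easy to see in a toy simulation. The sketch below is only an illustration of the general statistical issue, not Ioannidis’s actual analysis; the group sizes, number of variables, and p < 0.05 threshold are assumptions chosen for the example. When a researcher tests enough unrelated variables against what is essentially noise, some comparisons will look “significant” by chance alone.

```python
import random
import statistics

# Toy illustration (not from Hotz's article or Ioannidis's paper): run many
# significance tests on pure noise and count how many clear p < 0.05.

random.seed(42)

N_OBS = 50        # observations per group (arbitrary assumption)
N_VARIABLES = 20  # number of variables being "tinkered with" (arbitrary)
N_PERM = 1000     # permutations used to estimate each p-value

def permutation_p_value(group_a, group_b, n_perm=N_PERM):
    """Two-sample permutation test on the absolute difference of means."""
    observed = abs(statistics.mean(group_a) - statistics.mean(group_b))
    pooled = group_a + group_b
    extreme = 0
    for _ in range(n_perm):
        random.shuffle(pooled)
        a, b = pooled[:len(group_a)], pooled[len(group_a):]
        if abs(statistics.mean(a) - statistics.mean(b)) >= observed:
            extreme += 1
    return extreme / n_perm

# Both groups are drawn from the same distribution, so every true effect is zero.
false_positives = 0
for _ in range(N_VARIABLES):
    group_a = [random.gauss(0, 1) for _ in range(N_OBS)]
    group_b = [random.gauss(0, 1) for _ in range(N_OBS)]
    if permutation_p_value(group_a, group_b) < 0.05:
        false_positives += 1

print(f"'Significant' results out of {N_VARIABLES} null comparisons: {false_positives}")
# With 20 independent tests at alpha = 0.05, roughly one spurious "finding" is
# expected even though there is nothing real to find.
```

Nothing in this sketch is specific to medicine; it just makes concrete why “messing around with the data to find anything that seems significant” will reliably produce something publishable-looking.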

But Hotz also points out that besides statistical manipulation, the pressures of competition, and good ol’ fraud, ordinary human error is also a problem. The peers, it seems, are kind of slackin’ on the reviewing:

To root out mistakes, scientists rely on each other to be vigilant. Even so, findings too rarely are checked by others or independently replicated. Retractions, while more common, are still relatively infrequent. Findings that have been refuted can linger in the scientific literature for years to be cited unwittingly by other researchers, compounding the errors.

Overall, technical reviewers are hard-pressed to detect every anomaly. On average, researchers submit about 12,000 papers annually just to the weekly peer-reviewed journal Science. Last year, four papers in Science were retracted. A dozen others were corrected.

Earlier this year, informatics expert Murat Cokol and his colleagues at Columbia University sorted through 9.4 million research papers at the U.S. National Library of Medicine published from 1950 through 2004 in 4,000 journals. By raw count, just 596 had been formally retracted, Dr. Cokol reported.

(Aren’t you glad you’re paying all that money for “high-quality information”?)

It’s tempting to throw up one’s hands and say “don’t trust anything,” “there are no authorities,” or “evaluate everything for yourself.” But critical thinking by individuals, although important, cannot be the only solution to this problem. In an information-saturated, hyper-competitive capitalist economy, no one has the time or the expertise to evaluate everything. There has to be a system in place that saves people time and promotes trust in research. Here’s why:

Every new fact discovered through experiment represents a foothold in the unknown. In a wilderness of knowledge, it can be difficult to distinguish error from fraud, sloppiness from deception, eagerness from greed or, increasingly, scientific conviction from partisan passion. As scientific findings become fodder for political policy wars over matters from stem-cell research to global warming, even trivial errors and corrections can have larger consequences.

Hotz points to the US Office of Research Integrity and the European Science Foundation’s sponsorship of the First World Conference on Research Integrity: Fostering Responsible Research as an attempt to begin a search for solutions. Academics, a museum, and med schools are represented; it would be great if librarians got in on this conversation as well.

4 thoughts on “Peer Review Problems In Medicine”

  1. Ioannidis’ article is thought-provoking and certainly worth considering. Although far from definitive (and not entirely convincing in my view), it has generated a great deal of worthwhile discussion since it was published over two years ago. But I’m a little puzzled by what you’re implying with the statement “the commercial publishers’ (fake) crowing…” or the notion that the peer review process is not working so well “lately.”

    The limitations of peer review have been well recognized for a very long time, as has its value. JAMA has been sponsoring an International Conference on Peer Review every 4 years since 1990, and there is now a considerable body of research as well as considerable debate within the publishing community about the value and limitations of peer review, the variety of forms that it can take, and various mechanisms that can be used to improve it.

    Librarians have, indeed, been a part of the conversation. I can particularly recommend Ann C. Weller’s book “Editorial Peer Review: Its Strengths and Weaknesses” published in 2001. Ann is a health sciences librarian at the University of Illinois Chicago and her book is an excellent in-depth discussion of the facts and issues involved with peer review. (I should mention that she served as an editorial board member during part of my six-year tenure as editor of the Journal of the Medical Library Association (JMLA).)

    No one, to my knowledge, has ever claimed that peer review, in any of its current manifestations, is sufficient to eliminate all cases of fraud or poor science. There is, however, wide agreement that it remains a very important tool, and, of course, the possibility of using the internet to experiment with new and perhaps more open mechanisms for improving peer review is one of the most promising developments of the current transformation in scholarly communication.

    The PRISM rhetoric about open access being a threat to peer review is obviously overblown and easily argued against, but few publishers, commercial or otherwise, have made extreme or unsupported claims for the value that peer review brings. It is very important to separate serious debate and discussion of peer review from the rhetorical mudslinging (on both sides) that PRISM has generated.

  2. I don’t know if Ioannidis is right, but his critique goes beyond a few cases of fraud or poor science that slip through the cracks of a good but imperfect system to arguing that most research findings are false. Other findings raise significant questions about peer review in the sciences, which also happens to be the knowledge domain with the most expensive journals. PRISM does represent many of these commercial publishers, and they do try to justify the subscription prices by pointing to peer review (see their “principles”). The debate and experiments you point to are encouraging; it will be interesting to see how the system evolves.

    Thanks for pointing out the Weller book, I’ll take a look.

  3. Marc has brought up a timely question. When the peer review system is being used as a bargaining chip (though it turns out to be a chip in a shell game, since peer review doesn’t belong to a group of publishers just because they say it does) it’s wise to think about what peer review means, how it works, and why it doesn’t always perform well.

    Marc is quite right – we can’t over-romanticize it, but we also can’t simply say “it’s corrupt; you can’t trust anything.” Yet the issues that complicate peer review are closely tied to the way we do science (and other scholarly work) these days, so it makes a good lens for looking at scholarship and its discontents.

    In a course I teach, I have students read Michael Polanyi’s “Republic of Science” – an admittedly rosy view of how it works. Then I give them a piece by John Ziman, “Is Science Losing its Objectivity?” (from Nature 382 [1996]: 751-754). He sees the importance of “disinterestedness” – or rather, interest in truth regardless of personal gain – under threat as science focuses on growth, influence, and competition for resources. Together, the two readings help students grasp how knowledge is built and rebuilt by people who may share some ideals but often fall short.

    I think the scare headline – “wow! most science is tainted!” – is one way to catch people’s eye and get them to think about how science works, but you could just as easily say “wow! Most newspaper articles have errors!” or “You can’t trust books! They usually have mistakes!”

    That said – the more we think about all scholarly communication in the broadest context: creators, conservators, publishers, consumers, funders – the whole political, economic, and cultural context in which new ideas are generated – the more interesting and rich the conversation will be. And we should have that conversation, often, because it matters.
