Daily Archives: September 14, 2007

Peer Review Problems In Medicine

For all the commercial publishers’ (fake) crowing about peer review, it turns out the peer review process in medicine is not working so well lately. At least that’s the conclusion one comes to after reading Robert Lee Hotz’s interesting article in today’s Wall Street Journal, “Most Science Studies Appear to Be Tainted.”

Hotz references John P. A. Ioannidis, who wrote “the most downloaded technical paper” at the journal PLoS Medicine, “Why Most Published Research Findings Are False.” Ioannidis claims that one problem is the pressure to publish new findings:

Statistically speaking, science suffers from an excess of significance. Overeager researchers often tinker too much with the statistical variables of their analysis to coax any meaningful insight from their data sets. “People are messing around with the data to find anything that seems significant, to show they have found something that is new and unusual,” Dr. Ioannidis said.
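To see how easily that “messing around” produces spurious findings, here is a quick simulation (my own illustration, not from the article or from Ioannidis’s paper). It tests 20 unrelated variables, each of which truly has no effect, against the conventional p < 0.05 threshold, and counts how often a study finds at least one “significant” result purely by chance:

```python
import math
import random

random.seed(42)

def z_test_pvalue(sample, mu0=0.0, sigma=1.0):
    """Two-sided z-test p-value for H0: true mean == mu0, known sigma."""
    n = len(sample)
    z = (sum(sample) / n - mu0) / (sigma / math.sqrt(n))
    # Normal CDF via the error function: Phi(z) = 0.5 * (1 + erf(z / sqrt(2)))
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Each "study" measures 20 independent variables where the null
# hypothesis is actually true (data is pure noise, mean 0).
trials = 1000
tests_per_study = 20
studies_with_a_hit = 0
for _ in range(trials):
    pvals = [z_test_pvalue([random.gauss(0, 1) for _ in range(30)])
             for _ in range(tests_per_study)]
    if min(pvals) < 0.05:  # "we found something significant!"
        studies_with_a_hit += 1

# Expected rate is 1 - 0.95**20, about 64% of studies reporting
# a false positive despite there being nothing to find.
print(studies_with_a_hit / trials)
```

The sample sizes and the 20-variable setup are arbitrary, but the arithmetic is not: at p < 0.05, each extra variable tested is another 1-in-20 roll of the dice, and with enough rolls a “discovery” is nearly guaranteed.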

But Hotz also points out that besides statistical manipulation, the pressures of competition, and good ol’ fraud, ordinary human error is also a problem. The peers, it seems, are kind of slackin’ on the reviewing:

To root out mistakes, scientists rely on each other to be vigilant. Even so, findings too rarely are checked by others or independently replicated. Retractions, while more common, are still relatively infrequent. Findings that have been refuted can linger in the scientific literature for years to be cited unwittingly by other researchers, compounding the errors.

Overall, technical reviewers are hard-pressed to detect every anomaly. On average, researchers submit about 12,000 papers annually just to the weekly peer-reviewed journal Science. Last year, four papers in Science were retracted. A dozen others were corrected.

Earlier this year, informatics expert Murat Cokol and his colleagues at Columbia University sorted through 9.4 million research papers at the U.S. National Library of Medicine published from 1950 through 2004 in 4,000 journals. By raw count, just 596 had been formally retracted, Dr. Cokol reported.

(Aren’t you glad you’re paying all that money for “high quality information?”)

It’s tempting to throw up one’s hands and say “don’t trust anything,” “there are no authorities,” or “evaluate everything for yourself.” But critical thinking by individuals, although important, cannot be the only solution to this problem. In an information-saturated, hyper-competitive capitalist economy, no one has the time or the expertise to evaluate everything. There has to be a system in place that saves people time and promotes trust in research. Here’s why:

Every new fact discovered through experiment represents a foothold in the unknown. In a wilderness of knowledge, it can be difficult to distinguish error from fraud, sloppiness from deception, eagerness from greed or, increasingly, scientific conviction from partisan passion. As scientific findings become fodder for political policy wars over matters from stem-cell research to global warming, even trivial errors and corrections can have larger consequences.

Hotz points to the US Office of Research Integrity and the European Science Foundation’s sponsorship of the First World Conference on Research Integrity: Fostering Responsible Research as an attempt to begin a search for solutions. Academics, a museum, and med schools are represented; it would be great if librarians got in on this conversation as well.