Category Archives: Authority

Peer Review Problems In Medicine

For all the commercial publishers’ (fake) crowing about peer review, turns out the peer review process in medicine is not working so well lately. At least that’s the conclusion one comes to after reading Robert Lee Hotz’s interesting article in today’s Wall Street Journal, “Most Science Studies Appear to Be Tainted.”

Hotz references John P. A. Ioannidis, who wrote “the most downloaded technical paper” at the journal PLoS Medicine, “Why Most Published Research Findings Are False.” Ioannidis claims that one problem is the pressure to publish new findings:

Statistically speaking, science suffers from an excess of significance. Overeager researchers often tinker too much with the statistical variables of their analysis to coax any meaningful insight from their data sets. “People are messing around with the data to find anything that seems significant, to show they have found something that is new and unusual,” Dr. Ioannidis said.

But Hotz also points out that besides statistical manipulation, the pressures of competition, and good ol’ fraud, ordinary human error is also a problem. The peers, it seems, are kind of slackin’ on the reviewing:

To root out mistakes, scientists rely on each other to be vigilant. Even so, findings too rarely are checked by others or independently replicated. Retractions, while more common, are still relatively infrequent. Findings that have been refuted can linger in the scientific literature for years to be cited unwittingly by other researchers, compounding the errors.

Overall, technical reviewers are hard-pressed to detect every anomaly. On average, researchers submit about 12,000 papers annually just to the weekly peer-reviewed journal Science. Last year, four papers in Science were retracted. A dozen others were corrected.

Earlier this year, informatics expert Murat Cokol and his colleagues at Columbia University sorted through 9.4 million research papers at the U.S. National Library of Medicine published from 1950 through 2004 in 4,000 journals. By raw count, just 596 had been formally retracted, Dr. Cokol reported.
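To put Cokol’s count in perspective, a quick back-of-the-envelope calculation using only the figures quoted above shows just how rare formal retraction is:

```python
# Back-of-the-envelope retraction rate from the figures Hotz cites:
# 596 formal retractions out of 9.4 million papers (1950-2004).
papers = 9_400_000
retractions = 596

rate_per_100k = retractions / papers * 100_000
papers_per_retraction = papers / retractions

print(f"{rate_per_100k:.1f} retractions per 100,000 papers")       # 6.3
print(f"about 1 retraction per {papers_per_retraction:,.0f} papers")  # ~15,772
```

If, as Ioannidis suggests, most published findings are false, a rate of roughly six retractions per hundred thousand papers can only mean that the vast majority of bad results are never formally withdrawn.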

(Aren’t you glad you’re paying all that money for “high quality information?”)

It’s tempting to throw up one’s hands and say “don’t trust anything,” “there are no authorities,” or “evaluate everything for yourself.” But critical thinking by individuals, although important, cannot be the only solution to this problem. In an information-saturated, hyper-competitive capitalist economy, no one has the time or the expertise to evaluate everything. There has to be a system in place that saves people time and promotes trust in research. Here’s why:

Every new fact discovered through experiment represents a foothold in the unknown. In a wilderness of knowledge, it can be difficult to distinguish error from fraud, sloppiness from deception, eagerness from greed or, increasingly, scientific conviction from partisan passion. As scientific findings become fodder for political policy wars over matters from stem-cell research to global warming, even trivial errors and corrections can have larger consequences.

Hotz points to the US Office of Research Integrity and the European Science Foundation’s sponsorship of the First World Conference on Research Integrity: Fostering Responsible Research as an attempt to begin a search for solutions. Academics, a museum, and med schools are represented; it would be great if librarians got in on this conversation as well.

Computing Wikipedia’s Authority

Michael Jensen has predicted:

In the Web 3.0 world, we will also start seeing heavily computed reputation-and-authority metrics, based on many of the kinds of elements now used, as well as on elements that can be computed only in an information-rich, user-engaged environment.

By this he means that computer programs and data mining algorithms will be applied to information to help us decide what to trust and what not to trust, much as prestige of publisher or reputation of journal performed this function in the old (wipe away tear) information world.

It’s happening. Two recent projects apply computed authority to Wikipedia. One, the University of California Santa Cruz Wiki Lab, attempts to compute and then color-code the trustworthiness of a Wikipedia author’s contributions based on the contributor’s previous editing history. Interesting idea, but it needs some work. As it stands the software doesn’t really measure trustworthiness, and the danger is that people will trust the software to measure something that it does not. Also, all that orange is confusing.
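The underlying idea is simple even if the implementation is not: an author whose past edits tend to survive later revisions earns a higher trust score, and new text is shaded accordingly. Here is a toy sketch of that idea; the scoring rule and thresholds are my own invention for illustration, not the Wiki Lab’s actual algorithm:

```python
# Toy version of edit-history-based trust scoring.
# An author's score is the fraction of their past edits that survived
# subsequent revisions; text would then be "color-coded" by that score.
# (Invented thresholds -- not the UCSC Wiki Lab's actual method.)

def author_trust(edit_history):
    """edit_history: list of booleans, True if that edit survived later revisions."""
    if not edit_history:
        return 0.0  # unknown authors start untrusted
    return sum(edit_history) / len(edit_history)

def shade(score):
    """Map a trust score to a display shade (darker orange = less trusted)."""
    if score >= 0.8:
        return "white"         # well-established contributor
    elif score >= 0.4:
        return "light orange"  # mixed record
    return "dark orange"       # mostly reverted in the past

history = [True, True, False, True, True]  # 4 of 5 edits survived
score = author_trust(history)
print(score, shade(score))  # 0.8 white
```

Even this toy version shows the problem noted above: edit survival measures how uncontested an author’s contributions are, which is not the same thing as how trustworthy they are.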

More interestingly, another project, Wikipedia Scanner, uses data mining to uncover the IP addresses of anonymous Wikipedia contributors. As described in Wired, Wikipedia Scanner:

offers users a searchable database that ties millions of anonymous Wikipedia edits to organizations where those edits apparently originated, by cross-referencing the edits with data on who owns the associated block of internet IP addresses. …

The result: A database of 34.4 million edits, performed by 2.6 million organizations or individuals ranging from the CIA to Microsoft to Congressional offices, now linked to the edits they or someone at their organization’s net address has made.

The database reveals, for example, that the anonymous Wikipedia edit that deleted 15 paragraphs critical of electronic voting machines originated from an IP address at the voting machine company Diebold.
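Mechanically, what the Scanner does is an IP-range join: take the IP address recorded for each anonymous edit and look up which organization’s registered address block contains it. A minimal sketch of that cross-referencing step, using Python’s standard ipaddress module and made-up example data (the organizations, blocks, and edits below are illustrative, not real WHOIS records):

```python
import ipaddress

# Illustrative (made-up) organization address blocks, as one might
# assemble from WHOIS registration data.
org_blocks = {
    "ExampleCorp": ipaddress.ip_network("203.0.113.0/24"),
    "ExampleUniversity": ipaddress.ip_network("198.51.100.0/25"),
}

def org_for_ip(ip_string):
    """Return the organization whose registered block contains this IP, if any."""
    ip = ipaddress.ip_address(ip_string)
    for org, block in org_blocks.items():
        if ip in block:  # membership test: is this address inside the block?
            return org
    return None

# Anonymous Wikipedia edits are recorded with the editor's IP address.
anonymous_edits = [
    {"page": "Electronic voting", "ip": "203.0.113.45"},
    {"page": "Weather", "ip": "192.0.2.7"},
]

for edit in anonymous_edits:
    print(edit["page"], "->", org_for_ip(edit["ip"]))
# Electronic voting -> ExampleCorp
# Weather -> None
```

At the Scanner’s scale this lookup runs over millions of edits and thousands of registered blocks, but the join is the same: an edit is attributed to whichever organization registered the block its IP falls in.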

Both of these projects go beyond the “popularity as authority” model that comes from Web 2.0 by simultaneously reaching back to an older notion of authority that tries to gauge “who is the author” and fusing it with the new techniques of data mining and computer programming. (Perhaps librarians who wake up every morning wondering whether they’re still relevant need to get a degree in computer science.)

If you prefer the oh-so-old-fashioned-critical-thinking-by-a-human approach, Paul Duguid has shown nicely that one of the unquestioned assumptions behind the accuracy of Wikipedia–that over time and with more edits entries get more and more accurate–is not necessarily so. Duguid documents how the Wikipedia entry for Daniel Defoe actually got less accurate over a period of time due to more editing. Duguid shows how writing a good encyclopedia article can actually be quite difficult, and that not all the aphorisms of the open source movement (given enough eyeballs all bugs are shallow) transfer to a project like Wikipedia. Duguid also provides a devastating look at the difficulties Project Gutenberg has with a text like Tristram Shandy.

Evaluating authority in the hybrid world calls for hybrid intelligences. We can and should make use of machine algorithms to uncover information that we wouldn’t be able to on our own. As always, though, we need to keep our human critical thinking skills activated and engaged.

Can (Political) Blogs Be Trusted?

Does political scare you?
Political doesn’t scare me. Radical political scares me. Political political scares me.
The Player

At an ALA Annual program sponsored by ACRL’s Law and Political Science Section titled “Can Blogs Be Trusted,” Jason Zengerle of the New Republic raised questions about the objectivity and reliability of political blogs that went beyond the simple and oft-heard objection that “anyone can write a blog so they’re not authoritative.”

Zengerle, a senior editor at the New Republic, focused mainly on liberal political blogs such as the Daily Kos and described their ascendancy after the 2000 presidential election. Zengerle gave two examples of how the Daily Kos and other liberal blogs should be treated more like political campaign tools than reliable information tools.

The first involved Zengerle’s investigation of the possibility that Markos Moulitsas (of Daily Kos) was indirectly participating in a kind of pay-to-play scheme, in which payment was received from a politician in exchange for favorable coverage on the blog. In the course of investigating, Zengerle discovered the existence of a listserv called the Townhouse, used by liberal bloggers to stay unified and on message. He gained access to the listserv and published a message in which Moulitsas asked liberal bloggers not to talk about the pay-to-play story. When one item Zengerle posted turned out to contain a factual error and a fabrication (which he claims to have admitted and fixed after discovering it), liberal bloggers used the mistake as a cudgel to discredit him and all the reporters at TNR; on the Townhouse list, they discussed a strategy for using this information to discredit the magazine. This, according to Zengerle, led him to believe that liberal bloggers act more like a political campaign with an organized strategy to discredit opponents than like reporters with a responsibility to produce accurate articles. His claim appears to be that the bloggers were willing to stay silent to defend Moulitsas, even when silence was not warranted, and then to attack him opportunistically for breaking that silence.

The second example involved a remark Harry Reid made in a conference call with liberal bloggers. Reid said that a general in the field was incompetent. (At this point in the talk someone in the audience shouted that Reid had never said that.) Zengerle went on to explain that a Washington newspaper, the Politico, had reported the remark, and the Daily Kos accused the Politico of simply making it up. Later, when a tape was produced of Reid saying it, the Daily Kos said, in effect, “oh, never mind.” Zengerle criticized Daily Kos for making the serious allegation that the Politico had fabricated the quote, and he implied that the liberal bloggers knew Reid had said it (since they were on the conference call) but chose to deny it because it hurt their political goals.
(It’s not clear to me if Daily Kos was on the conference call, but Zengerle implied that the major liberal bloggers are all on the same page because of the Townhouse listserv.)

Zengerle concluded that political blogs are not the most reliable resources for objective facts, but they can be useful as a kind of primary source–as an insight into a mindset and a worldview and for detecting breaking trends such as a groundswell of support for a candidate (e.g. Dean, Lamont, Thompson). Zengerle also pointed out that other blogs, such as Talking Points Memo, are doing reporting that is sometimes better than the mainstream media. He also stated that conservatives have been “working the refs” for years and that perhaps now that liberal blogs are performing that function maybe things are more balanced.

Still, the idea that some bloggers may be getting paid by politicians without revealing it, and that political bloggers are united by a behind-the-scenes listserv where strategy about what to reveal and whom to attack is discussed, is unsettling and does raise concerns about objectivity, even if it’s not completely shocking (conservative talk radio has supposedly operated this way for years).

In the Q & A, Zengerle seemed to backtrack, saying that the biases of most bloggers are obvious and easy to detect. Maybe so, but there’s a difference between having an editorial opinion and having a concerted strategy to advance political objectives even when the facts get in the way. Just because one is writing opinion doesn’t mean that facts can be distorted or used selectively to support an opinion.

Does this put political blogs such as Daily Kos and others a notch below the opinion sections of newspapers and magazines on the trust meter? Or is there not really much difference between a political blog and the opinion section of a newspaper or magazine?

As librarians and educators, we often recommend that students distinguish fact from opinion, usually without much more guidance than just stating it. This guidance is usually given in the context of a student needing information to write an argumentative paper, perhaps for a first-year writing course. When we advise students this way, are we saying that all opinion writing should be distrusted? Or treated with less trust and more skepticism than so-called factual writing? Does this advice help students cultivate a useful attitude toward opinion writing for the rest of their adult lives?

Regardless of the merits of Zengerle’s case against Kos, his account does identify two specific ways that opinion writing can be corrupted:

1. presence of unknown money contributions,
2. political motivations that override concerns for truth.

How would a lay person recognize these signs without being able to do the investigative journalism that could uncover a money trail or gain access to a private listserv? Zengerle was asked as much but didn’t provide a direct answer. Are political blogs in fact more susceptible to this kind of undue influence, and therefore a notch below the opinion sections of newspapers and magazines? (Note that I’m not saying newspapers and magazines are free of these and other nefarious kinds of influence, just perhaps less susceptible.) If so, this would be interesting to point out to students.

On the other hand, liberal political bloggers in general were less taken in by the U.S. government’s case for the Iraq War, perhaps precisely because they weren’t being fed the so-called authoritative information from the government, as well as being more skeptical of the government to begin with.

There’s also the problem that readers of political blogs may be reading them for other reasons, to have their own opinions confirmed, for example, and are therefore less likely to be open to information at odds with their own point of view. Or they may be nakedly politically motivated, agreeing with attacks on anyone who disagrees with the group, regardless of the facts.

Needless to say these motivations are at odds with the critical thinking most of us hope to inspire in our students. This is one reason why libraries are urged to follow the Library Bill of Rights and “provide materials and information presenting all points of view.”

Librarians need to continue to be aware of the difficulties of disentangling fact from opinion, especially with the new media. We can explain, uncover, and give examples of the mechanisms by which truth can be and has been obscured in opinion writing. We should convey the subtlety of an information medium such as the blog, which can both challenge vested interests and conceal vested interests at the same time.

One would hope that if opinion writing is not based on facts, then ultimately those sources would lose credibility. Yet an increasing number of people (often disparaged as wingnuts) seem unwilling or unable to let any conflicting information get in the way of their own worldview.

As educated members of an information abundant society, we need to learn not only how to disentangle fact from opinion, but also how to put a check on our own ability to customize the information we receive by actively seeking out opinions that differ from our own, so that we aren’t increasingly caught in our own echo chambers.

More for the Authority Files

Michael Jensen has a fascinating piece in the Chron on what authority might look like in the future – fusing values of the academy (prestige, quality, significance) with those of Web 2.0 (availability, interactivity, formation of subcultures within an abundant information landscape). Good on the Chron for making this one free!

Found via an equally interesting post at shimenawa.

The Changing Nature Of Authority: Doctors

Medical doctors have long been considered paragons of authority and expertise in our society. Their authority derives from long, rigorous academic training and is refined through continual clinical practice. We should listen to doctors because they are the best chance we have to get a reliable diagnosis based on the best science available. Or are they?

In What’s Wrong With Doctors, Richard Horton reviews How Doctors Think, a book by Jerome Groopman. The review points out that on average 15 percent of doctors’ diagnoses are inaccurate (still pretty good compared to the error rate that used to be attributed to reference librarians; was it 55%? Whatever happened to that, by the way?).

Doctors go wrong in many ways: they misapply evidence-based medicine; their training doesn’t teach them how to learn from mistakes (actually they can’t even admit when they make a mistake); they are susceptible to bribes and misinformation from big pharma; they are prone to a host of cognitive errors that they are unaware of–attribution error, availability error, search satisfying error, confirmation bias, diagnostic momentum, commission bias; they work in a system that rewards hurrying as many patients through as possible; and finally the classic–they don’t listen to patients.

Horton points out that the authority of doctors is no longer sacred and that a better educated public with access to more information is more and more willing to question the gospel. Groopman suggests that doctors should ally themselves with patients in a partnership to guard against error.

But are patients up to the responsibility? A doctor friend of mine told me how the mother of one of his patients told him that she stopped her son’s medication months ago. Why? he asked. Because of something she read on the Internet, she said. He was surprised. What did you read? Was it a study? How was the study done? Are you sure your son’s situation is sufficiently similar to what you read? Do you know the risks associated with discontinuing the medication?

Reading as much as you can about an illness that affects you or a family member–good. Going against your doctor’s advice without consulting your doctor first–not so good.

Learning about an illness is one of the most concrete ways that information literacy skills can be put to use in what we often call “lifelong learning.” We get sick; we get scared; we want more information. Has anyone ever taught us how to go about finding information in this situation? Not really, though the more education in general one has the better off one is. Finding and making sense out of medical information has a lot of pitfalls–from filtering out noise on Internet bulletin boards to finding reliable information that’s free and available to understanding how much about medicine is really unknown and uncertain, especially how it applies to your specific situation. It takes a great deal of knowledge even to know what kind of questions to ask your doctor. And who’s got the time to do all this research?

It’s good that we realize that doctors are fallible. Yet this doesn’t imply that by doing a search on PubMed we know more than our doctors. The changing nature of authority requires new skills for both experts and non-experts. Experts (including professors and librarians) have to get used to not having a complete monopoly on information and should have an understanding of where they can and do go wrong. Non-experts need to know where to find reliable or alternative sources of information and how to put this information into context. And both need to figure out how to talk to each other so the right questions get asked and answered at the right time, so that the chances for error are reduced as much as possible, and the chances for finding the truth are increased as much as possible.