Gary Olsen raises an interesting issue in the Chron – as more scholars put their efforts into online scholarship, how can it factor into promotion and tenure decisions? His answer – devise a system whereby scholarly societies certify sites that are submitted for peer review, maintain a registry of certified sites, and check back often to make sure they haven’t fallen off in quality or been overtaken by pre-teen hackers.
He says the peer review system for vetting books and articles works pretty well, but P&T committees (and everyone else, apparently) are at a loss when confronting a website –
. . . since no vetting mechanism for scholarly sites exists, even those that are designed by reputable scholars typically undergo no formal review. Such uncertainty disrupts the orderly intercourse of scholarly activity and plays havoc with the tenure-and-promotion system.
Clearly, the scholarly community needs to devise a way to introduce dependability into the world of electronic scholarship. We need a process to certify sites so that we all can distinguish between one that contains reliable material and one that may have been slapped together by a dilettante. We need to be able to ascertain if we can rely on a site for our own scholarship and whether we should give credit toward a colleague’s tenure and promotion for a given site.
Gee, given that we’ve been evaluating and comparing websites and deciding which to highlight as useful ones for research for a couple of decades now . . . are we really incapable of making those choices without a disciplinary stamp of approval? And is peer review really so flawless that we need to replicate it for a new genre of scholarship? And what about all those sites that aren’t scholarly projects per se but are incredibly valuable – the Avalon Project, the Oyez site, or the Pew Research Center for the People and the Press, for example? Should we doubt their value because they aren’t vetted by scholars in the manner of journal articles or university press titles?
The fact is, we work hard to develop in our students the capability of judging quality, not just relying on a peer review stamp of approval. I mean, honestly – if we told students any peer reviewed source is guaranteed to be of high quality we’d be doing them a disservice. So why do P&T committees want to have their critical work so oversimplified for them? Can’t they learn to rely on their own capacity for critical thinking, or is that just for students?
The fact is, these are two entirely separate issues. The quality of websites can be evaluated – and peers already do that. Whether academics are willing to broaden their notions of what counts as scholarship and to consider electronic projects as serious work is another matter altogether. Replicating a cumbersome print-based peer review mechanism, flaws and all, is not the solution. Doing the real work of evaluating a colleague’s scholarship – without relying on university presses and journals to do the vetting for them – is what’s called for. Oh, and a more imaginative and open-minded definition of what scholarship is.
I thought this playful version of Rodin’s The Thinker that I saw on the Washington University in St. Louis campus last week seemed somehow appropriate.
ADDITION: I just bumped into This is Scholarship – an interesting response to the MLA task force report that questioned the dependence on the monograph as proof of scholarship over at the InfoFetishist (where there’s a terrific list of things to read and think about).
So maybe, as an addendum to our efforts to encourage more responsible use of open access research opportunities, libraries could also help scholars at their institutions think more creatively about what counts as scholarship? Hey, we could at least buy them lunch and let them talk through the issues.
5 thoughts on “Peer (to Peer) Review?”
Kudos, well spoken.
Peer review is not infallible, but it saves time, for individual researchers and tenure committees. No one can critically evaluate everything. Knowing which authorities to trust is also a research skill, one that often gets short shrift over so-called “critical thinking skills.” If you listen to most librarians, critical thinking about web sites is checking the url and looking for spelling mistakes. It’s much more complicated.
Agreed, and your article is one I’ve often recommended on that subject. Still, I think it’s strange that we think students should learn to evaluate the validity of sources and yet we dodge that responsibility when we’re on a P&T committee. (Not my discipline; I have to defer to the experts.) I would like to see some way to factor solid, useful, significant web projects into an academic’s portfolio – but I don’t think we need to create a cumbersome replication of traditional peer review to do that. Nor should web projects so arcane that only a PhD in the field can make sense of them be considered more worthy than ones with a wider potential readership.
But Boyer already made that point much better many years ago.