There have been a couple of fascinating articles on scholarship and social networking published recently. First, historian Roy Rosenzweig asks “Can History Be Open Source?” He recommends that if teachers are worried about students’ use of Wikipedia, they should consider it an opportunity for information literacy: “Spend more time teaching about the limitations of all information sources, including Wikipedia, and emphasizing the skills of critical analysis of primary and secondary sources.” But he thinks historians might also learn a thing or two.
If the Internet and the notion of commons-based peer production provide intriguing opportunities for mobilizing volunteer historical enthusiasm to produce a massive digital archive, what about mobilizing and coordinating the work of professional historians in that fashion? That so much professional historical work already relies on volunteer labor—the peer review of journal articles, the staffing of conference program committees—suggests that professionals are willing to give up significant amounts of their time to advance the historical enterprise. But are they also willing to take the further step of abandoning individual credit and individual ownership of intellectual property as do Wikipedia authors?
The editors of Nature ask a similar question. Maybe scientific publishers should rethink peer review by drawing on the “wisdom of the crowd.” The traditional peer review process was driven, in part, by the scarcity of outlets for research: only the most significant work could be selected when publication costs were so high. In an online world, maybe the evaluative work could be done after publication.
Scientific peer review is typically a process of ‘pre-filtering’ – deciding which of the many papers submitted should be published. By contrast, Digg [a potential model] is a ‘post-filter’, deciding which of the many papers published are most interesting to a group of readers. Today, scientific publishing does the first kind of filtering pretty well, and the second hardly at all. Word of mouth aside, citation analysis, tracked in databases such as ISI and Scopus, is the only good way to determine what the collective wisdom has decided are the most important papers, and this takes years to emerge. Is there a faster and better way?
Maybe so – and maybe emerging social networks are offering some inspiration for how to adapt the “read-write web” to scholarly communication of all kinds. Certainly, there are many unanswered questions – but those are the kind scholars like best.
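To make the pre-filter/post-filter contrast concrete, here is a minimal, hypothetical sketch (not Digg’s or any publisher’s actual ranking code) of how a post-filter might score already-published papers: reader votes, decayed by age, so that fresh community interest surfaces long before citation counts accumulate. The `Paper` fields and the half-life value are illustrative assumptions.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Paper:
    title: str
    published: datetime
    votes: int = 0       # reader endorsements, Digg-style
    citations: int = 0   # slow-moving signal from citation databases

def post_filter_score(paper: Paper, now: datetime, half_life_days: float = 30.0) -> float:
    """Rank already-published papers by reader votes, decayed over time,
    so current community interest surfaces faster than citation counts can."""
    age_days = (now - paper.published).total_seconds() / 86400.0
    decay = 0.5 ** (age_days / half_life_days)
    return paper.votes * decay

papers = [
    Paper("A", datetime(2006, 5, 1, tzinfo=timezone.utc), votes=40, citations=2),
    Paper("B", datetime(2006, 6, 10, tzinfo=timezone.utc), votes=15, citations=0),
]
now = datetime(2006, 6, 20, tzinfo=timezone.utc)
for p in sorted(papers, key=lambda p: post_filter_score(p, now), reverse=True):
    print(p.title, round(post_filter_score(p, now), 2), "citations:", p.citations)
```

A real system would, of course, have to worry about gaming, reviewer identity, and field-specific norms – exactly the sort of unanswered questions raised above.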
This idea is scary and exciting at the same time. In the study of economics we always begin by assuming complete information. But where do filtering and peer review, authentication and quality control live in a world of the great unwashed? As much as it may make us cringe, students will always rely most heavily on the resources that are most available to them. When Lexis-Nexis and Westlaw were born, law students began ignoring the law that wasn’t online. Even expert librarians and researchers begin a new research task with Google, and they KNOW what’s available to them.
The idea of people manually tagging their own literature collections and blogs seems ludicrous, but I know I’ve “wasted” many hours rating movies at Netflix and books at Amazon and sorting through the recommendation engines’ results.
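Netflix’s and Amazon’s recommendation engines are proprietary, but the general idea they rest on is simple enough to sketch: items rated similarly by the same people get recommended together. The data and names below are invented purely for illustration.

```python
import math
from collections import defaultdict

# Toy ratings: user -> {item: rating}. Invented data for illustration only.
ratings = {
    "alice": {"paper_a": 5, "paper_b": 4, "paper_c": 1},
    "bob":   {"paper_a": 4, "paper_b": 5},
    "carol": {"paper_b": 2, "paper_c": 5},
}

def item_vectors(ratings):
    """Invert user->item ratings into item->user rating vectors."""
    items = defaultdict(dict)
    for user, prefs in ratings.items():
        for item, score in prefs.items():
            items[item][user] = score
    return items

def similarity(a, b):
    """Cosine-style similarity: dot product over shared users, norms over all ratings."""
    shared = set(a) & set(b)
    if not shared:
        return 0.0
    dot = sum(a[u] * b[u] for u in shared)
    return dot / (math.sqrt(sum(v * v for v in a.values())) *
                  math.sqrt(sum(v * v for v in b.values())))

items = item_vectors(ratings)
target = "paper_a"
similar = sorted(
    ((similarity(items[target], vec), name) for name, vec in items.items() if name != target),
    reverse=True,
)
print(similar)  # items most like paper_a, ranked by co-rating similarity
```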
I think we will find that there is a place for traditional peer review and citation analysis, and also a place for less formal methods of building community.
If all review is post-publication, that means that hours of sifting work heretofore done by paid editors (whether they’re paid in money or “only” prestige) will need to be done by individual scholars. Most scholars I know are lucky to keep up with the main journals in their field and subspecialty. They do not have time to do all this post-publication reviewing.