Category Archives: Commercialization

Balancing Act

I’m kind of in the pickle that Maura describes – subscribed to too many sources of information that I would read if I weren’t so busy keeping up with the stream of new information. But Current Cites is always a good ‘un for finding a cross-section of interesting new stuff and this week it pointed me to a twig I must have missed in the current. Sometimes it’s only when you see it the second time, maybe just as you’re pouring a second cup of coffee in the morning, that it catches your eye.

First Monday (an excellent and long-established open access journal) has an article by Brian Whitworth and Rob Friedman on “Reinventing Academic Publishing Online.” In a nutshell, it examines the fact that the “top” academic journals remain vested in a traditional system that maintains barriers and exclusivity, because that exclusivity is perceived as rigor and therefore as value. The higher your rejection rate, the prouder you are. But there are two mistakes academic publishing can make: publishing stuff that isn’t any good (a Type I error) and not publishing stuff that turns out to be good (a Type II error). It’s the cost of the latter – failing to publish something innovative and challenging for fear it might be wrong – that these authors feel is left out of the equation.

These error types trade off, so reducing one increases the other, e.g., a journal can reduce Type I errors to 0 percent by rejecting all submissions, but this also raises Type II errors to 100 percent as nothing useful is published. The commonsense principle is that to win a lottery (get value) you must buy a ticket (take risk). In academic publishing the rigor problem occurs when reducing Type I error increases Type II error more . . . Pursuing rigor alone produces rigor mortis in the theory leg of scientific progress.
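The tradeoff in that quoted passage can be made concrete with a toy simulation. This is purely illustrative and not from the article: the quality scale, the noise in reviewer scores, and the acceptance thresholds are all my own assumptions. The point it demonstrates is just the one the authors make: cranking up the acceptance bar (“rigor”) drives down the share of bad papers accepted while driving up the share of good work rejected.

```python
import random

random.seed(0)

# Toy model: each submission has a true quality (call it "good" if > 0);
# peer reviewers only ever see a noisy version of that quality.
papers = []
for _ in range(10_000):
    true_quality = random.gauss(0, 1)
    observed = true_quality + random.gauss(0, 1)  # imperfect peer review
    papers.append((true_quality, observed))

def error_rates(threshold):
    """Type I: share of accepted papers that aren't actually good.
    Type II: share of genuinely good papers that get rejected."""
    accepted = [(t, o) for t, o in papers if o > threshold]
    good = [(t, o) for t, o in papers if t > 0]
    type1 = sum(1 for t, _ in accepted if t <= 0) / max(len(accepted), 1)
    type2 = sum(1 for _, o in good if o <= threshold) / len(good)
    return type1, type2

# Raising the bar trades one kind of error for the other.
for threshold in (0.0, 1.0, 2.0, 3.0):
    t1, t2 = error_rates(threshold)
    print(f"threshold {threshold}: Type I {t1:.0%}, Type II {t2:.0%}")
```

At the extreme the quote describes – reject everything – Type I goes to zero and Type II goes to 100 percent, which is exactly the “rigor mortis” outcome.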

The authors point to the fact that the publishing industry essentially determines who is hired and fired in universities, which flies in the face of the mission we are supposedly on and the intellectual freedom that should enable our work.

When a system becomes the mechanism for power, profit and control, idealized goals like the search for truth can easily take a back seat. Authors may not personally want their work locked away in expensive journals that only endowed western universities can afford, but business exclusivity requires it. Authors may personally see others as colleagues in a cooperative research journey, but the system frames them as competition for jobs and grants. As academia becomes a business, new ideas become threats to power rather than opportunities for knowledge growth. Journals become the gatekeepers of academic power rather than cultivators of knowledge, and theories become battle weapons in promotion arenas, rather than plows in knowledge fields.

The authors suggest that under the color of “rigor” this model sustains a system in which cross-disciplinary and innovative research is unwelcome. “As more rigorous and exclusive ‘specialties’ emerge, the expected trend is an academic publishing system that produces more and more about less and less.” (And hey, it’ll make the Big Bundle even bigger and more expensive, therefore more profitable.) They think technology could instead offer ways to facilitate information exchange rather than the creation of further citadels of isolated specialization. Paying more attention to the mistake of failing to publish something that turns out to be worthwhile will require the creation of a democratic open knowledge exchange that can better balance the equation.

The funny thing is that this tension has existed for a long time. Well before the Internet created the opportunity for fundamental change in the way we share research, both Michael Polanyi and Thomas Kuhn described the delicate tension between maintaining an agreed-upon understanding by fending off crackpot theories and the need to allow something new to challenge the dominant paradigm. Both self-interest and a more idealized notion of rigor conspire against innovation. What I find interesting about this First Monday article is the idea that our current dominant publishing model has let self-interest reign supreme, and that a new open model could let the more idealized urge to preserve that which is solid and true duke it out with ideas that challenge it. It could balance the risk/reward tradeoff involved in choosing what to publish and which questions to pursue.

By the way, what is your library planning to do for Open Access Week?

(Photo courtesy of rptnorris.)

This Journal Brought to You By . . .

It was shocking at the end of April when The Scientist reported that Elsevier had published a scholarly-journal-like series that was actually advertising paid for by Merck. The peer-reviewed-like articles in the journal-like object were either reprints or summaries of articles that reported results favorable to Merck drugs. There were also “review” articles that had only a couple of references. Reviewed that. Merck good. Go prescribe.

Now it turns out this wasn’t an embarrassing one-off. Elsevier published at least six fake journals – er, sorry, got my terminology wrong: “sponsored article publications.” (The Scientist article is free, but requires registration.)

Mistakes were made. Elsevier officials regret the error. The nasty people who did that left the company long ago. Besides, it was in Australia. The CEO of Elsevier’s Health Sciences division says it’s going to be looked into, but he’s sure it’s not ever going to happen again. “I can assure all that the integrity of Elsevier’s publications and business practices remains intact.”

Um, isn’t that up to us to say? Seems to me Elsevier’s integrity was in question even before this disgraceful and embarrassing revelation.

Anne-Marie posted some thoughtful comments about this issue at Info-fetishist – particularly the implications for information literacy.

Maybe we can’t talk about peer review at all anymore without talking about the future of a system of knowledge reporting that is almost entirely dependent upon on the volunteer efforts of scholars and researchers, almost entirely dependent upon their professionalism and commitment to the quality of their disciplines, in a world where ultimate control is passing away from those scholars’ and researchers’ professional societies and into the hands of corporate entities whose decisions are driven not by commitment to quality, knowledge creation or disciplinary integrity.

We’ve been focusing on “why pay attention to scholarly work and conversations going on on the participatory web” mostly in terms of how these things help us give our students access to scholarly material, how they help our students contextualize and understand scholarly debates, how they lay bare the processes of knowledge creation that lie under the surface of the perfect, final-product article you see in scholarly journals. And all of those things are important. But I think we’re going to have to add that “whistleblower” aspect — we need to pay attention to scholars on the participatory web so they can point out where the traditional processes are corrupt, and where the gatekeepers are making decisions that aren’t in the interests of the rest of us.

Excellent food for thought.

Another approach to the news popped up at the LSW room at FriendFeed where Steve Lawson proposed “the LSW needs to get Elsevier to publish the Australasian Journal of Library Science.” And in the over 80 responses you can find helpful suggestions like “your article will be reviewed by a panel of representatives from library vendors,” “there should be one issue deliberately missing. Supplements should be completely unavailable electronically,” and “it’s only available on one computer on campus. There is a login & password if you want off-campus access, but you can’t share it with ANYONE. … and we’ll publish 4 issues per year. But if we can’t come up with enough content for 4 issues a year, we can just combine them, like 1/2 or 1-2-3 or 2-4 or whatever.” See how productive pent-up rage can be? Thanks to all the brilliance behind this thread for the best serials humor ever.

Amongst all the giddiness some commenters pointed out a previous little scandal involving a high-impact journal that got its high impact by having one allegedly “crackpot” author publish multiple papers, as many as five in a single issue, all of them citing himself. The publisher? You guessed it – Elsevier.


photo courtesy of London Permaculture

Heather Has Two Mommies and Just Canceled her Amazon Account

A current kerfuffle on the Internets has to do with Amazon de-ranking GLBT-themed books as reported on the LA Times Jacket Copy blog.

Amazon’s policy of removing “adult” content from its rankings seems to be both new and unevenly implemented. On Saturday, self-published author Mark R. Probst noticed that his book had lost its ranking, and made inquiries. The response he got from Amazon’s customer service explained:

In consideration of our entire customer base, we exclude “adult” material from appearing in some searches and best seller lists. Since these lists are generated using sales ranks, adult materials must also be excluded from that feature.

Probst wrote a novel for young adults with gay characters set in the old West; he was concerned that gay-friendly books were being unfairly targeted. Amazon has not responded to the L.A. Times request for clarification.

Our research shows that these books have lost their ranking: “Running with Scissors” by Augusten Burroughs, “Rubyfruit Jungle” by Rita Mae Brown, “Fun Home: A Family Tragicomic” by Alison Bechdel, “The History of Sexuality, Vol. 1” by Michel Foucault, “Bastard Out of Carolina” by Dorothy Allison (2005 Plume edition), “Little Birds: Erotica” by Anais Nin, “The Diving Bell and the Butterfly” by Jean-Dominique Bauby (1997 Knopf edition), “Maurice” by E.M. Forster (2005 W.W. Norton edition) and “Becoming a Man” by Paul Monette, which won the 1992 National Book Award.

Maybe this is just a new marketing gimmick – create viral annoyance to get your brand out there. Certainly Kindle 2 got a lot of attention when the text-to-speech feature was disabled because the Authors Guild had put its head in a place that shouldn’t be mentioned in polite company.

In any case, libraries have one thing going for them – we defend intellectual freedom. Let’s see if we can tweet that to the world. Support your free (as in beer and as in speech) library.

Libraries on Planet Google

It has been a week since news of the Google settlement with authors and publishers broke. Though rumors had been rife that it was imminent, I was still blown away by the scope of it. Of course the court still has to rule, but the outlines – if they remain intact – are stunning in their implications.

First of all, as Jeffrey Toobin predicted in his 2007 New Yorker article, “Google’s Moon Shot,” the fair use question remains unsettled. Anyone else who tries to follow in Google’s footsteps to digitize in-copyright books had better have many millions of dollars handy to pay lawyers’ fees. This puts Google in an incredibly strong position. They will have a lock on great big digitized book collections. They have overnight become an enormous vendor of licensed content. And a huge product with no competitors can set the agenda. Did the libraries who jumped on this bandwagon foresee this outcome? Are they happy with it?

Paul Courant of UMich sees the positive side.

First, and foremost, the settlement continues to allow the libraries to retain control of digital copies of works that Google has scanned in connection with the digitization projects. We continue to be responsible for our own collections. Moreover, we will be able to make research uses of our own collections. The huge investments that universities have made in their libraries over a century and more will continue to benefit those universities and the academy more broadly.

Second, the settlement provides a mechanism that will make these collections widely available. Many, including me, would have been delighted if the outcome of the lawsuit had been a ringing affirmation of the fair use rights that Google had asserted as a defense. (My inexpert opinion is that Google’s position would and should have prevailed.) But even a win for Google would have left the libraries unable to have full use of their digitized collections of in-copyright materials on behalf of their own campuses or the broader public. . . . The settlement cuts through this morass. As the product develops, academic libraries will be able to license not only their own digitized works but everyone else’s. Michigan’s faculty and students will be able to read Stanford and California’s digitized books, as well as Michigan’s own. I never doubted that we were going to have to pay rightsholders in order to have reading access to digitized copies of works that are in-copyright. Under the settlement, academic libraries will pay, but will do so without having to bear large and repeated transaction costs. (Of course, saving on transaction costs won’t be of much value if the basic price is too high, but I expect that the prices will be reasonable, both because there is helpful language in the settlement and because of my reading of the relevant markets.)

Harvard is not so sanguine, according to a story in the Chron. They didn’t allow Google to digitize in-copyright books, and they will stick with that practice.

Harvard’s concerns center on access to the scanned texts — how widely available access would be and how much it might cost. “As we understand it, the settlement contains too many potential limitations on access to and use of the books by members of the higher-education community and by patrons of public libraries,” Harvard’s university-library director, Robert C. Darnton, wrote in a letter to the library staff.

He noted that “the settlement provides no assurance that the prices charged for access will be reasonable, especially since the subscription services will have no real competitors [and] the scope of access to the digitized books is in various ways both limited and uncertain.” He also expressed concern about the quality of the scanned books, which “in many cases will be missing photographs, illustrations, and other pictorial works, which will reduce their utility for research.”

Lawrence Lessig thinks there’s a lot that’s good about the settlement. We dodged the bullet of a loss on the fair use issue and improved on what was available in Google Books previously without shrinking the definition of fair use:

IMHO, this is a good deal that could be the basis for something really fantastic. The Authors Guild and the American Association of Publishers have settled for terms that will assure greater access to these materials than would have been the case had Google prevailed. Under the agreement, 20% of any work not opting out will be available freely; full access can be purchased for a fee. That secures more access for this class of out-of-print but presumptively-under-copyright works than Google was initially proposing. And as this constitutes up to 75% of the books in the libraries to be scanned, that is hugely important and good. That’s good news for Google, and the AAP/Authors Guild, and the public.

Andrew Keen isn’t so sure – as he writes in The Independent, “Will Life on Planet Google be a Nightmare or a Dream?” (And he is one of a few who consider the privacy issues – once a closely guarded value of libraries. We don’t think anyone should keep an eye on what you read. Unless it’s Uncle Google.)

Is Google good or is it evil? Is the company an all-knowing behemoth that is hubristically “transforming our lives”, Big Brother-style, with its intrusive technology? Or is it a plucky, selfless Silicon Valley start-up that is “audaciously” organising all the world’s information for all of our benefit? Is Google Orwell or is it Disney? . . .

The truth — and even on planet Google there remain truths – is that Google’s greed for knowledge is both thrillingly audacious and terrifyingly threatening. Google is, in fact, an Orwell-Disney co-production. The company wants to know everything about us so that it can help us in every way. Room 101, then, on planet Google, is a brightly lit, cheerful place where we can, at the click of a mouse, know all there is to know about ourselves, our neighbours and the world.

Brewster Kahle, not surprisingly, told the Mercury News this is a bad move. “When Google started out, they pointed people to other people’s content,” Kahle said. “Now they’re breaking the model of the Web. They’re like the bad old days of AOL, trying to build a walled garden of content that you have to pay to see.” Of course our libraries are full of enormously expensive walled gardens. And with this settlement we’ll have one more to tend. A big one. A big one with no serious competitors.

While Lessig is cheered that this settlement may well torpedo the flawed orphan works legislation pending in Congress, Georgia Harper encourages libraries to keep working on alternatives to the Google orphanage.

This isn’t the Congressional approach to problem solving (shove the parties into a room and lock the door until they have reached an agreement — and may the strongest interest obliterate the weaker and we’ll call it a compromise in the public interest). This is the publisher’s and Google’s no nonsense business approach: “Hey, let’s just start selling all the books and if there’s money to be made, the owners will either show up to claim it, or the money will lie there for 5 years while we give everyone time to wake up and smell the coffee. At the end of 5 years, we’ll pretty much know what’s orphan and what’s not. What’s not to like?” . . .

Google clearly understood and accepted that this plan was based on an idea I found repugnant: if orphan works don’t have owners, by definition, then why is it that the Registry should keep the money that comes in for books that ultimately no one claims? The publishers and authors just don’t see orphans as really belonging to everyone in the absence of an owner. They see them as belonging to all the other authors and publishers, but not the public. . . .

I want this process to work. I think it has a much better chance of working than that piece of, uh, than that piece of legislation that nearly passed earlier this fall. It doesn’t give us an answer today and it *only* deals with books, so it’s not a comprehensive solution, but it might serve as an example of what works, assuming it does work. But libraries can still do their own research on individual titles that they think may be orphans while we wait for this deal’s market incentives to do their job, and for it to become clear that transparency is in the owners’ best interests as well as the public’s.

For example, I believe that the OCLC’s Copyright Evidence Registry is just as important today as it was 5 days ago before Google announced this deal. Although the publisher/author Registry has potential to be definitive, there will be need for multiple sources of information about the copyright status of works until the publisher/author Registry earns its keep. No source that wants to be definitive can do so if it can’t be trusted.

James Gibson wraps up his analysis in the Washington Post:

By settling the case, Google has made it much more difficult for others to compete with its Book Search service. Of course, Google was already in a dominant position because few companies have the resources to scan all those millions of books. But even fewer have the additional funds needed to pay fees to all those copyright owners. The licenses are essentially a barrier to entry, and it’s possible that only Google will be able to surmount that barrier.

Sure, Google now has to share its profits with publishers. But when a company has no competitors, there are plenty of profits to share.

For more commentary, see the round-ups provided by Library Journal, Peter Suber, and EFF.

UPDATE: Library Journal on not holding our breaths; Peter Brantley on the stinginess of the public library provision.

photo courtesy of stevecadman

The Mark of Zotero

This just in, via beSpacific: Reuters is suing George Mason University for violating the Endnote TOS. Apparently (though I’m not sure I really understand the issue – this news story is very cryptic) Reuters claims the organization violated the terms of service when they analyzed ways to convert style files from Endnote to Zotero. Reuters (parent company of ISI, parent company of Endnote) accuses Zotero’s programmers of reverse-engineering Endnote files to make the conversion possible, and claims that this threatens to destroy their customer base.

Some chat at Zotero suggests the legality of using the style files (many of which were contributed by users of Endnote) is a bit murky. Some bloggers say the allegation is false, and that this is a SLAPP suit. Others wonder if this means anyone who migrates their citations from Endnote to something else is equally liable. (Of course, anyone who has tried to import Web of Knowledge citations into RefWorks knows the company is not happy with competition and is willing to sacrifice user satisfaction on that altar.)

And hey, couldn’t MLA, Chicago, APA, and CBE sue Reuters for reverse-engineering their style manuals and destroying their customer base? Just asking.

Want to follow the paper trail? The Disruptive Library Technology Jester explains how. And Jystar raises a fascinating issue: “law suits like this really make me wonder if the current scheme of intellectual property law in the US actually fosters innovation. or … fosters bullying and enables large corporations, backed by lots of money and lawyers, to edge out any smaller competition, even if the competition is superior?”

Good question.

UPDATE 10/06: Michael Feldstein at e-literate, who at first thought Reuters might have a point, has learned more about the claims and now thinks … they probably have misrepresented what Zotero did. Via a comment at the Chron.

There’s now much talk of boycotting Thomson/Reuters. Given the number of companies and products involved, it may be hard to do since they own Westlaw, ISI, Findlaw, all kinds of business, financial, medical, and accounting resources, not to mention the Reuters news service.