Category Archives: Authority

Analyzing Authority @ the ACRL Conference

On the morning of my last day at the ACRL Conference, I tweeted out a quick observation:

I got a couple of retweets and even started up a Twitter conversation with @nancyeadams, who shared a preprint of an article she’s written that discusses authority (among other topics), which I’m looking forward to reading this summer. But then it was time to head home.

I’ve never done any textmining before, so I tried to dip my toe in the pool by using Storify to pull together tweets that included the word “authority” and the hashtag #acrl2013. But I was tired after the conference and somewhat impatient. I couldn’t get Storify to simultaneously display tweets with the other hashtag (#acrl13) I saw being used occasionally, so I gave up pretty quickly; it also seemed like Storify wasn’t pulling in every single tweet from Twitter. I tried using Zach Coble’s fascinating ACRL Conference social media archive, but I couldn’t manipulate the tweet text all at once. I was also worried that as the conference receded into the past, tweets would become more difficult to find. So I went for the bash-it-with-a-rock strategy: I did a search in Twitter for each of the two hashtags, then I cut and pasted all of the tweets into a text file.

And there the text file sat until Memorial Day weekend, when the semester had ended and I finally had a chance to get back to it. I should stress that this is (still) a fairly basic analysis — I’ve gone through the text of tweets from the beginning of the conference to the end to find all instances of the word “authority” to see whether anything particularly interesting stood out. I’m certain that there are better tools to use for this task, but I’m (still) impatient so I’m plowing ahead with my rocks. (If you’ve used any tools that seem like they’d be useful in this context, please let me know in the comments!)

So, what did I find? I pulled 8,393 tweets (including retweets) with the hashtags #acrl2013 and #acrl13 dating from April 3 through April 16 at around 10:30pm. There were 60 occurrences of the word “authority” in the tweets I pulled.
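
For the curious, a minimal sketch of this kind of counting in Python might look like the one below. It assumes the pasted tweets sit one per line in a file I'm calling acrl_tweets.txt; the filename and the one-tweet-per-line format are my own assumptions, not anything Storify or Twitter hands you.

```python
# Rough counting over a hand-assembled file of conference tweets.
# Assumes one tweet (or retweet) per line in acrl_tweets.txt.
import re
from collections import Counter

with open("acrl_tweets.txt", encoding="utf-8") as f:
    tweets = [line.strip() for line in f if line.strip()]

print(len(tweets), "tweets collected")

# Tweets mentioning "authority" (case-insensitive) and total occurrences of the word.
authority_tweets = [t for t in tweets if re.search(r"\bauthority\b", t, re.I)]
occurrences = sum(len(re.findall(r"\bauthority\b", t, re.I)) for t in tweets)
print(len(authority_tweets), "tweets mention 'authority';", occurrences, "occurrences in all")

# Which hashtags co-occur with those mentions?
hashtags = Counter(tag.lower() for t in authority_tweets for tag in re.findall(r"#\w+", t))
print(hashtags.most_common(10))
```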

Some of the patterns are easy enough to see and explain. First thing Thursday morning was the panel session “Questioning Authority: Standard Three and the Critical Classroom” with Jenna Freedman, Emily Drabinski, and Lia Friedman. This session had its own hashtag — #qacrlauthority — which made the tweets even easier to spot (and which I really appreciated since the wicked weather made me miss the session). There were 41 occurrences of the word “authority” in the tweets and retweets from this session. Laura O’Brien created a Storify of the panel which looks to have captured the session well. As librarians we should examine the authority embedded in controlled vocabularies, sources, and other library systems we use, and consider the ways we can empower students as authorities.

Chronologically, the next mention of authority was a tweet from Alison Head’s invited paper on Project Information Literacy, a multi-year, multi-institution study of college students’ information seeking and use. They have a nifty infographic created from their data on how college students seek information.

I missed that presentation (and haven’t read the paper yet) so I can’t offer any extra context around this tweet. But it’s an interesting comparison to the tweets from the Questioning Authority session, especially this one:

And in comparison to Henry Rollins’ mention of authority in his keynote (there were 5 tweets that referred to the thematic links he drew between Thomas Jefferson and punk rock):

And in comparison to the three tweets from the Feminist Pedagogy panel session on Sunday morning, especially:

Taken together, all of these tweets seem to point to a tension between librarians (and libraries) and our patrons, especially students. We have authority in the information realm, authority conferred by education, by experience, by knowledge. Is there a downside to having that authority? Can looking for ways to enable students and patrons to seize some of that authority enhance their learning? And are there reasons not to share or transfer that authority?

A couple of tweets from the libraries and publishing discussion at THATCamp ACRL hinted at the relationship between authority and prestige, a relationship which seems to be growing increasingly fraught as scholarly communications continue to shift and change.

Finally, three tweets discussed the nature of authority in our own library workplaces. Two were from the session “Think Like A Startup: Creating a Culture of Innovation, Inspiration, and Entrepreneurialism,” including one from my fellow ACRLogger Laura Braunstein:

Another seems to have been from the session “Curb Your Enthusiasm? Essential Guidance for Newbie Academic Librarians,” and pairs well with Laura’s tweet above:

I’ve found it interesting to see the various points of the conference where the topic of authority was discussed and considered. I confess that I’m not a big fan of the word authority. When I teach students about evaluating information I always use the term expertise, and in writing this post it’s been easy to see why: in looking through these tweets I’m struck by the underlying theme of power. Thinking on this more drove me to seek out some definitions. The Merriam-Webster Dictionary lists this as the first definition of authority:

an individual cited or appealed to as an expert

and this as the second:

power to influence or command thought, opinion, or behavior

which for me comes uncomfortably close to authoritarian:

1. of, relating to, or favoring blind submission to authority
2. of, relating to, or favoring a concentration of power in a leader or an elite not constitutionally responsible to the people

This is in contrast to the more egalitarian nature of the term expertise, from the definition of expert:

having, involving, or displaying special skill or knowledge derived from training or experience

As librarians we aim to increase access to information, to share it, and ultimately to promote expertise among our patrons and students. The words we use when we describe our roles and relationships — both within and outside of the library — matter. When we use the term authority, is it possible to get away from power? And do we want to? After all, power can be used for good as well as for ill. Do we lose anything by shifting our use to expertise instead of authority?

Truth, Information and Knowledge: u r boring me

A funny (and ultimately disheartening?) article in the Washington Post portrays librarians as the last defenders of truth in a decadent culture consumed with trivia and superficialities, even going so far as to describe librarians as “trench warriors for truth.” Here’s a dramatic excerpt from a chat reference service:

“We’re losing him! We’re going to lose him!” Chad Stark frantically clicks back and forth between two windows on his computer screen.

Stark is the sweater vest-wearing, 30-something Hyattsville librarian currently manning AskUsNow, a 24/7 online chat open to Maryland residents who need research help.

AskUsNow, developed four years ago, helps patrons find accurate online information so they don’t have to fumble blindly in Google. Librarians: reliably on the front lines of truth protection.

Stark types that he’d be happy to help, but he’s not fast enough for the user:

“dude u r boring me.”

Librarians have been known to stand for many noble things: reading, learning, free speech, and now truth! Although it may feel like we are the orchestra that supposedly played on while the Titanic was sinking, there are worse ways to go down. I wrote about librarians and truth in a book review here; for more on librarians and truth see Don Fallis’s work on social epistemology.

The article goes on to raise the issue of the distinction between information and knowledge, which I have always found more puzzling than helpful. The most useful discussion of this I’ve read recently is in Dominique Foray’s Economics of Knowledge. Foray points out that the main distinction between information and knowledge is that knowledge depends on human cognition, whereas information can simply be words on a page. Information can be reproduced quickly and cheaply with a copy machine, but reproducing knowledge is far more expensive and time consuming because, well, teaching others is hard. Here’s Foray:

These means of reproducing knowledge may remain at the heart of many professions and traditions, but they can easily fail to operate when social ties unravel, when contact is broken between older and younger generations, and when professional communities lose their capacity in stabilizing, preserving, and transmitting knowledge. In such cases, reproduction grinds to a halt and the knowledge in question is in imminent danger of being lost and forgotten.

Can we use the distinction between information and knowledge to articulate a role for libraries and librarians in the digital age? Although information is bountiful and some of it seemingly cheap, tons of knowledge is being lost and forgotten every day. Academic libraries and librarians are part of institutions that help to stabilize, preserve, and transmit knowledge as opposed to information. Hmm, how’s that? Good start, maybe, but needs work.

The article goes on to raise disturbing questions about the psychology of knowledge acquisition, noting that even when people are told repeatedly that something is false, the fact that they have heard it somewhere makes them think it is true. Politics immediately comes to mind here, but this raises a serious concern with all the new media that allow for the rapid reproduction of bits of information.

Quite thought-provoking for a newspaper article, but once again reading the news gives me the feeling that we are doomed.

Open and Closed Questions

Another way to introduce students to the idea of complexity in the research process is through open and closed questions. In Second-hand Knowledge: An Inquiry into Cognitive Authority, Patrick Wilson describes closed questions as matters which (for now) have been settled beyond practical doubt and open questions as questions on which doubt remains.

I suggest to my students that one way to focus their research is to pay attention to clues that suggest where the open questions are and to concentrate their efforts there. Wilson points out that previously closed questions can become open when new information comes to light. In class, you can illustrate this and attempt some humor with the line, “when I was your age, Pluto was a planet!” Then proceed to explain how the planetary status of Pluto became an open question with the discovery of the Trans-Neptunian objects Quaoar, Sedna, and Eris. Then follow this up with an example of an open question in the subject matter of the class you are teaching.

The term “research” is ambiguous. For some it means consulting some oracle–the Internet, the Library, the encyclopedia–finding out what some authority has said on a topic and then reporting on it. Fine, sometimes that’s what research is. That kind of research can be interesting, but it can also be pretty boring. What makes higher education thrilling is discovering live controversies and trying to make progress on them. Academic libraries are not only storehouses of established wisdom, they also reflect ongoing debates on questions that are unsettled, in dispute, very open, and very much alive.

Peer Review Problems In Medicine

For all the commercial publishers’ (fake) crowing about peer review, it turns out the peer review process in medicine is not working so well lately. At least that’s the conclusion one comes to after reading Robert Lee Hotz’s interesting article in today’s Wall Street Journal, “Most Science Studies Appear to Be Tainted.”

Hotz references John P. A. Ioannidis, who wrote “the most downloaded technical paper” at the journal PLoS Medicine, Why Most Published Research Findings Are False. Ioannidis claims that one problem is the pressure to publish new findings:

Statistically speaking, science suffers from an excess of significance. Overeager researchers often tinker too much with the statistical variables of their analysis to coax any meaningful insight from their data sets. “People are messing around with the data to find anything that seems significant, to show they have found something that is new and unusual,” Dr. Ioannidis said.

But Hotz also points out that besides statistical manipulation, the pressures of competition, and good ol’ fraud, ordinary human error is also a problem. The peers, it seems, are kind of slackin’ on the reviewing:

To root out mistakes, scientists rely on each other to be vigilant. Even so, findings too rarely are checked by others or independently replicated. Retractions, while more common, are still relatively infrequent. Findings that have been refuted can linger in the scientific literature for years to be cited unwittingly by other researchers, compounding the errors.

Overall, technical reviewers are hard-pressed to detect every anomaly. On average, researchers submit about 12,000 papers annually just to the weekly peer-reviewed journal Science. Last year, four papers in Science were retracted. A dozen others were corrected.

Earlier this year, informatics expert Murat Cokol and his colleagues at Columbia University sorted through 9.4 million research papers at the U.S. National Library of Medicine published from 1950 through 2004 in 4,000 journals. By raw count, just 596 had been formally retracted, Dr. Cokol reported.

(Aren’t you glad you’re paying all that money for “high quality information?”)

It’s tempting to throw up one’s hands and say “don’t trust anything,” “there are no authorities,” or “evaluate everything for yourself.” But critical thinking by individuals, although important, cannot be the only solution to this problem. In an information saturated hyper-competitive capitalist economy, no one has the time or the expertise to evaluate everything. There has to be a system in place that saves people time and promotes trust in research. Here’s why:

Every new fact discovered through experiment represents a foothold in the unknown. In a wilderness of knowledge, it can be difficult to distinguish error from fraud, sloppiness from deception, eagerness from greed or, increasingly, scientific conviction from partisan passion. As scientific findings become fodder for political policy wars over matters from stem-cell research to global warming, even trivial errors and corrections can have larger consequences.

Hotz points to the US Office of Research Integrity and the European Science Foundation’s sponsorship of the First World Conference on Research Integrity: Fostering Responsible Research as an attempt to begin a search for solutions. Academics, a museum, and med schools are represented; it would be great if librarians got in on this conversation as well.

Computing Wikipedia’s Authority

Michael Jensen has predicted:

In the Web 3.0 world, we will also start seeing heavily computed reputation-and-authority metrics, based on many of the kinds of elements now used, as well as on elements that can be computed only in an information-rich, user-engaged environment.

By this he means that computer programs and data mining algorithms will be applied to information to help us decide what to trust and what not to trust, much as prestige of publisher or reputation of journal performed this function in the old (wipe away tear) information world.

It’s happening. Two recent projects apply computed authority to Wikipedia. One, the University of California Santa Cruz Wiki Lab, attempts to compute and then color-code the trustworthiness of a Wikipedia author’s contributions based on the contributor’s previous editing history. Interesting idea, but it needs some work. As it stands the software doesn’t really measure trustworthiness, and the danger is that people will trust the software to measure something that it does not. Also, all that orange is confusing.
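
To make the general "reputation from edit history" idea concrete, here is a toy sketch in Python. It is explicitly not the Wiki Lab's actual algorithm: it just scores an author by the fraction of their past edits that survived later revisions and maps low scores to a stronger orange highlight.

```python
# Illustrative only: a naive reputation heuristic, NOT the UCSC Wiki Lab's algorithm.
# Idea: an author whose past edits tend to survive later revisions gets a higher
# score, and text from low-scoring authors gets a stronger orange highlight.

def author_reputation(edits_surviving: int, edits_total: int) -> float:
    """Fraction of an author's previous edits still present in later revisions."""
    if edits_total == 0:
        return 0.0  # no track record: treat as untrusted
    return edits_surviving / edits_total

def highlight_color(reputation: float) -> str:
    """Interpolate from deep orange (low trust) toward white (high trust)."""
    g = int(165 + (255 - 165) * reputation)  # green channel: 165 (orange) -> 255 (white)
    b = int(255 * reputation)                # blue channel: 0 (orange) -> 255 (white)
    return "#ff{:02x}{:02x}".format(g, b)

print(highlight_color(author_reputation(2, 10)))  # mostly-reverted author: strong orange
print(highlight_color(author_reputation(9, 10)))  # well-established author: near white
```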

More interestingly, another project, Wikipedia Scanner, uses data mining to uncover the IP addresses of anonymous Wikipedia contributors. As described in Wired, Wikipedia Scanner:

offers users a searchable database that ties millions of anonymous Wikipedia edits to organizations where those edits apparently originated, by cross-referencing the edits with data on who owns the associated block of internet IP addresses. …

The result: A database of 34.4 million edits, performed by 2.6 million organizations or individuals ranging from the CIA to Microsoft to Congressional offices, now linked to the edits they or someone at their organization’s net address has made.

The database uncovers, for example, that the anonymous Wikipedia user who deleted 15 paragraphs critical of electronic voting machines was editing from an IP address belonging to the voting machine company Diebold.
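
The cross-referencing idea itself is conceptually simple. As a rough sketch (not Wikipedia Scanner's actual code), you could match each anonymous edit's IP address against organizational IP blocks drawn from registry data; in Python, with made-up organizations and reserved example addresses, it might look like this:

```python
# Illustrative sketch of the cross-referencing idea behind Wikipedia Scanner
# (not its actual code): match an anonymous edit's IP address against known
# organizational IP blocks taken from WHOIS/registry data.
import ipaddress

# Hypothetical registry data, using reserved documentation address ranges.
org_blocks = {
    "Example Corp": ipaddress.ip_network("192.0.2.0/24"),
    "Example University": ipaddress.ip_network("198.51.100.0/24"),
}

def attribute_edit(anon_ip: str) -> str:
    """Return the organization whose registered block contains this IP, if any."""
    ip = ipaddress.ip_address(anon_ip)
    for org, block in org_blocks.items():
        if ip in block:
            return org
    return "unknown"

print(attribute_edit("192.0.2.57"))   # -> Example Corp
print(attribute_edit("203.0.113.9"))  # -> unknown
```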

Both of these projects go beyond the “popularity as authority” model that comes from Web 2.0 by simultaneously reaching back to an older notion of authority that tries to gauge “who is the author” and fusing it with the new techniques of data mining and computer programming. (Perhaps librarians who wake up every morning wondering whether they’re still relevant need to get a degree in computer science.)

If you prefer the oh-so-old-fashioned-critical-thinking-by-a-human approach, Paul Duguid has shown nicely that one of the unquestioned assumptions behind the accuracy of Wikipedia–that over time and with more edits entries get more and more accurate–is not necessarily so. Duguid documents how the Wikipedia entry for Daniel Defoe actually got less accurate over a period of time due to more editing. Duguid shows how writing a good encyclopedia article can actually be quite difficult, and that not all the aphorisms of the open source movement (given enough eyeballs all bugs are shallow) transfer to a project like Wikipedia. Duguid also provides a devastating look at the difficulties Project Gutenberg has with a text like Tristram Shandy.

Evaluating authority in the hybrid world calls for hybrid intelligences. We can and should make use of machine algorithms to uncover information that we wouldn’t be able to on our own. As always, though, we need to keep our human critical thinking skills activated and engaged.