I-Schools Bring Us A “Credibility Commons”

This morning’s CHE announces the launch of the newest collaboration between Syracuse’s Dave Lankes and Washington’s Mike Eisenberg – the Credibility Commons – a research project aimed at helping people understand the many issues surrounding the credibility of information found on the Internet, and at developing tools to help people locate credible information on a range of topics.

I haven’t had time to read all the materials now available through the site, so I don’t know how Dave and Mike have linked this project to their long-time support for information literacy instruction (which strikes me as the most basic “practical approach” to helping people to find credible information on the Internet, i.e., the ability to articulate why something is credible according to standards other than appearance, consonance with one’s own views, etc.). I am sure the link is there and that it is strong. There aren’t too many LIS faculty members in whom I have more trust than these two.

Still, I would have liked to have seen a library listed among the project partners. As the CC partners write, “There are few professions better suited to the world of credibility on the Internet than librarians.” I look forward to hearing how we’ll be able to contribute to this project and bring its results home to our daily practice.

Do Academic Librarians On The T-Track Blog?

How blogging affects an academic career continues to be debated and discussed in the blogosphere. We’ve discussed academia’s conflicted reaction to blogging here previously. A worthwhile list of the pros and cons of blogging for those who have tenure and those working toward it appeared in a post by Christopher Sessums titled “Academic Research and Blogging.” He writes:

Recently a professor/mentor of mine noted that I seem to spend more time writing on my blog rather than writing for academic journals. She noted that I will not get tenure or be promoted for my blog posts but that I will for publishing in peer-reviewed journals. I’ll admit, she made a good point. I use my blog space to reflect on ideas for “proper” articles. In many cases I receive useful feedback that helps me tighten my argument or consider alternate or opposing viewpoints. In this light, my blog serves as a handy testbed and sandbox which allows me room to play.

What are some of the pros and cons? A blog allows the freedom to explore, the ability to get ideas out more quickly, and the benefit of feedback provided in comments, and regular blogging may help sharpen writing skills. Of course, blog posts can also be poorly written, offer little in the way of cited sources, contribute to sloppy research methods, and fail to reach the intended audience.

Academic librarians are doing a fair amount of blogging, and I wonder who these folks are. According to data collected by Michael Stephens for his blogger survey, 41% of the 283 respondents claimed an academic affiliation. That is nearly double the number of bloggers in the next largest group, public librarians. So who are all these academic librarian bloggers? I wonder how many are on the tenure track. I ask because my guess is that librarians on the tenure track are not blogging. Why? Probably because some senior librarian or mentor, not unlike the one Sessums describes, warned against blogging because it counts toward tenure about as much as cleaning out the library staff room fridge once a week.

If that’s the case, it could be unfortunate. While a blog has all the potential in the world to be a pointless time sink, a thoughtful, well-designed, and well-maintained blog can be far more helpful to academic colleagues than a stack of academic journal articles. There’s a place for the scholarly publication, of course, and for no one should it be a case of all of one and none of the other. If you’re an academic blogger, tenure track or not, you ought to be able to show you’ve got what it takes by publishing a credible scholarly article or two. Otherwise, all that talk about your academic library blog helping you write better, get your thoughts out, test new and radical ideas, and gather feedback from colleagues may not amount to a hill of beans if you can’t demonstrate the ability to go beyond blogging as a means of professional communication.

Results From The Survey On Surveys

In a post last week about survey proliferation, I raised some questions about the impact of surveys conducted by way of e-mail solicitations to library discussion lists. First, I want to thank those of you who provided comments, especially those who added some insight to my contention that while seeking survey respondents from discussion lists is a convenient method for quickly gathering data (and we never did get into a discussion of how response rates are calculated), this method is susceptible to response bias. Second, many thanks to the 31 individuals (out of some 1,500+ ACRLog readers) who took the time to complete my highly informal survey. The results, which may not be generalizable to the entire library profession (my official “escape clause”), show that 60% of you have received a whopping 6 or more surveys by way of a discussion list so far this calendar year – and a total of 71% have received 5 or more. I think we can officially declare that our profession is oversurveyed. The majority of you, 53%, complete about half of the surveys received. Only 10% complete them all, while 10% complete none.

The top two reasons, by far, given for choosing to complete a survey are (1) there’s time to complete it and (2) interest in the survey topic. This seems to support the suggestion that bias will occur because those who respond do so out of interest in the topic, while those who do not respond just don’t care about the survey. Now, you could make a case that even with a totally randomized distribution method, those who don’t care about the subject could just as easily choose not to respond. That’s why I find the third most common response, “I think it’s important to help a colleague doing research,” of great importance. If I get a survey by discussion list I think, “Well, this is going to thousands of people; I’m sure the person sending it will still get a bunch of responses if I delete this message.” I have no personal attachment to the survey. I think it’s quite different when I make the effort to develop a unique survey population and then mail those individuals a message indicating they were randomly chosen to participate, and that my chances of getting a statistically significant response rate will depend on their willingness to respond. That personal touch can make a difference in motivating the less interested person to respond.

Another indication from the responses is that, as a profession, we appear uncertain about our knowledge of survey methodology. When asked whether soliciting survey responses from discussion lists is a valid methodology, 75% of respondents answered “maybe.” Or it may be that some of us feel it could be a valid method in some research situations but not others. The results also suggest that fewer of us send survey questionnaires directly to individuals, showing instead a clear preference for the discussion list. A majority, 65%, indicated they had been directly solicited for a survey 2 or fewer times in calendar year 2005, and 30% received zero direct invitations to participate in a survey. The final question, “Do you think requests to complete surveys sent to distribution lists have become excessive?”, drew a mixed response. While 39% responded “definitely yes” or “yes,” 45% responded “neutral.” So there’s no consensus there, but only 15% responded “no” or “definitely no.”

It seems there is hardly a case for calling for an end to the distribution of surveys by e-mail discussion list, and even if one were to do so, this is one practice that’s hardly likely to reverse itself anytime soon. As I noted in the original post, and as did some commenters, the fact that librarians are conducting research in an effort to improve our knowledge of professional matters is a good sign. As long as journal editors find this survey methodology (or at least the noting of its use) acceptable, the practice will continue. Still, it might be good to explore this entire issue in more depth, and some commenters addressed this specific need. So for those of you looking for a meaty research topic, perhaps this is it. What about doing two surveys on the same topic – one by direct solicitation using a completely random method and a second that gathers data entirely from responses by way of discussion lists – and then comparing the results? Who knows, it could lead to some interesting conclusions.

Some Questions About Survey Proliferation

Is it just me, or does it seem like the number of times we’re solicited to complete web-based surveys is rapidly rising? You may be asked to complete a survey after attending a conference. Perhaps an association or a vendor wants to know how you like its service. There are colleagues who just want to know who else is doing something a certain way or dealing with a certain issue and would like some feedback; these folks usually don’t even bother with a web-based questionnaire and instead just stick a few questions right in their e-mail message.

I’m more concerned about academic librarians who are gathering research data by sending an e-mail to a discussion list soliciting colleagues to complete a web-based survey. The increase is no doubt owing to the ease, speed, and low (or no) cost of a web-based survey. Have an idea for some research? Get on SurveyMonkey, create a survey, send an e-mail with its URL to one or several discussion lists, and then just sit back and collect the data. This sure beats figuring out how to define a unique survey population, using a truly random method to identify questionnaire recipients, and then sending surveys to only those targeted individuals. With many librarians under the gun to publish or perish, the proliferation of requests to complete surveys sent to discussion lists is no surprise.
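For readers who haven’t done the targeted version, the random draw it requires is only a few lines of Python. This is a toy sketch, not anything from an actual study: the roster, the addresses, and the sample size are all invented for illustration.

```python
import random

# Hypothetical roster defining the survey population (addresses invented).
population = [f"librarian{i:04d}@example.edu" for i in range(1, 1501)]

# Simple random sample of recipients: every member of the population has
# the same chance of being invited, and no one is invited twice.
random.seed(42)  # fixed seed so the draw is reproducible
sample = random.sample(population, k=150)

# Each targeted recipient would then get an individual invitation,
# rather than the whole list receiving one broadcast message.
print(len(sample))
```

The point is that the population is defined up front, so a response rate can actually be calculated – something a discussion-list blast makes impossible, since no one knows how many people saw the message.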

I certainly don’t respond to them all, and I doubt most of you have the time for that either. So how do you decide which ones to respond to? And since the distribution method is hardly random, isn’t the likelihood of response bias much higher? I posed this question to a few social scientists. They suggest that soliciting via discussion lists introduces a variety of response biases, but mostly self-selection. You might complete a survey about ERM systems because you just purchased one or are in the market for one, while others who have no interest in them ignore the survey. Thus the results are skewed toward respondents with a particular attitude, mindset, or set of values.
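The self-selection effect the social scientists describe can be made concrete with a small simulation. Everything here is an assumption for illustration – the population size, the 20% interest rate, the satisfaction scores, and the response probabilities are invented, not drawn from any real data.

```python
import random

random.seed(0)

# Invented population: each librarian either cares about ERM systems or
# not, and satisfaction scores are higher among those who care (assumed).
population = []
for _ in range(10_000):
    interested = random.random() < 0.2  # assume 20% care about the topic
    score = random.gauss(4.0 if interested else 2.5, 0.5)
    population.append((interested, score))

# Discussion-list posting: interested readers are far more likely to
# respond (assumed 50% vs. 2% response probabilities).
self_selected = [score for interested, score in population
                 if random.random() < (0.5 if interested else 0.02)]

# Random sample with individual invitations: responses arrive regardless
# of interest (idealized 100% response rate, for simplicity).
random_sample = [score for _, score in random.sample(population, 500)]

def mean(xs):
    return sum(xs) / len(xs)

print(f"self-selected mean: {mean(self_selected):.2f}")
print(f"random-sample mean: {mean(random_sample):.2f}")
```

Under these assumptions the self-selected mean lands well above the random-sample mean, because the respondents are disproportionately the interested minority – exactly the skew toward a particular attitude or mindset described above.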

Granted, it may be that academic librarians are in fact using random survey techniques and soliciting on discussion lists only to reach a statistically valid number of responses. It’s also commendable that more academic librarians are actively pursuing research projects that advance our knowledge of the profession. But the discussion-list solicitation technique seems better suited to an informal survey, perhaps to support a conference presentation. I question whether our research literature is becoming largely based on survey data gathered via discussion list. I don’t claim expertise on this issue, so I’m open to the insights of those who have it or who edit the journals that publish research articles. I’m also not aware of any research on the number of discussion-list surveys, or any that examines the validity and reliability of the research resulting from them.

I suppose all I can do at this point is – and I’m really sorry about this – ask you to complete another survey. That’s right, a survey about surveys. Let me know what you think of the proliferation of requests to complete surveys distributed via discussion list. There are just six questions. I’ll report the results in a future post to ACRLog – assuming any of you have the time or inclination to respond.