Results From The Survey On Surveys

In a post last week about survey proliferation, I raised some questions about the impact of surveys conducted by way of e-mail solicitations to library discussion lists. First, I want to thank those of you who provided comments, especially those that added some insight to my contention that while seeking survey respondents from discussion lists is a convenient method for quickly gathering data (and we never did get into a discussion of how response rates are calculated), the method is susceptible to response bias. Second, many thanks to the 31 individuals (out of some 1,500+ ACRLog readers) who took the time to complete my highly informal survey. The results, which may not be generalizable to the entire library profession (my official “escape clause”), show that 60% of you have received a whopping 6 or more surveys by way of a discussion list so far this calendar year – and a total of 71% have received 5 or more. I think we can officially declare that our profession is over-surveyed. A majority of you, 53%, complete about half of the surveys you receive; only 10% complete them all, while 10% complete none.

The top two reasons, by far, given for choosing to complete a survey are (1) having time to complete it and (2) being interested in the survey topic. This seems to support the suggestion that bias will occur: those who respond tend to have an interest in the topic, while those who don’t respond simply don’t care about the survey. Now, you could make a case that even with a totally randomized distribution method, those who don’t care about the subject could just as easily choose not to respond. That’s why I find the third most common response, “I think it’s important to help a colleague doing research,” so telling. If I get a survey by discussion list I think, “Well, this is going to thousands of people; I’m sure the person sending it will still get a bunch of responses if I delete this message.” I have no personal attachment to the survey. I think it’s quite different when I make the effort to develop a unique survey population, then mail those individuals a message indicating they were randomly chosen to participate, and explain that my chances of getting a response rate adequate for statistically valid results depend on their willingness to respond. I think that personal touch can make a difference in motivating the less interested person to respond.

Another indication from the responses is that, as a profession, we appear uncertain about our knowledge of survey methodology. When asked whether soliciting survey responses from discussion lists is a valid methodology, fully 75% of respondents answered “maybe”. Or it may be that some of us feel it could be a valid method in some research situations but not others. The results also suggest that fewer of us send survey questionnaires directly to individuals, and that the discussion list is now the clear method of choice. A majority, 65%, indicated they had been directly solicited for a survey 2 or fewer times in calendar year 2005, and 30% received zero direct invitations to participate in a survey. The final question, “Do you think requests to complete surveys sent to distribution lists have become excessive?”, produced a mixed bag: 39% responded “definitely yes” or “yes”, while 45% responded “neutral”. So there’s no consensus there, but only 15% responded “no” or “definitely no.”

It seems there is hardly a case for calling for an end to the distribution of surveys by e-mail discussion list, and even if one were to make that case, this is a practice unlikely to reverse itself anytime soon. As I noted in the original post, and as some commenters did as well, the fact that librarians are conducting research in an effort to improve our knowledge of professional matters is a good sign. As long as journal editors find this survey methodology (or at least the noting of its use) acceptable, the practice will continue. Still, it might be good to explore this entire issue in more depth, and some commenters addressed this specific need. So for those of you looking for a meaty research topic, perhaps this is it. What about doing two surveys on the same topic, one by direct solicitation using a completely random method and a second that gathers data entirely from responses by way of discussion lists, and then comparing the results? Who knows, it could lead to some interesting conclusions.

Some Questions About Survey Proliferation

Is it just me, or does it seem like the number of times we get solicited to complete web-based surveys is rapidly rising? You may be asked to complete a survey after attending a conference. Perhaps an association or a vendor wants to know how you like their service. There are colleagues who simply want to know who else is doing something a certain way or dealing with a certain issue, and would like some feedback; these folks usually don’t even bother with a web-based questionnaire and instead stick a few questions right in their e-mail message.

I’m more concerned about academic librarians who gather research data by sending an e-mail to a discussion list to solicit colleagues to complete a web-based survey. The increase is no doubt owing to the ease, speed, and low (or no) cost of a web-based survey. Have an idea for some research? Get on SurveyMonkey, create a survey, send an e-mail providing its URL to one or several discussion lists, and then just sit back and collect the data. This sure beats figuring out how to find a unique survey population, then using a totally random method to identify questionnaire recipients, and then sending surveys to only those targeted individuals. With many librarians under the gun to publish or perish, the proliferation of requests to complete surveys sent to discussion lists is no surprise.

I certainly don’t respond to them all, and I doubt most of you have the time for that either. So how do you decide which ones you’ll respond to? And since the distribution method is hardly random, isn’t the likelihood of response bias much higher? I posed this question to a few social scientists. They suggest that soliciting via discussion lists introduces a variety of response biases, most notably self-selection. You might complete a survey about ERM systems because you just purchased one or are in the market for one, while others who have no interest in them ignore the survey. Thus the results are skewed toward respondents with a particular attitude, mindset, or set of values.
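To make the self-selection point concrete, here is a minimal simulation sketch. It is not part of any actual survey, and every number in it (the share of librarians shopping for an ERM system, the response rates) is invented purely for illustration. It contrasts a discussion-list solicitation, where willingness to respond depends on interest in the topic, with a small random sample where it does not.

```python
import random

random.seed(42)

# Hypothetical population of 10,000 librarians; assume 20% are actively
# shopping for an ERM system (the "interested" group). All figures here
# are made up to illustrate self-selection bias, not drawn from any data.
POPULATION_SIZE = 10_000
TRUE_INTEREST_RATE = 0.20
population = [random.random() < TRUE_INTEREST_RATE for _ in range(POPULATION_SIZE)]

def list_solicitation(pop, p_interested=0.40, p_uninterested=0.05):
    """Discussion-list posting: everyone sees it, but responding is
    self-selected, and interested librarians are far more likely to reply."""
    return [person for person in pop
            if random.random() < (p_interested if person else p_uninterested)]

def random_solicitation(pop, sample_size=1_000, p_respond=0.30):
    """Direct, random solicitation: a random sample is invited, and the
    chance of replying does not depend on interest in the topic."""
    invited = random.sample(pop, sample_size)
    return [person for person in invited if random.random() < p_respond]

def share_interested(responses):
    return sum(responses) / len(responses)

list_responses = list_solicitation(population)
random_responses = random_solicitation(population)

print(f"True share shopping for an ERM system: {TRUE_INTEREST_RATE:.0%}")
print(f"Discussion-list estimate ({len(list_responses)} responses): "
      f"{share_interested(list_responses):.0%}")
print(f"Random-sample estimate ({len(random_responses)} responses): "
      f"{share_interested(random_responses):.0%}")
```

With those made-up rates, the discussion-list estimate of interest comes out near two-thirds, far above the assumed 20%, while the random sample stays close to the true figure. That is exactly the kind of skew the social scientists describe.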

Granted, it may be that academic librarians are in fact using random sampling techniques and are soliciting on discussion lists only to reach a statistically valid number of responses. It’s also commendable that more academic librarians are actively pursuing research projects that advance our knowledge of the profession. But the discussion-list solicitation technique seems better suited to an informal survey, perhaps one supporting a conference presentation. I question whether our research literature is becoming largely based on survey data gathered via discussion list. I don’t claim to have expertise on this issue, so I’m open to the insights of those who do, or of those who edit the journals that publish research articles. Nor do I know of any research on the number of discussion-list surveys, or any that examines the validity and reliability of the research resulting from them.

I suppose all I can do at this point is to – and I’m really sorry about this – ask you to complete another survey. That’s right, a survey about surveys. Let me know what you think of the proliferation of requests to complete surveys distributed via discussion list. There are just six questions. I’ll report the results in a future post to ACRLog – assuming any of you have the time or inclination to respond.