In a post last week about survey proliferation, I raised some questions about the impact of the surveys conducted by way of e-mail solicitations to library discussion lists. First, I want to thank those of you who provided comments, especially those who added insight to my contention that while seeking survey respondents from discussion lists is a convenient method for quickly gathering data (and we never did get into a discussion of how response rates are calculated), the method is susceptible to response bias. Second, many thanks to the 31 individuals (out of some 1,500+ ACRLog readers) who took the time to complete my highly informal survey. The results, which may not be generalizable to the entire library profession (my official “escape clause”), show that 60% of you have received a whopping 6 or more surveys by way of a discussion list so far this calendar year – and a total of 71% have received 5 or more. I think we can officially declare our profession over-surveyed. A majority of you, 53%, complete about half of the surveys you receive; only 10% complete them all, and another 10% complete none.
The top two reasons given, by far, for choosing to complete a survey are (1) having the time to complete it and (2) being influenced by the survey topic. This seems to support the suggestion that bias will occur: those who respond do so because they have an interest in the topic, while those who don’t respond simply don’t care about the survey. Now, you could make a case that even with a totally randomized distribution method, those who don’t care about the subject could just as easily choose not to respond. That’s why I find the third most common response, “I think it’s important to help a colleague doing research,” so significant. If I get a survey by discussion list I think, “Well, this is going to thousands of people; I’m sure the person sending it will still get a bunch of responses if I delete this message.” I have no personal attachment to the survey. It’s quite different when I make the effort to develop a unique survey population, then mail those individuals a message indicating they were randomly chosen to participate and that my chances of getting a statistically significant response rate will depend on their willingness to respond. That personal touch can make a difference in motivating the less interested person to respond.
Another indication from the responses is that, as a profession, we appear to be uncertain about our knowledge of survey methodology. When asked whether soliciting survey responses from discussion lists is a valid methodology, fully 75% of respondents answered “maybe.” Or it may be that some of us feel it could be a valid method in some research situations but not others. The results also suggest that fewer survey questionnaires are now sent directly to individuals, with a clear preference instead for the discussion list. A majority, 65%, indicated they had been directly solicited for a survey 2 or fewer times in calendar year 2005, and 30% received zero direct invitations to participate in a survey. The final question, “Do you think requests to complete surveys sent to distribution lists have become excessive?”, produced a mixed bag: 39% responded “definitely yes” or “yes,” while 45% responded “neutral.” So there’s no consensus there, but only 15% responded “no” or “definitely no.”
It seems there is hardly a case for calling for an end to the distribution of surveys by e-mail discussion list, and even if one were to make that case, this is a practice that’s unlikely to reverse itself anytime soon. As I noted in the original post, and as did some commenters, the fact that librarians are conducting research in an effort to improve our knowledge of professional matters is a good sign. As long as journal editors find this survey methodology (or at least the noting of its use) acceptable, the practice will continue. Still, it might be good to explore this entire issue in more depth, and some commenters addressed this specific need. So for those of you looking for a meaty research topic, perhaps this is it. What about doing two surveys on the same topic, one by direct solicitation using a completely random method and a second that gathers data entirely from responses by way of discussion lists, and then comparing the results? Who knows, it could lead to some interesting conclusions.
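For anyone curious what such a comparison might show, here is a minimal sketch in Python of the self-selection effect discussed above. Everything in it is invented for illustration: the population size, the way “interest” shapes opinions, and the response probabilities are all made-up modeling assumptions, not data from any real survey.

```python
import random
import statistics

random.seed(42)  # fixed seed so the illustration is reproducible

# Hypothetical population: each person has an "interest" in the survey
# topic (0..1) and an opinion score that, by assumption, rises with interest.
N = 10_000
population = []
for _ in range(N):
    interest = random.random()
    opinion = 3.0 + 2.0 * interest + random.gauss(0, 0.5)  # made-up model
    population.append((interest, opinion))

true_mean = statistics.mean(op for _, op in population)

# Method 1: direct random solicitation -- everyone responds with the
# same probability, regardless of interest in the topic.
random_sample = [op for interest, op in population if random.random() < 0.3]

# Method 2: discussion-list posting -- response probability rises with
# interest, so interested people are over-represented (self-selection).
list_sample = [op for interest, op in population if random.random() < interest]

print(f"True mean opinion:        {true_mean:.2f}")
print(f"Random-solicitation mean: {statistics.mean(random_sample):.2f}")
print(f"Discussion-list mean:     {statistics.mean(list_sample):.2f}")
```

Under these assumptions the randomly solicited sample tracks the true population mean closely, while the discussion-list sample skews toward the opinions of the most interested respondents, which is exactly the bias a side-by-side study could try to measure with real data.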