Is it just me, or does it seem like the number of times we get solicited to complete web-based surveys is rapidly rising? You may be asked to complete a survey after attending a conference. Perhaps an association or a vendor wants to know how you like their service. There are colleagues who just want to know who else is doing something a certain way or dealing with a certain issue and would like some feedback; these folks usually don't even bother with a web-based questionnaire and instead just stick a few questions right in their e-mail message.
I'm more concerned about academic librarians who gather research data by sending an e-mail to a discussion list soliciting colleagues to complete a web-based survey. The increase is no doubt owing to the ease, speed, and low (or no) cost of a web-based survey. Have an idea for some research? Get on SurveyMonkey, create a survey, send an e-mail with its URL to one or several discussion lists, and then just sit back and collect the data. This sure beats figuring out how to define a unique survey population, then using a truly random method to identify questionnaire recipients, and then sending surveys to only those targeted individuals. With many librarians under the gun to publish or perish, the proliferation of requests to complete surveys sent to discussion lists is no surprise.
I certainly don't respond to them all, and I doubt most of you have the time to either. So how do you decide which ones you'll respond to? And since the distribution method is hardly random, isn't the likelihood of response bias much higher? I posed this question to a few social scientists. They suggest that soliciting via discussion lists introduces a variety of response biases, but mostly self-selection. You might complete a survey about ERM systems because you just purchased one or are in the market for one, while others who have no interest in them ignore the survey. Thus the results are skewed toward respondents with a particular attitude, mindset, or set of values.
Granted, it may be that academic librarians are in fact using random survey techniques and are soliciting on discussion lists only to reach a statistically valid number of responses. It's also commendable that more academic librarians are actively pursuing research projects that advance our knowledge of the profession. But the discussion-list solicitation technique seems better suited to an informal survey, perhaps one supporting a conference presentation. I question whether our research literature is becoming largely based on survey data gathered via discussion list. I don't claim to have expertise on this issue, so I'm open to the insights of those who do, or of those who edit the journals that publish research articles. I also can't say there is any research on the number of discussion-list surveys, or any that examines the validity and reliability of the research resulting from them.
I suppose all I can do at this point is – and I'm really sorry about this – ask you to complete another survey. That's right, a survey about surveys. Let me know what you think of the proliferation of requests to complete surveys distributed via discussion list. There are just six questions. I'll report the results in a future post to ACRLog – assuming any of you have the time or inclination to respond.