Some Questions About Survey Proliferation

Is it just me, or is the number of times we get solicited to complete web-based surveys rapidly rising? You may be asked to complete a survey after attending a conference. Perhaps an association or a vendor wants to know how you like their service. Some colleagues just want to know who else is doing something a certain way or dealing with a certain issue, and would like some feedback; these folks usually don’t even bother with a web-based questionnaire and instead stick a few questions right in their e-mail message.

I’m more concerned about academic librarians who gather research data by sending an e-mail to a discussion list soliciting colleagues to complete a web-based survey. The increase is no doubt owing to the ease, speed, and low (or no) cost of a web-based survey. Have an idea for some research? Get on SurveyMonkey, create a survey, send an e-mail with its URL to one or several discussion lists, and then just sit back and collect the data. That sure beats identifying a well-defined survey population, using a truly random method to select questionnaire recipients, and then sending surveys to only those targeted individuals. With many librarians under the gun to publish or perish, the proliferation of survey requests sent to discussion lists is no surprise.

I certainly don’t respond to them all. I doubt most of you have the time for that either. So how do you decide which ones you’ll respond to? And, since the distribution method is hardly random, isn’t the likelihood of response bias much higher? I posed this question to a few social scientists. They suggested that soliciting via discussion lists introduces a variety of response biases, but mostly self-selection. You might complete a survey about ERM systems because you just purchased one or are in the market for one, while others who have no interest in them ignore the survey. Thus the results are skewed toward respondents with a particular attitude, mindset, or set of values.

Granted, it may be that academic librarians are in fact using random survey techniques and are soliciting on discussion lists only to reach a statistically valid number of responses. It’s also commendable that more academic librarians are actively pursuing research projects that advance our knowledge of the profession. But the discussion list solicitation technique seems more reasonable for an informal survey, perhaps to support a conference presentation. I question whether our research literature is becoming largely based on survey data gathered via discussion list. I don’t claim expertise on this issue, so I’m open to the insights of those who do, or who edit the journals that publish research articles. Nor do I know of any research on the number of discussion list surveys, or any that examines the validity and reliability of the research resulting from them.

I suppose all I can do at this point is to – and I’m really sorry about this – ask you to complete another survey. That’s right, a survey about surveys. Let me know what you think of the proliferation of requests to complete surveys being distributed via discussion list. There are just six questions. I’ll report the results in a future post to ACRLog – assuming any of you have the time or inclination to respond.

9 thoughts on “Some Questions About Survey Proliferation”

  1. As a journal editor who regularly receives manuscripts based on such discussion list surveys – usually with no acknowledgement in the text that such surveys have any limitations – I think this is a problem. Whether it is increasing doesn’t really matter to me. We need to distinguish between research findings that are generalizable and opinions from others that help us think about our own professional work. They both have their place – but they aren’t the same.

  2. I had to address this issue when I was preparing for my dissertation proposal defense. Basically, yes, you give up any means of rigorously defining your population when you do an open list call for participants. When I wrote my dissertation, I followed a much more rigorous program for defining my population that I could defend to my committee. Of course, I also got some comments from participants asking why I didn’t simply send the survey to a list, since that’s what they were used to :-)

    In a more recent survey, I cast a wider net and used discussion lists, but I did plan to discuss this as a limitation of the survey.

    Richard Fyffe and I touched on this issue (but just scratched the surface) in our handbook for the KU Graduate School, The Digital Difference (which you can find at ). It’s definitely a significant methodological issue that will likely find its way into more and more research methods classes in the future.

  3. This is a very valid question and honestly, it’s one I haven’t considered. Since I am one of those who have been flooding the lists with survey requests recently, I guess my thinking was to reach as many librarians as I possibly could in the shortest amount of time. The question would be, how would one really create a random population that is not influenced by ERM purchases/experiences? This would mean knowing which libraries have an ERM system, when they were purchased/installed, and whether they’re integrated into the ILS. This is more information than some people might be willing to share or even know (and not take the time to find out). Sending a brief survey to a large population and getting random responses free of outside influences really is difficult and time consuming. Unfortunately, time is not something I have much of.

  4. James – if you use a truly random survey method you wouldn’t need to be concerned about who has an ERM and who does not. The idea is to get a sample that represents the total population. Let’s assume that out of 100 librarians, 5 have some experience with an ERM. If you randomly sampled 20 librarians you’d likely get 1 librarian with ERM experience. But the results should be more valid and reliable since your survey population accurately represents the entire population. That’s very different from surveying librarians on an electronic resources discussion list and getting responses from 15 librarians who have ERM experience and 5 who do not – that doesn’t accurately represent the total population. Do we have any real research methodology experts who can comment on this?
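    The sampling arithmetic above can be sketched as a quick simulation (the response probabilities for the self-selected pool are purely hypothetical, chosen only to illustrate the bias; Python assumed):

```python
import random

random.seed(1)

# Hypothetical population mirroring the example above: 100 librarians,
# 5 with ERM experience (True), 95 without (False).
population = [True] * 5 + [False] * 95

TRIALS = 10_000

# Average ERM share across many simple random samples of 20:
# it should hover near the true population share of 5%.
random_share = sum(
    sum(random.sample(population, 20)) / 20 for _ in range(TRIALS)
) / TRIALS

# Self-selected "discussion list" respondents: assume (arbitrarily)
# that ERM librarians respond 80% of the time and others only 5%.
def volunteers():
    return [x for x in population
            if random.random() < (0.80 if x else 0.05)]

biased_shares = []
for _ in range(TRIALS):
    r = volunteers()
    if r:  # skip the rare trial with no respondents at all
        biased_shares.append(sum(r) / len(r))
biased_share = sum(biased_shares) / len(biased_shares)

print(round(random_share, 3))  # close to 0.05
print(round(biased_share, 3))  # far above 0.05 -- ERM experience over-represented
```

    The exact figures depend on the assumed response rates, but the pattern holds: the random sample tracks the true 5% share, while the self-selected pool over-represents ERM experience no matter how many responses come in.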

  5. I am wondering if this is an issue that solely plagues practice-based fields like ours. This question intrigued me, and I spent some time today trying to find a scholarly article that studied the validity of survey populations solicited via profession-based listservs. In the short time that I looked, I couldn’t find one, but I did read several articles arguing that web surveys, although often biased, are a tremendous time-saver and overall an improvement on phone and paper-and-pencil surveys.

    One of the articles that I read was: Zhang, Y. (2000). Using the Internet for survey research: A case study. Journal of the American Society for Information Science, 51(1), 57.

    I suppose I was especially interested in this topic because, like James, I recently conducted a web-based survey using the exact method (SurveyMonkey and all) that Steven outlined. While my results might have been biased, without access to discussion lists, I’m not sure how I would have gotten such a terrific pool of responses using other methods (and given my time constraints).

    Even though there are limitations to this type of survey research, is it perhaps enabling preliminary explorations into different aspects of our field? Prior to the advent of the Web, how many nationally-focused library-related surveys were actually conducted each year by practicing librarians? I’d like to think that although imperfect, these surveys and their disseminated results inspire discussion and future innovation in our field.

  6. I think another aspect of this that needs to be considered is not just lack of randomness but also response rate. If one considers the subscriber base of a listserv and then the number of responses, I would venture that most listserv-based surveys have a very low response rate. That also causes problems when one starts to make statistical statements and inferences. Another issue is whether one really wants to be sampling librarians or libraries (in the ERM case, for example, one might actually want library-level responses, not librarian-level ones).

    For what it is worth, I often recommend the text Basic Research Methods for Librarians by Ronald Powell and Lynn Silipigni Connaway (Libraries Unlimited, currently 4th edition, 2004) as a very clearly written text. Chapter 4 on Survey Research and Sampling, and Chapter 5 on Data Collection Techniques seem particularly relevant to this discussion.

    All said though – I think the final deciding factor is what you want to do with the information. If you just want some thoughts and ideas to enrich your thinking, listserv-based, etc. could be fine. If you want research findings that are generalizable, you need to sample and get a decent response rate.
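    The response-rate concern can be made concrete with the standard margin-of-error formula for a sample proportion (a sketch; the list size and response rate below are hypothetical, and the formula itself presumes a random sample, so it understates the problem for self-selected respondents):

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a sample proportion p from n responses."""
    return z * math.sqrt(p * (1 - p) / n)

# A hypothetical list with 2,000 subscribers and a 5% response rate
# yields only n = 100 usable responses.
n = int(2000 * 0.05)
moe = margin_of_error(0.5, n)  # p = 0.5 is the worst case
print(f"+/- {moe:.1%}")  # roughly +/- 10 percentage points
```

    And no margin-of-error calculation addresses the nonresponse bias itself: if the 5% who answered differ systematically from the 95% who didn’t, a larger n won’t fix it.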

  7. Some resources on this topic:

    Couper, M. P., Traugott, M. W., & Lamias, M. J. (2001). Web survey design and administration. Public Opinion Quarterly, 65 (2), 230-253.

    Dillman, D. A. (2000). Mail and Internet surveys: The tailored design method (2nd ed.). New York: John Wiley & Sons.

    Schaefer, D. R., & Dillman, D. A. (1998). Development of a standard e-mail methodology: Results of an experiment. Public Opinion Quarterly, 62 (3), 378-397.

    I also liked the discussion of e-mail surveys in the broader chapter on survey method found in:

    Ary, D., Jacobs, L. C., & Razavieh, A. (2002). Introduction to research in education (6th ed.). Belmont, CA: Wadsworth.

    Basically, what each of these sources tells us is that the decision about how to sample a population using the Internet is no different from the decisions made for decades about how to construct a good sample for any “self-administered” survey (i.e., one that you complete yourself outside a controlled environment). What the Web does is provide us with access to a much greater variety of communities (at little cost to the researcher), but that does not mean that we do not need to address issues of sample size and representativeness. Schaefer and Dillman (1998), for example, directly address the point Lisa made about response rates for surveys posted for open participation through a discussion list.

    Luckily for me, I define most of my research as exploratory :-)

    And, here’s an interesting bit of exploratory research for us to consider per Steven’s comment. In a survey that I am currently analyzing and hope to present at a future meeting, I asked academic librarians currently pursuing a graduate degree in the field of Higher Education Administration to identify the reasons why they chose to pursue that degree. The most common response (64.5% of respondents chose this option from a menu of choices) was: “[the] opportunity to learn more about how to design, carry out, and report the results of research in higher education.”

    So, is continuing education in educational research an area that ACRL should further pursue? This discussion, my (exploratory) research, and the appearance of ACRL documents such as the IS-sponsored Research Agenda for Library Instruction and Information Literacy suggest that it may be.
