This survey software is the worst thing ever to happen to my university. It’s opened a gate to my own personal hell, where little demons constantly interrupt me while I work, begging me to respond to some low-information, poor-quality survey that has zero chance of usefully informing anyone about anything. Some of them are for “research”, but many others are about the operation of the university itself. We are, after all, 2 years into a 6-year strategy, and while I’m no Sun Tzu, this phase apparently entails a lot of “information gathering.”
So how do I recognize a low-information, poor-quality survey? It’s my job to know. And I know that for a survey to usefully inform decision-making, it must have, at a minimum, the following properties:
Its content should be directly relevant to the specific decision(s) you want to make.
The form of the content should facilitate a sensible analysis.
That analysis should be conducted by an experienced, knowledgeable data analyst.
The results of that analysis should be interpreted in light of any systematic biases in the responses you obtained, and the uncertainty due to random error (among other things).
And these interpretations need to be clearly communicated to, and understood by, the decision makers.
In short, a survey is a serious scientific measurement tool that, when appropriately designed, deployed, analyzed and interpreted, can absolutely lead to better decisions. But the surveys I receive typically fall well short of the requirements listed above, even if I accept the relevance of their content.
For starters, the questions they ask are often ambiguous, confusing, or even logically inconsistent (not to mention riddled with typos). Further, the survey instruments themselves often permit respondents to enter their answers in a variety of creative ways, which has deleterious knock-on effects for the analysis and interpretation. So for a survey to be useful, it must be rigorously piloted. Was yours?
Moreover, for these surveys to produce useful data, the targeted respondents need to respond in sufficient numbers to overcome random error, and the people who actually respond need to be representative enough of the entire group you hoped would respond in the first place. We can’t hope to dive into everything this entails in a blog post, but if you are designing and deploying surveys, it’s 100% your job to know before you send the survey out. Do you?
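To make “sufficient numbers” a little more concrete, here is a back-of-envelope sketch of the random-error part only, assuming the friendliest possible case: a simple random sample and a single yes/no question. The sample sizes are illustrative (129 is the figure I grumble about below), and none of this accounts for bias, which is usually the bigger problem.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a proportion estimated from a
    simple random sample of size n. This captures random error only; it
    says nothing about systematic bias in who chose to respond."""
    return z * math.sqrt(p * (1 - p) / n)

# Illustrative sample sizes, using the worst-case proportion p = 0.5.
for n in (50, 129, 500, 2000):
    print(f"n = {n:>4}: roughly ±{margin_of_error(n):.1%}")
```

Even in this best case, a few hundred responses only buys you a margin of error of several percentage points; a biased sample of the same size buys you nothing at all.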
Regardless, I’m confident that for the vast majority of surveys I’m asked to fill out, the actual sample of respondents they attract is not large enough, or unbiased enough, to generate informative data. So before presenting these results to decision makers, it’s critical that you can convey these limitations in the data. Can you?
Most of the people reading this are scientists, so please feel free to let me know if you think I’ve got the wrong end of the stick.
Many of the survey requests I receive claim to have approval from an ethics committee. I long ago stopped expecting ethics committees to spot methodological flaws, and I suppose that the risks to respondents for each individual survey are indeed limited (unless they send you into an incandescent rage). But does this mean they are cost-free?
Just think of the time involved. First, resources are spent “designing” and “analyzing” these pointless things. Who authorized that bit?
Next, if I get a survey that’s relevant to every student and staff member of University College Cork, that’s about 30,000 people. If it takes them 20 minutes on average to fill out the survey carefully, that is 10,000 person-hours of time spent, much of it work time. Per survey. So who authorized all those very real expenses?
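For anyone checking my arithmetic, here is the back-of-envelope version, using the round figures from the paragraph above (they are not a careful costing):

```python
# Rough cost of one campus-wide survey, using the figures quoted above.
respondents = 30_000        # roughly every student and staff member
minutes_per_response = 20   # average time to fill the survey out carefully

person_hours = respondents * minutes_per_response / 60
print(f"{person_hours:,.0f} person-hours per survey")  # 10,000
```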
Of course, you might be saying that nobody actually wants all 30k people to respond, which is surely sensible. But I would then ask how you intend to get a random sample of them, laugh until I cry, and then carry on with this screed. So you are either desperately hoping to spend huge amounts of collective time on your clumsy survey, or you are attracting a small number of respondents in a biased manner. And if it’s the latter, what exact decision are you hoping to inform, again? Are you telling me that the university is operating, at least partly, on the whims of n = 129 people with nothing better to do than answer obviously flawed surveys? We’re doomed.
There is, admittedly, one value to these surveys: they create the illusion that the people sending them out are doing something. I don’t expect them to go away any time soon.
I have similar thoughts. The team I work with applies an almost obsessive level of scrutiny during surveys, with complex standards in place. For example, when conducting walking-speed tests, they are extremely strict about the starting position. This is all fine, but when it comes to the data analysis phase, they insist on following guidelines and reducing the continuous walking-speed data to binary values. My question has always been: if we’re using such a rough binary measure, what’s the point of this nearly pathological demand for measurement precision? I feel that visually judging walking speed could easily separate ‘slow’ from ‘fast.’ So what’s the difference between visual inspection and these complex standards? I don’t see any difference; it just seems to be about self-satisfaction, about believing that we’ve conducted a thorough investigation.
“I long ago stopped expecting ethics committees to spot methodological flaws.” I had a similar thought when I saw an almost absurd sample-size calculation in an application for a major research grant that went on to be funded.
I often reflect on my work before bed: what am I actually doing, and does it really matter? These thoughts have puzzled me for a long time, even to this day.
Great post. Agree with you 100%. (Well, 99.9%. If the university community receives a survey about university operations, you might not need to worry much about representativeness if the target population is stakeholders who care enough about the university that they would do things like respond to these surveys.)