3 Comments
jiaqi li:

I have similar thoughts. The team I work with applies an almost obsessive level of scrutiny during surveys, with complex standards in place. For example, when conducting walking speed tests, they are extremely strict about the starting position. That is all fine, but in the data analysis phase they insist on following guidelines and reducing the continuous walking speed data to binary values. My question has always been: if we end up with such a coarse binary measure, what is the point of this almost pathological demand for measurement precision? I suspect that visually judging walking speed could separate 'slow' from 'fast' just as well. So what is the difference between visual inspection and these elaborate standards? I don't see any; it just seems to be about self-satisfaction, about believing that we've conducted a thorough investigation.
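The information loss the commenter describes is easy to demonstrate. Below is a minimal sketch (all numbers are illustrative assumptions, including the 0.8 m/s cutoff, which is just a common frailty-style threshold): simulate a continuous gait speed, an outcome that depends on it, then compare how well the continuous measure and its dichotomized version track the outcome.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical data: gait speed in m/s and an outcome that depends
# linearly on it plus noise. Parameters are made up for illustration.
speed = rng.normal(1.2, 0.25, n)
outcome = 2.0 * speed + rng.normal(0.0, 1.0, n)

# Dichotomize at an assumed 0.8 m/s "slow walker" cutoff.
slow = (speed < 0.8).astype(float)

# Strength of association with the outcome, before and after dichotomizing.
r_continuous = abs(np.corrcoef(speed, outcome)[0, 1])
r_binary = abs(np.corrcoef(slow, outcome)[0, 1])

print(f"continuous r = {r_continuous:.2f}, binary r = {r_binary:.2f}")
```

The binary version throws away most of the precision the field team worked so hard to collect: the correlation with the outcome drops substantially once the continuous measurement is collapsed to slow/fast.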

"I long ago stopped expecting ethics committees to spot methodological flaws." I had a similar thought when I saw an almost absurd sample size calculation in an application for a major research grant that was nevertheless approved.

I often reflect on my work before bed: what am I actually doing, and does it really matter? These questions have puzzled me for a long time, and they still do.

Dr. Ken Springer:

Great post. Agree with you 100%. (Well, 99.9%. If the university community receives a survey about university operations, you might not need to worry much about representativeness if the target population is stakeholders who care enough about the university that they would do things like respond to these surveys.)

Darren Dahly, PhD lol FFS jFc:

Indeed. And I'd often much prefer more targeted, qualitative work from the outset, but that's so much harder than designing a shit survey and pressing send to 30k people.
