Bring on the critics...even the ones on Twitter
Sloppy research methods need more scrutiny, not less, and the medium for such critique can hardly matter.
Dr Robert Califf (@califf001), an important leader in the world of clinical research and someone whose career certainly warrants respect and admiration, recently published a brief editorial about #medtwitter. There are some important points that I disagree with, and I wanted to quickly, but thoughtfully (I hope), respond to them. My objections start with this quote from the editorial:
Given the rapid growth of Twitter, it is not surprising that analysis of data related to its use would begin to develop. Shahzeb Khan et al. (2) analyzed the relationship between the Impact Factor (IF) and Twitter followers, a ratio that has become known as the “Kardashian index,” or “K-index.” The concept of the K-index is to identify the relationship between following on the social media platform and the IF, assuming that the IF is directly related to scientific contribution. In an ideal world, scientists with the most important contribution of original knowledge would have the largest Twitter following. However, pundits with few publications with impact and a large Twitter following either may be expert commentators and analysts or may represent “crackpots” with little real knowledge of the topics on which they are commenting.
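For readers unfamiliar with the metric being discussed: the K-index was originally proposed (tongue-in-cheek) by Neil Hall as the ratio of a scientist's actual Twitter following to the following "expected" from their citation count. A minimal sketch, using Hall's published fit of F(c) = 43.3 × C^0.32 — treat the coefficients, the K > 5 threshold, and the example numbers below as assumptions of this illustration rather than anything authoritative:

```python
def expected_followers(citations: int) -> float:
    """Followers 'expected' for a given citation count, per Hall's fit.

    The coefficients (43.3, 0.32) are taken from Hall's 2014 commentary;
    they are an empirical curve fit, not a law of nature.
    """
    return 43.3 * citations ** 0.32


def k_index(followers: int, citations: int) -> float:
    """K-index: actual followers divided by citation-'expected' followers."""
    return followers / expected_followers(citations)


# Hypothetical researchers (made-up numbers, purely illustrative):
# a heavily followed early-career researcher vs. a highly cited, low-profile one.
pundit_k = k_index(followers=40_000, citations=200)
veteran_k = k_index(followers=500, citations=20_000)

print(f"pundit K-index:  {pundit_k:.1f}")   # well above Hall's joke threshold of 5
print(f"veteran K-index: {veteran_k:.2f}")  # below 1
```

The sketch makes the point in the paragraphs that follow concrete: the number tells you only how two noisy counts happen to compare, not whether its owner is an "expert commentator" or a "crackpot".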
The first thing that jumps off the page is that little word assuming. In this case, the assumption is unambiguously false. Citation-based metrics, whether applied to people, papers, or journals, are a poor proxy for scientific contribution.
First, they are deeply confounded with opportunity. Older researchers will tend to have more citations than younger ones. Researchers from rich countries with well-developed research infrastructures will tend to have more citations than scientists from countries without those supports. Even within countries, opportunities to conduct “high-impact” research also vary widely, depending on the schools, research institutions, and golfing buddies one is associated with.
Second, using citation-based metrics like this conflates quantity with quality. If you still need to be convinced of this, I give you concerns about Research Waste, Reproducibility, and the Scandal of Medical Research. I would actually argue that nothing has caused more harm to science than the use of citation-based measures of scientific contribution. Thankfully some of us seem to finally be catching on (e.g. please see DORA).
Thus I think it’s safe to say that it would be far from “ideal” if people’s citation counts were correlated with their Twitter followers. In fact, the very notion that the k-index is actually a measure of anything, much less anything that anyone would be interested in, is laughable. Dr. Califf pretty much makes this point for me at the end of the above quote— someone with a high k-index could be an “expert commentator” or a “crackpot”. This doesn’t sound like a very good screening tool to me.
Simply put, the k-index is not a serious thing. It’s a parlor trick. It’s something to calculate and compare with your friends to have a laugh. The idea that smart people are giving it serious discussion is astounding to me. The k-index is in fact so pointless I could easily make the following, equally self-serving argument: I earned my “platform” on Twitter as an early career researcher who apparently shared enough useful content, good ideas, and educational critiques to earn people’s attention, with no other incentives or enticements (they can come and go as they please)— and my modest citation metrics have simply failed to capture my potential scientific impact.
My larger objection though is the overall gate-keeping tone of the editorial. The argument seems to be that the people who conduct lots of research are the ones whose opinions matter the most, and apparently anyone else can just be ignored. Dr Califf offers this perspective on the matter:
People with a high K-index may be those who thrive by commenting on the work of others rather than doing their own work. Theodore Roosevelt wrote a speech known as “The Man in the Arena” that exemplifies the importance of doing rather than commenting “from the peanut gallery”:
“It is not the critic who counts; not the man who points out how the strong man stumbles, or where the doer of deeds could have done them better. The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood; who strives valiantly; who errs, who comes short again and again, because there is no effort without error and shortcoming; but who does actually strive to do the deeds; who knows great enthusiasms, the great devotions; who spends himself in a worthy cause; who at the best knows in the end the triumph of high achievement, and who at the worst, if he fails, at least fails while daring greatly, so that his place shall never be with those cold and timid souls who neither know victory nor defeat” (3).
Marred by dust and sweat and blood? It sounds like Dr Califf should bring this up with HR.
As a publicly funded expert on research methods, I actually think it’s part of my job to comment on and critique research that extends beyond my own CV. Can methodological experts like myself use Twitter to discuss research? Are patients allowed to comment? Journalists? They don’t tend to conduct research at all — does that invalidate their insights? At the risk of taking this silly gladiator analogy even further, these people aren’t a crowd of spectators - they are down on the floor of the arena too.
Of course nobody would say (out loud) that these people shouldn’t comment. Surely they are included in Dr Califf’s group of “expert commentators”. Or are they? I suppose what I would prefer is some clear list of who is and who isn’t allowed to comment on published research articles. Who exactly is The Man in the Arena? Who exactly are the gallery? I shouldn’t have to read someone’s mind about this kind of thing.
The final point I will make is in response to this idea:
Furthermore, as the gap between new technology and high-quality evidence on risks and benefits continues to expand, we need to encourage and reward those investigators who participate and generate evidence as a priority. If an assistant professor can advance by analyzing or commenting on others’ research with more rapid publication and recognition opportunity, it may dissuade young clinicians and clinical investigators from participating in the research enterprise because of its much longer latency between work and work product.
I find it hard to accept that we need even more incentives to produce research, especially if we are measuring it with publications. This of course returns us to the points above on research waste and reproducibility. The reality is that we are drowning in a sea of low-quality research that is, more often than not, useless for decision making, and that overwhelms the few quality control systems we have in place. I find myself quoting Doug Altman a lot these days, but I’m forced to do it again: We need less research, better research, and research done for the right reasons.
So I say bring on the commentators and the critics. Recognize and promote them if their arguments have merit, made in public, for all to see and respond to in kind (not in some backroom, faculty lounge, or editorial office). And if their efforts lead researchers to strive for more — to produce insights, not just publications — and to avoid the many mistakes of the past, then I’ll happily accept the need to mute a few crackpots on Twitter as a trade-off.