12 Comments

Excellent read, thank you. Having designed a few studies, in medicine and even in unrelated fields, I appreciate the rigor required to get it right. And, thanks for the references. I'm creating a short-course where we're trying to teach a little bit of epidemiology and I've now got more background reading for the class!

Jan 4 · Liked by Darren Dahly, PhD

I am a cardiovascular epidemiologist. As your article mentions, the distinction between prediction and causal inference is a question I am frequently asked, and it is indeed a challenging one to articulate. One perspective I was taught is that observational studies are not sufficient for drawing causal inferences, which is why ambiguous terms such as "risk factor" or "association" are used (as you rightly point out, this should be avoided, and I fully agree). However, I am perplexed about what terms to use instead. My understanding is that a reasonable causal inference requires integrating many studies, including observational research, along with information beyond the data; yet this seems to sit in tension with what you are saying, and I find both perspectives reasonable.

Causal inference is complex, and honestly, I still do not understand how to do it well. The more literature I read, the more confused I become about what I am actually doing. Causal inference requires assessing the impact of interventions (is that right?), which is especially challenging in observational studies. Now I wonder whether my analytical methods are problematic; I lack a sound analysis plan and methodology. If possible, could you point me to some reference papers on good practice in epidemiological (observational) studies?

author

You are right to highlight the need to integrate findings across studies for useful causal inference. Even in the best case, where large randomized trials are possible, you probably need more than one. For observational studies, where potential biases lurk everywhere, the idea of triangulation is very useful. See https://academic.oup.com/ije/article/45/6/1866/2930550.

Jan 6 · Liked by Darren Dahly, PhD

That is an excellent reference. I also found this review: https://link.springer.com/article/10.1007/s40471-023-00340-0


Great paper! Thank you.

Jan 5 · edited Jan 5

Thank you for your kind response; triangulation makes a lot of sense. If possible, could you take a look at the paper below? I'm wondering whether it should be considered causal or predictive. My own research follows the same pattern: we create a descriptive Table 1, calculate the risk ratio, and that's it (essentially reporting an ambiguous "association", as you said).

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5207003/
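To make the "risk ratio and that's it" workflow concrete, here is a minimal Python sketch of what such an analysis typically boils down to. The 2x2 counts are invented purely for illustration, not taken from the paper above:

```python
# A hypothetical 2x2 table: all counts are made up for illustration.
import math

a, b = 30, 970   # exposed:   events, non-events
c, d = 10, 990   # unexposed: events, non-events

risk_exposed = a / (a + b)
risk_unexposed = c / (c + d)
rr = risk_exposed / risk_unexposed

# Standard large-sample 95% CI for a risk ratio, computed on the log scale
se_log_rr = math.sqrt(1/a - 1/(a + b) + 1/c - 1/(c + d))
lower = math.exp(math.log(rr) - 1.96 * se_log_rr)
upper = math.exp(math.log(rr) + 1.96 * se_log_rr)

print(f"RR = {rr:.2f} (95% CI {lower:.2f} to {upper:.2f})")
```

By itself this yields a single ratio with no statement of whether it is meant causally or predictively, which is exactly the ambiguity in question.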

Additionally, I'm increasingly inclined to believe that observational studies should focus more on the overall predictive performance of the model rather than solely calculating an exposure risk ratio. Developing a predictive model seems well suited to observational studies: we lack RCT data, so we must find other ways to extract information from observational data. I would appreciate hearing your thoughts; a rough sketch of the contrast I mean is below. Thank you.
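Here is a minimal sketch of that contrast on simulated data. The variable names, effect sizes, and sample size are all invented for illustration:

```python
# Contrast a single adjusted exposure estimate with the model's overall
# predictive performance. All data below are simulated, not from any study.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 5000
exposure = rng.binomial(1, 0.3, n)        # hypothetical binary exposure
age = rng.normal(60, 10, n)               # a second predictor
logit = -6 + 0.5 * exposure + 0.06 * age  # assumed data-generating model
outcome = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = np.column_stack([exposure, age])
model = LogisticRegression().fit(X, outcome)

# The "association" view: one adjusted odds ratio for the exposure
print("adjusted OR for exposure:", np.exp(model.coef_[0][0]).round(2))

# The "prediction" view: how well does the whole model discriminate?
auc = cross_val_score(LogisticRegression(), X, outcome, cv=5, scoring="roc_auc")
print("cross-validated AUC:", auc.mean().round(3))
```

The first number reduces the study to one coefficient; the second asks how well the model as a whole separates cases from non-cases, which observational data can answer without any causal claim.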

Jan 6 · Liked by Darren Dahly, PhD

The migraine study above seems to be trying to establish a causal link between migraine and ischemic stroke. But as with many published observational studies, it's a bit unclear what, if anything, clinicians should do with the results. There's no compelling mechanism by which migraine with aura could directly *cause* cardioembolic stroke (the type of stroke most strongly linked with migraine with aura in this study). Rather, it seems quite plausible that patients with a long history of migraine with aura (e.g., visual aura) might be more likely to write off symptoms of a TIA as "just another migraine." This self-misdiagnosis could lead them to avoid seeking medical attention for their neurologic symptoms and, in turn, result in a failure to diagnose underlying conditions that might have *caused* their TIA (e.g., atrial fibrillation). And if a.fib is not diagnosed, these patients will be at increased future risk of cardioembolic stroke.

As an aside, some patients are diagnosed with stroke of "presumed cardioembolic" origin, when nobody is really sure that this was actually the stroke mechanism...

Contrary to the authors' suggestion, I'm not sure that these results are compelling enough to recommend that physicians start investigating all their older patients with migraine with aura for underlying conditions that could lead to cardioembolic stroke (e.g., doing Holters to look for paroxysmal a.fib, or echocardiograms to look for PFOs), given that the absolute rate of cardioembolic stroke in the migraine-with-aura group in this study was very low. It would probably make more sense for us to simply caution our older migraineurs to seek medical attention if their aura symptoms become "atypical" for them.


Excellent commentary. As you mentioned, many observational studies neglect the importance of translating findings into medical practice, which significantly undermines the value of the research. By the way, if possible, could you share your thoughts on what an ideal observational study should look like? I'd also welcome your opinion or any recommended papers on the matter.

Jan 9 · edited Jan 9 · Liked by Darren Dahly, PhD

Instead of focusing on specific design choices, I think it’s far more important to be sure that the *goal* of any observational study is articulated very clearly in advance. Researchers should ask subject matter expert end-users what type(s) of results (if any) they might consider to be actionable. Before proceeding, there should be consensus that the study will be valuable regardless of its result. Ideally, researchers would never get to the end of a study, look at the results, and end up saying “Now what?”

In my view (as a physician), these are the main contexts in which observational studies are most likely to have a meaningful impact in medicine:

1. Identifying safety signals for approved pharmacologic therapies. A big caveat is that this type of research can easily end up doing more harm than good. Any potential safety signal needs to be rigorously contextualized, weighing the clinical risk/benefit tradeoffs for the drug in question and the reliability of the signal. Otherwise, these studies can easily generate unnecessary panic among patients (which is incredibly aggravating for practitioners).

2. Describing the characteristics of populations/patients afflicted with certain conditions, in order to identify how best to allocate research dollars or public health interventions (e.g., determining whether the morbidity/mortality burden of an infectious disease is sufficient to justify a government paying for a new vaccine).

3. Clarifying what types of non-drug exposures might contribute to the development of certain diseases. But there’s a huge caveat here that many researchers seem to neglect, and which renders much published research unusable: painstaking, intentional triangulation of several lines of evidence is needed in order for this type of research to be clinically actionable. And strength of association is key; in general, it’s very difficult to convince the clinical community to pay much attention to a weak database association. If an association is moderate or strong, then other lines of evidence for causality ideally should be compelling as well. Here’s an example of an excellent observational study that was published recently. The authors went to impressive lengths to make a case for causality, using many different approaches:

https://www.mdpi.com/2072-6694/13/23/6022/htm

Another great example is the research that established a strong link (very likely causal) between Multiple Sclerosis and prior Epstein-Barr Virus infection:

https://www.nature.com/articles/s41579-022-00770-5

https://www.science.org/content/blog-post/ebv-and-multiple-sclerosis-more-story

Unfortunately, the vast majority of published observational research isn’t nearly as compelling as the above examples. Most studies report weak associations identified from administrative databases, with “meh” attempts at triangulating evidence (if this effort is made at all). Some studies probably involve undisclosed Hypothesizing After the Results are Known (“HARKing”). This frowned-upon form of deception is usually pretty obvious to a trained reader. Ultimately, if a database study identifies only a weak association and if supporting findings (e.g., lab/animal studies) are also flimsy, we’re left with a corpus of evidence that's too weak to inform clinical practice or policy.

4. Finally, as you know, one of the most famous uses of observational evidence involved the Framingham cohort studies. They led to the identification of cardiovascular disease "risk factors" (e.g., HTN, hyperlipidemia) and, ultimately, to methods for estimating a patient's future risk of cardiovascular events. Subsequently, experimental studies involving *manipulation* of certain risk factors (e.g., lipid levels) showed improved outcomes, indicating that the associations were not just prognostic/predictive (these terms seem to be used inconsistently…), but also causal.


Thank you very much for these thoughtful points; I will share them with my colleagues, especially the concept of triangulation. I have rarely heard that term, or triangulation-style research, mentioned by my colleagues or within our organization. It seems most of us have overlooked the importance of this analytical concept.

Jan 3 · Liked by Darren Dahly, PhD

Thank you, this is really helpful. I worked in the humanities as a methodologist and, when asked to help, would always start with "What is your question?" Very often, and very tellingly, the response was "I want to show support for X." We would then have a "But if you already know X is true, what question do you hope to answer with your research?" discussion. Research was too often seen as a form of advocacy, and focusing on the question was the most useful way to surface that misconception.

author

We get a lot of "research as advocacy" in public health too. It's very well-intentioned and almost always comes from people I share many values with, but it's counterproductive in my opinion.
