The pandemic of Long COVID psychobabble continues. This week saw yet another terrible study claiming that long-haul COVID is just an illusion of human cognition, an illness-like experience rooted in psychological processes.
The widely reported “finding” was published in the medical journal JAMA Internal Medicine by a group of French researchers. Having crunched some numbers from a large-scale population-based cohort study, they claimed to have found a meaningful statistical association between long-term COVID symptoms and “the belief in having been infected with SARS-CoV-2.” They even speculated that such “beliefs” might cause people to develop “maladaptive health behaviours,” invoking a classic psychiatric trope that blames sick people for making themselves ill through faulty reasoning.
The analyses showed that participants who reported previously having COVID were also likely to report a wide range of lasting physical symptoms. But crucially, according to these researchers, participants whose SARS-CoV-2 infections were confirmed by laboratory tests appeared to exhibit far fewer symptoms than participants whose antibody tests came back negative.
The researchers’ implication was clear: the symptoms of long COVID that you hear about in media reports arise mostly in people who believe they had COVID, but who actually never did. Real COVID infections, by contrast, cause few lasting symptoms (and mainly just anosmia).
Too bad the data showed no such thing.
* * *
There are two fatal flaws in the research design, either one of which on its own would render the work pointless.
The first relates to the laboratory tests that were used to test for SARS-CoV-2 infection. The flaw is easily summarised: the tests don’t work as described. The researchers acknowledge that the procedure generated “missed targets” (or “false negatives”), and that around 140 of the participants with negative antibody test results probably did have COVID after all. However, what they fail to acknowledge is that there was also a significant rate of “false alarms” (or “false positives”). According to the parameters they described themselves, at least 40% of all positive antibody results would probably have been wrong.
As the entire study hung on comparing “lab-confirmed” COVID cases with “self-reported” ones, such rates of missed targets and false alarms really do matter. Nearly half of the participants in one of the comparison groups should have been in the other. That’s a pretty bad problem for any research study.
Since the authors fail to mention this, it seems the journal’s reviewers (and editors) didn’t think to check for themselves.
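The arithmetic behind this kind of false-positive problem is standard Bayes-rule reasoning: when true infections are relatively rare in a sample, even a fairly specific test produces many false alarms among its positive results. The figures below are purely illustrative assumptions for the sketch, not the study’s actual parameters:

```python
# Illustrative Bayes-rule sketch: how a seemingly accurate antibody test
# can still yield a large false-positive share when prevalence is low.
# All three figures are hypothetical, chosen only to show the mechanism.
sensitivity = 0.87   # P(test positive | infected)
specificity = 0.965  # P(test negative | not infected)
prevalence = 0.06    # P(infected) in the sampled population

true_positives = sensitivity * prevalence
false_positives = (1 - specificity) * (1 - prevalence)

# Positive predictive value: the share of positive results that are genuine.
ppv = true_positives / (true_positives + false_positives)

print(f"PPV: {ppv:.0%}, false alarms among positives: {1 - ppv:.0%}")
# → PPV: 61%, false alarms among positives: 39%
```

With these assumed numbers, nearly four in ten “lab-confirmed” cases would be misclassified, which is exactly why a comparison between “lab-confirmed” and “self-reported” groups cannot bear the weight the study puts on it.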
* * *
By definition, one fatal problem should be enough to kill a study. But so many bad research practices are essentially zombies: they keep coming back from the dead. So for good measure let’s discuss the second fatal blow as well.
The problem relates to the fact that the study was cross-sectional. This means that causal sequences cannot be identified. Even if the data were reliable, the study design would not allow the researchers to determine what actually caused what.
Because of the cross-sectional design, the researchers had to rely on participants to self-report their own illness histories. But asking people whether they previously had COVID is unreliable, especially when they have ongoing symptoms. Many participants will logically interpret their illness history in terms of their present health.
Having many current symptoms may lead a person to conclude that an earlier illness was in fact COVID, even though they were unsure about it at the time. On the other hand, having few (or no) current symptoms may cause them to discount a previous period of ill-health as insignificant. In research terms, we would say that recall of past illness is inevitably confounded by current symptom profile. Prior experience might influence current symptoms, but current symptoms might influence (recall of) prior experience. There is simply no way to know which is happening.
It seems the researchers tried to morph a cross-sectional study into some kind of longitudinal investigation by relying on retrospective self-report. But, unfortunately, research doesn’t work that way. Logic doesn’t work that way.
And as David Strain of the University of Exeter has pointed out, even if a participant didn’t actually have COVID, that doesn’t mean they didn’t have some other viral illness instead. Their current symptoms might have been caused by another infection. But the researchers didn’t check for other infections, so their conclusion that these people’s long-lasting symptoms were likely caused by “beliefs” falls flat on its face. They cannot infer an “illness belief” explanation unless they first rule out other, more likely, explanations.
The evidence in this paper can be called many things.
But extraordinary it certainly ain’t.