It seems that just about everyone is doing COVID-19 research these days. To date, more than 3,500 COVID-19 trials have been registered with ClinicalTrials.gov this year (compared to, for example, just 263 trials on “testicular cancer” in the entire history of the site). Meanwhile, the social sciences are also experiencing their own coronavirus boom.
But not all research is good research, and there is reason to be concerned about whether a stampede of scholarship is really the best way to generate new knowledge.
Social policy academic Tina Haux is among those to express reservations about the surge:
I am not sure the rush has allowed us to reflect on the existing knowledge from previous crisis, to work across disciplinary divides and ensure that any research will be able to tell us not just about this crisis as it unfolds now, but also its long term effects…
And a new paper in the Journal of Medical Ethics draws attention to the specific pitfalls:
Research normally occurs at the speed of a marathon, but during a pandemic, the pace is more like a sprint. The prior slower pace gave space for reflection on quality, as well as time for researchers to rest their minds and bodies.
During the pandemic there is the potential for the race of research to occur with fewer reflective and rest periods. Less reflection can facilitate missed opportunities for quality checks such as robust reviews of study applications by research ethics committees (RECs) and quality checks by researchers and their supervisors. Less rest can facilitate fatigue and mistakes of human error or judgement (eg, taking short cuts rather than performing standard practice).
And let us not forget that for some people, the pandemic is the perfect opportunity to peddle their favourite gobbledygook using the cloak of science as camouflage. Back in April, law professor and quackbusting medical ethicist Timothy Caulfield had “had enough already”:
…we must stop tolerating and legitimizing health pseudoscience, especially at universities and health-care institutions. Many bogus COVID-19 therapies have been embraced by integrative health centres at leading universities and hospitals…Those pushing unproven ideas use the language of real science — a phenomenon I call “scienceploitation” — to legitimize their products.
The World Health Organization have a name for this too. They call it the “COVID-19 infodemic.”
* * *
The explosion of COVID science raises questions about research standards. Investigators, reviewers, editors, and readers are processing ever more information with ever less bandwidth. Our systems of science were never designed for this level of activity.
In fact, they were never “designed” at all. Peer review evolved organically, as an amateur honour system intended to bolster the credibility of academic journals. Its limitations had become increasingly apparent long before the COVID crisis struck.
According to bioethicist Katrina A. Bramstedt — who wrote that Journal of Medical Ethics paper — by mid-2020, some 19 COVID articles and 14 preprints were either retracted, withdrawn, or subjected to expressions of concern in the research literature. As of October, the Retraction Watch website was showing 35 retractions of COVID-19 research.
Many of the retractions related to research exploring the biomedical aspects of COVID, including the notorious Surgisphere studies into hydroxychloroquine.
However, the social sciences have been well represented too. Problematic studies have included a paper arguing that lockdowns have no deleterious effect on self-care among diabetes patients, a paper on the management of college operations during outbreaks, a study of mental health and coping in medical workers, and a study claiming that “anxiety” is a risk factor for COVID-19 death.
All of these sound like important pieces of work. But they were also substandard pieces of work, weighed down by flaws so serious that they had to be wiped from the academic record.
* * *
Research rigour is often undermined by excessive investigator optimism. In the scramble to produce useful findings for humanity (or, if you prefer, in the rush to board the latest grant-awarding gravy train), researchers can end up over-estimating the importance — and merit — of their work.
In this context, research criticism can be a treacherous pursuit. It is easy to agree that fakery and charlatanism should be condemned. But what about all the mainstream stuff that is of plainly questionable quality? What about the papers that never get retracted even though their flaws are widely discussed?
In many fields — for example, studies of behavioural healthcare interventions — condemning a study’s research methodology can result in quite a bitter backlash. In some cliques it is considered extremely poor etiquette (if not bullying or harassment) to complain about a healthcare intervention study because, after all, the researchers were just trying to help people.
It’s as though mistakes made in this milieu should be written off because of the nobility of the researchers’ aspirations.
* * *
But social norms about how we should deal with “good intentions” present a barrier to dealing with the infodemic problem.
This is because bad research does more harm than good. It should be critiqued against a higher standard of evaluation, not excused on the basis of a lower one.
Bad research hurts all stakeholders. Bad intervention research hurts patients. Bad screening research hurts the general population. Bad ergonomic research hurts workers. Bad diagnostic research hurts clinical practice.
And contrary to what many researchers assume, bad research is more likely to produce “false positives” than “missed targets”. Rather than yielding no findings at all, small samples and flexible analyses make flawed studies statistically prone to results that look real but are spurious, results that then go on to mislead everyone who subsequently hears about them.
In simple terms, bad research produces fake news.
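The arithmetic behind this point can be made concrete with a toy simulation. The sketch below imagines a literature in which only a minority of tested hypotheses are actually true, and asks what fraction of “significant” findings are false. All the numbers here (the 10% share of true hypotheses, the 35% power, the error rates) are illustrative assumptions chosen for the example, not estimates from any real study:

```python
import random

def simulate_findings(n_studies=100_000, prior_true=0.1,
                      power=0.35, alpha=0.05, seed=42):
    """Simulate a literature of independent studies and return the
    fraction of 'significant' findings that are false positives.

    prior_true -- share of tested hypotheses that are actually true
    power      -- chance a true effect is detected (low in small studies)
    alpha      -- chance a null effect still comes out 'significant'
                  (inflated by sloppy or flexible analysis)
    """
    rng = random.Random(seed)
    true_positives = false_positives = 0
    for _ in range(n_studies):
        if rng.random() < prior_true:          # hypothesis is true
            if rng.random() < power:
                true_positives += 1
        else:                                  # hypothesis is false
            if rng.random() < alpha:
                false_positives += 1
    return false_positives / (true_positives + false_positives)

if __name__ == "__main__":
    careful = simulate_findings(alpha=0.05)
    rushed = simulate_findings(alpha=0.20)   # corners cut, error rate inflated
    print(f"careful literature: {careful:.0%} of positives are false")
    print(f"rushed literature:  {rushed:.0%} of positives are false")
```

Even under the “careful” assumptions, over half of the positive findings are spurious (expected true positives per study: 0.1 × 0.35 = 0.035; expected false positives: 0.9 × 0.05 = 0.045), and relaxing the error rate makes the problem dramatically worse. The specific figures depend entirely on the assumed inputs; the qualitative lesson does not.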
* * *
Good intentions regularly lead to terrible outcomes. While researchers’ hearts may be pure, that fact alone does not guarantee that their works will be blessed.
In psychology, we sometimes talk about the “just-world fallacy”: the belief that the world is ultimately fair, and that outcomes will match what our actions deserve. This is, of course, a fallacy, because our efforts are free-standing. They stand or fall on their merits, not on the basis of karma.
Putting all hands to the pump in a time of crisis certainly feels appropriate, but there are dangers in rushing headlong into the COVID-19 research frontline.
The belief that good intentions lead inevitably to good outcomes may prove to be one of our world’s most dangerous delusions.
Brian Hughes is an academic psychologist and university professor in Galway, Ireland, specialising in stress, health, and the application of psychology to social issues. He writes widely on the psychology of empiricism and of empirically disputable claims, especially as they pertain to science, health, medicine, and politics.