An invited, non-peer-reviewed guest editorial in the BMJ has claimed that behavioural interventions for “complex conditions” (such as ME or CFS) should not be judged using the customary criteria — and that the relevant studies should not be evaluated as though they were proper randomised controlled trials — because, among other things, “double blinding is impossible.”
I don’t think this makes sense. Not least because one of the editorial’s co-authors had previously argued that standard RCTs were, in fact, essential for this purpose.
This is what he said back then:
“Randomised controlled trials provide the best and only reliable evidence on safety and effectiveness of any intervention in any condition”
— February 2011
I couldn’t have put it better myself.
In my view, we should indeed judge such studies by the best available standards. In research methodology terms, a lack of blinding combined with reliance on self-reported outcomes is a massive fail. Any suggestion that we now downgrade the importance of blinding should be dismissed for the self-serving goalpost manipulation that it is.
The good news is that the Science Media Centre (SMC) is on the case. Kind of.

Let’s have a look at what the SMC has had to say about blinding, and about the general importance of methodological rigour when evaluating complex conditions and interventions:
“It’s been established for at least 70 years that blinding is an absolutely essential part of clinical trial design, and it’s deeply shocking that [Journal Name] should publish a test of [therapy name] in which the patients were aware of which group they were in”
— February 2019
“…[the research design] does not adequately control for the non-specific positive effects associated with receiving at least some form of regular therapy and could therefore be expected to have inflated the apparent effect of CBT”
— July 2019
“…I would caution against any uncritical assumption that this will translate into clinical benefit. The findings need to be replicated under clinical trial conditions, with larger numbers of participants and with robust blinding of subjects and outcome assessors”
— April 2019
“the treatment assignment is not “blinded” from participants… any positive results could be due to participants’ perception that they are on a ‘better’ treatment, rather than due to the effects of the treatment itself”
— November 2017
“Studies using even the most basic methods, e.g., control groups and blinding, to reduce bias showed less efficacy, suggesting some or all of the positive outcomes are due to bias”
— December 2016
“The trial has several serious flaws, e.g. small sample size, high drop-out rate, questionable statistics and doubtful blinding of patients or evaluators”
— December 2013
“In randomised clinical trials like this, it’s standard good practice to ‘blind’ the participants, that is, as far as possible to disguise from them which treatment they are receiving”
— December 2013
All these quotes are from various “Expert Reactions” published by the SMC, and relate to studies of a range of complex conditions and interventions, including CBT.
In short, there is a clear consensus among researchers in the health sciences on this point. Even for “complex conditions”, or indeed for “complex interventions”, a lack of blinding significantly undermines the merit of a research trial.
Therefore, of course poorly blinded trials should attract criticism from expert commentators, such as those appointed by regulatory bodies to establish treatment guidelines for “complex conditions”.
Of course the evidence produced by such research should be classified as being of “LOW QUALITY” or even “VERY LOW QUALITY”.
And of course it would be truly absurd to argue, on the pages of the BMJ no less, that all these terrible studies would look a hell of a lot better if only we ignored their flaws.
No doubt the Science Media Centre will issue a fresh set of statements to this effect very soon.
I am available should they need me.