
Journalists covering ME/CFS: Don’t ask about the new NICE guideline, ask about the old one

To fully understand the future, it is important to know the past.

Next week, we finally get to see that long-awaited new NICE treatment guideline for ME/CFS. As regular readers will know, all indications so far suggest the new guideline will be dramatically different from the old one. Graded Exercise Therapy is set to be expunged entirely, while Cognitive Behavioural Therapy is no longer to be used as “treatment” for ME/CFS, but instead is to be recommended as psychological support for people’s mental health and well-being.

By extension, the so-called “science” of so-called “psychogenic illness” is on the verge of being discredited. The whole transition amounts to nothing less than a paradigm shift in what has long been a contentious area of medicine.

Of course, there has been pushback from some quarters. Recently I described a highly partisan news story that appeared in the BMJ. The BMJ journalist appeared genuinely confused as to why the old guidelines had to be changed at all. As they wrote:

In 2007, NICE recommended interventions such as cognitive behavioural therapy and graded exercise therapy for people with mild or moderate ME/CFS, whereas the draft update cites a “lack of evidence for the effectiveness of these interventions.” It is unclear, however, how the evidence became unsupportive.

I anticipate that this line — “it is unclear how the evidence became unsupportive” — will be the preferred narrative used by those who wish to discredit the new guideline when it is published next week.

For defenders of the status quo, it’s always about narrative. That is because when evidence and logic evaporate, narrative is all you have.

Many of these people are experienced media players. When the draft guideline appeared last November, they took to the media in unison to decry the proposed changes. Their chief weapon was wordplay: in what appears to have been a co-ordinated PR strategy, they persisted in referring to the old CBT and GET treatments as “evidence-based,” even though NICE had just revealed that the opposite was essentially the case.

In psychology, such wordplay is known as anchoring. It succeeds because it plants a set of assumptions in an audience’s mind. By pitching your own preferred view first — even if it is false — you shift the burden of proof onto anyone who contradicts you. You make your critics look as though they are the ones who are being contentious.

So when the BMJ journalist states — as fact — that it is “unclear…how the evidence became unsupportive”, they are pushing a narrative that asserts — as fact — that (a) the old evidence was supportive, and (b) the new evidence should rightly be the same as the old evidence.

And this is why we need to know our history. Because contrary to what was implied by that BMJ news article, there is every reason to expect the new evidence to differ from that which existed before. This is mainly because the old evidence was not, in fact, all that it appeared to be.

* * *

The old NICE guideline for ME/CFS (which NICE at the time referred to as “Chronic fatigue syndrome/myalgic encephalomyelitis (or encephalopathy)”) appeared in 2007. The guideline and its supplementary material were posted on the NICE website. Included was a 488-page Appendix entitled “Work to support the NICE Guidelines.” The document contained several sections, each relating to a specific review question. The section on Question 3 contained the evidence on whether “any particular intervention or combination of interventions is effective in treatment, management or rehabilitation” of people with ME/CFS.

In short, this was the section that included the original 2007 evidence relating to the use of CBT and GET as treatments for ME/CFS.

While the entire Appendix runs to nearly 500 pages, most of it concerned issues such as criteria for diagnosis and the needs of healthcare workers, support staff, and families. The section dealing with whether CBT and GET should be used as treatments for adults with ME/CFS was quite brief.

Focusing on randomised controlled trials (RCTs), the core evidence base for CBT spanned just four studies, while the section on GET contained just five RCTs. There was a small number of other studies that examined modified versions of CBT, or interventions that combined GET with drug treatments, but by definition these trials did not test the effects of CBT and GET per se.

The authors of the Appendix reported that the unadulterated RCTs showed positive effects for CBT (and GET), although not in every case. And that was it. They provided no discussion of potential limitations of these studies, other than to declare them to have had high scores for “validity”.

What made this rather slim discussion especially intriguing was its provenance. The work was prepared by a group of researchers from the University of York’s Centre for Reviews and Dissemination. The intriguing part is that back in 2001, these very same authors published a much more thorough review of exactly the same evidence base. This earlier review was not exactly obscure: it appeared in JAMA, the flagship journal of the American Medical Association.

Unlike in their Appendix for NICE, in JAMA the York Reviewers elaborated on several caveats to the so-called evidence base:

  • The York Reviewers noted that the studies had high dropout rates, raising the prospect of survivor bias — a tendency for apparent improvement effects to be artificially inflated by the fact that the analysed datasets were likely to contain only those participants who found CBT or GET beneficial.
  • The York Reviewers criticised the use of subjectively measured treatment outcomes. They explained that it was “unclear” whether improvements in such measures amounted to actual improvements in real life. They specified that any tangible record of life function — such as post-treatment employment status, school participation, or physical activity — would have offered much better insights as to whether the interventions were truly beneficial. They bemoaned the fact that the studies they reviewed had lacked such measures.
  • The York Reviewers specifically identified a problem with the use of “self-reported function” as a measure of illness improvement. They pointed out that a patient might rate themselves as able to function better simply because their expectations have declined, or even plummeted. In other words, their frame of reference might have shifted downwards. This is especially relevant because such a shift in frame of reference is exactly the target of CBT treatment. When patients say they improve after CBT, it might just be a sign that the CBT taught them to attach higher ratings to their own poor health. Once again, the York Reviewers bemoaned the fact that in the studies they had reviewed, precisely this type of problematic measure was used as standard.

To be fair, the York Reviewers were quite polite in their JAMA paper. They did not rebuke the researchers; they merely drew attention to the large number of damaging flaws in their studies. However, their conclusion was pretty clear. Given the various caveats that characterised this research, they declared: “All conclusions about effectiveness should be considered together with the methodological inadequacies of the studies.” The best they could say about CBT and GET was that the relevant studies showed “promise.”

Strangely, this focus on caveats disappeared by the time the York Reviewers wrote their Appendix for NICE. This was despite the fact that their assessment of the research for NICE was based on the same studies covered in their JAMA article. The text they prepared for NICE contained no caveats or discussion of study limitations. As a result, it drew attention only to the purported treatment improvements that were reported by the original researchers. It was as if they now had no concerns at all about high dropout rates, subjective outcome measures, or the inherent ambiguity of self-reported improvement.

One wonders what brief the York Reviewers were given by NICE at the time.

* * *

This is what it all comes down to: to appreciate the significance of the 2021 NICE guideline for ME/CFS, you must see it in the context of what preceded it. The new guideline is not just different to the old one. It is superior.

Here’s a quick comparison.

In the 2007 NICE guideline:

  • CBT was promoted as a treatment for ME/CFS — on the basis of 4 RCTs
  • GET was promoted as a treatment for ME/CFS — on the basis of 5 RCTs
  • The limitations of these studies were largely ignored by the NICE guideline committee

By contrast, in this year’s NICE guideline (as apparent from the draft):

  • GET is removed entirely as a treatment for ME/CFS
  • CBT is no longer offered as a “treatment” for ME/CFS, but only as psychological support for people’s mental health and well-being
  • The recommendations draw on a far more robust evaluation of a vastly broader evidence base

That is why the 2021 update departs so radically from what was produced back in 2007.

Claims that “it is unclear how the evidence became unsupportive” are simply bogus. The evidence was never really “supportive” to begin with.

Journalists, please take note. When covering this story, the questions should not be about how the new NICE guideline came to be so different from the old one.

The real question is: How exactly was the old guideline ever approved in the first place?

To successfully understand the future, first you must investigate the past…
