“The problem may well be that some of our treatments are too evidence based”

Whether to laugh or to cry, truly that is the question.

Professors of psychology can be a strange breed. While some are blind to their own faults, others are obsessively self-critical. Perhaps this is why the field of psychology has been described as being “in crisis” since (at least) 1897.

Concerns about the robustness of psychology research have been discussed for over a century. Wundt was lauded as a pioneer of experimental psychology, but eventually saw his own work repudiated for, among other things, its lack of replicability. Watson launched the behaviourist revolution on the claim that subjective methods were irretrievably flawed. In the 1970s we had the “crisis in social psychology,” where sceptics pointed out the fatal limitations of trying to generalise from research findings to real life.

We have also had decades-long discussions of psychology’s low statistical power, its misuse of hypothesis testing, publication bias, demand characteristics, fashionable obfuscation, fabrication, Questionable Research Practices, and, of course, the replication crisis itself.

One would have thought that professors of psychology would be totally immersed in a professional discourse highly sensitised to the steep challenges involved in conducting effective research, and wary of the many pitfalls that cause so much of our work to be riddled with shortcomings.

After all, as professors, their job is to “profess” on behalf of their field. They are supposed to know about these things. Their students certainly do: all these issues are customarily explained to psychology undergraduates on degree programmes around the world. This is core subject matter.

But it seems that some professors of psychology are less able to profess than others.

Colleagues on the Science For ME forum drew my attention to this little gem of a passage — removed from the internet just last month, but preserved for posterity by the Wayback Machine — which had been posted by an esteemed psychology professor on the website of an illustrious UK university (emphasis added, but original typos preserved):

Much has been made of the fact that when badly “administered”, Cognitive Behavioural Therapy can be less than helpful in the management of Chronic Fatigue Syndrome…there was a recent patient conducted survey which showed a high degree of dissatisfaction with implementation of this treatment. However, we professionals would like to minimise this finding, we cannot ignore the fact that there is a very real problem…

The problem may well be that some of our treatments are too evidence based. Thoroughly convinced as were are by the evidence, we all too easily fall into a repetitive giving of advice…What this fails to address as an approach, are the very real reasons why a client may be sceptical or anxious; their wariness of yet more professional advice…

Stop right there — too evidence-based, you say?

Holy smokes, I think they actually believe it.

They fully acknowledge that these “evidence-based” therapies are widely seen as useless by patients, and are concerned that this is making people “wary” of their advice. Their solution to this conundrum is not to question the quality of the alleged underlying “evidence”, but to blame clinicians for implementing the therapies too trustingly.

The problem they see is not bad evidence, but bad interpretation of the evidence. When they say the therapies are “too evidence-based,” what they mean is “too evidence-based for some of you clinicians to get your unsophisticated heads around.”

The problem, as always, is other people. Clinicians who rigidly adhere to the treatment manual are being faulted for not ditching it in favour of their own judgement. If anything, sticking to the manual is letting the side down. It is making the so-called “evidence-based” treatments look bad.

Of course, we now know that the therapies this professor was talking about are not actually evidence-based at all. After an independent review of the associated research, the UK’s National Institute for Health and Care Excellence concluded that the quality of this alleged evidence was so poor as to have no standing in clinical terms. For this reason, NICE has now formally dropped these therapies from its official treatment guideline.

It is quite clear that to call these treatments “evidence-based” suggests a loose understanding of the facts.

But to refer to them as “too evidence-based” is on a whole different level of absurdity.

Such a phrase suggests that the writer doesn’t even know what the word “evidence” actually means — or that they know perfectly well, but are so committed to preserving their own status that they don’t care a damn.
