See no evil, hear no evil…

If you were confused by last week’s media stories about the latest review of UK science research on monkeys, then you are forgiven. The review, commissioned by the major British science funders and chaired by the president of the Zoological Society of London, evaluated 10 years of research involving non-human primates (NHPs) with a brief to establish whether or not such work was justified. Not a terribly vague objective, you might think, although perhaps a complex one. So what did they conclude, exactly? Well, according to some media headlines, NHP research was found to be little more than monkey business. For example, according to the Daily Telegraph, “Most experiments on monkeys have little medical benefit”. Now that certainly sounds bad: the Telegraph are telling us that the majority of NHP studies were medically useless. However, the Guardian presented quite a different arithmetic: “One in ten research projects using monkeys has no benefit, finds review” was their headline. One in ten, eh? That’s quite a lot. But it’s by no means the majority. So what exactly is going on? And are the Review Group’s findings reliable?

The full review report itself can be found here. The reviewers gathered together the published medical literature on NHP studies conducted between 1996 and 2006 that were funded by the major British funding agencies. They then evaluated each study against three criteria: scientific quality of the research; importance or usefulness of the findings; and level of suffering caused to the animals. Some of their reasoning appeared very objective and quantitative. For example, they examined bibliometric data in an attempt to bolster their determination of quality and importance, and they devised a complex comparison metric to help balance the various competing considerations. They even produced a lovely picture of their system, as follows:

[Figure: the Review Group’s three-axis decision scheme, captioned “Cubist bioethics”]
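To get a feel for what a three-axis scheme of this kind amounts to in practice, here is a minimal sketch, in Python, of a toy decision cube. To be clear, this is not the Review Group’s actual method: the three axes follow the criteria listed above, but the 0–10 scoring scale, the threshold values and the pass/fail rule are illustrative assumptions of my own.

```python
# A toy, illustrative "decision cube". The axes mirror the review's three
# criteria, but the scales, thresholds and pass/fail rule are hypothetical
# assumptions of mine, not anything taken from the report.

from dataclasses import dataclass


@dataclass
class StudyAssessment:
    quality: float     # scientific quality, 0 (poor) to 10 (excellent) -- assumed scale
    importance: float  # importance/usefulness of findings, 0 to 10 -- assumed scale
    suffering: float   # level of animal suffering, 0 (none) to 10 (severe) -- assumed scale


def is_justified(study: StudyAssessment,
                 min_quality: float = 5.0,
                 min_importance: float = 5.0,
                 max_suffering: float = 5.0) -> bool:
    """Toy rule: a study sits in the 'acceptable' corner of the cube only if
    quality and importance are high enough AND suffering is low enough.
    The threshold values are arbitrary placeholders."""
    return (study.quality >= min_quality
            and study.importance >= min_importance
            and study.suffering <= max_suffering)


# The same hypothetical study, with two different suffering scores:
print(is_justified(StudyAssessment(quality=8, importance=6, suffering=4)))  # True
print(is_justified(StudyAssessment(quality=8, importance=6, suffering=7)))  # False
```

Even in this toy form, notice where the action is: everything hinges on the thresholds, and nothing in the geometry of the cube tells you where to put them.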

Now one important thing to note about all this is that, notwithstanding the fact that the Review Group comprised eleven members, the system of decision-making described relates solely to an approach devised by the Review Group Chairperson, Patrick Bateson. In fact, it is so closely associated with him personally that in some circles it has become known as “Bateson’s Decision Cube”. Oddly, this is not mentioned in the Group’s report. This is pertinent because it helps to show the influence of the Chairperson’s own perspective on the overall group-level conclusions. And that is pertinent because, although undeniably a very eminent scholar and scientist, Bateson is very much on record as holding views on animal rights that some observers characterize as showing a distinct queasiness about animal suffering. For example, Bateson is unequivocally opposed to the hunting of deer using hounds, despite the fact that his research on the matter produced much the same findings as those of pro-hunting lobbyists, who simply choose to interpret the data differently. There might be nothing at all wrong with being opposed to this type of hunting (as I am myself), but it is certainly worth knowing that Bateson holds this view even though the underlying research data can be interpreted in different ways. Because, ultimately, his approach to these matters is never as conclusive as it appears: the data always leave room for interpretation.

In fact, despite the emphasis on technical nuts and bolts and the various other signifiers of objectivity, final conclusions in this area can only ever be subjective. For example, let’s imagine that a particular experiment was extremely poorly designed but involved only three monkeys. Is that study more or less justifiable than an experiment that is extremely well designed but involved twenty monkeys? What if the first experiment (poorly designed, few monkeys) produced information that helped us to improve cancer treatments, but the latter experiment (better designed, more monkeys) produced information that helped us to understand different types of amnesia? What if the first experiment involved killing the monkeys, but the latter involved only non-fatal surgery? Or what if all of these permutations were re-arranged? The point is that no three-dimensional quasi-cube can tell a reviewer how to balance a measure of scientific quality or medical benefit against a measure of animal suffering. This is because the cube’s dimensions cannot be measured on any common scale: they are incommensurable with one another.
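To make that last point concrete, here is a toy calculation involving the two imaginary experiments just described. Every number and weight below is hypothetical and of my own invention; nothing is taken from the Bateson report. The “exchange rate” simply states how much animal suffering one unit of human benefit is deemed to offset.

```python
# Entirely hypothetical scores (0-10) for the two imaginary studies described
# above. "benefit" is a judged level of human benefit and "suffering" a judged
# level of animal suffering; how design quality, animal numbers and fatality
# get squeezed into those single numbers is itself a judgement call. Nothing
# here comes from the Bateson report.
studies = {
    "cancer study (poor design, 3 monkeys, fatal)":       {"benefit": 9, "suffering": 9},
    "amnesia study (good design, 20 monkeys, non-fatal)": {"benefit": 6, "suffering": 4},
}


def net_score(study: dict, exchange_rate: float) -> float:
    """Toy 'balance': benefit minus suffering, where exchange_rate says how many
    units of benefit one unit of suffering is deemed to cancel out."""
    return study["benefit"] - exchange_rate * study["suffering"]


for exchange_rate in (0.5, 2.0):
    winner = max(studies, key=lambda name: net_score(studies[name], exchange_rate))
    print(f"exchange rate {exchange_rate}: the {winner} comes out as more justifiable")
```

Halve or double the exchange rate and the “more justifiable” study swaps over. Choosing that rate is precisely the subjective judgement that no cube, however nicely drawn, can make for you.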

As a result, the Bateson report presents opinions rather than facts. Unsurprisingly, its descriptions of the findings are verbose and, to a large extent, non-committal. They are specific only for the simplest (and least informative) types of information, such as the number of scientific papers generated by each research grant awarded. However, while such information can be interesting in its own right, it doesn’t succeed in meeting the Group’s original brief: namely, to establish whether the research conducted was justifiable. Instead, that part of the exercise reduces to a series of subjective judgement calls produced by a group of eminent scientists and commentators (with some of the most important judgement calls relating to how the methodology used for the review was designed), who by all appearances are inclined toward accepting the approach of their strong-willed leader.

Oh. So 90% of projects are okay then?

In reporting the story, few if any media outlets challenged the validity of the findings. This is disappointing because there is a real problem emerging in contemporary medical debates, where animal-rights views are often injected surreptitiously into the discourse under the guise of scientific commentary.

Instead, the media attempted to look at what it was the Bateson group had found. And hence the confusion. The Telegraph chose to focus on the ostensibly poor productivity of the literature as a whole, and to emphasize the fact that the reviewers felt that a large majority of studies (c. 90%) produced little more than a modest benefit to science (the Telegraph also distinguished medical benefits from non-medical benefits, which had the effect of reducing the number of “beneficial” studies referred to in its headline). However, the Guardian focused instead on the relatively small minority (c. 9%) of studies that the reviewers declared to be completely useless in any scientific sense. In the end, all this still largely comes down to a person’s own point of view as to whether a particular amount of benefit to humans can be said to outweigh a particular amount of suffering to animals.

Precisely why the Telegraph and Guardian chose their contrasting positions is not immediately clear. On social issues, the Telegraph and Guardian typically adopt right-wing and left-wing editorial stances, respectively. As animal rights activism is often seen as a left-wing cause, it might have been less surprising had it been the Guardian that voiced skepticism toward monkey experiments and the Telegraph that was the more encouraging.

But more simply, it could just be that readership demographics played a part. The median age of Telegraph readers is around 65, with a full two-thirds of readers aged 55 or over. Meanwhile, the Guardian’s readership is much younger: the median age of its readers is less than 44, more than 20 years younger than that of the Telegraph. Given their much older readership, maybe the Telegraph journos are more cautious about alienating animal lovers (older adults being more likely to own domestic pets), and so are naturally reticent when it comes to the issue of animal experimentation.

Or maybe it was all just a fluke: perhaps the particular journalists (or headline writers) at the Telegraph are themselves personally opposed to animal testing, and this principled position affects how science stories are written up there. However, as things turn out, this appears unlikely. Rather, it seems that when it comes to monkey science, the Telegraph prefers to keep its options open. On the Science section of its website, it followed its report on how monkey experiments have been found to be generally useless with an article describing, in excited terms, how scientists may finally have discovered a means by which to reverse age-related memory loss. And how, pray tell, was this wonderful breakthrough achieved? Well, guess what. It involved research. Conducted on animals.

And not just any old animal, you see…

Ahem. Banana, anyone?
