Providers of complementary and alternative medicine (CAM) are always absolutely thrilled whenever real medical doctors profess support for their activities. Such support from authentic physicians constitutes a highly persuasive form of advertising to the wider community. Therefore, the fact that an entire organization of medical doctors is willing to lend its corporate support to such therapies is extremely significant. One such organization is the British Medical Association, which, after 10 years, continues to publish its 2000 booklet, Acupuncture: Efficacy, Safety and Practice. This booklet announces the BMA’s recommendation that acupuncture be considered suitable for medical use, a truly odd recommendation given acupuncture’s very dubious record in empirical trials. In fact, far from backing away from this controversial publication, the BMA has recently made it available electronically via the Amazon Kindle and, as recently as 2010, included it in a list of recommended reading for physicians published by the BMA’s Board of Science.
However, quite apart from anything specific it says about acupuncture (material that is rather flimsy and poorly referenced), the booklet is more frequently cited as the source for a very dubious statistical claim concerning alternative medicine in general; namely, the claim that “over 50%” of British General Practitioners (primary care physicians) refer their patients to providers of alternative healthcare. This assertion doubles the pleasure for the CAM lobby: not only do we see a major organization of real doctors endorsing alternative medicine at the corporate level, but we are also informed that the majority of doctors at ground level endorse it too. As you can imagine, this statistic has been extremely well aired within the CAM community (for example, see here, here, here, here, here, and here). It is a shame, therefore, that it rests on dodgy foundations, is wholly unreliable, and really ought to be withdrawn from publication immediately.
The statistic is described as arising from a survey conducted specifically for the acupuncture booklet. From this survey we are told that 58% of GPs have at one time or another arranged CAM services for their patients; 47% have arranged acupuncture specifically, 30% have arranged osteopathy, and 25% have arranged homoeopathy. Other CAM treatments arranged by the GPs included reflexology, chiropractic, and aromatherapy. Now, it is clear that if you add all these percentages together, you end up with a total far greater than 58%. This can only mean that many GPs have arranged more than one type of CAM service for their patients. (However, despite the pertinence of this point, the extent of the relevant overlaps, ordinarily an elementary descriptive feature of a dataset, is not reported.)
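For the numerically inclined, here is a quick back-of-the-envelope sketch of the minimum overlap implied by the three percentages that are actually quoted; as a simplifying assumption, it ignores the therapies for which no figures are given.

```python
# Back-of-the-envelope check of the overlap implied by the quoted figures.
# Reflexology, chiropractic and aromatherapy are ignored here because no
# percentages are given for them in the booklet.
any_cam = 58  # % of respondents who had ever arranged any CAM service
by_therapy = {"acupuncture": 47, "osteopathy": 30, "homoeopathy": 25}

total = sum(by_therapy.values())      # 102 percentage points
excess = total - any_cam              # at least 44 points of double-counting
mean_per_arranger = total / any_cam   # at least ~1.76 named therapies per arranging GP

print(f"Named therapies sum to {total}%, yet only {any_cam}% arranged any CAM,")
print(f"so at least {excess} percentage points represent GPs counted twice or more")
print(f"(roughly {mean_per_arranger:.2f}+ named therapies per arranging GP).")
```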
If these figures seem a little high, they pale by comparison with what we are told about the number of GPs who employ their own CAM practitioner(s) on-site. The BMA results tell us that a “significant, but small” percentage of GPs (11%) go so far as to give employment to complementary therapists within their premises. However “small” this figure might appear to the BMA, it appears extremely large to me. Extrapolated to the total number of GPs in Britain, it translates to around 4,415 GPs who formally employ CAM practitioners within their practices.
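For what it’s worth, the arithmetic can be roughly reconstructed from the rounded GP totals quoted later in this piece; the inputs below are approximations, and the small gap from 4,415 presumably reflects the unrounded figures used in the original calculation.

```python
# Rough reconstruction of the "around 4,415" figure from the rounded GP
# totals quoted later in this piece; these inputs are approximations.
bma_member_gps = 27_900   # "over 27,900 GPs in the British Medical Association"
non_member_gps = 12_200   # roughly 12,200 further British GPs outside the BMA
all_gps = bma_member_gps + non_member_gps   # about 40,100 GPs in total

on_site_rate = 0.11       # 11% said they employ CAM practitioners on-site
print(round(all_gps * on_site_rate))        # 4411, close to the ~4,415 quoted
```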
However, while these statistics seem truly amazing, your sense of awe might begin to wane when you look more closely at the survey itself. First of all, the survey was confined to GPs who were members of the British Medical Association. Notably, more than 30% of British GPs are not members of the BMA, so at best the researchers had access to barely two out of every three members of their target population. So straight away it would be a little dodgy to present the 58% figure as if it applied to “British GPs” in general.
Secondly, the response rate for the survey was just 56% (and this was after reminders were issued to those GPs who failed to reply within the initial time limit). Consequently, the results reported represent the views of just over half of those who were contacted. This poses severe interpretational problems. Given that participation was voluntary, it is likely that the 56% who responded were precisely those GPs who were most interested in alternative medicine; GPs who felt that alternative medicine is a load of nonsense would have been less motivated to respond and far more likely to put the questionnaire in the bin. Such a process would have built a pro-CAM exaggeration into any findings.
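To see how powerfully such self-selection can distort a headline figure, consider a purely hypothetical sketch. None of the numbers below come from the BMA study; the “true” referral rate and the response probabilities are invented solely to illustrate the mechanism.

```python
# Hypothetical illustration only: neither the "true" referral rate nor the
# response probabilities below come from the BMA study.
sample_size = 650            # GPs mailed the questionnaire
true_referral_rate = 0.40    # suppose 40% of the sampled GPs really refer to CAM

referrers = sample_size * true_referral_rate        # 260 GPs
non_referrers = sample_size - referrers             # 390 GPs

# Suppose CAM-friendly GPs are keener to reply than sceptical ones.
respond_if_referrer = 0.70
respond_if_non_referrer = 0.45

responding_referrers = referrers * respond_if_referrer              # 182
responding_non_referrers = non_referrers * respond_if_non_referrer  # 175.5
respondents = responding_referrers + responding_non_referrers       # 357.5

print(f"Overall response rate: {respondents / sample_size:.0%}")    # ~55%
print(f"Referral rate among respondents: "
      f"{responding_referrers / respondents:.0%} "
      f"(true rate was {true_referral_rate:.0%})")                  # ~51% vs 40%
```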
In fact, we already know that GPs who participate in surveys are not representative of GPs as a whole. And how do we know this? Well, because researchers have examined this very issue in empirical research of their own. One typical study found that GPs who respond to surveys are less recently trained, less qualified, and less likely to be working in a practice with fellow physicians than non-responders. And where was this research published? In the Journal of Epidemiology and Community Health. And who publishes that journal? Erm… actually, the BMA. So essentially what we have here is the BMA promulgating a statistic, widely used as marketing spin by CAM providers, that is based on the kind of sample the BMA itself has shown to be unrepresentative.
In fact, for such reasons, the response rate traditionally considered acceptable before a survey can be published in a medical journal is around 70%. According to one researcher, even 70% is insufficient. This researcher collected data showing that the people who respond first to physician surveys tend to hold views that differ from those of later (or non-) responders. And where did this researcher publish this informative conclusion? In the British Medical Journal. And who publishes that journal? Yes, you’ve guessed it…
Quite apart from response rates, another important feature of any survey is the size of its target sample. While representativeness is certainly important, it is also well recognised that the smaller a target sample (as a proportion of the overall population), the less reliable its findings. The target sample of the BMA CAM survey was strikingly small. Despite the fact that there are over 27,900 GPs in the British Medical Association, plus a further 12,200 British GPs who are not members, the BMA issued its questionnaire to a random sample of just 650 potential respondents. This amounts to less than 2% of the total number of GPs in the United Kingdom. And given that only around half of these ended up taking part, we can safely compute that the BMA gathered the views of no more than 1% of British GPs, a figure that stretches the generalisability of the survey results to breaking point. For example, the 11% who said they employ CAM therapists in their practices equates to just 0.099% of GPs as a whole. Viewed in the same terms, the statistic showing that “over 50%” of British GPs have arranged CAM services for their patients should also be recast: what was actually documented is that 0.52% of British GPs informed the BMA of such activities.
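For readers who wish to retrace this arithmetic, here is a short sketch using the rounded totals above; the small differences from the quoted 0.099% and 0.52% presumably reflect unrounded figures in the original calculation.

```python
# Retracing the percentages-of-all-GPs quoted above, using the rounded totals
# from this paragraph; small differences from the quoted 0.099% and 0.52%
# presumably reflect unrounded figures in the original calculation.
all_gps = 27_900 + 12_200          # about 40,100 British GPs in total
respondents = 650 * 0.56           # about 364 GPs actually replied

employ_on_site = respondents * 0.11   # ~40 GPs employing CAM staff on-site
arranged_cam = respondents * 0.58     # ~211 GPs who ever arranged CAM services

print(f"Respondents: {respondents:.0f} "
      f"({respondents / all_gps:.2%} of all GPs)")
print(f"Employ CAM on-site: {employ_on_site:.0f} "
      f"({employ_on_site / all_gps:.3%} of all GPs)")
print(f"Ever arranged CAM: {arranged_cam:.0f} "
      f"({arranged_cam / all_gps:.2%} of all GPs)")
```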
Now it is true that virtually all surveys involve extrapolating from small samples to the general population. However, there is a particular public relations problem in the present case. The BMA is a large professional organization with many thousands of members. Presumably the BMA is capable of keeping track of its members and, indeed, is in the business of doing so. Thus, when you hear statements along the lines that “the BMA reports that over 50% of GPs provide CAM”, you would be forgiven for assuming that the statistic is drawn from records covering all BMA member GPs (drawn, perhaps, from membership files) rather than from the 1% or so who chose to take part in a survey. This certainly appears to be the implication of most second-hand reports of this statistic.
The extent to which it is scientifically reasonable to extrapolate the survey findings to the overall population of British GPs is limited by the statistical unreliability of generalising from such a tiny target sample to the population at large; and it is damaged further by the self-selecting (and therefore non-random) nature of the participants, the consequent poor response rate, and the restriction of the study to members of the BMA. The need for such a lengthy caveat is particularly unfortunate given that the “over 50%” statistic has cropped up repeatedly, and unquestioned, since its initial publication: in government reports, in publicity materials for CAM advocates, and in respected media outlets. It also continues to provide easy advertising spin for CAM providers who stand to make money out of consumers’ misconceptions.
As CAM providers are not generally renowned for taking a sophisticated approach to the analysis of research data, or for encouraging consumers to question the basis of extravagant claims, the best thing that can be done to protect the public from misinformation would be for the BMA to withdraw this propaganda once and for all.

Brian Hughes is an academic psychologist and university professor in Galway, Ireland, specialising in stress, health, and the application of psychology to social issues. He writes widely on the psychology of empiricism and of empirically disputable claims, especially as they pertain to science, health, medicine, and politics.