What’s the most cited academic paper on complementary and alternative medicine (CAM)? Is it an evidence-based trial demonstrating the efficacy of a particular therapy? Is it a systematic review of a collection of efficacy literature? Is it a paper that explains a single major CAM modality, such as acupuncture or chiropractic, or one that explains them all? Or is it a treatise on how biological mechanisms (such as the placebo effect) can help explain how such therapies appear to work? Actually, it’s none of these. The most cited CAM paper in history doesn’t tell us how CAM works, how effective it is, what its limitations might be, or even what it actually comprises – instead, it tells us how popular CAM is. Yes, that, and how much it costs. In other words, it’s a paper about the marketing and commercial dimensions of the CAM industry in the US. And it appears in one of the most widely read medical journals in the world.
Now you might expect that a heavily cited paper must be a terrific one to consult, given that so many wise academics have found it to be such a useful source. For example, its methodology and conclusions must be pretty robust, and its take-home message impressively reliable, right? Right? Well, no, not really. Despite the fact that its findings are regularly cited, they are so wholly unreliable as to be highly ambiguous, if not downright misleading. The reason it gets cited so much has little to do with academic rigour or scientific validity. It gets cited because it makes the right point from a partisan perspective: namely, that CAM is just so hugely popular that everybody is using it!
The paper — “Trends in alternative medicine use in the United States, 1990-1997: Results of a follow-up national survey” — was published in JAMA: The Journal of the American Medical Association, by academic David M. Eisenberg and colleagues in 1998. It doesn’t really matter which academic search engine you use to measure citations: it always comes out on top. Entering the term “alternative medicine” into Google Scholar returns around 106,000 hits: the Eisenberg paper is the most cited with 4,803 citations (as of today), with the next ranked paper attracting 2,001 citations, and only one further paper attracting more than a thousand citations. Elsevier’s Scopus database is more discerning (it only searches among peer-reviewed technical journals) and returns 28,766 articles with “alternative medicine” in the title, abstract, or keywords. Nonetheless, Eisenberg comes out on top with 3,679 citations. The even more restrictive ISI Web of Science paints a similar picture. Of 6,000 articles listed with “alternative medicine” as a formal topic, the Eisenberg paper comes top again, with 2,998 citations.
Based on a telephone survey of 2,055 US adults, the Eisenberg group famously reported a near doubling of total visits to alternative medicine practitioners from 1990 to 1997, with around 40% of the American population said to be using CAM treatments at the time of publication. The vast majority of citations of this paper take the form of two claims: (a) that the proportion of Americans using CAM (40%) appears greater than that using mainstream medicine; and (b) that, by extrapolation, out-of-pocket spending on alternative medicine exceeds out-of-pocket spending on mainstream treatment. However, both claims are just plain wrong.
One of the biggest problems in this study is that the researchers adopted an extremely broad frame of reference when attempting to categorise particular activities as types of alternative medicine. The researchers asked their study participants to report the number of visits they had made to a professional for a health problem, and to describe the type of therapeutic service received. You should bear in mind that the telephone interviewers never mentioned that the survey was aimed at investigating complementary or alternative therapies. The researchers then crunched the data so that mentions of CAM remedies were isolated and the numbers of visits leading to such recommendations were counted.
The first thing to notice is that while 30.5% of total visits reported were to a provider of chiropractic, the second through fifth most frequent remedies presented were massage (18.1% of reported visits), relaxation (16.4%), self-help groups (12.7%), and commercial diets (4.4%). However, including services like relaxation and self-help groups as examples of CAM is controversial. Doing so means that the number of visits counted as leading to recommendations of CAM will include respondents who were simply advised to “relax” or to attend a self-help group by a conventional physician for orthodox biomedical reasons. Many conventional physicians will make such recommendations even if they are strongly opposed to CAM on professional and philosophical grounds. The point is that these practices do not constitute CAM by any reasonable definition.
While not quite going as far as counting listening to a car radio as CAM, the Eisenberg team do stretch their definition very loosely when they include “relaxation” as a CAM modality. In fact, it could have been worse. They initially intended to include “self-prayer” (i.e., praying to a deity for better personal health) in their definition of CAM (they considered classifying it as a type of “relaxation technique”), but decided to exclude it from their final analyses. Presumably, the sheer prevalence of praying (35.1%) would have skewed their findings to a point where the absurdity would have been unconcealable.
It is interesting to note that when respondents reported visits for therapies that were conspicuously CAM, the reported rates were extremely low: only 1.7% of reported visits were for herbal medicine, 0.9% for acupuncture, and 0.3% for homeopathy. In other words, the Eisenberg study doesn’t tell us that CAM is popular; rather, it tells us that activities like relaxation and commercial diets are popular, but ordinary run-of-the-mill CAM — things like homeopathy, herbal medicine, acupuncture, and so on — is actually surprisingly unpopular.
The second major problem with the Eisenberg report is that the results were reported in terms of the number of visits for CAM, rather than the number of users of CAM. Crucially, the respondents who visited CAM providers did so repeatedly, and to a far greater extent than those visiting mainstream doctors. Of course, this isn’t stated explicitly in the Eisenberg report; you need to extract the conclusion yourself from Table 3 of their article. Extrapolating the statistics across the entire US population, over 629 million visits to CAM providers took place in 1997, but just 386 million visits to mainstream physicians. However, looking at people rather than visits shows that 45.5% of the sample visited mainstream doctors compared to 25.1% who visited CAM providers (both figures include the 13.7% who reported visiting both). The data do not show that “more Americans visit CAM providers”; in fact, they show the opposite.
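Because the same people account for many visits, the per-user visit rate can be recovered from the paper’s own figures, and conveniently the population size cancels out of the ratio. A quick sketch, using the numbers quoted above:

```python
# Figures quoted above, as extrapolated from Table 3 of Eisenberg et al. (1998)
cam_visits = 629e6   # estimated visits to CAM providers in 1997
doc_visits = 386e6   # estimated visits to mainstream physicians in 1997
cam_users = 0.251    # fraction of the sample who visited CAM providers
doc_users = 0.455    # fraction of the sample who visited mainstream doctors

# More total visits to CAM providers...
assert cam_visits > doc_visits
# ...but fewer actual people visiting them.
assert cam_users < doc_users

# Visits per user: divide each visit total by (user fraction x population);
# the unknown population size cancels when we take the ratio.
ratio = (cam_visits / cam_users) / (doc_visits / doc_users)
print(f"CAM users averaged {ratio:.1f}x as many visits as physician users")
```

In other words, on these figures the average CAM user made roughly three times as many visits as the average physician user, which is exactly how a minority of users can generate a majority of visits.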
Finally, the extrapolation to the American population at large is highly dubious. Counting the various respondents who were excluded from the study during the sampling process, the absolute response rate was 49.3% (2,055 completed interviews out of 4,167 contacted). When you also count the 481 respondents who didn’t speak English and the 878 who weren’t at home when the researchers phoned, the response rate falls to 37.2%. And even if everybody had been included, or had been at home at the right time, the total sample would still have amounted to just 0.002% of the US population.
Given all these flaws and limitations, the standard claims for which the Eisenberg paper is cited appear extremely over-hyped. The reality is that these findings are cited because of their sensationalist nature, not because of their meaningfulness. And they continue to be cited again and again, at a breathtaking rate: according to Google Scholar, the paper has been cited over 340 times in the past year, nearly once per day.
As described elsewhere on this blog, claims about the popularity of CAM are very frequently exaggerated, and the methodologies employed to cobble together supportive evidence are almost always substandard. But this doesn’t seem to affect the frequency with which these data get cited. Authors, reviewers, and journal editors really need to become more discerning about the types of citations that find their way into academic papers. In particular, the Eisenberg report ought not to be cited uncritically as demonstrating its own hyped-up conclusions. Rather than bolstering an argument, citing Eisenberg only succeeds in perpetuating misinformation. It adds error to the scientific literature and exposes the public to exploitative marketing pitches.
After all, if you are unable to cite convincing evidence for the medicinal efficacy of your therapies, what would be better than telling the world that “Everyone else is doing it”?
Brian Hughes is an academic psychologist and university professor in Galway, Ireland, specialising in stress, health, and the application of psychology to social issues. He writes widely on the psychology of empiricism and of empirically disputable claims, especially as they pertain to science, health, medicine, and politics.