Academic publication is increasingly a performance art. So how effective is its quality control?
– AN ESSAY –
The sheer stampede of COVID-19 research has drawn mainstream attention to the issue of quality control in scientific publishing. Even long-established major journals seem more inclined than ever to carry content of dubious quality. For several reasons, the situation seems especially parlous in research that straddles the behavioural sciences and medicine.
Apropos of all of this, here is an essay extracted from my book, Psychology in Crisis.
It might not seem obvious, but in many respects professional academia is a lot like the music business.
Talented practitioners spend professional time reducing hundreds of hours of toil to an end-product that takes a few minutes to consume. They strive for their output to be purchased, downloaded, and (ideally) appreciated. But even if it is ignored, a true rock star must keep on keeping on. They work on several projects at once, aiming for prolific productivity over time. Collaboration is the key to success (now and again a featured rapper might be called upon to bolster the gravitas of an otherwise forgettable track).
Some outputs will be unexpected triumphs, others out-and-out duds, but each will stand or fall on its own merits.
Yesterday’s hits can be a source of nostalgia, but today’s street cred requires new material, critical acclaim, and ongoing chart success.
The music business is highly competitive. Few performers achieve the success they envisaged in their dreams. Most are left to ponder what might have been. Obscurity is not so much a risk as a reasonable level to aim for. Churning out enough middle-of-the-road product to keep food on the table is all that matters in the end. Everything is much less glamorous than it appeared from the outside.
Nonetheless, the show must go on…
Professional academics go through all these experiences, but with journal articles instead of pop songs. Their energy is expended designing new studies, garnering resources to conduct them, analysing the resulting data, writing the relevant papers, and submitting the stuff to journals. Little of this work is itemized on a paid-by-the-hour basis. Rather, it is absorbed into daily life, whenever and wherever it can be fitted in. Ultimately what counts is the success of the end-product. Formally and informally, the fortunes of academics are determined by research outputs. By their papers shall you know them. Reputations, promotions, introductions, and opportunities all flow from scholarly productivity.
In university life, the slippery pole can be climbed in many ways: serving time on committees; navigating, or cutting through, bureaucracy; learning to decode criteria; knowing the right people; or – let’s not forget – being an effective teacher. However, not all promotions are the same. The truest cachet attaches to those who are elevated on the basis of research. Nobody became an academic star by being good at chairing meetings. When a keynote speaker is announced, their bio will not tell us how many hours per week they teach.
What counts as currency in this crazy, mixed-up, rock ‘n’ roll world is volume.
How many peer-reviewed publications have you managed to churn out?

* * *
The entire academic journal industry owes its origin to status anxiety in academia. Before the seventeenth century, scholars and scientists published their ideas in books. Books were difficult to manufacture and slow to produce. If you wished to report something, your audience would have to wait to read about it. The delay meant that one of your rivals might get there first. If another scientist made the same discovery – but had better access to printing – they could publish their work before yours was ever read.
Nobody would care that you were really the first with the finding. History would not even notice you.
Journals were established to deal with this problem. By containing works of several scientists, curtailing the length of articles, and appearing frequently throughout the year, they offered a rapid and accessible way to disseminate discoveries. More importantly, they helped scientists establish primacy: they recorded who was first to produce particular findings. So it was that journals were invented to ease the angst of academics who feared falling behind.
Today, academic journal publishing is an enormous commercial industry, largely operated by private corporations. The market for science journals alone is believed to be around €10 billion per year. Currently, 28,000 English-language journals publish 50,000 articles per week. According to the APA’s PsycINFO database, around 2,000 journals publish psychology research. If those journals produce articles at the average per-journal rate, that works out at roughly 3,500 psychology articles per week, or around 500 each day.
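For the curious, here is a minimal sketch of that back-of-envelope arithmetic. It simply recomputes the estimate from the rough figures quoted above, which are approximations rather than precise counts:

```python
# Back-of-envelope estimate of daily psychology article output.
# All figures are the rough approximations quoted above, not exact counts.

total_journals = 28_000      # English-language academic journals, all fields
articles_per_week = 50_000   # articles published per week across those journals
psych_journals = 2_000       # journals indexed as publishing psychology (PsycINFO)

# Assume psychology journals publish at the average per-journal rate.
psych_share = psych_journals / total_journals        # about 7%
psych_per_week = psych_share * articles_per_week     # about 3,570 per week
psych_per_day = psych_per_week / 7                   # about 510 per day

print(f"Estimated psychology articles per day: ~{psych_per_day:.0f}")
```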
The vast bulk of these are written by status-anxious academics.
Each year, hundreds of thousands of researchers, most of them public servants, expend millions of pro bono hours producing material for journals to publish. They do this because, in one form or another, they want to advance their careers. For many, it’s a simple matter of clocking up a paper-count: the higher the number, the higher their score in the system that determines their rank.
True, for other academics, publication in journals is a means to a different end: a way of sharing knowledge, solving problems, or advocating for positive change. However, even these colleagues will want their names on the papers. It matters to them that the world knows it is they who have authored their work.
They may not be hungry for promotion, but they still infer status from publication.

* * *
Journal publishing is a highly profitable business. Subscription fees are high, and yet most of the production work is done by volunteers. Authors submit papers to editors, editors allocate submissions to associate editors, associate editors recruit peer reviewers, peer reviewers review submissions and return comments, associate editors pass these comments back to authors, and authors then re-write their papers. For nearly every journal these tasks are done for free; nobody gets paid.
In fact, for an increasing number, the opposite is the case. Should a paper be deemed good enough to be published, then instead of being offered a royalty, the author will receive a bill. An ‘article processing fee’ is required to ensure the paper sees the light of day. Around ten per cent of journals operate such pay-to-play systems, where the author must cough up a thousand or so dollars or else ditch the paper from their résumé.
Each year more and more new journals are launched, the majority on a for-profit basis. Given that most private corporations run their businesses on commercial terms, there inevitably arises a question about how – or whether – quality control is ensured.
At face value it seems that a worldwide public education sector is subsidising a worldwide private publishing industry. Universities pay for bulk subscriptions using taxpayers’ money, while public servants supply the content for free, sometimes even paying a fee for it to be published. The incentives in this industry are for quantity, not quality.
In psychology, as for most sciences, academic journals are integral to the production and dissemination of knowledge. New research is not considered credible until it appears in a journal. Journals are the gatekeepers of the field’s accumulating content. They determine what is and is not psychology. The merit of the field is shaped by their policies and practices.
Therefore, if psychology really is in ‘crisis’, we certainly need to talk about journals.
* * *
Quality control in academic journals is managed through the peer review system. Each submitted paper is assigned to an associate editor, who co-ordinates the process.
Their first task is to find some peer reviewers. This usually involves searching online for academics who are knowledgeable in the requisite subject area. Often several must be contacted before any agree to serve. Once two or three are recruited, each will read the submitted paper and write a report for the journal.
They can reject the paper immediately or recommend it for further consideration. If the latter, their reports will contain suggestions as to how the paper might be improved. The authors are sent these reports and invited to reconfigure their paper to take account of the reviewers’ recommendations. The paper is then re-submitted and reviewed all over again.
There is no guarantee of eventual publication: if the modified manuscript is unsatisfactory, the reviewers will return it with fresh reports. Most papers undergo multiple such rewrites. A large number are still rejected in the end.
Rejection rates vary from journal to journal. Prestigious journals might turn down ninety per cent of submissions. A journal considered reasonably high quality could reject seventy-five per cent. Journals that charge ‘article processing fees’ usually have lower rejection rates, but will still dump around a third of what they receive.
In principle, the system should work: it is blind, expert, conscientious, and balanced. Getting published is certainly not easy.
However, practice does not always emulate principle. In practice, the peer review system is not really blind, not really expert, not really conscientious, and not really balanced.
A large portion of submissions are rejected in the name of quality control. However, the accuracy of the cull is open to question.

* * *
There are several reasons why the system is not really blind.
Reviewers regularly realise whose papers they are reviewing. They recognise the methods of research, the laboratories described, the writing styles, or the ideas expressed. A frequent giveaway lies in the manuscript’s bibliography: authors often over-emphasise their own research when explaining new studies, creating clusters of conspicuous self-citations.
Knowing whose work you are reviewing strains your objectivity. You might be sympathetic towards authors you know, and cagey towards ones you’ve never heard of. You might allow grudges or rivalries to cloud your judgement. You may succumb to a range of unconscious biases prompted by the university or country the authors hail from.
For such reasons, many journals dispense with blinding altogether: reviewers remain anonymous, but author names are included with manuscripts. Editors argue that, as blinding doesn’t really work, it is better to standardise the way that authors’ identities become known.
The peer review system also falls short on expertise.
Reviewer expertise cannot be assured because it is not closely assessed. Journals rely on a range of dodgy indicators of reviewer knowledge. One example is publication record: reviewers are assumed to know a subject if they have previously authored (or co-authored) papers relating to it. However, co-authorship can accrue for many reasons other than subject-area knowledge.
Some journals try to divine expertise from reviewer self-report. On the journal’s website, potential reviewers are invited to tick boxes indicating their chosen specialist subjects. This approach is reliant on said reviewers having sufficient personal insight to recognise the limits of their own competence, a quality not shared widely among the human race in general.
Another idea has been to ask authors to nominate their own reviewers. Perhaps inevitably, unscrupulous researchers have been known to try to game such systems. Some nominate their friends. Others invent fake reviewers, or provide fake email addresses when nominating authentic ones.
Ultimately, the peer review system does not necessarily ensure that reviewers have appropriate expertise. Many submissions – possibly the majority – are reviewed by non-experts.
There is also the matter of reviewer diligence: the peer review system is not inevitably conscientious.
Nothing guarantees that reviewers will concentrate when writing reviews. If anything, the highly automated, deadline-driven nature of most journals’ reviewer management systems is likelier to provoke irritation, and thus carelessness. The torrent of automatic emails that reviewers receive from journals can seem relentless.
First you are invited to be a reviewer, then reminded you have been invited, then sent a note to thank you for accepting, then reminded you have accepted, then prompted to submit your review on time, then reminded that the deadline is approaching, then told that the deadline has passed, then sent a second reminder, and so on…
While a barrage of emails from ‘no-reply@journal.com’ probably prevents you from forgetting your assignment, you might be sorely tempted to rush the job just to stem the flow of robotic correspondence.
Finally, the peer review system is not that well balanced.
Some papers receive an in-depth and forensic assessment, others receive a lighter touch. Some reviewers concentrate on the overall concept, others get hung up on the smallest details. Some reviewers present their review in one hundred words, others require a thousand to deal with the same manuscript. Some reviewers write in a friendly and collaborative tone, even when rejecting a paper; others are austere and sanctimonious, even when recommending publication.
A lot of variation results from inconsistencies in reviewer expertise and conscientiousness, as we have just discussed. However, much of it stems from the fact that reviewers rarely receive clear or consistent instructions from journals.
Usually there is a permissive atmosphere: reviewers are trusted to decide for themselves how to produce their reviews. While reviews tend to be written in a particular style, this reflects more an evolved academic dialect than an agreed industry standard.
There is no ‘official’ way to write a review, and no ideal or definitive format.
* * *
Psychologists will be well aware of another factor that impinges on peer review: namely, the social context of author-reviewer relationships.
All interactions between authors and reviewers amount to social exchanges characterised by power differentials. When a reviewer asserts their expertise, the author must respond in a way that meets the reviewer’s needs as well as their own.
Given the difference in status between subjugated author and pre-eminent referee, this usually involves much deference and forelock-tugging.
In their cover letters, authors will express effusive gratitude for the blessings bestowed upon them (e.g., ‘We thank the reviewers for their helpful comments, which have greatly improved our paper’). Most will calculate that reviewers have a soft spot for personal flattery (‘We thank the reviewer for their valuable insights’). If the reviewer has recommended a specific point be added, the author will transcribe it verbatim into their revised manuscript, and then draw attention to having done so (‘We thank the reviewer for this most helpful suggestion’).
Should they find the feedback unfair, authors will do everything they can to disguise it. It is much safer to play the role of grateful serf, happy to facilitate the whims of one’s social superior.
This culture of obsequiousness has arisen because it is adaptive. Authors are more likely to succeed if they do these things. For their part, reviewers are just as likely to succumb to this norm of social harmony. When things go well, the reviewer’s amenability is reinforced by the author’s praise, and the author’s praise is reinforced by the reviewer’s amenability. The resulting escalation of mutual flattery between reviewers and authors amounts to an arms race of love-bombing.
In short, publication decisions are influenced by more than just scientific merit. Any belief that peer review filters out flaws and polishes perfection should be dispensed with.

* * *
The warts-and-all reality of peer review can be revealed by a thought experiment:
Imagine that a paper has been submitted to a journal. After it is reviewed, the authors modify the manuscript and resubmit it in revised form. Assuming the recommendations have been acted upon, the paper is ultimately accepted.
However, for the purposes of our thought experiment, it is not actually published. Instead, the final version of the accepted paper is now re-submitted, as though it were a brand new paper, to the same journal. It is assigned to a different associate editor who recruits a new set of reviewers.
We should bear in mind that the version of the paper they read has already been declared fit for immediate publication by the journal’s peer review process. But we can be pretty sure of the result of our thought experiment: the second set of reviewers will pick apart the manuscript, identify shortcomings, and write serious reports that include recommendations for adjustment.
In their minds, the paper will not be fit for immediate publication. Prior to interacting with the authors, the reviewers will invariably assume the manuscript to be deficient. The reviews they write will confirm these self-fulfilling prophecies.
Our thought experiment leads us to a stark conclusion: no paper ever published in any journal would be considered ready for publication were it to be reviewed afresh by new reviewers.
In that sense, no published paper meets the publication standards of the publication in which it is published.
* * *
The entirety of psychology’s subject area content – for example, every substantive point made in a textbook – is judged by the standards of journal peer review. Psychology studies are not considered credible unless they have appeared in peer-reviewed journals.
We might assume that most publishers, editors, associate editors, and reviewers are honest people with high standards who want to do a good job. The problem is not that there is a clear preponderance of untrustworthy people involved. The problem is that bad practices are possible.
If malpractice is not really controlled, then it is not really curtailed. It is impossible to be sure when and where we can have confidence in published research.
Inconsistent reviewer selection, unjudged reviewer expertise, variable reviewer effort, unpredictable reviewing styles, and the social psychology of author-reviewer interactions do not bode well for the rigour of peer review as a system of quality control.
And yet this is the system that controls the quality of psychology.
This essay is adapted from my book, Psychology in Crisis, published by Palgrave.
Brian Hughes is an academic psychologist and university professor in Galway, Ireland, specialising in stress, health, and the application of psychology to social issues. He writes widely on the psychology of empiricism and of empirically disputable claims, especially as they pertain to science, health, medicine, and politics.