In 2013, the European Health Psychology Society convened in Bordeaux for its annual conference. During the conference’s members’ meeting, the editors of the EHPS journals (Psychology & Health and Health Psychology Review) proudly presented the latest performance indicators of these journals. These included the rejection rates, which are typically seen as an indicator of journal quality: over three quarters of submitted articles were rejected. These high rejection rates prompted long-time member Marie Johnston to wonder whether they meant that so many submissions were methodologically unacceptable, or that the EHPS journals were rejecting much high-quality research by EHPS members.

This anecdote touches upon one of the problems currently facing health psychology and other disciplines: the publishing practices that have served us so well in the past are increasingly at risk of becoming obstacles to efficient scientific progress. This mostly has to do with the page limits that publishers historically had to impose on traditional journals: because those journals were printed, the number of pages that could be printed for a given subscription fee was limited. Hence, many submitted articles had to be rejected. Although this was a sensible, even inevitable, measure, the resulting scarcity has incentivized a number of dysfunctional publication practices among authors, reviewers, and editors. Most of these relate to the resulting necessity for journal editors to be very selective about which research is accepted.

Traditional publishing practices

Originally, publishing one’s scientific results in journals was a practical necessity, an innovation that enabled global scientific communication. Nowadays, published articles seem to have become ends rather than means, being used to evaluate researcher quality in grant procedures, in job interviews, and by tenure committees. The impact factor of the corresponding journal is often considered an indicator of the quality of the journal, and is therefore used as a proxy for article quality (although most investigations of an association between impact factor and study quality appear to show no, or even a negative, association; see Brembs, Button, & Munafò, 2013).

The impact factor is the number of citations received in a given year by the articles a journal published in the two preceding years, divided by the number of articles the journal published in those two years: in other words, the mean number of recent citations per recent article. One of the goals of journal editors is to increase their journal’s impact factor. Because strategic article selection is one of the few tools available to editors for this purpose, this incentivizes the selection of articles that are likely to be cited frequently. Some journals formalize this by instructing reviewers to select ‘important’ or ‘novel’ articles (where criteria could also favour ‘rigorous’ or ‘methodologically sound’ articles, regardless of novelty). A consequence of this selection criterion is that some types of research, such as null findings, reports of failed manipulations, replications, and multi-disciplinary studies, are hard to publish. Such studies are sometimes not published at all, regardless of their methodological merits, a phenomenon known as the file drawer problem (Rosenthal, 1979) that contributes to the replication crisis (Open Science Collaboration, 2015).
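Expressed as a formula (a standard definition, with hypothetical numbers for illustration), the impact factor of a journal in year $y$ is

\[ \mathrm{IF}_y = \frac{C_{y-1} + C_{y-2}}{N_{y-1} + N_{y-2}}, \]

where $C_x$ is the number of citations received in year $y$ by the articles the journal published in year $x$, and $N_x$ is the number of articles published in year $x$. For example, a journal that published 100 articles across 2015 and 2016, which together attracted 250 citations in 2017, would have a 2017 impact factor of 250 / 100 = 2.5.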

The page limits employed by scientific journals also force authors to be very concise, so that descriptions of methods, analyses, considerations, decisions, and justifications are often shortened to the point where they preclude accurate replication. In addition to obfuscating researcher degrees of freedom (Simmons, Nelson, & Simonsohn, 2011), this inhibits successful meta-analysis, thereby retarding scientific progress (Crutzen, Peters, & Abraham, 2012; Fuller, Pearson, & Peters, 2013; Peters, Abraham, & Crutzen, 2012).

A need for change

The advent of online-only journals could in theory have changed these practices: once a journal is freed from a printed edition, page constraints are lifted, and as many supplemental materials as desired can be published at negligible cost. Nonetheless, publishing preferences appear to remain unchanged. The proportion of published null findings has been decreasing over time, and this trend has not recently turned around (Fanelli, 2012). Despite many articles diagnosing these problems and pleading for improvement (Ioannidis, 2005, 2014; Nosek, Spies, & Motyl, 2012), either authors remain reluctant to submit, for example, null findings and reports of failed manipulations, or publishers and journals remain reluctant to accept these for publication. Similarly, most journals have not changed their policies to encourage or demand full disclosure from their authors (though some have implemented incentives that appear effective; Kidwell et al., 2016). Thus, we find ourselves in a situation where, on the one hand, an urgent need to change the way our journals operate has been identified, while on the other hand, existing journals appear reluctant to implement the required improvements, even though doing so would not require any technological innovation.

Health Psychology Bulletin is the European Health Psychology Society’s answer to this problem.

The Health Psychology Bulletin model

Health Psychology Bulletin (HPB) aims to address the problems just outlined using a number of strategies. First, HPB explicitly welcomes null findings, reports of failed manipulations, and replications, alongside regular articles. Second, to minimize the likelihood that editors or reviewers reject an article based on study outcomes (e.g. rejecting null findings), HPB employs a two-tiered review process, in which reviewers and editors first receive only the introduction and methods sections, and only after these have been accepted, the results and discussion sections. Note that this procedure is followed for all articles, including those for which the data have already been collected.

In addition, HPB has a strong Full Disclosure policy, which means that authors are very strongly encouraged to publish both a replication package and an analysis package along with their articles. A replication package contains everything other researchers need to replicate the data collection: questionnaires, stimuli, computer task source code, study protocols, and requests and provisions of ethical approval. For experimental set-ups, researchers are encouraged to include photographs or even videos to document their set-up as accurately as possible (and HPB is open to experimenting with novel ways to share details of how a study was conducted). An analysis package contains everything other researchers need to replicate the analyses: the raw data file, the commented analysis scripts (e.g. SPSS syntax files or R scripts), and the generated output files. In both replication and analysis packages, proprietary file formats are avoided where possible in favour of open standards, so as not to marginalize researchers with limited funds. Full Disclosure strongly increases the quality of data syntheses such as systematic reviews and meta-analyses, facilitates learning from each other (e.g. new analysis methods), minimizes redundant activities (e.g. development of similar questionnaires, computer tasks, or stimuli), and facilitates close scrutiny of studies, thereby increasing the likelihood that errors are spotted. In short, HPB aims to shift health psychology from a competitive to a collaborative model.
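To illustrate what a commented analysis script might look like, consider the following minimal R sketch (the file names and variables are hypothetical, and HPB does not prescribe a specific format); it reads raw data stored in an open format, runs a simple analysis, and saves the output, so that all three elements can be shared in the analysis package:

    # Load the raw data, stored as comma-separated values (an open format).
    data <- read.csv("raw-data.csv")

    # Descriptive statistics for the primary outcome variable.
    summary(data$intention)

    # Primary analysis: regress behavioural intention on attitude.
    primaryModel <- lm(intention ~ attitude, data = data)
    summary(primaryModel)

    # Save the output so it can be shared as part of the analysis package.
    sink("analysis-output.txt")
    print(summary(primaryModel))
    sink()

Because the script, the data file, and the output file are all plain text, other researchers can scrutinize or re-run the analysis without purchasing proprietary software.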

In addition to aiming to contribute to solving these problems, HPB addresses another issue that has become apparent at the society’s annual meetings: researchers with limited funds have problems making their research accessible. It is often desirable, and sometimes mandated by funders, to publish in open access journals. Open access journals do not charge subscription fees, instead making articles freely available. The costs of processing these articles, the article processing charges (APCs), are paid by the authors, but are often too high for researchers with limited budgets. To meet this need for an affordable but reliable research outlet, HPB charges only €400 per submission if the first author is an EHPS member (note that regular EHPS membership costs €75, and discounted membership €25), and €500 for other submissions (these fees are valid at the time of writing, but are of course subject to change over time).
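At these rates, incidentally, EHPS membership pays for itself within a single submission: €400 plus the €75 regular membership fee comes to €475, and €400 plus the €25 discounted fee to €425, both below the €500 non-member fee.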

One step further: ethical publishing and the switch from story-telling to documenting

In addition to implementing a two-tiered review process to minimize outcome-based biases in editorial decisions, HPB takes one extra step to eliminate biases in the published literature. The background to this step lies in the ethics of scientific research and is based on the view that collecting data creates an obligation to the scientific community to publish those data. There are three reasons for this. First, once the data have been collected, a scarce resource has been used: participant burden has been incurred. Using up this scarce resource for a given study means that other studies will have a harder time recruiting participants. In addition, public resources such as money and researchers’ time will often have been expended. Given that society has already paid these costs, it seems hard to justify discarding the dataset. Second, for the participants in a study, one incentive to take part in scientific research may be the promise of contributing to scientific progress. By choosing not to publish the dataset, the scientific community elects to violate that promise. Third, as the seventh revision of the Declaration of Helsinki states, “Researchers have a duty to make publicly available the results of their research on human subjects and are accountable for the completeness and accuracy of their reports. […] Negative and inconclusive as well as positive results must be published or otherwise made publicly available.” (World Medical Association, 2013, p. 2194). Thus, ethically, there are very good reasons to publish all data once collected, but of course this does require a journal that accepts ‘negative and inconclusive’ results as well.

In addition to these ethical obligations, there are two more reasons why it is important to publish data regardless of our current assessment of the methodology. The first lies in the premise that scientific progress generally requires systematic reviews and meta-analyses (see e.g. Cumming, 2013). Single studies often lack the power to achieve sufficiently narrow confidence intervals (Maxwell, Kelley, & Rausch, 2008), and even an adequately powered study is often constrained to one culture, region, or moment in time, precluding insight into moderation by such factors (Tan, Huedo-Medina, Lennon, White, & Johnson, 2014). Therefore, single studies allow for very tentative conclusions at best, and mostly serve as input for systematic reviews and meta-analyses. It would be presumptuous to assume that we already know what kinds of datasets can be of use in answering the research questions that will be posed in systematic reviews and meta-analyses ten or twenty years from now. It is therefore important to keep as much data as possible accessible for future re-analyses.
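To see why single studies rarely yield narrow confidence intervals, consider a standard statistical result (not specific to any particular study): the half-width of a 95% confidence interval around a mean is approximately

\[ 1.96 \cdot \frac{\sigma}{\sqrt{n}}, \]

so halving the width of the interval requires quadrupling the sample size. With a standard deviation of 1 and n = 100, the half-width is about 0.20; shrinking it to 0.10 requires n = 400. Pooling comparable studies in a meta-analysis effectively increases n, which is why data syntheses can achieve a precision that individual studies usually cannot.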

The second additional reason to publish methodologically flawed research is that flawed research is rarely designed to be flawed. When a flawed study is conducted, it is usually safe to assume that the researchers were unaware of the flaws, and similarly, it is safe to assume that they were not the only ones who considered the study worthwhile. Publishing the reports of these failed studies, including acknowledgement of the flaws and recommendations for preventing them, can serve as an efficient means of enhancing the quality of future studies. Furthermore, such publications, by focusing on key methodological aspects of study designs, foster methodological discussions that may spark methodological innovations.

Thus, HPB goes one step further than accepting all methodologically high quality research. HPB accepts all research.

This policy means, however, that authors, reviewers, and editors have to get used to a different way of writing. Currently, telling a compelling story and presenting the results in a seamless manner are sometimes favoured over completeness and, in some cases, accuracy. In HPB, the story is secondary to the accurate and complete description of the study proceedings. It is important to list everything that went wrong, all errors that were made, and all changes to the plans along the way: this is the only way to learn everything that can be learned from any given research endeavour. Thus, where traditional publishing practices have often incentivized polishing articles until the study seemed virtually flawless, HPB encourages being as frank as possible about what exactly happened. That way, not only the researchers involved, but the entire scientific community, can profit from the emerging practical lessons. This will require authors, reviewers, and editors to approach the reviewing process differently.

At HPB, the reviewing process is envisioned as a collaborative effort to achieve an optimally informative article: documentation of a study, rather than a story selling the results. Ideally, articles are only rejected when, for example, authors refuse to write up their study with complete transparency and honesty, to conduct additional analyses, or to adjust the conclusions they draw (often by toning them down; sometimes by reversing them; in all cases, such adjustments serve to align the conclusions better with the data). Reviewer and editor comments should always be constructive suggestions that enable the authors to improve their paper, such that it eventually becomes as clear as possible to readers how the study was designed, what went wrong (if anything), what can be learned from the data, and what can be learned from the procedure. The focus therefore shifts from the results of a study (for example, from p-values) to the process of conducting the study as a whole. The inherent subjectivity in data analysis is explicitly acknowledged, as is the fact that not all health psychologists are statisticians. The Full Disclosure policy means that other researchers can replicate studies with minimal effort, as well as scrutinize analyses and re-analyse datasets if they so desire. Because HPB aims to implement a post-publication reviewing procedure, findings from such re-analyses can then be attached to the original publications. Thus, HPB’s model of collaboration extends from the authors, editors, and peer reviewers to readers as well. Just as authors are not always statisticians in addition to being health psychology researchers, editors and peer reviewers may make mistakes too, and acknowledging this and facilitating the detection of those mistakes is a necessary step towards remedying them.

In aiming to publish all data, as long as the reports are completely transparent, honest, and aligned with the data, and in adopting collaborative editorial procedures, the traditional role of the editorial process, selecting only what editorial policies prescribe (e.g. excellent, novel, or exciting research), is abandoned. Does forgoing this (perceived) function of peer review as methodological quality assurance mean that there should be no quality control? No, absolutely not. On the contrary: quality control of the reporting is intensified. However, quality control of a study’s methodology should occur before any data collection is initiated, not afterwards. Every researcher has the responsibility to obtain approval from a body that assesses the ethical acceptability of the study (e.g. an ethics committee or an institutional review board) before collecting data (World Medical Association, 2013). To determine whether a study is ethically acceptable, such bodies first need to establish that the study is likely to yield useful outcomes. If a study is poorly designed given its research questions, it is unlikely that those research questions can be answered by that study. If a study is so underpowered that a null result cannot support the conclusion that the sought-after association is unlikely to exist, it cannot answer the original research question either. In such situations, given the scarcity of research participants and the implicit promise that participation contributes to scientific progress, it is hard to justify that such a study would be ethical to conduct. Thus, since these ethical review bodies carry the burden of establishing the methodological and statistical soundness of a study proposal before approving it, elimination of flawed research will increasingly occur at the gates. If flawed research is conducted anyway, then once the data have been collected, it is the responsibility of the scientific community to publish the data. Who knows which research questions we may one day answer with those datasets? And at the very least, this enables us to learn from our mistakes as a community.
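As an illustration of the kind of check such a body (or a researcher preparing a submission) can perform before data collection, consider an a priori power analysis. The sketch below assumes the R pwr package and a hypothetical target effect size; it is an example, not an HPB requirement:

    # How many participants are needed to detect a correlation of
    # r = .2 with 80% power at a significance level of .05?
    library(pwr)
    pwr.r.test(r = 0.2, sig.level = 0.05, power = 0.80)
    # This yields an n of roughly 194. A study with a much smaller
    # sample could neither demonstrate such an association convincingly
    # nor render its existence unlikely, and so could not answer the
    # research question.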

Peer-reviewing research protocols before data collection is also a useful instrument to avoid collecting datasets that are inadequate given the original research questions. Publishing the protocols (the replication package) before data collection has a number of advantages. First, it enables identifying methodological weaknesses and flaws before any participant burden has been incurred. Second, the peer review process enables discussing the design with a number of experts. This increases the benefits of the peer review process: instead of simply identifying methodological weaknesses and flaws, peer reviewers can constructively work with authors to enhance the quality of the dataset that will be generated. Third, publication of the protocol beforehand minimizes the likelihood of Hypothesizing After the Results are Known (HARKing; Kerr, 1998). Fourth, for researchers who fear that competitors will beat them to publishing a given idea, publishing the protocol beforehand allows them to stake their claim, while the published replication package simultaneously fosters collaboration.

If a design has been deemed adequate through peer review before data collection starts, that should always warrant publication of the resulting data, thereby incentivizing this approach for authors. To further stimulate this shift towards peer reviewing the methodology prior to data collection, HPB will explore whether it is possible to offer a discount on publication of introduction and methods sections (plus replication package), as well as a discount on publication of the corresponding results and discussion sections (plus analysis package). That would provide an incentive to benefit from the peer review procedure before data collection even starts. An additional incentive would be that authors would have two publications instead of one, and if this procedure draws authors to HPB, the scientific community would benefit further through the Full Disclosure policy (i.e. the replication and analysis packages would become available, which might not be the case if the authors published elsewhere).

Unblinding reviews

In addition to being open access, strongly encouraging Full Disclosure, using a two-tiered review process, and aiming to accept all research, HPB implements another change from traditional journals. Most journals choose either to employ a double-blinded (or sometimes single-blinded) review process, where the review history remains confidential, or an unblinded review process, where author and reviewer names are disclosed and the review history of an article becomes public upon its publication. The first approach is hypothesized to encourage reviewers to be more honest in their evaluation and criticism of an article, preventing preferential treatment of friends or superiors, or conversely, exceptionally harsh treatment of competitors. There are two drawbacks to this approach. First, once papers reach a certain level of specialization, there are often only a few researchers able to provide competent reviews, and all researchers with that level of expertise are likely to know each other, which may render the blinding illusory. Second, reviewers may abuse the confidential nature of the review process to behave unreasonably, for example if they identify the authors as competitors, or if the study at hand falsifies a pet theory. The second approach, where reviews are open and public, has two advantages. First, publishing the reviews prevents exactly this last risk: public reviews compel reviewers to be reasonable and constructive. Second, publishing reviews enables citing them and lets reviewers take credit for their work. However, the drawback is that researchers may be reluctant to point out flaws in the research of their mentors, superiors, or friends.

HPB will aim to address the drawbacks of both methods by combining the best of both worlds. Reviews are initially double-blinded, but unblinded once a decision is reached. The anticipation of a published review history should promote decency and constructive reviews, while the blinding prevents, insofar as possible, reviews that are unduly mild or harsh (for example, during the reviewing procedure, reviewers do not know whether they are reviewing a superior’s work). In the future, HPB may even explore unblinding and publishing review histories regardless of the editorial decision, which would make writing an elaborate review worthwhile to reviewers even if the article is ultimately rejected. HPB will also work towards making reviews citable, to further incentivize high-quality reviews. Given the recent substantial increases in reviewer burden, such benefits for reviewers seem long overdue (Kovanis, Porcher, Ravaud, & Trinquart, 2016).

Conclusion

As a scientific community aiming to foster health psychology, the European Health Psychology Society (EHPS) publishes a number of periodicals. Health Psychology Bulletin (HPB) is a new addition that fills a unique niche in the EHPS’s journal portfolio specifically, and in the landscape of health psychology journals more generally. HPB implements a number of innovative adjustments to traditional publication practices, designed to accelerate the progress of health psychology science. HPB is open access and promotes full disclosure through publication of replication and analysis packages. In addition, HPB explicitly welcomes null findings, reports of failed manipulations, and replications, as well as regular contributions, in fact striving to publish all conducted studies in a manner that maximizes the lessons that can be learned from them. Finally, HPB implements a double-blind peer reviewing procedure that is unblinded once an article is accepted. Review histories will receive a Digital Object Identifier (DOI) and thus be citable, providing an additional incentive for reviewers to write high-quality reviews, and HPB will strive to enable post-publication peer review. In addition to implementing these innovations, HPB is designed to be a flexible journal: its editorial policies are expected to adjust to new insights into how best to publish scientific results, implementing new ways to share knowledge and methods.

We believe that by implementing these innovations, HPB meets a number of needs of the health psychology community in general and the European Health Psychology Society in particular. By contributing to emptying our file drawers, HPB can enhance the quality of our systematic reviews and meta-analyses, decreasing bias and increasing the accuracy of our effect size estimates. In order to achieve this goal, of course, HPB requires submissions. We hope all health psychology researchers who still have data in their file drawers will consider compiling the corresponding replication and analysis packages and making these scientific products public by submitting them to HPB (note that full disclosure is strongly encouraged, but not required: researchers who have old manuscripts and can retrieve only partial replication and analysis packages, or nothing at all, are still encouraged to submit their papers). In addition, we hope researchers will submit their introduction and methods sections and replication packages before starting to collect data, so that their studies can benefit from the peer review procedure while improvements to the methodology are still possible. Finally, in addition to submitting their past work (i.e. file-drawered manuscripts) and future work (i.e. by submitting replication packages before data collection), we hope researchers will henceforth also consider HPB for their present work (i.e. regular articles). By making use of these improved publication practices, we can accelerate scientific progress in our field, and hopefully together realize a much-needed shift from a competitive academic model to a collaborative one.