
Theoretical/debate paper

Leonardo da Vinci, preregistration and the Architecture of Science: Towards a More Open and Transparent Research Culture

Author: Daryl B. O'Connor, PhD
School of Psychology, University of Leeds, GB

Abstract

There has been much talk of psychological science undergoing a renaissance, with recent years marked by dramatic changes to research practices and the publishing landscape. This article briefly summarises a number of the ways in which psychological science can improve its rigor, lessen the use of questionable research practices and reduce publication bias. The importance of preregistration as a useful tool to increase the transparency of science and improve the robustness of our evidence base, especially in COVID-19 times, is presented. Moreover, the benefits of using Registered Reports, the article format that allows peer review of research studies before the results are known, are outlined. Finally, the article argues that the scientific architecture and the academic reward structure need to change, with a move towards "slow science" and away from the "publish or perish" culture.

How to Cite: O'Connor, D. B. (2021). Leonardo da Vinci, preregistration and the Architecture of Science: Towards a More Open and Transparent Research Culture. Health Psychology Bulletin, 5(1), 39–45. DOI: https://doi.org/10.5334/hpb.30
Submitted on 17 Dec 2020; accepted on 21 Feb 2021; published on 04 Mar 2021.

There has been much talk of psychological science undergoing a renaissance. I wonder if that is what Leonardo da Vinci thought back in the late 1400s: "Today I will apply the principles of linear perspective to my paintings to create the illusion of depth"? Or did he just think: it's time to try something different and make some changes to my working practices? Well, this is the main message I want the reader to take away from this article. If you haven't done so already, it is time to start integrating some open research practices into how you conduct your science, as it will make it bolder, brighter and better than the rest. Okay, perhaps it won't, but it will help increase openness, integrity and reproducibility in scientific research and ultimately improve the robustness of our evidence base.

For many psychologists, the publication of the Open Science Collaboration's (2015) paper estimating the reproducibility of psychological science marked the beginning of the 'open science movement'. This large-scale investigation set out to replicate 100 experimental and correlational studies from three leading journals. The findings were stark: fewer than 40% of the studies were successfully replicated. A myriad of factors, including questionable research practices, has been proposed to explain these low levels of replication: low statistical power, hypothesizing after the results are known (HARKing), p-hacking, the 'garden of forking paths' and failure to control for biases (see Gelman & Loken, 2013; Kerr, 1998; Munafò et al., 2017 for further discussion). Of course, there were a number of important earlier papers that were equally provocative. For example, in science more generally, back in 2005, Ioannidis (2005) published an article entitled "Why Most Published Research Findings Are False". In psychological science specifically, Wagenmakers and colleagues (2011) made the case for "Why psychologists must change the way they analyze their data", and in the same year Simmons, Nelson and Simonsohn (2011) presented their "false-positive psychology" treatise and offered six concrete solutions for improving psychological science. I suppose the year it all began doesn't really matter; what matters is that, perhaps optimistically, we have moved out of the early renaissance and entered the middle renaissance period. We've gone from Giotto to Mantegna, and we have Leonardo, Michelangelo, Raphael and friends in our sights, though they're still far in the distance.
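
To make one of these questionable research practices concrete, the short Python sketch below (an illustration of my own, not an analysis from any of the papers cited above; all parameter values are arbitrary) simulates a simple form of p-hacking: measuring several outcomes when no true effect exists and reporting whichever comparison happens to reach p < .05. With five independent outcomes, the nominal 5% false-positive rate inflates to roughly 23%.

import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_sims, n_per_group, n_outcomes = 5000, 30, 5

false_positives = 0
for _ in range(n_sims):
    # Two groups with no true difference on any outcome (every null is true)
    control = rng.normal(0, 1, size=(n_per_group, n_outcomes))
    treatment = rng.normal(0, 1, size=(n_per_group, n_outcomes))
    p_values = [stats.ttest_ind(treatment[:, j], control[:, j]).pvalue
                for j in range(n_outcomes)]
    # "p-hack": call the study positive if any outcome looks significant
    if min(p_values) < 0.05:
        false_positives += 1

print(f"Observed false-positive rate: {false_positives / n_sims:.2f}")
# Expected: about 1 - 0.95**5, roughly 0.23, despite a nominal alpha of 0.05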

To be fair to health psychology, as a sub-discipline it has been an early, active agent of open science practices. Health psychologists have been leading the way in a number of respects, perhaps due to the field's closer relationship with medicine, where reporting and registering of trials have been more common. For example, for quite some time they have been preregistering their randomised controlled trials, behaviour change interventions and experimental work in relevant repositories (e.g., https://clinicaltrials.gov/; https://www.isrctn.com/), as well as preregistering their systematic reviews and meta-analyses (e.g., https://www.crd.york.ac.uk/PROSPERO/). It is normal practice for health psychologists to follow the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA; http://www.prisma-statement.org/) and the Consolidated Standards of Reporting Trials (CONSORT; http://www.consort-statement.org/) guidelines. However, there is certainly room for further transparency in relation to the reporting of interventions, and health psychologists are urged to use the Template for Intervention Description and Replication (TIDieR) checklist and guide (Hoffmann et al., 2014).

One area where psychological science, and health psychology, can improve further is in relation to preregistration. The importance of preregistering clinical trials was very clearly demonstrated by Robert Kaplan (a health psychologist and past editor of Health Psychology) and Veronica Irvin in a notable paper (Kaplan & Irvin, 2015). These authors evaluated whether the number of published null results increased over time in US National Heart, Lung and Blood Institute (NHLBI) funded clinical trials. In particular, they were interested in exploring whether the introduction of registration for large clinical trials on clinicaltrials.gov around the year 2000 influenced the significance of study results. Their findings were clear-cut and rather concerning: 57% of studies published prior to 2000 reported beneficial intervention effects on the primary outcome, compared to only 8% of trials published after 2000. Numerous factors may account for these findings, although industry sponsorship was not one of them, nor was the increased use of active comparator conditions. Interestingly, these findings also cannot be explained by the 'file drawer' problem, whereby negative and null findings are less likely to be published than positive, statistically significant findings, as the opposite is true here. Kaplan and Irvin (2015) argue that the year 2000 marks the beginning of a natural experiment in which greater constraints were placed on authors in relation to the reporting of their clinical trial results: all large-scale NHLBI clinical trials were required to declare their primary outcomes prospectively, prior to publication. These findings therefore suggest that the prospective declaration of outcomes, together with increased transparency in reporting standards, may explain this rather dramatic shift. Moreover, they point to the importance of preregistration as a useful tool to help improve the reliability, transparency and robustness of this particular evidence base.

Another exciting development relating to preregistration comes in the form of Registered Reports (https://osf.io/rr/). The aim of this relatively new type of article is to improve rigor, reduce publication bias and increase the transparency of science by allowing peer review of research studies before the results are known. At its simplest, the process involves five stages, punctuated by two rounds of peer review (see below):

Develop an idea >> Design a study >> Collect & analyse data >> Write report >> Publish report

Once the researcher has developed an idea and designed their study, including details of measures, sample size and inclusion/exclusion criteria, and has developed an analysis plan, they submit the "Introduction" and "Method" for peer review. This is known as a Stage 1 Registered Report; it will be reviewed by two or three reviewers, and an editor will make an editorial decision in the normal way based on the peer reviews (i.e., "reject" outright or invite a "revise and resubmit"). The key difference from the conventional peer review process is that you do not commence data collection until your Stage 1 Registered Report has been accepted, or has received what is known as an In Principle Acceptance (IPA). Crucially, once the data are collected, the full registered report will be accepted for publication irrespective of the significance of your findings. The latter is important, as it will also help reduce the publication bias that favours statistically significant effects. It is also worth noting that a Stage 1 Registered Report with an IPA would only ever not be published if there were failings of quality control, a major deviation from the registered protocol or some unsolvable problem in reporting clarity or style (see https://osf.io/rr/). Moreover, once you submit the full paper containing the results and discussion, it undergoes a Stage 2 peer review, thereby subjecting your paper to further quality control and giving you the opportunity to respond to additional constructive feedback.

Therefore, taken together, it is hoped that this new publication format will help reduce the use of questionable research practices while improving the quality of our research protocols. Numerous health psychology journals have introduced registered reports as a new article type, including this journal, Health Psychology Bulletin, as well as Psychology and Health, the British Journal of Health Psychology, and other leading health psychology journals (see Peters et al., 2017 for a summary of the open research approach adopted by this journal). As we've written elsewhere (see Norris & O'Connor, 2019), despite the growing number of journals that offer registered reports, uptake has been rather slow. Anecdotal feedback suggests that the main barriers relate to lack of awareness, concerns about "stifled creativity", worries about being "scooped" and resistance to changing existing working practices. Another often (informally) cited barrier is that "registered reports are a lot more work and they take too long". When one hears these concerns, two thoughts come to mind. First, it is important to remember that this new format is front-loaded, whereby the researcher undertakes a large amount of the work in advance of data collection, and ultimately in advance of publication. Second, as you'll see below, a reward structure that has prioritized "fast science" has helped, in part, to get us into this replication and reproducibility mess in the first place. Nevertheless, it is also recognised that there are still many cases in which integrating registered reports into a research programme is not possible (e.g., tight grant timelines and research student deadlines); however, as outlined below, preregistration is still a very viable and important option.

These issues notwithstanding, the uptake of registered reports is growing steadily (see Chambers, 2019; Hardwicke & Ioannidis, 2018), and there is emerging evidence to suggest that their introduction is beginning to reduce publication bias. One of the first evaluations of the impact of preregistration and registered reports on the scientific literature was performed by Allen and Mehler (2019). These authors examined 113 registered reports in the biomedical and psychological sciences compiled by the Center for Open Science. For each study, the team counted the number of clearly stated, a priori, discrete hypotheses that were not supported and compared these data with the wider literature. The results were startling: 60.5% of the hypotheses in the registered reports were not supported, whereas in the broader scientific literature an estimated 5% to 20% of hypotheses were not supported. Yes, that's right: in the latter case, at best the wider literature finds support for its hypotheses 95% of the time and, at worst, 80% of the time. We, as psychological or biomedical scientists, are not that good at predicting our findings, and of course understanding human behaviour is complex and nuanced. Allen and Mehler (2019) suggest that the difference in the incidence of null findings between the registered reports and the broader literature can provide an estimate of the file drawer problem. Another investigation, by Scheel, Schijen and Lakens (2020), produced very similar findings. These authors compared 71 published registered reports in psychology with a random sample of 152 hypothesis-testing studies from the standard literature. This time, 96% of the standard studies reported positive results, but only 44% of the registered reports did.
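
As a back-of-the-envelope illustration of how such figures translate into a file drawer estimate (my own arithmetic under strong simplifying assumptions, not a calculation from either paper), suppose the registered-report rate of 44% positive results approximates how often hypotheses are genuinely supported, and that every positive result is published. The standard literature's 96% positive rate then implies that the overwhelming majority of null results never appear:

# Hypothetical file-drawer arithmetic based on the rates reported above.
rr_positive_rate = 0.44    # positive results in registered reports (Scheel et al., 2020)
lit_positive_rate = 0.96   # positive results in the standard literature

positives_run = 100 * rr_positive_rate    # 44 of 100 studies find support
nulls_run = 100 - positives_run           # 56 null results are produced
# Nulls that could be published while keeping 96% of published results positive,
# assuming all 44 positive results are published:
nulls_published = positives_run * (1 - lit_positive_rate) / lit_positive_rate
file_drawer = 1 - nulls_published / nulls_run
print(f"{nulls_published:.1f} of {nulls_run:.0f} nulls published; "
      f"{file_drawer:.0%} left in the file drawer")   # about 97%

On these assumptions, roughly 97% of null results would remain unpublished, which gives a sense of the scale of bias that registered reports are designed to counter.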

As noted earlier, informal feedback from colleagues at all career stages has identified a number of barriers – real or perceived – to giving registered reports a go. Therefore, it is important to bear in mind that there are other ways of preregistering your work that do not involve a 'full blown' registered report. For example, it is easy for researchers to register their research hypotheses and analysis plans in advance of the data analysis phase, and ideally before data collection commences. Platforms such as AsPredicted (https://aspredicted.org/) provide an easy method to preregister studies. A researcher simply answers nine questions about their research design and analyses, and once all co-authors agree, AsPredicted generates a time-stamped document with a unique URL for verification. The Open Science Framework also offers a similar approach (https://osf.io/prereg/) that is expanding to include preregistration for qualitative research and systematic reviews. There has also been a collaborative effort to establish "Preregistration Standards for Psychology", led by the American Psychological Association, the British Psychological Society and the German Psychological Society, in partnership with the Leibniz Institute for Psychology and the Center for Open Science (https://www.psycharchives.org/handle/20.500.12034/4042.2).

Moreover, preregistration is particularly important for research during and beyond the coronavirus disease 2019 (COVID-19) pandemic. Shortly after the World Health Organisation declared COVID-19 a global pandemic, there was an explosion of research activity aimed at understanding the impact of COVID-19 on psychological, biological, health, social and economic outcomes (O'Connor et al., 2020). The primary weapons to mitigate the pandemic have been behavioural, such as encouraging people to observe government instructions, self-isolation, quarantining and physical distancing. Therefore, psychological science, and health psychology specifically, have been incredibly well placed to make important contributions to our understanding of the effects of, and recovery from, the pandemic, and this will continue now that a vaccine is available and uptake will be key. Given the speed at which things have been changing, researchers have needed to design, develop and execute studies at breakneck speed, and as a result it has not always been possible to use registered reports. However, preregistering hypotheses and analysis plans in advance of data analysis is a feasible method to ensure that more open and rigorous research standards are maintained.

To stretch the renaissance analogy one dangerous step further: as well as being an outstanding artist and notable polymath, Leonardo da Vinci was a hugely accomplished architect. I wonder what he would make of the current scientific architecture and academic reward structure. Would he think that many of the structures were broken, beginning to crumble and possibly no longer fit for purpose? He might also weigh the merits of publishing high quality, robust and replicable scientific papers against publishing a larger number of low quality studies, though he would likely recognise that the former takes much longer. I suspect he'd be alarmed by the "publish or perish" culture that exists in modern day science, and perhaps he would agree with the esteemed developmental psychologist Uta Frith, writing in Trends in Cognitive Sciences: "Fast Science is bad for scientists and bad for science". Frith continues: "Slow Science may actually help us to make faster progress, but how can we slow down?" (Frith, 2020, p. 1). Moreover, it is highly likely that the inherent academic pressure to publish "fast science" leads researchers to cut corners, rush important aspects of the scientific process and engage in questionable research practices.

Did you know that global scientific output doubles every nine years? Well, this is the headline of a blog post published by Nature back in 2014, reporting the findings of a bibliometric analysis based on the number of publications and cited references in Web of Science (Bornmann & Mutz, 2015; Van Noorden, 2014). These analyses were based on data collected up until 2012; therefore, if they are a good estimate, the number of scientific outputs will have doubled again by 2021. Relatedly, a recent study has shown a substantial increase in COVID-19 related publications. Teixeira da Silva, Tsigaris and Erfanmanesh (2020) estimated that 23,634 unique COVID-19 related articles were indexed on Web of Science and Scopus between 1st January and 30th June 2020. Obviously the latter increase is unprecedented, but it is clear, in terms of the numbers, that "fast science" has taken over and that it is time to slow down, not least because stress levels in academia have also increased dramatically, which can have very serious effects on mental and physical health (e.g., Kinman & Johnson, 2019; O'Connor, Thayer & Vedhara, 2021).
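
For readers who want the arithmetic behind that headline (my own back-of-the-envelope sketch, not part of the cited analyses), a doubling time of nine years corresponds to a constant annual growth rate of about 8%, which is why a series that had last doubled by 2012 would be expected to double again by around 2021:

# Doubling-time arithmetic behind the "doubles every nine years" headline
# (illustrative only; r is the implied constant annual growth rate).
import math

r = 2 ** (1 / 9) - 1                       # growth rate implied by a 9-year doubling time
print(f"Implied annual growth: {r:.1%}")   # about 8.0%
print(f"Doubling time check: {math.log(2) / math.log(1 + r):.1f} years")  # 9.0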

So, returning to the question, how can we slow science down? Frith (2020) offers a number of very useful suggestions. For example, we should assess quality rather than quantity, we should look differently at our research timescales, and we should promote a more transparent research culture that acknowledges teamwork. Other colleagues have suggested similar approaches and have highlighted other ways in which research culture can be changed to help promote a more reproducible science (e.g., Munafò et al., 2017; Nosek, Spies & Motyl, 2012). Personally, I feel the entire academic system requires an overhaul. However, this is a longer-term goal that will require radical top-down and bottom-up changes. National, peer-led consortia that aim to promote robust research and training activities and to disseminate best practice in universities and related research organisations are growing and beginning to have real impact (e.g., see the UK Reproducibility Network, https://www.ukrn.org/). In the meantime, we should be prioritising the next generation of early career researchers by ensuring they have the tools, training and support they need as they embark on their career journey. We should take steps to remove "quantity" related metrics from recruitment and promotion panels and replace them with rewards for engaging in open and transparent science.

So finally, what can health psychology specifically bring to the "open science" table? As I've outlined above, health psychology and health psychologists have already been active agents of a number of open science practices (O'Connor, 2020). And of course, it is important to remember that science is behaviour (Norris & O'Connor, 2019). Conducting scientific research consists of a series of discrete behaviours (e.g., planning study design, formulating hypotheses, choosing statistical tests). Similarly, conducting 'bad science' is also a series of discrete behaviours – or questionable research practices (e.g., p-hacking, HARKing, selective reporting). Health psychologists, however, are experts in behaviour change, and therefore we have the tools, approaches and interventions to help facilitate it. In a recent article, Norris and O'Connor (2019) applied the Behaviour Change Wheel (BCW) approach to understand how Open Science behaviours may be identified, how barriers to these behaviours may be addressed and how interventions can be developed to increase Open Science. Moreover, the barriers and facilitators were mapped onto the COM-B model (see Table 1, taken from Norris & O'Connor, 2019) and numerous different ways to target Capability, Opportunity and Motivation were highlighted. Nevertheless, there remain huge opportunities for health psychologists to apply their methods, theories and approaches to the Open Science domain, and future research ought to use the full BCW methodology to provide far more insight into Open Science behaviours.

Table 1

Barriers and facilitators to Open Science behaviours mapped to COM-B.


COM-B COMPONENT: OPEN SCIENCE EXAMPLES

Physical Capability: Ability to use Open Science platforms such as the Open Science Framework, AsPredicted and GitHub.

Psychological Capability: Remembering to upload updates to data and analysis.

Physical Opportunity: Availability of free training to learn R; webinars on Registered Reports.

Social Opportunity: Principal Investigator encouraging implementation of Open Science; institution recognising Open Science in promotion and appraisal (Munafò et al., 2017).

Reflective Motivation: Having beliefs that putting in the effort to get a Registered Report published will mean your final results paper will be accepted (Chambers, Dienes, McIntosh, Rotshtein, & Willmes, 2015).

Automatic Motivation: Developed habit of uploading a pre-print as soon as a paper is written.

Note: Based on published research without COM-B analysis and authors’ own experiences; Table taken from Norris & O’Connor (2019).

To end, this article has briefly summarised a number of the ways in which psychological science can improve its rigor, lessen the use of questionable research practices and reduce publication bias. The importance of preregistration as a useful tool to increase the transparency of science and improve the robustness of our evidence base, especially in COVID-19 times, has been presented. In particular, the case for the increased adoption of Registered Reports, the article format that allows peer review of research studies before the results are known, has been outlined. Finally, the article has suggested that the scientific architecture and the academic reward structure need to change, with a move towards "slow science" and away from the "publish or perish" culture. Ultimately, we all have a role to play. In the opening paragraph of this article I said that I wanted the take-home message to be "If you haven't done so already, it is time to start integrating some open research practices into how you conduct your science". Well, I hope that after reading this article your levels of open science motivation are high, as it is time to implement those Open Science intentions.

Competing Interests

The author has no competing interests to declare.

References

  1. Allen, C., & Mehler, D. M. A. (2019). Open science challenges, benefits and tips in early career and beyond. PLoS Biology, 17(5), e3000246. DOI: https://doi.org/10.1371/journal.pbio.3000246 

  2. Bornmann, L., & Mutz, R. (2015). Growth rates of modern science: A bibliometric analysis based on the number of publications and cited references. Journal of the Association for Information Science and Technology, 66, 2215–2222. DOI: https://doi.org/10.1002/asi.23329 

  3. Chambers, C. (2019). The registered reports revolution: Lessons in cultural reform. Significance, 16, 23–27. DOI: https://doi.org/10.1111/j.1740-9713.2019.01299.x 

  4. Chambers, C. D., Dienes, Z., McIntosh, R. D., Rotshtein, P., & Willmes, K. (2015). Registered reports: realigning incentives in scientific publishing. Cortex, 66, A1–A2. DOI: https://doi.org/10.1016/j.cortex.2015.03.022 

  5. Gelman, A., & Loken, E. (2013). The garden of forking paths: Why multiple comparisons can be a problem, even when there is no “fishing expedition” or “p-hacking” and the research hypothesis was posited ahead of time. Retrieved from http://www.stat.columbia.edu/~gelman/research/unpublished/p_hacking.pdf 

  6. Hardwicke, T. E., & Ioannidis, J. P. A. (2018). Mapping the universe of Registered Reports. Nature Human Behaviour, 2, 793–796. DOI: https://doi.org/10.1038/s41562-018-0444-y 

  7. Hoffmann, T. C., Glasziou, P., Boutron, I., Milne, R., Perera, R., Moher, D., Altman, D. G., Barbour, V., Macdonald, H., Johnston, M., Lamb, S. E., Dixon-Woods, M., McCulloch, P., Wyatt, J. C., Chan, A.-W., & Michie, S. (2014). Better reporting of interventions: template for intervention description and replication (TIDieR) checklist and guide. BMJ, 348, g1687. DOI: https://doi.org/10.1136/bmj.g1687 

  8. Ioannidis, J. P. A. (2005). Why most published research findings are false. PLoS Medicine, 2(8), e124. DOI: https://doi.org/10.1371/journal.pmed.0020124 

  9. Kaplan, R. M., & Irvin, V. L. (2015). Likelihood of null effects of large NHLBI clinical trials has increased over time. PLoS One, 10(8), e0132382. DOI: https://doi.org/10.1371/journal.pone.0132382 

  10. Kerr, N. L. (1998). HARKing: Hypothesizing after the results are known. Personality and Social Psychology Review, 2, 196–217. DOI: https://doi.org/10.1207/s15327957pspr0203_4 

  11. Kinman, G., & Johnson, S. (2019). Special section on well-being in academic employees. International Journal of Stress Management, 26, 159–161. DOI: https://doi.org/10.1037/str0000131 

  12. Munafò, M. R., Nosek, B. A., Bishop, D. V., Button, K. S., Chambers, C. D., Du Sert, N. P., Simonsohn, U., Wagenmakers, E.-J., Ware, J. J., & Ioannidis, J. P. (2017). A manifesto for reproducible science. Nature Human Behaviour, 1(1), 0021. DOI: https://doi.org/10.1038/s41562-016-0021 

  13. Norris, E., & O’Connor, D. B. (2019). Science as behaviour: Using a behaviour change approach to increase uptake of Open Science. Psychology and Health, 34, 1397–1406. DOI: https://doi.org/10.1080/08870446.2019.1679373 

  14. Nosek, B. A., Spies, J. R., & Motyl, M. (2012). Scientific utopia: II. Restructuring incentives and practices to promote truth over publishability. Perspectives on Psychological Science, 7(6), 615–631. DOI: https://doi.org/10.1177/1745691612459058 

  15. O’Connor, D. B. (2020). The future of health behaviour change interventions: Opportunities for open science and personality research. Health Psychology Review, 14, 176–181. DOI: https://doi.org/10.1080/17437199.2019.1707107 

  16. O’Connor, D. B., Aggleton, J. P., Chakrabarti, B., Cooper, C. L., Creswell, C., Dunsmuir, S., Fiske, S. T., Gathercole, S., Gough, B., Ireland, J. L., Jones, M. V., Jowett, A., Kagan, C., Karanika-Murray, M., Kaye, L. K., Kumari, V., Lewandowsky, S., Lightman, S., Malpass, D., Meins, E., Morgan, B. P., Morrison Coulthard, L. J., Reicher, S. D., Schacter, D. L., Sherman, S. M., Simms, V., Williams, A., Wykes, T., & Armitage, C. J. (2020). Research priorities for the COVID-19 pandemic and beyond: A call to action for psychological science. British Journal of Psychology, 111, 603–629. DOI: https://doi.org/10.1111/bjop.12468 

  17. O’Connor, D. B., Thayer, J. F., & Vedhara, K. (2021). Stress and health: A review of psychobiological processes. Annual Review of Psychology, 72, 663–688. DOI: https://doi.org/10.1146/annurev-psych-062520-122331 

  18. Open Science Collaboration. (2015). Estimating the reproducibility of psychological science. Science, 349(6251), aac4716. DOI: https://doi.org/10.1126/science.aac4716 

  19. Peters, G.-J. Y., Kok, G., Crutzen, R., & Sanderman, R., (2017). Health Psychology Bulletin: Improving Publication Practices to Accelerate Scientific Progress. Health Psychology Bulletin, 1(1), 1–6. DOI: https://doi.org/10.5334/hpb.2 

  20. Scheel, A. M., Schijen, M., & Lakens, D. (2020, February 5). An excess of positive results: Comparing the standard Psychology literature with Registered Reports. DOI: https://doi.org/10.31234/osf.io/p6e9c 

  21. Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2011). False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science, 22, 1359–1366. DOI: https://doi.org/10.1177/0956797611417632 

  22. Teixeira da Silva, J. A., Tsigaris, P., & Erfanmanesh, M. (2020). Publishing volumes in major databases related to COVID-19. Scientometrics. DOI: https://doi.org/10.1007/s11192-020-03675-3 

  23. Van Noorden, R. (2014). Global scientific output doubles every nine years. Nature News Blog. Retrieved 7 December 2020, from http://blogs.nature.com/news/2014/05/global-scientific-output-doubles-every-nine-years.html 

  24. Wagenmakers, E. J., Wetzels, R., Borsboom, D., & van der Maas, H. (2011). Why psychologists must change the way they analyze their data: The case of psi. Journal of Personality and Social Psychology, 100, 426–432. DOI: https://doi.org/10.1037/a0022790 
