
Theoretical/debate paper

Hardwired… to Self-Destruct? Using Technology to Improve Behavior Change Science


Rik Crutzen

Department of Health Promotion, Maastricht University/CAPHRI, P.O. Box 616, 6200 MD Maastricht, NL


Many societal problems are related to human behavior. To change behavior, it is crucial to be aware of Lewin’s formula indicating that behavior is a function of a person and their environment. Technology provides opportunities with regard to (measurement of) all three elements of this formula. This raises the question of how existing technologies can be used to improve behavior change science.

This article provides two answers to this question: application and innovation of theory. Technology can be used to apply behavior change methods in practice, for example by providing computer-tailored feedback based on a social-cognitive profile. Technology can also be used to innovate theory, which is less common, but results in more progress. For example, technology provides opportunities to triangulate ecological momentary assessment (EMA) with smartphone native sensor data to track behavior and environmental factors. If the opportunities provided by technology are combined with a rationale on how and which data to collect, then these data can be used to answer theoretically driven questions. Answering such questions results in better theories to both explain and change behavior. This is highly relevant for more effective and more efficient solutions to all societal problems related to human behavior.

How to Cite: Crutzen, R. (2021). Hardwired… to Self-Destruct? Using Technology to Improve Behavior Change Science. Health Psychology Bulletin, 5(1), 70–80. DOI:
  Published on 21 May 2021
 Accepted on 04 May 2021            Submitted on 18 Oct 2020

The title of this article is perhaps somewhat curious (Metallica, 2016). However, it is chosen on purpose as it is meant to draw attention, which is a first step towards raising interest (Crutzen & Ruiter, 2015). The topic of this article is, in my opinion, of interest across the domain of behavioral sciences. Of course, there is also a link between the title and the content of this article, which is explained throughout this article.

The first section of this article describes how human behavior causes the vast majority of deaths. Subsequently, this article describes the key elements of my inaugural lecture when accepting my chair in Behavior Change & Technology. These three words are also the three sections that follow in this article, starting with behavior. The second section describes the role of theories in explaining behavior in a problem-driven context. The third section discusses how these insights into behavior are relevant to behavior change in general. The final and main section elaborates on using technology to innovate theory – and therewith behavior change science.


The point of departure is the end of the title. Self-destruction might sound negative, while there are reasons to encourage optimism (Peters & Crutzen, 2017a). In his latest book, Steven Pinker makes a strong case for reason, science, and progress (Pinker, 2018). He discusses data on outcomes regarding multiple topics, such as economy, inequality, peace, and health. The point of departure here is the final outcome: death.

Figure 1 shows the life expectancy by world regions since 1770 (Roser et al., 2019). This figure shows two remarkable patterns. First, life expectancy has more than doubled in the past 150 years. This is remarkable because previous research conducted at Early Medieval cemeteries (Scull, 2009) shows that life expectancy was stable from this time period until the end of the 19th century. Second, in some regions of the world, especially Africa, life expectancy is much lower than in Europe, the Americas, and Oceania. The increase in life expectancy started later in these regions and the initial life expectancy was also lower. However, also in these regions of the world, life expectancy has more than doubled.

Figure 1 

Life expectancy by world regions since 1770 (Roser et al., 2019).

Source: Riley (2005), Clio Infra (2015), and UN Population Division (2019).

Note: Shown is period life expectancy at birth, the average number of years a newborn would live if the pattern of mortality in the given year were to stay the same throughout its life. • CC BY

Despite the increase in life expectancy, there are still many risk factors contributing to death. Figure 2 provides an overview of the worldwide number of deaths by risk factor (Ritchie & Roser, 2019). This overview clearly shows that human behavior causes the vast majority of deaths. Hence the use of the verb ‘self-destruct’ in the title. The question mark has been added to indicate that behavior is not only a function of the person. In fact, one of the first formulas in psychology indicates that behavior is a function of a person and their environment (Lewin, 1936). Changing this behavior is not easy (Kelly & Barker, 2016).

Figure 2 

Number of deaths by risk factor (Ritchie & Roser, 2019).

Source: IHME, Global Burden of Disease (GBD). • CC BY


Behavior refers to behavior in its widest sense: what we eat and drink, if and how we are physically active, but also decisions we take to register as a donor or to choose a certain treatment for a disease. Behavior also concerns how health professionals deal with guidelines and how people in general deal with new opportunities provided by technology. And behavior is hard to understand (e.g., Steenaart et al., 2018). This does not apply to all behavior, of course, but it does apply to behavior causing problems that are hard to solve. If there is an easy solution, experts in behavior change are often not involved. If a problem is hard to solve, expertise in behavior change is always required (Ruiter & Crutzen, 2020).

Theories are useful to help behavioral scientists explain behavior. The plural of theory is used deliberately here as there are multiple theories. Theories are reductions of reality. That is not a shortcoming, but part of the definition of what a theory is (Peters & Kok, 2016). There is no all-encompassing ‘Theory of Everything’ (Peters & Crutzen, 2017b). In fact, in a problem-driven context, the problem – and behaviors causing it – are often so complicated that insights from multiple theories need to be combined to come to a solution (Bartholomew Eldredge et al., 2016). Or, to cite philosopher of science Karl Popper, whenever a theory appears to you as the only possible one, take this as a sign that you have neither understood the theory nor the problem which it was intended to solve (Popper, 1972).

The Reasoned Action Approach is a commonly used theory in the field of applied health and social psychology (Fishbein & Ajzen, 2010). The most recent book describing this theory consists of 518 pages. A summary of a couple of sentences does no justice to the richness of this theory. This is done anyway, as this theory is used as an example to raise three points. In short, the Reasoned Action Approach states that intention, the readiness to engage in the behavior, is the most important predictor of behavior. This intention is shaped by three determinants: (1) attitude: a latent disposition or tendency to respond with some degree of favorableness (or unfavorableness) to the target behavior; (2) perceived norm: the perceived social pressure to perform (or not to perform) the target behavior; and (3) perceived behavioral control: people’s perceptions of the degree to which they are capable of, and have control over, performing a given behavior (Peters et al., 2020).

The first reason to use this theory as an example is that these determinants are relevant to all types of reasoned behavior, but insight needs to be gained into the beliefs underlying these determinants. These beliefs might differ per behavior. For example, the reasons why people think they are capable of taking their medication as prescribed most likely differ from the reasons why people think they are capable of going to their work by bicycle. However, these beliefs underlie the same determinant: perceived behavioral control. So, the Reasoned Action Approach provides an overview of important determinants for reasoned behavior, but to give these determinants flesh and blood, insight needs to be gained into the beliefs underlying these determinants among a specific target group with regard to a specific behavior.

The second reason to use this theory as an example is that not all behavior is reasoned. This is true and this theory does not claim anything about other types of behavior. So, it cannot explain all possible variance in behavior. However, this is not a reason to reject this theory. Instead, this is exactly in line with the point raised earlier that theories are reductions of reality (Kok & Ruiter, 2014). Even within a theory, certain aspects can overlap with explanations from other theories. Within the Reasoned Action Approach, for example, beliefs ultimately lead to behavior. However, behaving in a certain way also affects the beliefs that people hold. This reciprocal influence between beliefs and behavior is the focus of cognitive dissonance theory (Festinger, 1957). If a person’s beliefs and behavior are not congruent, then one way in which people deal with this is by changing their beliefs.

In sum, to explain behavior in a problem-driven context it is necessary to integrate determinants – both variables and processes – from multiple theories and to gain insight into the beliefs underlying these determinants in a specific target group with regard to a specific behavior in a specific context. In other words, application of (multiple) theories is behavior and context dependent, as these beliefs might vary accordingly. This might give the impression that determinants themselves are not important, because beliefs among a specific target group with regard to a specific behavior need to be investigated. However, this is a false impression that is directly related to the third reason to use this theory as an example: insight into determinants is crucial to behavior change.


First of all, behavior change is a misnomer. There are not 6 (Cialdini, 2008) or 93 (Michie et al., 2013) tricks that can be applied to change behavior directly. All overt behavior results from activation patterns of firing neurons in the motor cortex, and activation patterns in the motor cortex are the result of activation patterns elsewhere in the brain (Pinel & Barnes, 2017).

When people learn, this results in changes in activation patterns of firing neurons. This idea is based on the role of enzymatic neurons in the selection circuits model (Kampfner & Conrad, 1983). According to this model, the firing properties of neurons are controlled by postulated enzymes called excitases. Learning is mediated by changes in the excitase composition of enzymatic neurons. These changes are guided by processes analogous to evolution by variation and selection of networks. Christian Lohmann, affiliated to the Netherlands Institute for Neuroscience, compares this to dating. Neurons mess around and test their partners. Most of the time they decide that it is not going to work out, but in the end, they form stable connections (Van Sprundel, 2019).

This is in line with a reinforcement learning perspective on behavior: people acquire behaviors by learning to obtain (real or conjectured) rewards and to avoid punishments (Vlaev & Dolan, 2015). Subsequently, the brain can decide which action to take based on ‘computation’ of value (i.e., expected rewards) (Rangel et al., 2008). In a colloquial way, the use of words such as ‘decide’ and ‘value’ might give the impression that this only concerns goal-directed behavior based on reflection. The same applies, however, to habitual behaviors (Gardner, 2015). Moreover, the values might be based on various rewards such as comfort, control, status, and belonging (Vlaev & Dolan, 2015). Thus, despite the choice of words, no rationality, logic, or reflection is implied: these theoretical foundations are not bound to specific aspects of human behavior or psychology. In other words, the reinforcement learning perspective concerns behavior in general and is not limited to behaviors on which a person reflects before acting.

The way in which different learning processes evolved over time runs in parallel with the development of the brain over time. And this is not limited to the human brain. The evolution of these learning processes was needed to solve particular problems that organisms came across during evolution (Aunger & Curtis, 2015). This starts with very basic problems, like how to find food and where to sleep safely. By means of habituation and sensitization, cnidarians have been able to find food for more than 650 million years (Ginsburg & Jablonka, 2007). Taking a leap forward, capuchin monkeys are able to sort objects based on their shape (Truppa et al., 2011). This way of learning abstract concepts is also used by humans. This is very useful, because we do not have to think deeply every time we see a pen or a car in order to decide what it is and what we can do with it. That same learning process, however, also underlies pigeonholing – categorizing people based on their shape; or worse. So, these learning processes are not informative regarding what we learn and whether that is ‘good’ or ‘bad,’ but they provide insight into how we learn. This is important for behavioral scientists when using methods aimed at behavior change. These methods should be based on one or more of these learning processes. This enables people to learn something – and change where needed (Crutzen & Peters, 2018).

Implementation intentions are an example of a behavior change method based on the most recent learning process (in evolutionary terms): reflective learning. This enables people to learn from mistakes made in the past and use this to make better plans for the future. With some hesitation, an example is provided of a study in which this behavior change method is used. The hesitation comes from the fact that it is hard to draw strong conclusions based on one study (Lakens et al., 2012). To draw strong conclusions regarding the effectiveness of behavior change methods, data from multiple studies need to be combined in a meta-analysis (Peters et al., 2015). Examples are provided anyway, to illustrate how behavior change methods are used in a specific setting. For all examples provided, there is a meta-analysis available, which is cited as well.

The first example of a behavior change method concerns the implementation intentions referred to earlier (Gollwitzer & Sheeran, 2006; meta-analysis: Cohen’s d = .65 [95%CI .60–.70]). This behavior change method employs people’s ability to plan and aims to help them with this. In one exemplary study on implementation intentions, women attending the Weight Watchers program were randomly assigned to one of two groups (Luszczynska et al., 2007). In the control group, women attended the weekly meeting of the Weight Watchers program. This was the same in the experimental group, but on top of that there was one extra session in which women had to write down a detailed plan concerning food intake and physical activity for the upcoming week. This plan had to be as specific as possible in terms of when, where, and what. Subsequently, women had to make if-then plans for situations in which they thought it would be difficult to stick to their plan, for example when it is raining or when attending a party. At two-months follow-up, women in the experimental group had lost twice as much weight in comparison with women in the control group (4.2 vs. 2.1 kg). This is an example of how implementation intentions are applied in practice. At the same time, it might raise the question of whether this method only works because women participating in Weight Watchers are already motivated. And this is true. In fact, having an intention to change is a condition for this behavior change method to work optimally (Kok et al., 2016). When there is a lack of motivation, other behavior change methods are more appropriate.

A second example of a behavior change method that could be more appropriate in this situation is self-affirmation (Epton et al., 2015; meta-analysis: Cohen’s d = .32 [95%CI .19–.44]). Imagine one wants youngsters to be more physically active. These youngsters might not be motivated to be physically active, nor have a positive attitude towards this behavior. In one exemplary study on self-affirmation, youngsters were randomly assigned to one of two groups (Cooke et al., 2014). In the control group, youngsters received a fact sheet explaining the advantages of being more physically active. This was the same in the experimental group, but before receiving the fact sheet, youngsters had to write down values that were important in their lives and describe why they are important. After a week, youngsters in the experimental group had a more positive attitude than youngsters in the control group, and they were also more physically active (77.9 vs. 44.5 metabolic equivalent of task [MET]).

In both examples provided, there was one behavior change method targeting one specific determinant. As described earlier in this article, in a problem-driven context, multiple determinants are relevant to explain – and also change – the behavior at hand. And, the relevance of certain determinants might also differ between people. Therefore, behavior change could benefit from tailoring. And this is where technology could play a role. For example, in computer-tailoring, people receive interactive feedback based on an assessment of their social-cognitive profile. This tailored feedback is more relevant and more effective than a one-size-fits-all approach (De Vries & Brug, 1999).

So far, it has been explained that behavior is ‘hardwired’ in the sense that it results from activation patterns of firing neurons. Learning is required to ‘rewire’ these activation patterns and, therefore, behavior change is not easy. The final main section describes the potential of technology in behavior change science.


Technology refers to technology in its widest sense. Every year, Gartner (2019) presents emerging technologies and projects them on the hype cycle, depicting a recurring pattern. After a new technology is introduced, expectations about its possibilities and impact increase. The sky is – or seems to be – the limit. After a while, it appears that these expectations cannot be met in the short run. Often expectations then swing in the other direction. And, after some time, truth lies in the middle. This does not mean that all technologies stand the test of time. Many new technologies never make it beyond hopeful expectations. When looking back at the hype cycles as of 2000, it is striking to see that both technologies that we now use on a daily basis and those that are mostly unknown follow more or less the same pattern: overly high expectations followed by a reality check. We are very bad, however, at predicting which technologies will stand the test of time. That being said, the focus of this article is not on developing new technologies, but on how existing technologies can be used to improve behavior change science. So, technology is a means to an end. This raises the question: what end? In my opinion, there are two answers to this question, which are schematically depicted in Figure 3.

Figure 3 

Application and innovation of theory.

The first answer is to apply technology to come to a solution for a problem (Figure 3; left side, departing from application of theory). This refers back to the earlier sections of this article. In a problem-driven context, multiple theories are applied to gain insight into why people behave as they do, and subsequently behavior change methods are selected that are related to these determinants. Technology can be used to apply such behavior change methods in practice, for example by providing computer-tailored feedback based on a social-cognitive profile. Such applications can be improved by new technological possibilities. In computer-tailoring, for example, one of the challenges is that many beliefs underlie behavior. All these beliefs need to be assessed, which is detrimental to the user experience, but needed to provide tailored feedback. That is because the feedback is targeted at those beliefs that are most relevant and where there is most room for improvement for a specific person. This process can be optimized by means of recommender systems – a technique from the field of artificial intelligence (Giabbanelli & Crutzen, 2015). Better recommendations can be made by using more advanced algorithms. These recommendations are based on earlier choices made by a user and on choices made before by other users of the system. In this way, the system improves over time, which also contributes to a better user experience. So, technology can contribute to better applications aimed at solving a problem.
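To make the tailoring logic concrete, here is a minimal sketch of how feedback targets could be selected from a social-cognitive profile. The belief names, the 1–7 scale, and the scoring rule (relevance weight times room for improvement) are illustrative assumptions of my own, not the recommender technique described by Giabbanelli and Crutzen (2015).

```python
# Minimal sketch of belief selection for computer-tailored feedback.
# Assumption: each belief has a relevance weight and a current score on
# a 1-7 Likert scale; feedback targets the beliefs with the largest
# weighted room for improvement.

def select_feedback_targets(beliefs, max_score=7, n_targets=2):
    """Rank beliefs by relevance * room for improvement; return the top n."""
    ranked = sorted(
        beliefs,
        key=lambda b: b["relevance"] * (max_score - b["score"]),
        reverse=True,
    )
    return [b["name"] for b in ranked[:n_targets]]

# Hypothetical social-cognitive profile of one person.
profile = [
    {"name": "I can resist snacks at parties", "relevance": 0.9, "score": 2},
    {"name": "My friends support me",           "relevance": 0.4, "score": 6},
    {"name": "Exercise improves my mood",       "relevance": 0.7, "score": 5},
]
print(select_feedback_targets(profile))
```

A recommender system would go further by learning the relevance weights from earlier choices of this user and of other users, rather than fixing them in advance.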

The second answer is using technology to innovate theory (Figure 3; right side, resulting in innovation of theory). This is less common, but results in more progress (Hekler et al., 2016; Moller et al., 2017). Before explaining why, let’s go back to Lewin’s formula indicating that behavior is a function of a person and their environment: B = ƒ(P,E). Technology provides opportunities with regard to all three elements of this formula.

To start with behavior (B): technological possibilities to measure behavior are improving constantly. For example, possibilities to measure physical activity and sleep, but also measures associated with behavior, such as heart rate. And the technology that is needed to measure these is incorporated in objects that we use on a daily basis, such as mobile phones and watches. We hardly notice that a lot of aspects of our daily life are tracked. And these developments are very fast. For example, there are already sensors available that can track eating behavior (Chun et al., 2018). This might sound rather futuristic and this is not the place to make any predictions about whether this technology will stand the test of time. More generally, however, it can be stated that many technologies that are now used on a daily basis were not that long ago seen as science fiction.

Subsequently, the environment (E) in which behavior takes place. There are already a lot of possibilities with existing technologies in terms of capturing aspects of the environment. Mobile phones, for example, constantly track the location. It becomes even more interesting, from a behavioral science point of view, when data regarding the environment can be linked to data regarding the behavior. For example, when studying the effect of infrastructural changes in the built environment on physical activity (Stappers et al., 2018). But also on a much smaller scale, when looking at the relationship between exposure to light and sleep behavior (Faulkner et al., 2020).

Finally, the person (P). This is much more complicated, because we cannot directly measure what is going on in people’s brains. Technology has also advanced in this regard. Using fMRI techniques, brain activity can be measured by detecting changes associated with blood flow. The accuracy (e.g., spatial resolution) of these measurements has improved over time. We can zoom in more and more on specific locations in the brain. The human brain consists of 86 billion neurons (Azevedo et al., 2009). Currently, we cannot get a complete picture of all these neurons at the same time. And even if we could, we are confronted with a problem that reminds us of Daniel Dennett’s explanation of the ‘frame problem’ (Dennett, 1984). By the time we have a complete picture of the activation of all these neurons, the picture has already changed. That is because activation is a continuous process. More importantly, our insight into what activation of specific neurons means in terms of the specific thoughts and feelings that people have is rather limited.

For now, we have to rely on indirect measures, such as reaction times of answers to questions. Every way of measuring has its pros and cons. A questionnaire, for example, is a relatively cheap and fast way to ask people what their thoughts and feelings are with regard to a certain topic or behavior. This way of measuring is often used to ask questions at a certain point in time and then again three or six months later. This provides insight into changes in determinants of behavior. It is less appropriate, however, to gain insight into the process of change. In other words, it provides insight into whether there was a change, but not how this change has taken place. To gain such insight it might be necessary to ask questions every week, every day, or maybe even multiple times per day. This is not only a logistical challenge, but also a burden for participants, as it requires people to complete many questions multiple times.

Technology provides solutions to facilitate this. Ecological momentary assessment (EMA) concerns repeatedly asking questions at certain time slots during the day or linked to specific events. People get a signal, for example on their mobile phone, and subsequently have to answer a couple of questions. This is not an elaborate questionnaire, but a couple of items related to a topic that is important in the context of a specific behavior. An example is provided based on the results of a meta-analysis (Haedt-Matt & Keel, 2011).
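As a concrete illustration of how such momentary prompts could be scheduled, the sketch below draws one random prompt within each predefined time slot of the day, a common signal-contingent design that covers the whole day without being predictable. The slot boundaries and the fixed seed are illustrative assumptions.

```python
# Sketch of a signal-contingent EMA schedule: one random prompt within
# each predefined time slot per day. Slot boundaries are illustrative.
import random
from datetime import datetime, timedelta

def ema_schedule(day, slots, rng):
    """Return one random prompt time (datetime) per (start_hour, end_hour) slot."""
    prompts = []
    for start_h, end_h in slots:
        # Pick a random minute offset within the slot.
        offset_min = rng.randrange((end_h - start_h) * 60)
        prompts.append(day + timedelta(hours=start_h, minutes=offset_min))
    return prompts

rng = random.Random(42)  # fixed seed so the example is reproducible
day = datetime(2021, 5, 21)
slots = [(9, 12), (12, 17), (17, 22)]  # morning, afternoon, evening
for t in ema_schedule(day, slots, rng):
    print(t.strftime("%H:%M"))
```

In practice, such a scheduler would run on the participant’s phone and trigger a short item set at each prompt time.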

The example concerns people suffering from bulimia nervosa and binge eating disorder. An essential feature is that these people engage in binge eating; consumption of unusually large amounts of food coupled with a sense of loss of control over eating. According to the affect regulation model, there are two hypotheses related to this. First, increase in negative affect is an antecedent of binge eating. Second, binge eating results in an immediate decrease in negative affect – it is a way to cope with negative affect. There is especially debate about this second hypothesis. Other models predict that negative affect will increase directly after binge eating, because people realize what has happened.

The results of the meta-analysis show that binge eating is indeed preceded by negative affect (within-subjects standardized mean = .63 [95%CI .45–.82]). People feel worse pre-binge eating in comparison with the rest of the day, and also worse than before regular eating (within-subjects standardized mean = .68 [95%CI .40–.95]). However, after binge eating, people feel even worse than before (within-subjects standardized mean = .50 [95%CI .35–.64]). This is not in line with the second hypothesis. Furthermore, this contradicts retrospective participant reports that binge eating reduces negative affect. This is an example of how EMA can be used to gain more insight into certain behaviors and to improve theories regarding behavior. Subsequently, this has an impact on how to deal with a specific problem. In this example, this would imply psychoeducation regarding increases in negative affect as a consequence of binge eating, as well as exploiting the incongruency between retrospective participant reports and more in-the-moment assessments using EMA in therapy.
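To illustrate the effect size metric used here, the sketch below computes a within-subjects standardized mean change as the mean of the paired differences divided by their standard deviation, which is one common operationalization (others divide by the standard deviation of the pre-scores). The affect ratings are invented for illustration, not taken from the meta-analysis.

```python
# One common way to compute a within-subjects standardized mean change:
# mean of the paired differences divided by their (sample) standard
# deviation. The affect ratings below are invented for illustration.
from statistics import mean, stdev

def within_subjects_d(pre, post):
    """Standardized mean change for paired scores (higher = worse affect)."""
    diffs = [b - a for a, b in zip(pre, post)]
    return mean(diffs) / stdev(diffs)

affect_before = [3.1, 2.8, 3.5, 3.0, 2.9, 3.3]  # negative affect pre-binge
affect_after  = [3.8, 2.9, 4.1, 3.0, 3.6, 3.4]  # negative affect post-binge
print(round(within_subjects_d(affect_before, affect_after), 2))
```

A positive value here means negative affect increased after the event, which is the pattern the meta-analysis found.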

This is a clinical example, but insight into processes related to behavior is relevant to behavior change in general. Time of the day and the situation a person is in, for example, influence engaging in and maintaining a certain behavior (Millar, 2017). We can make a leap forward (towards innovation, as depicted in Figure 3; right side) by triangulating data regarding behavior, environment, and person.

Triangulating data

Triangulation refers to the practice of using multiple sources of data in order to gain a more comprehensive understanding of the phenomenon of interest (Salkind, 2010). Technology provides opportunities to triangulate EMA with smartphone native sensor data to track behavior and environmental factors. Triangulation of these sources of data is not easy. Ethical, legal, and technical challenges might come into play (Crutzen et al., 2019; Ienca et al., 2018). And, of course, data need to be analysed and interpreted. The role of behavioral scientists is crucial with regard to the latter. This is illustrated by a study of Bastiaansen et al. (2020). They provided data that were collected by means of EMA to twelve research groups around the world. These data concerned scores on 23 items related to depression and anxiety (e.g., felt down or depressed, felt a loss of interest or pleasure, felt frightened or afraid) over time. The question these research groups had to answer was what their advice to a clinician would be in terms of treatment. This advice varied enormously because all research groups conducted different analyses. However, none of these analyses was wrong. The rationales for the choices made during the analyses revealed that these research groups differed in their ‘working hypothesis’ regarding what is deemed important for treatment. The analyses were in line with their hypotheses, but the hypotheses themselves differed. This example touches upon a more overarching issue, which is even more relevant when there is a lot of data available.

The more data, the more opportunities. So many opportunities, even, that it has been suggested that theories and hypotheses can be skipped. Chris Anderson, former editor of Wired, suggested this in his article entitled “The end of theory” (Anderson, 2008). An example provided in that article in support of this claim is, in my opinion, actually an argument against it. The example is that Google Translate is, in principle, as good at translating Spanish to English as it is at translating Limburgian to Laotian. The only reason why this is not the case in practice is that there is less data available regarding the latter combination. No semantic or causal analysis is needed, as long as there is enough data available to detect regularities. That is why Google can translate languages without actually “knowing” them, claims Anderson (2008). That “knowing” is written between quotation marks puts the finger exactly on the sore spot: Google Translate does not ‘know’ grammar, no matter the amount of data. The same applies to behavioral sciences. Not much progress can be made by collecting as much data as possible about any possible behavior. Instead, insight needs to be gained into the principles underlying behavior. Based on this, predictions can be made.

So, sifting through a pile of data is not a substitute for theory, but rather a supplement (Haardörfer, 2019). This does not exclude that interesting patterns can be found. However, these patterns should lead to hypotheses that can be tested in other data (Janssen & Kuk, 2016) and fit within a causal diagram (Pearl & Mackenzie, 2018). And those hypotheses guide how and which data need to be collected. This is a key element in Popper’s seminal work on the logic of scientific discovery (Popper, 1935). This seminal work appeared in 1934, but is still relevant. Perhaps even more than ever before.

Although the possibilities to collect data have increased, the collection of data for scientific purposes does not happen arbitrarily. There is a rationale behind how and which data are collected (Mazzocchi, 2015). In fact, the most recent regulation of the European Parliament regarding protection of personal data requires that processing of such data is purposeful and limited to what is necessary in relation to the purposes for which they are processed (European Parliament and Council, 2016). And this is not limited to data collected for commercial purposes, but also applies to data collected for scientific purposes (Mondschein & Monda, 2018). Furthermore, participation in research is voluntary and participants are a source of data that is finite and scarce. Redundant use of this source of data is unethical (Crutzen & Peters, 2017). So, thinking about how and which data to collect is relevant from a scientific, legal, and ethical point of view. Both aspects – how and which data to collect – are discussed next.

Collecting data

First, how to collect data. In the examples provided above, participants were randomly assigned to an experimental group and a control group. This study design is commonly referred to as an ‘experiment’ in some fields of psychology (e.g., social, cognitive) and as a ‘randomized controlled trial’ (RCT) in medicine and other fields of psychology (e.g., clinical, health). It is seen as the best possible way to answer the question whether an intervention was effective or not (Murad et al., 2016), provided that the study is adequately planned and executed (Kaptchuk, 2001). The reason why this is perceived as ‘the gold standard’ is that participants are assigned ‘at random.’ Therefore, differences between groups can be attributed to the intervention; in other words, a causal statement can be made. This is a good way to demonstrate differences between groups of people.
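The logic of random assignment can be sketched in a few lines of code. This is a hypothetical illustration, not part of the article: the participant labels, group sizes, and simulated outcome scores are all invented, and stand in for whatever outcome an actual trial would measure.

```python
import random
import statistics

random.seed(42)  # fixed seed so the illustration is reproducible

# Hypothetical participants (labels are illustrative only).
participants = [f"p{i}" for i in range(40)]
random.shuffle(participants)  # random assignment removes systematic group differences
experimental, control = participants[:20], participants[20:]

# Simulated post-intervention outcome scores (purely illustrative numbers).
outcome = {p: random.gauss(5.5 if p in experimental else 5.0, 1.0)
           for p in participants}

# Because assignment was random, the mean difference between groups
# estimates the effect of the intervention.
effect = (statistics.mean(outcome[p] for p in experimental)
          - statistics.mean(outcome[p] for p in control))
print(f"Estimated between-group difference: {effect:.2f}")
```

The point of the sketch is only the design logic: randomization, not matching or self-selection, is what licenses the causal reading of the between-group difference.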

The opportunities provided by technology, such as facilitating the collection of different types of data, have also resulted in a renewed interest in n-of-1 research (McDonald et al., 2017). This is a family of research methods whose common denominator is following people over time. The research questions that can be answered using these methods concern differences within persons. This renewed interest is of importance for behavior change, because behavior change often concerns a process of change. At the same time, the need to make causal statements based on research findings remains. There are ways to get the best of both worlds, for example by means of aggregated n-of-1 RCTs in combination with multilevel models (Cushing et al., 2014). There are also challenges when applying such methods (Vieira et al., 2017). For now, the focus is on the theoretical challenge, which is related to the second aspect: which data to collect?
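To make the idea of aggregating n-of-1 RCTs concrete, the following sketch simulates a set of hypothetical within-person experiments and averages the within-person effects. A real analysis would use a multilevel model, as Cushing et al. (2014) describe; here the aggregation step is reduced to a simple mean, and all numbers (the true effect, the number of days, the number of persons) are invented for illustration.

```python
import random
import statistics

random.seed(1)  # fixed seed so the illustration is reproducible

# One hypothetical n-of-1 RCT: a person is randomized to intervention (1)
# or control (0) on each of 30 days; all numbers are illustrative.
def simulate_person(true_effect):
    days = [0] * 15 + [1] * 15
    random.shuffle(days)  # within-person randomization over days
    baseline = random.gauss(5.0, 1.0)  # person-specific outcome level
    outcomes = [baseline + true_effect * d + random.gauss(0, 0.5) for d in days]
    return days, outcomes

# The within-person effect: mean outcome on intervention days minus control days.
def within_person_effect(days, outcomes):
    on = [y for d, y in zip(days, outcomes) if d == 1]
    off = [y for d, y in zip(days, outcomes) if d == 0]
    return statistics.mean(on) - statistics.mean(off)

# Aggregation across 25 persons; a multilevel model would estimate this
# average (fixed) effect while also modeling between-person variation.
persons = [simulate_person(true_effect=0.8) for _ in range(25)]
effects = [within_person_effect(d, y) for d, y in persons]
print(f"Mean within-person effect: {statistics.mean(effects):.2f}")
```

Note how the person-specific baseline cancels out within each person: this is why such designs answer questions about differences within, rather than between, persons.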

The answer to the question which data to collect should originate from a theoretically driven question. Such a question could be aimed at developing or improving theories. An example is planning coping responses, which boils down to preparing to deal with difficult situations after behavior change. This behavior change method is aimed at changing habits. People often succeed in changing, but relapse into old habits after a while. This behavior change method is based on theory about relapse prevention (Larimer et al., 1999). According to this theory, identification of high-risk situations and practicing coping responses are two conditions for this behavior change method to work optimally. These conditions are based on the learning processes referred to earlier (Crutzen & Peters, 2018) and should be taken into account when applying this behavior change method in practice (Kok et al., 2016).

Planning coping responses was originally applied in one-on-one treatment settings. In that case, the therapist helps the patient to identify high-risk situations. The question is whether people are able to do this themselves. By triangulating data regarding behavior, environment, and person, it is possible to identify when relapse actually happens, and in that way gain insight into whether perceived high-risk situations are the same as actual high-risk situations. A follow-up question would be whether to intervene in situations that people think are high risk or in those that were high risk in the past. The other condition is practicing coping responses. An additional question would be whether it is of added value to remind people when they are in such a situation, and what such a just-in-time adaptive intervention should look like (Thomas & Bond, 2015).
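A minimal sketch of such triangulation, using entirely hypothetical data: sensor-derived context labels are paired with EMA-reported relapses, and the relapse rate per context points to the actual high-risk situations, which could then be compared with the situations a person perceives as high risk. The context labels and records below are invented for illustration.

```python
from collections import Counter

# Hypothetical triangulated records: each entry pairs a sensor-derived
# context (e.g., location plus time of day) with an EMA-reported relapse.
records = [
    {"context": "bar_evening", "relapse": True},
    {"context": "bar_evening", "relapse": True},
    {"context": "home_evening", "relapse": False},
    {"context": "work_afternoon", "relapse": False},
    {"context": "bar_evening", "relapse": False},
    {"context": "home_evening", "relapse": True},
]

# Count occurrences and relapses per context.
totals, relapses = Counter(), Counter()
for r in records:
    totals[r["context"]] += 1
    relapses[r["context"]] += r["relapse"]

# The relapse rate per context identifies the *actual* high-risk situations.
rates = {c: relapses[c] / totals[c] for c in totals}
actual_high_risk = max(rates, key=rates.get)
print(f"Relapse rates per context: {rates}")
print(f"Actual high-risk situation: {actual_high_risk}")
```

In practice, such a table would be built from far richer sensor and EMA streams; the sketch only shows the triangulation step of linking self-reported relapse to objectively measured context.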

These are a few examples of questions that can be answered using the opportunities provided by technology. Answering such questions results in better theories to both explain and change behavior (as summarized in Figure 3, right side). This is highly relevant for more effective and more efficient solutions to all societal problems related to human behavior.

Competing Interests

The author has no competing interests to declare.


  1. Anderson, C. (2008). The end of theory: The data deluge makes the scientific method obsolete. 

  2. Aunger, R., & Curtis, V. (2015). Gaining Control: How Human Behavior Evolved. Oxford University Press. DOI: 

  3. Azevedo, F. A., Carvalho, L. R., Grinberg, L. T., Farfel, J. M., Ferretti, R. E., Leite, R. E., Jacob Filho, W., Lent, R., & Herculano-Houzel, S. (2009). Equal numbers of neuronal and nonneuronal cells make the human brain an isometrically scaled-up primate brain. Journal of Comparative Neurology, 10, 532–541. DOI: 

  4. Bartholomew Eldredge, L. K., Markham, C. M., Ruiter, R. A. C., Fernández, M. E., Kok, G., & Parcel, G. S. (2016). Planning Health Promotion Programs: An Intervention Mapping Approach (4th ed.). Jossey-Bass. 

  5. Bastiaansen, J. A., Kunkels, Y. K., Blaauw, F. J., Boker, S. M., Ceulemans, E., Chen, M., Chow, S.-M., De Jonge, P., Emerencia, A. C., Epskamp, S., Fisher, A. J., Hamaker, E. L., Kuppens, P., Lutz, W., Meyer, M. J., Moulder, R., Oravecz, Z., Riese, H., Rubel, J., … Bringmann, L. F. (2020). Time to get personal? The impact of researchers choices on the selection of treatment targets using the experience sampling methodology. Journal of Psychosomatic Research, 137, 110211. DOI: 

  6. Chun, K. S., Bhattacharya, S., & Thomaz, E. (2018). Detecting eating episodes by tracking jawbone movements with a non-contact wearable sensor. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 2, 4. DOI: 

  7. Cialdini, R. B. (2008). Influence: Science and Practice. Allyn & Bacon. 

  8. Cooke, R., Trebaczyk, H., Harris, P., & Wright, A. J. (2014). Self-affirmation promotes physical activity. Journal of Sport & Exercise Psychology, 36, 217–223. DOI: 

  9. Crutzen, R., & Peters, G.-J. Y. (2017). Targeting next generations to change the common practice of underpowered research. Frontiers in Psychology, 8, 1184. DOI: 

  10. Crutzen, R., & Peters, G.-J. Y. (2018). Evolutionary learning processes as the foundation for behaviour change. Health Psychology Review, 12, 43–57. DOI: 

  11. Crutzen, R., Peters, G.-J. Y., & Mondschein, C. (2019). Why and how we should care about the General Data Protection Regulation. Psychology & Health, 34, 1347–1357. DOI: 

  12. Crutzen, R., & Ruiter, R. A. C. (2015). Interest in behaviour change interventions: A conceptual model. The European Health Psychologist, 17, 6–11. 

  13. Cushing, C. C., Walters, R. W., & Hoffman, L. (2014). Aggregated n-of-1 randomized controlled trials: Modern data analytics applied to a clinically valid method of intervention effectiveness. Journal of Pediatric Psychology, 39, 138–150. DOI: 

  14. De Vries, H., & Brug, J. (1999). Computer-tailored interventions motivating people to adopt health promoting behaviors: Introduction to a new approach. Patient Education and Counseling, 36, 99–105. 

  15. Dennett, D. C. (1984). Cognitive wheels: The frame problem of AI. In Minds, Machines and Evolution. Cambridge University Press. 

  16. Epton, T., Harris, P. R., Kane, R., Van Koningsbruggen, G. M., & Sheeran, P. (2015). The impact of self-affirmation on health-behavior change: A meta-analysis. Health Psychology, 34, 187–196. DOI: 

  17. European Parliament and Council. (2016). Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC. 

  18. Faulkner, S. M., Dijk, D. J., Drake, R. J., & Bee, P. E. (2020). Adherence and acceptability of light therapies to improve sleep in intrinsic circadian rhythm sleep disorders and neuropsychiatric illness: A systematic review. Sleep Health, 6, 690–701. DOI: 

  19. Festinger, L. (1957). A Theory of Cognitive Dissonance. Stanford University Press. 

  20. Fishbein, M., & Ajzen, I. (2010). Predicting and Changing Behavior: The Reasoned Action Approach. Taylor & Francis Group. DOI: 

  21. Gardner, B. (2015). A review and analysis of the use of habit in understanding, predicting and influencing health-related behaviour. Health Psychology Review, 9, 277–295. DOI: 

  22. Gartner. (2019). Hype cycle for emerging technologies. 

  23. Giabbanelli, P. J., & Crutzen, R. (2015). Supporting self-management of obesity using a novel game architecture. Health Informatics Journal, 21, 223–236. DOI: 

  24. Ginsburg, S., & Jablonka, E. (2007). The transition to experiencing: I. Limited learning and limited experiencing. Biological Theory, 2, 218–230. DOI: 

  25. Gollwitzer, P. M., & Sheeran, P. (2006). Implementation intentions and goal achievement: A meta-analysis of effects and processes. Advances in Experimental Social Psychology, 38, 69–119. DOI: 

  26. Haardörfer, R. (2019). Taking quantitative data analysis out of the positivist era: Calling for theory-driven data-informed analysis. Health Education & Behavior, 46, 537–540. DOI: 

  27. Haedt-Matt, A. A., & Keel, P. K. (2011). Revisiting the affect regulation model of binge eating: A meta-analysis of studies using ecological momentary assessment. Psychological Bulletin, 137, 660–681. DOI: 

  28. Hekler, E. B., Michie, S., Pavel, M., Rivera, D. E., Collins, L. M., Jimison, H. B., Garnett, C., Parral, S., & Spruijt-Metz, D. (2016). Advancing models and theories for digital behavior change interventions. American Journal of Preventive Medicine, 51, 825–832. DOI: 

  29. Ienca, M., Ferretti, A., Hurst, S., Puhan, M., Lovis, C., & Vayena, E. (2018). Considerations for ethics review of big data health research: A scoping review. PLoS ONE, 13, e0204937. DOI: 

  30. Janssen, M., & Kuk, G. (2016). Big and Open Linked Data (BOLD) in research, policy, and practice. Journal of Organizational Computing and Electronic Commerce, 26, 3–13. DOI: 

  31. Kampfner, R. R., & Conrad, M. (1983). Computational modeling of evolutionary learning processes in the brain. Bulletin of Mathematical Biology, 45, 931–968. DOI: 

  32. Kaptchuk, T. J. (2001). The double-blind, randomized, placebo-controlled trial: Gold standard or golden calf? Journal of Clinical Epidemiology, 54, 541–549. DOI: 

  33. Kelly, M. P., & Barker, M. (2016). Why is changing health-related behaviour so difficult? Public Health, 136, 109–116. DOI: 

  34. Kok, G., Gottlieb, N. H., Peters, G.-J. Y., Mullen, P. D., Parcel, G. S., Ruiter, R. A. C., Fernández, M. E., Markham, C., & Bartholomew, L. K. (2016). A taxonomy of behavior change methods: An Intervention Mapping approach. Health Psychology Review, 10, 297–312. DOI: 

  35. Kok, G., & Ruiter, R. A. C. (2014). Who has the authority to change a theory? Everyone! A commentary on Head and Noar. Health Psychology Review, 8, 61–64. DOI: 

  36. Lakens, D., Haans, A., & Koole, S. L. (2012). Één onderzoek is géén onderzoek: Het belang van replicaties voor de psychologische wetenschap. De Psycholoog, 9, 10–18. 

  37. Larimer, M. E., Palmer, R. S., & Marlatt, A. (1999). Relapse prevention: An overview of Marlatt’s cognitive-behavioral model. Alcohol Research & Health, 23, 151–160. 

  38. Lewin, K. (1936). Principles of Topological Psychology. McGraw-Hill. DOI: 

  39. Luszczynska, A., Sobczyk, A., & Abraham, C. (2007). Planning to lose weight: Randomized controlled trial of an implementation intention prompt to enhance weight reduction among overweight and obese women. Health Psychology, 26, 507–512. DOI: 

  40. Mazzocchi, F. (2015). Could Big Data be the end of theory in science? A few remarks on the epistemology of data-driven science. EMBO Reports, 16, 1250–1255. DOI: 

  41. McDonald, S., Quinn, F., Vieira, R., O’Brien, N., White, M., Johnston, D. W., & Sniehotta, F. F. (2017). The state of the art and future opportunities for using longitudinal n-of-1 methods in health behaviour research: A systematic literature overview. Health Psychology Review, 11, 307–323. DOI: 

  42. Metallica. (2016). Hardwired… To Self-Destruct. Blackened Recordings. 

  43. Michie, S., Richardson, M., Johnston, M., Abraham, C., Francis, J., Hardeman, W., Eccles, M. P., Cane, J., & Wood, C. E. (2013). The behavior change technique taxonomy (v1) of 93 hierarchically clustered techniques: Building an international consensus for the reporting of behavior change interventions. Annals of Behavioral Medicine, 46, 81–95. DOI: 

  44. Millar, B. M. (2017). Clocking self-regulation: Why time of day matters for health psychology. Health Psychology Review, 11, 345–357. DOI: 

  45. Moller, A. C., Merchant, G., Conroy, D. E., West, R., Hekler, E., Kugler, K. C., & Michie, S. (2017). Applying and advancing behavior change theories and techniques in the context of a digital health revolution: Proposals for more effectively realizing untapped potential. Journal of Behavioral Medicine, 40, 85–98. DOI: 

  46. Mondschein, C. F., & Monda, C. (2018). The EU’s General Data Protection Regulation (GDPR) in a Research Context. In Fundamentals of Clinical Data Science (pp. 55–71). Springer. DOI: 

  47. Murad, M. H., Asi, N., Alsawas, M., & Alahbad, F. (2016). New evidence pyramid. BMJ Evidence-Based Medicine, 21, 125–127. DOI: 

  48. Pearl, J., & Mackenzie, D. (2018). The Book of Why: The New Science of Cause and Effect. Penguin Random House. 

  49. Peters, G.-J. Y., & Crutzen, R. (2017a). Confidence in constant progress: Or how pragmatic nihilism encourages optimism through modesty. Health Psychology Review, 11, 140–144. DOI: 

  50. Peters, G.-J. Y., & Crutzen, R. (2017b). Pragmatic nihilism: How a Theory of Nothing can help health psychology progress. Health Psychology Review, 11, 103–121. DOI: 

  51. Peters, G.-J. Y., Crutzen, R., Roozen, S., & Kok, G. (2020). The Reasoned Action Approach represented as a Decentralized Construct Taxonomy (DCT). DOI: 

  52. Peters, G.-J. Y., De Bruin, M., & Crutzen, R. (2015). Everything should be as simple as possible, but no simpler: Towards a protocol for accumulating evidence regarding the active content of health behaviour change interventions. Health Psychology Review, 9, 1–14. DOI: 

  53. Peters, G.-J. Y., & Kok, G. (2016). All models are wrong, but some are useful: A comment on Ogden (2016). Health Psychology Review, 10, 265–268. DOI: 

  54. Pinel, J. P. J., & Barnes, S. J. (2017). Biopsychology (10th ed.). Pearson. 

  55. Pinker, S. (2018). Enlightenment Now: The Case for Reason, Science, Humanism, and Progress. Penguin Random House. 

  56. Popper, K. (1935). Logik der Forschung: Zur Erkenntnistheorie der modernen Naturwissenschaft. Mohr Siebeck Verlag. DOI: 

  57. Popper, K. (1972). Objective Knowledge: An Evolutionary Approach. Oxford University Press. 

  58. Rangel, A., Camerer, C., & Montague, P. R. (2008). A framework for studying the neurobiology of value-based decision making. Nature Reviews Neuroscience, 9, 545–556. DOI: 

  59. Ritchie, H., & Roser, M. (2019). Causes of death. In Our World in Data. 

  60. Roser, M., Ortiz-Ospina, E., & Ritchie, H. (2019). Life expectancy. In Our World in Data. 

  61. Ruiter, R. A. C., & Crutzen, R. (2020). Core processes: How to use evidence, theories and research in planning behavior change interventions. Frontiers in Public Health, 8, 247. DOI: 

  62. Salkind, N. J. (2010). Triangulation. Encyclopedia of Research Design. DOI: 

  63. Scull, C. (2009). Early Medieval (late 5th-early 8th centuries AD) Cemeteries at Boss Hall and Buttermarket, Ipswich, Suffolk. Routledge. 

  64. Stappers, N. E. H., Van Kann, D. H. H., Ettema, D., De Vries, N. K., & Kremers, S. P. J. (2018). The effect of infrastructural changes in the built environment on physical activity, active transportation and sedentary behavior—A systematic review. Health & Place, 53, 135–149. DOI: 

  65. Steenaart, E., Crutzen, R., & De Vries, N. K. (2018). The complexity of organ donation registration: Determinants of registration behavior among lower-educated adolescents. Transplantation Proceedings, 50, 2911–2923. DOI: 

  66. Thomas, J. G., & Bond, D. S. (2015). Behavioral response to a just-in-time adaptive intervention (JITAI) to reduce sedentary behavior in obese adults: Implications for JITAI optimization. Health Psychology, 34, 1261–1267. DOI: 

  67. Truppa, V., Mortari, E. P., Garofoli, D., Privitera, S., & Visalberghi, E. (2011). Same/different concept learning by capuchin monkeys in matching-to-sample tasks. PLOS ONE, 6, e23809. DOI: 

  68. Van Sprundel, M. (2019). Zenuwcellen zoeken stabiele partners om brein te bedraden. NEMO Kennislink. 

  69. Vieira, R., McDonald, S., Araújo-Soares, V., Sniehotta, F. F., & Henderson, R. (2017). Dynamic modelling of n-of-1 data: Powerful and flexible data analytics applied to individualised studies. Health Psychology Review, 11, 222–234. DOI: 

  70. Vlaev, I., & Dolan, P. (2015). Action change theory: A reinforcement learning perspective on behaviour change. Review of General Psychology, 19, 69–95. DOI: 

Peer Review Comments

Health Psychology Bulletin has blind peer review, which is unblinded upon article acceptance. The editorial history of this article can be downloaded here:

PR File 1

Reviewer A. DOI:

PR File 2

Reviewer B. DOI:
