No COM-B For Old Men

The Deceptive Simplicity of the Behavior Change Wheel

Jeff Brodscholl, Ph.D.
Greymatter Behavioral Sciences

If you work in an applied discipline that makes heavy use of behavioral science to effect behavior change – and particularly if your work is in healthcare – chances are that you’ve come across the Behavior Change Wheel, or what is sometimes referred to as the "COM-B model" of behavior change. Originally pioneered by scientist-practitioners working in the field of “implementation science”, the Behavior Change Wheel was developed to solve a tricky problem:

How do we take the vast collection of theories that characterizes the behavioral science literature and distill it into an easy-to-understand “framework”, or collection of explanatory concepts, that:

  1. Can guide the development of behavior change interventions,
  2. Draws on what research has shown to be the drivers of behavior, and
  3. Does so in a way that’s comprehensive enough to be applied across a broad range of behavior change challenges?

It’s an important question to answer, yet it's also one that eludes an easy response. Without an evidence-based understanding of behavioral drivers, efforts to change behavior can end up relying on a shaky foundation of intuitions and guesses, leading interventions to be based on little more than whatever ideas happen to sound good in the moment. Such efforts might succeed through luck or a momentary stroke of insight, but the odds of success won’t be optimal, and the lack of deeper knowledge about why the interventions worked (or didn’t) will hamper any attempt to generalize from their outcomes.

Yet, try to use behavioral science to inform intervention design in a theory-driven way, and what one is bound to encounter is the veritable Noah’s Ark of causal concepts that characterizes the behavioral science literature. This is the consequence of a field that has long struggled to develop a unifying account of human behavior, yielding a thicket of findings and claims that can be challenging for even a well-credentialed behavioral scientist to wade through, let alone for an intervention design team with only a cursory knowledge of the science and its pitfalls.

The Behavior Change Wheel looks to address these problems. It goes about this by building on a foundation of ambitious literature reviews that have sought to create comprehensive taxonomies of behavioral drivers, behavior change techniques (BCTs), and broader ways of influencing behavior, culled from 33 behavioral theories, 19 competitor frameworks, and six BCT classification systems [1-4]. These reviews use expert consensus to make sense of this literature, distill it to something more manageable, and construct a roadmap for which BCTs are most likely to work in the presence of which behavioral drivers [5].

The framework then uses the metaphor of a wheel, made up of concentric circles, to artfully surface the various taxonomies and communicate the idea that behavioral drivers and their interventions can be linked in various ways by working across the wheel’s layers. Of course, not all combinations of drivers and interventions make sense, so the wheel is supplemented with tables that tell the user how drivers (aka "TDF domains"), intervention functions, and policy categories (i.e., ways of implementing interventions) go together, as well as what BCTs will work with what drivers. And all of this is packaged in a handy manual, filled with worksheets and tools, that pulls the reader through an eight-step process for applying the Behavior Change Wheel to the design of a behavioral intervention [6].
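Functionally, those lookup tables behave like a simple mapping from a COM-B diagnosis to a shortlist of candidate intervention functions. The sketch below illustrates that mechanic in Python; the table entries shown are illustrative stand-ins of my own, not the authoritative mappings, which appear in the published guide [6].

```python
# Illustrative sketch of how the Behavior Change Wheel's lookup tables
# operate: a diagnosis expressed in COM-B terms indexes into a fixed table
# of candidate intervention functions. The entries below are illustrative
# placeholders only; the authoritative tables are in the guide [6].

CANDIDATE_FUNCTIONS = {
    # COM-B component -> intervention functions worth considering (illustrative)
    "psychological capability": ["education", "training", "enablement"],
    "physical opportunity": ["environmental restructuring", "restriction", "enablement"],
    "reflective motivation": ["education", "persuasion", "incentivisation", "coercion"],
}

def candidate_interventions(diagnosed_components):
    """Collect the union of candidate intervention functions for a
    COM-B diagnosis, preserving first-seen order and dropping duplicates."""
    seen, shortlist = set(), []
    for component in diagnosed_components:
        for function in CANDIDATE_FUNCTIONS.get(component, []):
            if function not in seen:
                seen.add(function)
                shortlist.append(function)
    return shortlist

# Example: a behavior diagnosed as limited by both knowledge and context
print(candidate_interventions(["psychological capability", "physical opportunity"]))
# → ['education', 'training', 'enablement', 'environmental restructuring', 'restriction']
```

The point of the sketch is the shape of the reasoning, not the content: the framework hard-codes the table, and the user's job reduces to diagnosing which COM-B components are implicated and reading off the shortlist.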


The full Behavior Change Wheel. Reprinted from “Breaking barriers: Using the behavior change wheel to develop a tailored intervention to overcome workplace inhibitors to breaking up sitting time”, by S.O. Ojo, D.P. Bailey, M.L. Brierley, D.J. Hewson, and A.M. Chater (2019), BMC Public Health, 19, 1126. Used under Creative Commons Attribution License: https://creativecommons.org/licenses/by/4.0/. https://doi.org/10.1186/s12889-019-7468-8.

All this alone would likely be enough to make the Behavior Change Wheel appealing to hard-core intervention designers with a background in behavioral science. But what really gives the Behavior Change Wheel its extra kick is what it places at its center. It’s the same type of secret sauce that makes the Fogg Behavior Model feel like an explanatory holy grail – a reduction of the complexity of behavior to just three things, rendered in such an intuitive, accessible way as to make the reduction feel both sublime and immediately useful to practitioners at any level of expertise:

For behavior change to happen, there needs to be:

  • The capability to change,
  • The opportunity to change, and
  • The motivation to change

The COM-B model. Reprinted from “The behaviour change wheel: A new method for characterising and designing behaviour change interventions”, by S. Michie, M.M. van Stralen, & R. West (2011), Implementation Science, 6, 42. Used under Creative Commons Attribution License: https://creativecommons.org/licenses/by/2.0/. doi: 10.1186/1748-5908-6-42.

It’s this feature of the framework that helps to explain its broader diffusion within the marketplace of ideas. Functionally, the model acts as a more parsimonious way to organize behavioral drivers, reducing them to three driver categories rather than the larger set of 14 that appears in one of the framework’s outer circles. But what the model really does is inch the framework closer to functioning like a coherent, unified explanation of behavior, rather than being a mere collection of drivers, intervention features, and behavior change techniques. And, by expressing itself in such elegant terms, it pulls off a bit of a coup by creating the impression that all a person needs to do is boil a behavioral challenge down to one or more of three things and they’ll be set to apply the framework to generate insights and pull them through to the development of theory-based interventions.

This is exactly how I’ve seen the Behavior Change Wheel pitched elsewhere, including in projects in which I’ve been involved. In one case, the COM-B model was presented as the ideal tool to guide analysis for a project whose goal was to formulate recommendations for making a particular digital asset highly effective in supporting certain desirable everyday health self-management behaviors among its users. No steps were taken to hide the framework’s broader components, but the COM-B model was positioned so squarely in the spotlight that it pushed the other taxonomies into the background in exchange for the model’s purported ease of use. Admittedly, the broad brushstrokes with which the model was painted left me a bit suspicious, but there was little about it that I could find conceptually objectionable: It provided a clean, common language that everyone on the team could speak, and I ultimately had no trouble retrofitting specific drivers to COM-B categories, which I would have struggled to do had the model been simply invalid.

Then I went to apply the model in the nitty-gritty of my analytic work. In short order, I found myself drifting away from it. The rest of this post explains why – and offers broader lessons for anyone wanting to pick up the Behavior Change Wheel and run with it in their projects.

What the Behavior Change Wheel Leaves Unspoken

Before I go into what I think is an unappreciated challenge with the Behavior Change Wheel, it’s important to discuss what we mean when we talk about “theories” and “models”, as opposed to simply “frameworks”. The Behavior Change Wheel bills itself as a framework for driving theory-based intervention development, and it provides a model of behavior to organize its content – yet each of these concepts is distinct, with its own implications for how scientific knowledge is represented and used to generate insights and design interventions.

In a theory or a model, we don’t just have some facet of human behavior that we’re looking to explain and a collection of high-level constructs that we think can explain it. Instead, what we have is a representation of the world that’s much more granular and, often, more tightly interwoven. In the case of a "theory", we have:

  • A set of statements, or “propositions”, about the nature of the relationships between the theory’s constructs, which help us to say things like, “the more there is of A, the more there will be of B, but not vice-versa”
  • An additional set of statements that precisely define the constructs that the theory considers important
  • Yet other statements that put boundaries on what it is that the theory is looking to explain

These components of a theory can be quite complex. They may specify causal relationships in which the impact of A on B is dialed up or down by some third variable C, causal pathways in which A impacts B through C but there’s no direct pathway from A to B, and so on. They may also define constructs in ways that go well beyond the everyday understanding of the labels that are affixed to them. Either way, they become the key underpinnings of a “model”, which acts as a more formal (i.e., logical or schematic) way of representing a theory’s constructs and their relationships – sometimes quite crisply, sometimes more conceptually, spanning a broad range of findings and leaving room for pockets of uncertainty (and sometimes even standing in for theory itself) – but always with enough detail and connection to granular, theory-like statements as to make them meaningful bases for prediction and testing in concrete, well-defined contexts.

It can be easy to dismiss the process of theory or model building as if it were little more than specifying how many angels might dance on the head of a pin – but it is hardly so. On the contrary, it’s the care given to the specification of models and theories that makes it possible to conduct research that can speak to whether they are correct or not, as well as to whether the behavioral processes they posit are behind the behaviors observed in a specific context of interest. It’s also what makes it possible to make connections between the ideas a theory or model expresses and other pockets of scientific knowledge. And it’s these efforts combined that make it possible to take the learnings associated with a theory or model and extrapolate them to real world applications. 

What’s important to note about the Behavior Change Wheel is that, while it might be "grounded" in theories and models, it doesn’t look to embrace the nuances found in either. That’s because it’s a framework, and, as a framework, it deliberately avoids the details of a causal theory in favor of reducing scientific knowledge to a few manageable taxonomies and simple hard-coded rules for what goes with what. It may tip its hat to theory with the COM-B “model”, but the model’s constructs are painted with far broader strokes than is typical of models found in the literature, so as to make them applicable across an exceptionally broad range of problems. It then substitutes guidebooks and tools for a behavioral scientist’s way of using theoretical knowledge, along with labels meant to provide a semblance of similarity between formal scientific ideas and everyday concepts – all with the laudable goal of making the material accessible and spreading the power of theory-based insight and intervention-building to a broad base of practitioners.

All intuitive, all quite practical. And yet it is exactly these features of the Behavior Change Wheel that can cause its application to go off the rails. 

I don’t know how I could have completed my work on the asset described earlier had I not activated a considerable amount of background knowledge about the processes implicated in the behavior the asset was meant to impact. This wasn’t just a matter of knowing something about the processes involved in goal setting, preference formation, self-regulation, learning, coping, and reasoning (to name just a few); it was also a matter of knowing how these processes work together and unfold over time, and how they connect to other processes that can influence them. It was this rich, interconnected representation of behavioral drivers that allowed me to make educated guesses about how the user’s behavior might break down and about the tactics that could be incorporated to make the asset more behaviorally meaningful. To get there, I had to step outside of the Behavior Change Wheel and refer back to several literatures so that I could pull in new material, evaluate it, put it all together, and render it to a level of detail at which I could use it to support recommendations I could feel comfortable with – none of which the Behavior Change Wheel alone could have assisted with. That kind of backfilling might be fine for someone with a behavioral science PhD, but it’s not so fine to push onto a team that must rely on the framework as a substitute for knowledge of the literature on which it’s based.

Alas, I’m not the only one to have come up against the Behavior Change Wheel’s unspoken gaps. As both published literature reviews and qualitative interviews have revealed, one of the biggest challenges practitioners experience when they use the Behavior Change Wheel is in figuring out how to interpret COM-B categories and theoretical domains, and how to prioritize relevant BCTs, using just the framework alone [7-10]. These struggles show up in the amount of confusion teams experience when they try to use the framework’s taxonomies to code the results of literature reviews or original research – a key step in figuring out what’s driving current behavior, and a critical input for identifying the BCTs, functions, or policies that should be considered for intervention design. Tellingly, teams have reported solving the problem through what might best be described as “judgment by committee”, in which they set aside the guide and revert to common sense to figure out how to fit findings to the framework’s constructs [10]. These solutions might very well work in a pinch, but they also introduce through the back door the very “it sounded good at the moment” thinking that the Behavior Change Wheel was designed to avoid.

The Behavior Change Wheel's Other Rough Corners

So, the Behavior Change Wheel poses a bit of a conundrum: 

  • On the one hand, it exists to distill the behavioral science literature into a set of practical tools that multidisciplinary teams can use to identify issues in behavior and develop effective interventions to address them, retaining enough of the essence of the literature to ground their work in theory-led, evidence-based ways of thinking. 
  • On the other hand, its approach leaves enough unsaid that the only way the framework can be effectively applied is if the silences are broken with the very expert knowledge that the framework was designed to sidestep. 

But this raises a question: If applications of the Behavior Change Wheel aren’t supplemented with outside expert knowledge but are being backfilled with everyday guesswork, then are these applications really as evidence-based as advertised? And, if not, do they really represent an advance from the days when intervention development was about what sounded like a good idea at the time?

An advocate might say that, if the framework helps intervention designers anchor 50% of their work on the literature’s learnings, and if the rest is based on pretty good common sense about what makes people tick, then, ultimately, the Behavior Change Wheel is doing its job. Certainly, teams do report appreciating having the framework available to guide them given its foundation in a large body of evidence-based work into behavioral drivers and behavior change techniques [9]. If that knowledge helps them feel more confident and shapes their thinking in more sophisticated ways, then might the result not very well be interventions that are of better quality?

It's an argument worth hearing out – but, at the end of the day, I don’t find it particularly compelling. It requires assumptions that are hard to sustain absent evidence that interventions designed with the framework are, in fact, more effective than those designed without it – a matter on which the jury has not yet spoken. More importantly, it ignores other aspects of the framework that really do require expert intervention, lest applications not only become overly reverent but also drive teams to new dead ends by systematically biasing their thinking away from what the literature would say if it were applied by different means:

The Behavior Change Wheel can encourage an “independent effects” way of thinking. This is a pitfall with any framework that offers up a grab bag of behavioral drivers. If all I’m told is that a particular behavior can be influenced by A, B, or C, then it becomes easy to focus on each driver in isolation and treat it as an intervention target that operates independently of the others. That’s not how behavior works:

  • A person who envisions the time and location when they will take their medication is more likely to reach for their medication the next time they walk into the envisioned context – unless their motivation to adhere to treatment was modest to begin with, in which case the mental planning will have been for naught [11].
  • Likewise, a person who copes with cancer treatment by looking online for cancer success stories may very well find a wellspring of motivation and education in these hopeful narratives – but there may be times when, depending on their personal circumstances and beliefs, they receive greater emotional benefit by making comparisons to patients who have fared worse than themselves [12,13].

These examples typify the rule rather than the exception. They reflect the fact that, if you want to understand why a behavior occurs, you need to look at the places where drivers and barriers interact, rather than merely rattling them off a checklist one by one. The COM-B model hints at these interactions, but the hint is weak – and the preponderance of the Behavior Change Wheel is otherwise tilted toward cataloging drivers, and mapping BCTs, in exactly the one-by-one fashion that encourages looking at a dynamically interacting world through the lens of independent effects [14]. Users who are accustomed to thinking about behavioral drivers in a more synergistic fashion may not be sucked in as easily by this, but the same might not prove true for a more naïve user – and that can have consequences for the insights and recommendations that end up being generated as the framework is applied.

The knowledge captured by the Behavior Change Wheel is narrower than advertised. Much is made of the Behavior Change Wheel’s theoretical comprehensiveness, yet a review of the framework’s source papers reveals a strong tendency for it to index heavily on certain corners of the literature at the expense of others. This is particularly true of the driver domains and BCTs, which are heavily steeped in the theories and interventions found in health psychology and health services research. That is no small thing: While applied fields such as health psychology may take their cue from the domain-independent efforts of basic science research, those fields have their own traditions with respect to what they borrow from or listen to – and what they do not. It’s not surprising, then, that the Behavior Change Wheel fits nicely with the social-cognitive approaches common in health psychology, but struggles to give voice to concepts drawn from other areas – behavioral economics being one. In this case, it’s not just pockets of vagueness in the framework’s constructs that need to be backfilled; it’s the framework itself, with the choices it makes about what to include or exclude, amplify or dampen, that ends up requiring interrogation and adjustment to ensure protection from its blind spots. (One significant omission: the role of culture in behavior [8].)

The Behavior Change Wheel’s foundation is good – but it's not impregnable. This last point is more about a feature of the framework that can encourage excessive reverence when a far looser approach is advisable. As noted earlier, the framework is anchored in reviews which have used rigorous methodologies to elicit and integrate expert opinion about what the scientific literature says about behavior and the effective routes to changing it. These methods are what one would expect of a strongly evidence-based effort – but that doesn’t mean that the methodological choices in these efforts have always been wise. It’s noteworthy, for instance, that some of the later taxonomies that were developed out of considerable quantitative grilling have actually proven harder to implement, and more prone to overlooking key content (e.g., certain types of behavioral drivers), than earlier versions developed via simpler means [e.g., 15]. Alas, some of that grilling turns out to have occurred in the presence of small samples with considerable respondent-level heterogeneity, yielding only modest model fit indices – exactly the conditions under which one can expect analytic outputs to have difficulty generalizing to later contexts [e.g., 2]. This is just one example, of course, and none of it is to say that the efforts themselves have been lacking in earnestness – but it is to say that they need to be taken with a certain grain of salt rather than deferred to on the basis of their apparent rigor alone.

Where This Leaves Us

So, what does all of this mean for the Behavior Change Wheel's place in the intervention designer’s armamentarium? As already noted, teams feel like they get a lot from the Behavior Change Wheel, and there’s a good reason why the framework should be expected to be of some service to practitioners with a deep behavioral science background:

  • Its taxonomies provide an excellent catalog of “don’t forget about” items to reference when diagnosing a behavioral challenge and thinking through ways to intervene.
  • Its method of organizing its material is congenial to both everyday understandings of people and much of what we know from the literature about what drives behavior and encourages it to change.
  • Lastly, it comes with a nice set of tools for supporting aspects of one’s work that would be a part of the job even if the framework weren’t used as the primary vehicle for analysis or intervention design. (The tools for identifying and defining key behaviors to change are, themselves, worth the price of the guidebook.)

Yet, none of this is to deny a fundamental fact about the Behavior Change Wheel, which is that it really can't act as an effective substitute for expert-supported insight generation or intervention development: It has too many gaps, and too many limitations, to be something that can be taken off the shelf and run with absent the support of a behavioral science subject-matter expert to provide guidance on its use. That’s not so much a reflection on the Behavior Change Wheel as it is on any attempt to create a cookbook approach to behavioral science applications. But it does mean that there can be dangers of both frustration and ersatz application when the framework is taken up without the proper support. And that means that teams will want to have a behavioral scientist on board if they want to leverage the Behavior Change Wheel the right way in bringing behavioral science to bear in their projects.

References (Where We Got All of This)

  1. Michie, S., Johnston, M., Abraham, C., Lawton, R., Parker, D., Walker, A., and the Psychological Theory Group (2005). Making psychological theory useful for implementing evidence-based practice: A consensus approach. Quality and Safety in Health Care, 14, 26-33. https://doi.org/10.1136/qshc.2004.011155
  2. Cane, J., O’Connor, D., & Michie, S. (2012). Validation of the theoretical domains framework for use in behaviour change and implementation research. Implementation Science, 7, 37. https://doi.org/10.1186/1748-5908-7-37
  3. Michie, S., Richardson, M., Johnston, M., Abraham, C., Francis, J., Hardeman, W., Eccles, M.P., Cane, J., & Wood, C.E. (2013). The behavior change technique taxonomy (v1) of 93 hierarchically clustered techniques: Building an international consensus for the reporting of behavior change interventions. Annals of Behavioral Medicine, 46, 81-95. https://doi.org/10.1007/s12160-013-9486-6
  4. Michie, S., van Stralen, M.M., & West, R. (2011). The behaviour change wheel: A new method for characterising and designing behaviour change interventions. Implementation Science, 6, 42. https://doi.org/10.1186/1748-5908-6-42
  5. Michie, S., Johnston, M., Francis, J., Hardeman, W., & Eccles, M. (2008). From theory to intervention: Mapping theoretically derived behavioural determinants to behaviour change techniques. Applied Psychology: An International Review, 57, 660-680. https://doi.org/10.1111/j.1464-0597.2008.00341.x
  6. Michie, S., Atkins, L., & West, R. (2014). The Behaviour Change Wheel: A Guide to Designing Interventions. Silverback Publishing.
  7. Cowdell, F., & Dyson, J. (2019). How is the theoretical domains framework applied to developing health behavior interventions? A systematic search and narrative synthesis. BMC Public Health, 19, 1180. https://doi.org/10.1186/s12889-019-7442-5
  8. Dyson, J., & Cowdell, F. (2021). How is the Theoretical Domains Framework applied in designing interventions to support healthcare practitioner behaviour change? A systematic review. International Journal for Quality in Health Care, 33, mzab106. https://doi.org/10.1093/intqhc/mzab106
  9. Phillips, C.J., Marshall, A.P., Chaves, N.J., Jankelowitz, S.K., Lin, I.B., Loy, C.T., Rees, G., Sakzewski, L., Thomas, S., To, T., Wilkinson, S.A., & Michie, S. (2015). Experiences of using the Theoretical Domains Framework across diverse clinical environments: A qualitative study. Journal of Multidisciplinary Healthcare, 8, 139-146. https://doi.org/10.2147/JMDH.S78458
  10. Whitall, A., Atkins, L., & Herber, O.R. (2021). What the guide does not tell you: Reflections on and lessons learned from applying the COM-B behaviour model for designing real-life interventions. Translational Behavioral Medicine, 11, 1122-1126. https://doi.org/10.1093/tbm/ibaa116
  11. Keller, L., Bieleke, M., & Gollwitzer, P.M. (2019). Mindset theory of action phases and if-then planning. In K. Sassenberg & M.L.W. Vliek (eds.), Social Psychology in Action (pp. 23-37). Springer, Cham. https://doi.org/10.1007/978-3-030-13788-5_2
  12. Taylor, S.E., & Lobel, M. (1989). Social comparison activity under threat: Downward evaluation and upward contacts. Psychological Review, 96, 569-575. https://doi.org/10.1037/0033-295x.96.4.569
  13. Buunk, B.P., Collins, R.L., Taylor, S.E., VanYperen, N.W., & Dakof, G.A. (1990). The affective consequences of social comparison: Either direction has its ups and downs. Journal of Personality and Social Psychology, 59, 1238-1249. https://doi.org/10.1037//0022-3514.59.6.1238
  14. McGowan, L.J., Powell, R., & French, D.P. (2020). How can use of the Theoretical Domains Framework be optimized in qualitative research? A rapid systematic review. British Journal of Health Psychology, 25, 677-694. https://doi.org/10.1111/bjhp.12437
  15. Mosavianpour, M., Sarmast, H.H., Kissoon, N., & Collet, J.P. (2016). Theoretical domains framework to assess barriers to change for planning health care quality interventions: A systematic literature review. Journal of Multidisciplinary Healthcare, 9, 303-310. https://doi.org/10.2147/JMDH.S107796

© 2023 Grey Matter Behavioral Sciences, LLC. All rights reserved.