Beyond Nudges to Behavioral Strategy
The Case of Sluggish Uptake of a Cutting-Edge Class of Therapy
Jeff Brodscholl, Ph.D.
Greymatter Behavioral Sciences
A Puzzle
Physicians sometimes work in categories where there have been only modest advances in treatment and the scientific understanding of underlying disease processes has been slow to develop. In one such category, there’s a chronic condition that carries a clinically significant long-term risk to the patient if the condition is not effectively managed. Physicians who treat this condition care deeply about this risk, yet a substantial number tend to believe that its rate of occurrence is rare. They sometimes hold to this belief even after seeing epidemiological data indicating that the risk is more common than thought, and despite vividly recognizing its consequence from their own practice as one that can precipitate an all-hands-on-deck upending of the patient’s treatment once it emerges. There’s a newer drug for the condition that can reduce the risk if given early, but its uptake has been hampered by physicians’ hesitancy to move toward new, emerging treatment paradigms even when those paradigms represent demonstrable advances over the condition’s long-held standard of care. Evidence suggests that this hesitancy co-occurs with the gap between perception and reality in the physicians’ risk estimates, lending credence to the view that the way to break through is to help physicians appreciate the true long-term disease risk and thereby increase their urgency to do something about it.
- Question: Presuming the link between risk perceptions and treatment use is correct, what tactics could be used in communications to help physicians be more receptive to the true risk of disease complication estimated in peer-reviewed epidemiological reports?
I was called in several years back to be the lead behavioral scientist on an engagement that had something like this case study as its centerpiece. The project was intriguing, as it offered the opportunity to look carefully at the reasoning and treatment behaviors of a fascinating group of physicians, much of which lent itself to interesting hypotheses about the root causes of the physicians’ judgment and decision tendencies and the opportunities to tailor communications to suit them once a behavioral science lens was applied.
But what eventually struck me about this project was what it would come to say about what we stand to gain when the firepower of behavioral science is trained away from narrow tactic development to the business of full-on strategic thinking – precipitated, in this case, by a robust application of evidence-based knowledge to the constellation of concerns, thought patterns, and behaviors of physicians whose beliefs and practices were proving unshakable in the face of an evolving treatment landscape, and even some forms of credible scientific evidence. This shift wasn’t part of the original project remit, as the client considered their strategy to target risk perceptions to be settled, leaving it only a matter of finding the tactics that could work to implement it – a matter that now had considerable urgency given where the client was in their timeline to turn their strategy into effective action.
Yet, as the project evolved, it became difficult to ignore how much was being lost by making tactical problem-solving the sole reason for turning to behavioral science for the client’s needs – a loss that can be exacerbated when notions about nudges, habit-forming methods, and other technocratic behavior change techniques come to dominate the science’s application, but which starts to dissolve when we engage in the more thorough work of behavioral analysis and solutions-building that I’ve argued for in other posts. Indeed, I’d argue that it’s precisely in a case like this one that the deeper work becomes essential to unlocking the science’s potential, pointing our thinking in directions that can be a much better match for the problem we need to solve than is possible when all we have is a grab bag of nudges and other shiny behavioral tactics to support our efforts. And I’d argue that it’s the inherent limitations of those techniques that often require us to do the heavy lifting to ensure their proper use in any case – so why not push the envelope and follow the resulting leads as far as they take us, anyway?
I’ll use this post to talk a little bit more about the case, including how I leveraged behavioral science to develop a picture of the physicians that could be used to find evidence-based methods for communicating disease complication risk that could improve upon what the client had been inclined to do given the tools in their communications toolkit. I’ll also show why the result paves the way toward a strategic idea that, in my judgment, would have had more going for it than any communications tactics that followed from the analysis, however much they might have improved over the tactical status-quo. And I’ll then talk about what I see as the implications for how we can get more from behavioral science when we make it a partner in strategic, systems-like thinking rather than relegating it to the task of tactic development alone.
The Case in Depth: A Diagnosis to Drive Tactical Prescriptions
First, the case: This involved a chronic condition that has a history of being difficult to properly diagnose and treat. It is a serious condition, given to unpredictable periods of flares and remission, with a broad range of effects that can vary considerably from person to person. Symptoms can be debilitating, and broader long-term complications can emerge that have negative impacts on both lifespan and the patient’s quality of life. Underlying disease mechanisms remain mysterious, but there is a candidate set of treatments, cutting across different classes, that has been built up over decades from which physicians can select to successfully tackle flares, reduce symptoms, and stall disease progression, the trick being finding the one that will be right for the patient at hand.
It was in this context that the client’s treatment was situated, itself part of a newer, cutting-edge class of therapies that had already shown great benefit in this condition and in multiple others. In this case, the client’s treatment had the edge of data demonstrating not only that its efficacy could beat the standard of care, but that it could also be effective in mitigating one of the condition’s long-term complication risks as alluded to earlier. Yet, despite increasing clinical experience and comfort with this class of treatments, a nontrivial percentage of specialists were staying with older therapies, and not demonstrating openness to the client’s drug, when it came to treating the condition in question. As it turned out, many of these physicians were also expressing confidence that the long-term risk the drug could mitigate was rare, sometimes discounting epidemiological evidence of the risk’s true prevalence when it stood in contrast with the physicians’ own expectations.
The case seemed, in short, like a repeat of the late- and non-adopter problems that are ubiquitous in healthcare – albeit likely driven in this instance by dynamics that would have been unique to the combination of physician specialty, condition, and professional context that was central to the case at hand. To adequately address the core challenge, it was necessary to develop a better understanding of these dynamics so that tactics could be selected that would have the best chance of increasing physician receptivity to data, given the forces that were likely attenuating the risk perceptions and standing in the way of the client’s drug being utilized. And that meant pulling together as many facts as could be marshaled about the physicians and their world that could act as clues to the concerns, practices, and mental processes they were relying on as they treated the condition of interest.
What the facts seemed to suggest was this:
- These were hardly incurious or habit-bound physicians. On the contrary, they understood the gravity of the conditions they were tasked with treating, took their implications for patients seriously, and seemed to revel in the opportunity to unravel the puzzles that each patient presented them with, often playing an active, creative role in clinical judgment and decision-making in lieu of formal published guidelines to support them.
- They also thought in ways that were exceptionally concrete and experiential: They emphasized what made each patient unique, attended closely to the phenomenological manifestations of disease (i.e., symptoms) which they then made central to their case representations, and then made treatment decisions by recalling past cases, defined by similar symptom, sign, and lab value clusters, for whom a particular treatment had appeared to be effective. They had a well-developed sense of the set of treatment options available to them, and they relied on the if-then rules implicit in past case exemplars to flexibly select and tailor treatments based on the unique, granular constellation of case features that was presented to them in the moment.
The fact that the physicians relied on a curious, experience-based reasoning style to problem-solve made sense in a specialty area known for conditions with significant heterogeneity, little associated disease mechanism knowledge, and scant guidelines – circumstances to which the physicians would have needed to have been well-adapted if they were going to be effective. What it didn’t do, though, was explain why they would be reluctant to use the client’s treatment or would downplay the long-term risk that the treatment could address, particularly given their own recognition of the limitations of standard therapies.
Some other clues proved to be illuminating:
- The first had to do with the evidence used to assess long-term complication risk. As with diagnosis and treatment, the physicians extended their experience-based reasoning style to their risk perceptions, prioritizing personal clinical experience over what epidemiological data might tell them – at times, going so far as to actively disparage such data on various post-hoc grounds. Their experiences with the condition often included fluently recalled, disturbing memories of cases in which the risk came to pass – but they also included steps the physicians had taken with patients to vigilantly monitor for signs of disease progression, along with steps they had actively pursued, and clearly understood how to take, once complications emerged. And because the physicians based their risk estimates on their own local prevalence rates, they also opened themselves to experiences in which occurrences of the complication in question could have been distributed sparsely across time and cases – quite possible given the magnitude of the true aggregate rate and how it would be expected to play out locally on purely statistical grounds.
- Then there was the nature of what the physicians focused on when they evaluated treatment pros and cons, which was frequently downside-heavy. In effect, they recognized the limitations of current treatment practices, but they worried about new treatment risks as much as they worried about the condition they were treating. And they did so in a context where the standard of care had conditioned them to treat reactively, forcing them to adopt a state of vigilance with a condition that, historically, had always been unpredictable and never curable.
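The statistical point in the first clue deserves a concrete illustration: even when an aggregate complication rate is far from negligible, many individual physicians can still see few or no cases in their own panels. The numbers below are purely hypothetical (a 5% true rate and a 40-patient panel, neither drawn from the actual case), but a quick simulation sketches the effect:

```python
import random

random.seed(0)

# Hypothetical figures for illustration only; neither comes from the case.
TRUE_RATE = 0.05      # assumed aggregate long-term complication rate
PANEL_SIZE = 40       # assumed number of relevant patients per physician
N_PHYSICIANS = 10_000 # number of simulated physicians

def observed_cases(panel_size: int, rate: float) -> int:
    """Count complications in one panel if cases occur independently."""
    return sum(1 for _ in range(panel_size) if random.random() < rate)

counts = [observed_cases(PANEL_SIZE, TRUE_RATE) for _ in range(N_PHYSICIANS)]

# Fraction of simulated physicians who observe at most one case, despite
# an expected count of PANEL_SIZE * TRUE_RATE = 2 cases per panel.
few = sum(1 for c in counts if c <= 1) / N_PHYSICIANS
print(f"Expected cases per panel: {PANEL_SIZE * TRUE_RATE:.1f}")
print(f"Share of physicians seeing <= 1 case: {few:.0%}")
```

Under these assumed numbers, the expected count per panel is 2, yet roughly four in ten simulated physicians observe at most one case; that is exactly the kind of sparse local experience that could anchor a belief that the complication is rare.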
With the facts assembled, a behavioral diagnosis was developed that emphasized the presence of three things:
- A bias toward underestimating the complication risk owing to the physicians’ high levels of personal involvement in monitoring and responding to it (effectively encouraging a feeling of personal control that is known to drive risk perceptions downward), combined with the statistical artifact of how the risk would likely manifest at a practice level – all potential counterweights to any tendency to overestimate the risk arising from fluent recall of its past occurrences;
- A chronic inclination toward a reasoning style, well-adapted to the nature of the domain in which the physicians are specialized, that prioritizes experiential data, pattern recognition, naturalistic decision-making, and intuition over theory-based, hypothetico-deductive reasoning – in this case, placed at the service of problem-solving conducted with high levels of engagement and curiosity; and
- A motivational orientation that sensitizes physicians to the negative consequences of the condition (including making them vigilant in risk detection, however unlikely they judged it), but also makes them likely to overlook a treatment’s potential benefits in favor of the possible downsides of change.
These components had a good chance of being synergistic, with the reasoning style reinforcing the emphasis on intuition-based risk judgments over abstract data, the style’s focus on concrete, local facts being a good fit with the known cognitive preferences of an avoidance-like orientation, the orientation itself being amplified by the heightened levels of engagement, and even the active disparaging of data over experience being explained, in part, by the prevention-minded desire to avoid untoward influences on beliefs felt to have good standing. Together, they painted a portrait of the physicians approaching the focal condition less like bench scientists than like hunter-gatherers who, without the benefit of clear guidelines or solid disease process theories, needed to adapt their thinking and motivational styles to feel their way through a clinical terrain that was heterogeneous, unpredictable, and threatening – all requiring curiosity, intuition, and fast-and-frugal thinking to keep up with the terrain while taking steps to protect against behavioral disruptions, missed signals, and mistakes that could harm the patient along the way. The portrait needed to be treated as provisional absent the opportunity for further testing, but it provided the most coherent, parsimonious fit to the available evidence relative to other alternatives that were considered. And it created a solid base from which to recommend evidence-based tactics, from data presentation formats and outcome framing methods to methods for building causal narratives, that could potentially deliver information with the right resonance, support more hypothetico-deductive thinking styles, and allow epidemiological data to land with greater force.
When the Problem Seems Big: From Diagnosis to Strategic Idea
What continued to itch at me as the project moved forward, though, was the sense that what was emerging from the diagnosis was pointing to something much bigger than what could possibly be adequately addressed within the remit of finding better communication tactics alone. As noted, there was good reason to suspect that whatever might have been driving physicians’ reluctance to use the client’s treatment likely went well beyond perceptions of the long-term risk the treatment could mitigate, even if those perceptions made some direct contribution to the reluctance in and of themselves. There was also good reason to believe that the treatment decision and risk perception drivers were embedded in a system of mutually-reinforcing processes that would need to be addressed holistically if the most potent solution was going to be identified. And while published data and practical experience suggested that the recommended tactics stood a good chance of having some systematic effect on risk perceptions if implemented with care, they also suggested that the effects would likely be modest, diminishing the more the perceptions were being held in place by a multitude of influences, and having even less effect on treatment behavior given the many other forces likely converging on treatment decision-making.
Yet, suppose the project had not been constrained as described, and that there had been room to take a more expansive approach to the bigger problem the client was truly trying to solve. Suppose that, in that case, we brought the above insights to the center of attention and followed their implications in whatever direction they might lead us, even if it wasn’t toward tactics aimed at the physicians’ perceptions of long-term complications risks. What, then, might the suggested solution have looked like?
The inference regarding the physicians’ motivational orientation suggests an intriguing possibility. Fundamental to a chronic concern with avoiding negative outcomes is a strong desire not to take actions that bring about those outcomes through freely-willed, self-inflicted errors. Error avoidance in judgment and decision-making means, among other things, not allowing one’s judgments and preferences to be biased by outside influences – but it also means paying attention to signals that one’s beliefs are sound, and that one’s choices are on the right course. When rising to the level of self-inflicted errors, it means not actively choosing in a way that brings one’s beliefs and actions under the control of influences that are bad, but it can also mean not actively resisting aligning those beliefs and actions with ones that are better.
Social signals are a well-understood source of information about the correctness of actions – the phenomenon often captured in notions of “social proof” and “descriptive norms”, the latter referring to information available to observers about the number of people who hold a particular belief or behave a certain way in a certain context. These signals can be highly facilitative, but they can also be fraught: The mere fact that a large number of people do X doesn’t mean that X is a behavior worthy of emulating, the obvious error being to give in to the behavior of the crowd when the crowd is faulty. People intuitively understand this, and it’s one of the reasons they rely, however imperfectly, on other sources of information to infer whether the behavior and opinions of others are worth taking into account. But if evidence is available that the crowd’s behavior reflects sound choice, then the error can start to lie with the decision to resist the prevailing wind – and the inference that one is making this error willfully can become increasingly nonignorable as the number of people moving in that direction grows over time.
Thinking about behavior change through this social-psychological lens shifts the conversation away from discrete communication tactics to a broader strategy based around something like the building and nurturing of macro-level social events – a strategy that's already within reach when one has connections to an emerging vanguard that has the wind at its back and the proper proof points to legitimize its existence. In this case, the strategy gains its appeal by directing certain prevailing winds among physicians toward a motivational process that, itself, has multiple downstream consequences for thinking, feeling, and action. Medical science is not lacking precedent for the role of such movements in the shift toward newer, better treatment practices, and the methods to harness their dynamics are not alien to organizations that are attuned to the way innovation diffusion often occurs through social processes. Those methods are quite different from the tactics one might use to communicate disease risks or treatment benefits, but there’s a body of practical and social scientific knowledge that can be leveraged to identify them, and they can be turned into a cohesive plan of attack when selected and organized within a strategy that is, itself, anchored in behavioral insights.
The Takeaway: When Nudges Need to Take a Back Seat to Behavioral Strategy
I surface this example because I think it helps support an important point about the benefits we stand to gain when we pull behavioral science onto broader strategic ground. In a case like this one, we profit when we turn attention away from the narrow effort to find behavior change techniques and, instead, bring something more like a strategic systems mindset to the battle – one that encourages us to see the full nexus of influences on the thoughts, feelings, and behaviors that are relevant to our objectives, and then helps us find the category of solutions that is going to be a match for the complexity of the system our diagnostic work has illuminated. Acting on this principle means looking strategically for the set of triggers associated with the one system component that can have the broadest downstream consequences within the system as a whole – or, at least, taking steps to cobble together as many different triggers as can be pulled to chip away at the system in a strategically coherent way from multiple smaller angles. In our example, the idea of harnessing social forces to promote a breakthrough in the physicians’ decision-making and perceptions emerges as a credible idea because it fits the bill: It takes seriously what the science can tell us about the mental processes implicated in the physicians’ concerns, thinking styles, and decision preferences and how they work in toto, zeroes in on the subset of processes that stands to have the biggest impact on thinking and behavior given how the broader set of processes is expected to interact, and then reaches for the strategic hammer that, in principle, has the most to offer given its relevance to the identified target and what’s known about its ability to command attention and motivate.
What emerges is the foundation for a behavioral strategy – one that temporarily foregoes the focus on tactics to see the system holistically, find what will matter to provoke change in the bigger scheme of things, and create the frame within which actions to promote change can be selected, organized, and designed.
But just as the example points to what we can gain when we bring behavioral science up to the level of big-picture strategic ideas, it also forces us to confront the ways in which a purely tactical approach to problem-solving, focused on the promise of behavior change techniques derived from the behavioral science literature, can be genuinely small. To be sure, simple maneuvers such as message frames and information presentation formats, as well as choice architectures, incentive manipulations, motivating cues, and quick interactive exercises, can have demonstrable impacts on the way people understand and act within their world, and it is this reality among others that has made behavioral science attractive to professionals in product design, communications, and policy implementation alike. These techniques might very well have something going for them under certain circumstances, but the magnitude of that “something” can be easy to overstate: Witness the fate of “nudges”, which have had a history of yielding small, heterogeneous, and sometimes unreproducible effects – a fact sufficiently well-established at this point that any one nudge should be presumed to have a small aggregate effect that varies significantly by person, context, time, and realization unless proven otherwise [1-6]. A similar lesson turns out to apply to techniques that go beyond nudges narrowly defined [1], and it needs to be generalized to almost any method for influencing people’s thoughts, feelings, or actions that has as its defining feature the attempt to fix the complex with the simple and small.
Summary
In closing, it’s worth stressing that there’s nothing in the above argument that forecloses on the effort to apply nudges, behavior change techniques, or other behavioral science-based maneuvers to behavioral tactic or solutions design when the applications are clearly appropriate. Sometimes, tactic development alone is the remit and there simply isn't room to revisit strategy, while the need to find good tactics can still benefit from what the field's collection of tested maneuvers stands to tell us. Moreover, there may be times when the application of these maneuvers in tactic development is likely to be relatively safe, because the behavior to be supported or changed is simple and one-off, the timescale from intervention to behavior is going to be short, the solution is going to have a form that’s self-enclosed, and development will proceed under a testing regime that allows us to see the real-world effect of the solution we're implementing.
Yet, none of this is to contradict the broader point of the argument either, which is that there are times when the behavioral science nose needs to be lifted into the realm of strategy if we're going to benefit from what the science can do for us – and that there are real costs that can be incurred when tactics alone become the focal point for behavioral science applications. Sometimes the challenge with a behavior we need to confront isn't going to be so easy to solve: Its drivers may be complex and entangled, the desired outcome may be nontrivial (e.g., a change in thinking that persists well beyond the point of intervention), and the possible solutions may live in a space that is limited and constrained. In these cases, taking a step back to see the forest for the trees becomes vital to finding and evaluating the best overall targets for action – key to informing a behavioral strategy from which can then flow all the thinking that goes into finding, evaluating, selecting, and organizing the specific tactics that can bring about the outcome we seek. Bringing a strategic mindset to the table can be critical even when strategic decisions have already been made, as it can incentivize us to keep our eyes on the way behavior is embedded in complex dynamic systems, size up tactics for their likely value in proportion to the complexity of those systems, and make choices that go beyond resting on the ostensible magic of any one behavior change technique alone. Unleashing behavioral science for this broader ambition can be much more fruitful than restricting ourselves to the technocratic business of finding this tactic or that, but this effort can be short-circuited when the discourse around behavioral science becomes bound up in an endless discussion of nudges, tiny habits, and other clever minutiae.
We can avoid that fate by giving ourselves the tools to develop behavioral insights with the depth and breadth to match the complexity, and then making a habit of thinking about behavioral science as being a valuable partner in strategy, not just in tactic development.
References (Where We Got Some of This)
1. Albarracin, D., Fayaz-Farkhad, B., & Samayoa, J.A.G. (2024). Determinants of behaviour and their efficacy as targets of behavioural change interventions. Nature Reviews Psychology, 3, 377-392. https://doi.org/10.1038/s44159-024-00305-0
2. Luo, Y., Li, A., Soman, D., & Zhao, J. (2023). A meta-analytic cognitive framework of nudge and sludge. Royal Society Open Science, 10, 230053. https://doi.org/10.1098/rsos.230053
3. Maier, M., Bartos, F., Stanley, T.D., Shanks, D.R., Harris, A.J.L., & Wagenmakers, E.-J. (2022). No evidence for nudging after adjusting for publication bias. Proceedings of the National Academy of Sciences of the United States of America, 119, 1-2. https://doi.org/10.1073/pnas.2200300119
4. Mertens, S., Herberz, M., Hahnel, U.J.J., & Brosch, T. (2022). The effectiveness of nudging: A meta-analysis of choice architecture interventions across behavioral domains. Proceedings of the National Academy of Sciences of the United States of America, 119, e2107346118. https://doi.org/10.1073/pnas.2107346118
5. Szaszi, B., Goldstein, D.G., Soman, D., & Michie, S. (2025). Generalizability of choice architecture interventions. Nature Reviews Psychology, 4, 518-529. https://doi.org/10.1038/s44159-025-00471-9
6. Szaszi, B., Higney, A., Charlton, A., Gelman, A., Ziano, I., Aczel, B., Goldstein, D.G., Yeager, D.S., & Tipton, E. (2022). No reason to expect large and consistent effects of nudge interventions. Proceedings of the National Academy of Sciences of the United States of America, 119, 1. https://doi.org/10.1073/pnas.2200732119
© 2026 Grey Matter Behavioral Sciences, LLC. All rights reserved.
