
When Do, and Don’t, the Lists of Biases Make Sense?
A Foray Into a Common Use of Behavioral Science
Jeff Brodscholl, Ph.D.
Greymatter Behavioral Sciences
If you were a future historian who wanted to understand how people in our time made use of the science of behavior, you’d probably want to take a close look at the infographics, card sets, Miro boards, and other artifacts that have emerged over recent years to capture the lists of people’s mental shortcuts and biases in their judgments and decisions. These lists, which aggregate behavioral phenomena uncovered across many decades’ worth of behavioral science research, contain many of the behavioral peculiarities that have been of interest to researchers in the behavioral sciences, often for good reason: They’ve provided the fodder for recognizing the ways that people are “predictably irrational” and for developing models that reflect the way people actually think, feel, and act, as opposed to the way they are assumed to behave in everyday discourse or in traditional economic models of preference and choice.
That these lists would eventually become ubiquitous in applied work is not surprising when you consider that it was the emergence of behavioral economics as a recognizable, credible school of thought that sparked most of the contemporary interest in applied behavioral science to begin with. It’s through behavioral economics that most designers, strategists, and other professionals get introduced to concepts like the “endowment effect”, “loss aversion”, “framing”, and “hyperbolic discounting” for the first time – and it’s their packaging into narratives about “nudges” and “System 1 processes” that has made them appear part of a better way of understanding human behavior and taking effective steps to intervene with it.
Yet I think the popularity of these lists has always gone well beyond their association with behavioral economics per se to what makes their individual items so appealing unto themselves:
- They highlight regularities in human behavior that aren’t always obvious, creating opportunities for surprise and delight that seem to explain the more puzzling quirks in people’s patterns of thinking and doing;
- They’re “snackable” in the sense that you can read a single-sentence description of each bias or heuristic and have enough to come away feeling like you “get it”;
- There are a lot of them – enough to cover almost any case you can imagine, not to mention ladder up to the kind of optic that leaves one feeling one is in the presence of something profound;
- And they’re easy to disseminate, allowing them to capture the imagination of a broad base of professionals while being consumable in even the most fast-paced day-to-day practitioner work.
The question, though, is whether the proliferation of these lists has necessarily been a good thing – or, at least, an unqualified good in the way that, I suspect, the producers and consumers of these lists have sometimes assumed it to be.
In this post, I provide my own opinion on this matter based on my personal experience using biases, heuristics, and other behavioral phenomena as part of my “lens” when solving problems in life sciences work, combined with the way I’ve seen these concepts leveraged within market research, communications, design, and other industry contexts. As with everything in life, the answer to the above question is, I believe, complex. Yet, I do think there are some clear distinctions we can make between when these lists are of practical value and when they cease to help. And I think that, if we take a little bit of time to unpack this, we can learn some valuable lessons about when and how to use the items on these lists – and when to back away from them.
The Common Approach: How Biases and Heuristics Have Often Been Used in Applied Work
To ground the conversation a bit, I’ll start by presenting a rather striking example of the type of bias list that has often made the rounds in industry – this one in the form of an infographic called the “Cognitive Bias Codex” that first emerged in 2016 and has popped up in a number of digital venues ever since:

"The Cognitive Bias Codex", by Buster Benson and John Manoogian III. Data by Wikipedia. Used under CC BY-SA 4.0: https://creativecommons.org/licenses/by-sa/4.0/.
Now, to be clear, the Codex wasn’t ever intended to be the equivalent of a peer-reviewed summary of the literature on biases up to its publication date; it was simply one technology professional’s good-faith attempt to take the list of biases he found online and organize it into something that would be manageable for practitioners who might want to use these concepts in their own work. (It’s this origin story that helps explain some interesting peculiarities of the Codex, a matter to which I’ll return later.) That said, its high-volume, line-item approach is hardly unrepresentative of the types of bias lists that have appeared elsewhere; indeed, the approach has echoes in the relatively more content-heavy lists such as The Decision Lab’s biases webpage, which, while containing more exposition, often lead with the type of between-the-eyes, one- or two-sentence gist definitions that encourage snacking, followed by the kind of content that adds texture while staying within Behavioral Science 101 territory.
So, you work in marketing or communications or product, program, or service design, you’re motivated to leverage behavioral science in your work in some way, and you come across an infographic like the Cognitive Bias Codex. If you’re lucky, you come across the version that already has a single-sentence definition of each bias so that you don’t have to go look it up online. What do you take away from it?
Well, for starters, there’s the “food-for-thought” angle, or the fact that you now have a bunch of things about people that you might not have thought to consider had it not been for your encounter with the infographic. In other words, what you have is a tool for ideation and inspiration – a set of concepts that helps you look at people and take actions with them in a new way that may prove useful under certain well-contained circumstances, potentially leaving you better off than if you hadn’t had the tool at all. Some examples:
- You have observational data from a virtual store study and, looking at the verbal self-reports and choice behavior of a key customer segment, you come to suspect that they may be falling prey to “anchoring” and “decoy effects” in their product comparisons, and also showing signs of “congruence bias” in product information search, whenever they confront visually busy shelves with large numbers of competing brands. You provide this interpretation to your client along with your rationale, with the goal of simply helping them understand that their customers may have information processing limitations that could preclude strategies predicated on ever-expanding choice sets or customers' ability to sift through many subtle variations on a core product.
- Alternately, you need to design the user interface for an app, and there are steps in the user journey where you need to create a behavioral speed bump, or add “sludge”, to keep the user from taking a mindless action that might not be consonant with their goals. You’ve come across something about a “right visual field preference” that people evince when searching for words or interacting with tool-like objects [1,2], and you decide to include this in your consideration set when creating menus that are explicitly designed to interrupt the user's sense of flow (e.g., positioning an “OK” button to the left of “Cancel” in a pop-up that is designed to confirm an irreversible action, e.g., “Clear trash”).
These aren’t necessarily unreasonable ways to make use of biases and heuristics in insight, strategy, or solution design work – and the fact that the applications are more “creative” than rigorous doesn’t necessarily doom them to producing bad outcomes. A product team tasked with developing an e-commerce website that’s relatively frictionless and results in minimal customer complaints is likely to do better with notions about “choice overload”, “salience”, and “default bias” in their quiver than if they simply sat in a room talking amongst themselves, with only common wisdom to inform their notions about their users’ needs or to make sense of any user data they might have at their disposal. A similar comment could be made about the development of communications materials, or the interpretation of market research data.
In every one of these cases, more than enough room exists for a practitioner to benefit by taking a high-level understanding of biases and heuristics and turning it into grist for a creative thought process, irrespective of the depth of knowledge they have about these concepts, or even how well they truly align with what drives the behavior of the people to whom an action is targeted. That may seem like it shouldn’t be so, yet it’s not unlike what may, in fact, be a bit true in other forms of applied social science – most strikingly, in the treatment of mental illness, where, for a long time, research suggested that, for a broad range of mental health conditions:
- Some psychotherapy is usually better than none; and
- In the aggregate (and that’s key), it almost doesn’t matter which form of psychotherapy is used [3].
That’s a bizarre conclusion when you consider the ways in which different schools of psychotherapy often make wildly different assumptions about people that are, at times, impossible to reconcile with one another – yet it makes some sense once you allow for the way that any systematically learned approach to people and how to intervene with their mental health may elevate the practice of psychotherapy in ways that have genuine benefit, even if the mechanism by which the benefit occurs is not understood. Just as it is with psychotherapy, so may it be with the application of ideas about biases and heuristics to the development of products, services, communications, or other interventions, where having just enough systematic knowledge of behavior, anchored in just enough reality, may be sufficient on average to create a noticeable improvement from where one would have landed had one only used a lay understanding of people to guide the work. And that may be perfectly fine, provided:
- You’re up-front and honest about how you’re using these concepts in your work (e.g., as reasonable guesses, hypothesis generators, etc.);
- The consequences of being wrong in the specific case you’re working aren’t going to be particularly significant;
- If the consequences of being wrong could be significant, there's going to be opportunity to put whatever follows from your applications of these concepts to the proper test before they’re unleashed on the world or given further investment.
Why the Common Approach Doesn’t Work (and Shouldn’t Be Trusted)
The problem, of course, is that the conditions we just outlined are rarely ever met. Most decision-makers aren’t particularly tolerant of living within shades of grey as it is; most often, they need to feel that the statements you make about the people with whom they need to take action are firm enough that they can feel comfortable proceeding on them – all the more perverse incentive to leverage concepts from behavioral science to give one’s ideas the patina of authority even when all one is doing is lightly “surfing the science” for thought-starters and storytelling purposes. Decision-makers can take their risk-aversion to an extreme, but that doesn’t mean their attitudes are entirely irrational: Most actions have the potential to do nothing or, worse, produce unintended bad outcomes, with consequences that can range from lost investments and competitive failures to customer backlash and, in some cases, outright harms. Tech companies may be able to test and iterate their way out of the risk, but such rapid-fire testing regimes are not always possible in other domains – and that creates pressure to get the front-end thinking as close as possible to the mark so that the solutions that follow will be in the best position to succeed or, at least, not backfire (and even then, without guarantees).
If the items on lists like the Cognitive Bias Codex were solid, simple, unconditional, and easily mappable to what we learn about people when we research their behavior, then their straight-out-of-the-box applications to front-end insight and solutions ideation work would be the salve for all these issues. Unfortunately, they aren’t – and it’s this reality that causes the utility of these lists to break down so quickly.
What Biases and Heuristics Look Like Under the Microscope
Start with the size of the phenomena on these lists. Just because a bias or heuristic has appeared in the literature with a formal label and a tidy definition doesn’t mean it’s necessarily impressive. Some biases have small effect sizes that may be observable only under the right combination of high measurement sensitivity and well-controlled laboratory conditions. Indeed, some have turned out to evaporate once they’ve been subject to exact replication attempts or the original research methods have been tweaked to account for potential confounds and other technical issues. That’s increasingly looking like the direction in which the famous "Dunning-Kruger effect" may be headed [4-8], and the replication problem has already plagued phenomena such as incidental priming effects that were once all the talk but then fell off the pedestal once their shaky reproducibility started coming to light [9-12].
Yet, even when a bias or heuristic does prove capable of withstanding scientific scrutiny, that doesn’t mean we can assume it’s going to be a particularly strong driver of the specific behavior we’re wanting to understand or influence. The problem isn’t only one of misapplication; it’s also one of excessive reductionism, in which we’re seduced into thinking that a bias or heuristic we believe lurks in a behavior we care about must contribute to that behavior in an important way when the true path from cause to effect is likely to be much more complex. To give an example:
- A physician who’s comfortable treating a particular condition with the standard of care could very well have an “illusion of control” that causes them to underestimate a disease risk that persists even under standard therapy, but that doesn’t mean that their lack of interest in a new drug that promises to reduce the risk isn’t due to some bigger factor, such as an outsized fear of introducing yet other new, unforeseen risks as a consequence of initiating a treatment switch.
- Similarly, an investor might show a preference for taking risks immediately after they’ve experienced a sizable loss, but that doesn’t mean the initial pit-in-the-stomach preference will carry over to a delayed decision once they’ve had the opportunity to process their feelings about the loss and restore themselves to a positive emotional state [13].
As these examples illustrate, it’s simply not enough to take a bias or heuristic we think we've identified and say that it's necessarily a key driver of current behavior or something that will work as a good target for creating or sustaining desired behavior change. On the contrary, efforts must be made to try to consider all the forces that might go into driving a behavior we’re interested in if we’re going to truly understand and take the right steps with it – and it’s only in that accounting that we can begin to see the likely significance of any heuristic or bias we think might be relevant to the behavior in question.
This brings us to another issue, which is that, even if we get past the hurdle we just described, the phenomena that exist on lists like the Codex don’t, themselves, behave in the universal, unconditional way that their definitions sometimes have a way of implying they do. Some biases might be the product of universal processes that play out the same way no matter what, but many others reflect quirks in cognition that serve motivational functions or arise from specific thinking styles, if-then mental operations, and ways of structuring knowledge that, in turn, are a function of people’s personalities, their cultural and learning histories, their cognitive abilities, and the features of context that are relevant to their concerns and what's "top of mind" for them. Thus, “optimism bias” can disappear once people are motivated to maintain a state of hypervigilance against losses or as they get psychologically closer to the “moment of truth” in their goal pursuits [14,15], while “correspondence bias”, or the tendency to attribute people’s behavior to stable internal dispositions, tends to be weaker in cultures where the role of context is emphasized in people’s actions [16-19]. What this means is that we can’t just turn to a bias and assume it’s going to be operative in the people whose behavior we’re wanting to influence in the moment in which we intend to influence it, absent further information about those people and the context in which the behavior of interest occurs. It also means that we need to be prepared for a bias failing to be applicable or leverageable as we move from one combination of context, group of people, or time within people to another – something for which the typical list of biases rarely ever prepares us.
Then there’s the practical problem of even knowing whether a particular behavioral phenomenon is present or likely to be so in a specific case to begin with. To give an example, suppose you do research with the caregivers of elderly COPD patients, and you discover that a subset of them has a preference to defer to pulmonologists’ treatment preferences as opposed to being an active participant in a shared decision-making process. You interpret this as a sign of “authority bias”, taking care not to claim that this is “the” driver of the caregiver behavior you’re interested in, or to extend the interpretation beyond the caregivers in which you think you’ve observed it or the context in which it seems to emerge. Sounds great – except how do you know that what you’ve observed is exactly “authority bias”?
This points to an issue concerning how biases are used in applied work as opposed to how they’re typically understood in basic research. Many biases are defined in the scientific literature in a very precise way and are then discernible only under the specific combinations of behaviors and stimuli that behavioral scientists themselves use to study them. The latter often follows for two reasons:
- The combinations specified in these operational definitions are inseparable from the biases’ conceptual definitions (e.g., the "recency effect" is the better recall of more recently encountered items on a list);
- Even if they aren’t, they create the necessary conditions for determining whether a given behavior represents a systematic deviation from what some normative model would say a person ought to do – key to claiming that what has been observed is, in fact, a “bias”.
Those combinations may be realizable in well-designed experiments or captured adequately in rigorous observational studies – but in qualitative self-reports? Hardly to be presumed – certainly not without qualification. And yet that may not stop a researcher from inferring and reporting out the bias in an unqualified way based strictly on an analogy or a widening of the bias’s definition beyond how it was ever understood within the scientific community itself.
So, far from fortifying front-end insight and solution ideation work, what the naïve application of biases and heuristics from a list like the Codex can end up buying us is a chase down a rabbit hole for phenomena that:
- Are not particularly significant to understanding current behavior or pointing to something that could be leveraged in actions to successfully change, support, or accommodate it;
- Do not have their potential adequately opened up to us because their eliciting conditions and qualifying factors have not been subject to proper accounting; or
- Are not even truly applicable to the thoughts, feelings, and behaviors that are going to be the target of our actions.
And that can not only cause us to pursue solutions that are “bad”, but can even have consequences that are far more negative than had we not bothered consulting a list of biases to begin with:
- It can cause us to curtail consideration of other better alternatives that we might have otherwise been open to, owing to an illusion of authoritative insight that forestalls deeper exploration;
- It can set the stage for a loss of empathy with the people we’re trying to understand and design for – particularly when it devolves into a reductive labeling exercise that tells us little about why they have the biases they have or do the things they do.
The Better Approach: Biases and Heuristics as Windows Onto Behavioral Processes
From the foregoing, then, we might think that the attempt to use items from a biases list for anything more than creative inspiration would be a hopeless endeavor – but that’s not necessarily so. While no amount of front-end hypothesizing about people can ever be anything more than what it is absent tests to show that the resulting hypotheses have some merit, that doesn’t mean that the hypothesizing can’t, itself, be rigorous and evidence-based. There are ways to put hypothesis generation on a more rigorous foundation than had we simply used everyday wisdom to guide us – and there is a role that notions about biases and heuristics can play in this endeavor. But to realize the potential, we need to fill a void that’s created when these phenomena are tossed onto a list with only one-or-two sentence definitions to make them consumable by a broader audience.
Explanations vs. Descriptions: Still Nothing More Useful Than a Good Theory
Earlier, we noted that the Codex has some peculiarities owing to its origins. One such peculiarity has to do with the way the Codex is organized: Rather than being based on a deep understanding of the biases and their causes, the items are grouped based on thematic similarities that might make sense from a surface-level perspective but wouldn’t really be natural when looked at from the point of view of what the science itself has to say about them. That becomes evident when you look more closely at the inner categories, where, even from a thematic perspective, the groupings tend to be a bit bizarre: Witness the placement of the “endowment effect” with “processing difficulty effect” and “generation effect” under the heading, “to get things done, we tend to complete things we’ve invested time and energy in”, which implies that every one of these biases is the product of a willed outcome – hardly true of the “endowment effect”, which, by definition, arises from the mere possession of an object irrespective of the amount of effort that’s required to attain it [20,21].
What makes this feature of the Codex interesting is that it points to a larger issue that can end up plaguing a list of heuristics and biases when it fails to capture existing knowledge about the conditions under which they occur or the mechanisms that are potentially responsible for them. In the quest to make these lists consumable, they can become collections of seemingly isolated phenomena with simple, self-enclosed definitions that rarely speak to why they exist beyond the usual uber-narratives around System-1 processes and the limits of human rationality. At the extreme, they can end up presenting the items as existing simply because they do, as if all that were necessary to make them useful were knowledge about the patterns of behavior in which they’re reflected and the general circumstances under which they are thought to occur – the latter, at times, with a considerable loss of fidelity, as we see with the “endowment effect” example above.
The irony here, of course, is that this approach almost never reflects the way these phenomena are treated by the very behavioral scientists who study them. If anything, biases and heuristics command scientists’ attention not because they are inherently interesting, but because they have implications for our theoretical understanding of the causal mechanisms, or processes, involved in domains such as perception, memory, reasoning, decision-making, or other key areas of human behavioral functioning. Sometimes, the phenomena are discovered serendipitously or due to a researcher dreaming up a “what if?” that then happens to pan out once it’s investigated in a formal study. In other cases, the discovery is due to a deliberate attempt to test a larger theory by deriving a hypothesis that predicts the existence of the phenomenon in question. What’s important, though, is that the discovery almost always leads to further attempts to understand the mechanisms behind what’s been discovered – and it’s this payoff that makes the phenomenon interesting enough to earn a fancy label and place in the publication record.
Why this matters to us is that it gives us the ammunition we need to use notions about biases and heuristics in a far more sophisticated way than is ever encouraged by an ostensibly practically-minded tool such as the Codex. Rather than being little more than endpoints in our thinking about what makes people tick (e.g., labeling some pattern of behavior a bias and then saying we’ve explained it because we’ve labeled it, or motivating a particular design decision by saying it leverages a presumably existing bias and leaving it at that), the research on biases and heuristics allows us to take what’s known about the drivers of a bias, or the mechanisms underlying a heuristic, and turn it, rather than the bias or heuristic itself, into fodder for making sense of people and thinking through what actions to take with them. Indeed, when the process explanations for these phenomena plug into a larger set of propositions about behavioral and mental processes, we can use these connections to bring attention to a broader web of subterranean dynamics that may be far more important for us to consider than if we stopped our thinking short with any one bias or heuristic alone. We can do this provided we take the time to dig through the science to understand what these phenomena are all about and where they sit with respect to the wider range of causal mechanisms that contribute to their existence. That’s not indulging academic curiosity; it’s actually the pathway to addressing many of the issues that were surfaced in the prior section:
- If you do the homework, you’ll come to discover what, if any, role a given bias or heuristic might play, or what its significance might be, in a behavior that’s of interest to you, looking at the phenomenon both in isolation and in the broader scheme of other potential drivers of the behavior in question;
- You’ll also be able to discover what’s known to date about the who-when-and-where of the biases and heuristics you might consider, often with reference to underlying processes which determine when they surface;
- Along the way, you’ll also learn something about the quality of the evidence for these phenomena, including what’s known vs. not about them and just how impressive they really are;
- And you’ll develop a better sense of the diagnostic criteria you need to meet to presume that a bias or heuristic is in play in a behavior you’re focused on.
And that, in turn, has two additional payoffs that are worthy of discussion:
Payoff #1: Use in Behavioral Driver Diagnosis
At the end of the day, behavioral insight is about developing a picture of behavioral drivers and barriers to support effective behavioral support, accommodation, or intervention – and that means building a conceptual model of the drivers of the behavior you care about to understand how they operate and where the actionable lever points might reside. To do that properly, one needs to respect the fact that people are not so much collections of interesting behavioral factoids as they are complex dynamic systems – messy ones for sure, but systems nonetheless, with processes and drivers that feed into, inhibit, and amplify one another, with consequences for what actions will work to impact behavior as intended. Attempts to reduce behavior to a disparate collection of biases and heuristics can’t, by their nature, act as a proper substitute for this work – but they can certainly contribute to it once we learn about the mechanisms behind them and how they might fit in a broader nexus of applicable behavioral processes.
The payoff starts with the use of these concepts for behavioral driver diagnosis:
- In the context of other clues, evidence for a given bias can lend credence to an emerging story about the dynamics that may be responsible for a particular target behavior and what might work to change it once process explanations for the bias are included in the analysis. To give an example, consider the case of chronic approach and avoidance orientations, which have been shown to drive a host of predilections ranging from inferential reasoning and problem-solving styles (e.g., trying out many different hypotheses and solutions vs. only a few) to differing degrees of ambiguity tolerance, omission bias, and preferences for stability over change [22-26]. These individual predilections may not be particularly diagnostic on their own, but they can start to hint at a specific tendency to be focused on approaching positives or avoiding negatives that may be relevant to understanding a key behavior, such as attending to an innovation’s downsides vs. upsides, or resisting belief contamination from perceived partisan influencers, once they coalesce into a pattern that's consistent with a particular orientation – an insight that's made possible by published research highlighting the function these predilections play in approach / avoidance self-regulation.
- The process of deduction and inference can, of course, also work the other way around: An observed behavioral pattern might be ambiguous with respect to whether it reflects a specific bias, but the interpretation might be sharpened if other clues point to broader dynamics that could give rise to the suspected bias. Thus, a desire to see real-world evidence before trying a new therapy could reflect reasonable concerns about the limitations of clinical trial data or a personal intolerance for ambiguity, but it can start to look more like the latter once it appears alongside markers of a chronic avoidance orientation, such as persistent omission and status-quo biases in a physician’s decision-making – the underlying orientation then providing the connective glue for the interpretation.
These applications have further benefits by sensitizing us to the types of information we want to have on people if we want to be in a position to properly diagnose biases or use them in a diagnostic way, which can then broaden our thinking about how we design future research to make sure this content is captured. And, on the back end, the conceptual model we build can integrate the biases or heuristics we think we’ve identified, or even anticipate ones that might emerge in future contexts, in such a way that we can more shrewdly understand the role they play as barriers or levers, how they might be accessed, and even whether they’re more like immovable objects for which we might need to consider end-runs.
Again, none of this is to say that what we get from this work is an iron-clad picture that we can take to the bank absent follow-up confirmatory testing – nor is it to say that the goal should be to force a narrative that’s going to be tidy and self-contained. Some behavioral quirks aren’t a magic portal onto anything, and the body of research on the mechanisms underlying biases can, at times, be as squirrelly as the research attesting to the existence of the biases themselves. But, by taking the approach that’s advocated here, we can at least use an informed understanding of biases and heuristics to help develop the rich, nuanced picture of behavioral drivers we desire, with the promise of being closer to where we need to be because of the care and ingredients we put into it. As a bonus: We’re now able to put these phenomena in their proper place, treating them as simply one of many means to grapple with the dynamics behind a behavior and the actions that might work to impact it – which is what we really need to care about.
Payoff #2: Use in a More Judicious Set of Principles for Solution Design
The foregoing might be helpful when we’re at the front-end behavioral insight stage, but, once we move into solutions design, we might simply want some rules-of-thumb about people that we can leverage in ideation and decision-making – particularly when the problem we’re working seems like it ought to be amenable to it. This brings us to the other payoff we can achieve by doing the proper homework on biases and heuristics – namely, the production of a smart list, likely more truncated and qualified than a list like the Codex, that at least has the proper curation to deliver on the promise of putting design ideas on an incrementally more solid footing.
To illustrate, consider our earlier reference to something we called “right visual field preference”, in which people show a tendency under certain circumstances to place their attention more on the right visual field than on the left [1,2]. At first blush, this seems like the type of bias that could potentially be straightforward and, thus, ripe for inclusion on the kind of list we have in mind. That would be tempting to expect were it not for entire bodies of research, falling under the headings of “pseudoneglect”, “left-side bias”, and “leftward bias”, which highlight a tendency for people to favor the left visual field as opposed to the right [27-31]! The literature on these biases is not at all easy to disentangle: Just from the papers I’ve referenced here, one can begin to see how dependent each bias is on the types of stimuli and tasks that have been shown to elicit it, the different mental processes that appear to be implicated in the biases, how those processes unfold over time, and even where the current body of evidence points to contradictory conclusions about which bias is likely to emerge under a given set of circumstances.
Yet, as we dig into these papers, we do start to see the contours of a few learnings that look like they could have staying power, however tentative we might need to consider them – for instance:
- That the preferences for the left visual field are likely to emerge during scene viewing and human face processing, whereas those for the right are more likely to emerge when visually seeking out text or tools with which one might interact;
- That these preferences aren’t easily explained by motor habits owing to one’s primary written language and whether it requires reading left-to-right versus right-to-left;
- That they are better explained by the types of functions performed by the two brain hemispheres (the left hemisphere, which processes the right visual field, being important to language, tool use, and other forms of symbolic representation, and the right hemisphere being important to the decoding of visual and social stimuli).
Further literature reviewing would allow us to see just how well these learnings hold up, and to refine our understanding of the “when” and “why” of each bias, including how much credence to give each possible qualification and explanation we uncover. That, in turn, would allow us to say whether either bias would be trustworthy enough to put on a list, and, if it were, to define it with respect to a “when” that reflects what the latest science has to say about its eliciting conditions – key for determining the bias’s applicability to a specific behavioral design problem (e.g., choice option placement on a menu where the visual targets are words, and the word to look for is self-generated by the user as they pursue a specific goal). Again, none of this would be ironclad, such that proper testing of whatever is designed when using these concepts would still be the order of the day. Yet, wash, rinse, and repeat, and the result could still at least be a smart bias list that “makes the grade” far better than a typical list that lacks both discretion and the evidence-based guidance needed for trustworthy application.
Summary: Key Takeaways
I stated at the top of this post that, if you unpacked some of the issues that exist with the bias and heuristics lists that have become popular over the past decade, you could start to understand when their applications make sense and when they don’t. My own point of view on the matter is probably clear from what I’ve described, though, as we saw earlier, there are instances where one can see why a Codex-like approach to biases might be attractive, and might even make sense to lean on, depending on the use case and the circumstances surrounding the application.
That said, I do think that, once we look at the state of the science behind the phenomena that appear on these lists, we arrive at a set of “do’s” and “don’ts” that we really need to observe if we’re going to use these items in a valuable and honest way. To wit:
- If you’re going to use notions about biases and heuristics strictly for narrative and creative inspiration purposes with no opportunity or intent to test the end product, then you should do so only if you’re transparent about what you’re doing and the consequences of being “wrong” aren’t going to matter much.
- But – if the stakes do matter, and particularly if there’s no good way to hedge against the risks that might arise from shoot-from-the-hip applications, then you should probably forego the Wikipedia-like approach that’s typical of the average bias list. Instead, do the digging to find out what’s known versus not about these phenomena. Use what you learn to understand when a bias or heuristic might lurk in a behavior, what might qualify it, and what its likely significance is, so that you can put these concepts in their proper perspective, separate the wheat from the chaff, and turn what remains into proper building blocks for behavioral model construction and smart rules-of-thumb for strategy and solutions design. Obviously, continue to be transparent and be sure to communicate the proper hedges as you’re using these concepts in your work. It’s not easy, and I highly recommend having a behavioral scientist assist you in the effort, but the payoff is worth the investment.
- And, if what you’re really after is anchoring what you’re doing in full-blown behavioral science, then, to the extent possible, be sure to rigorously test whatever hypotheses or solutions come from your application of these concepts – something that a behavioral scientist can help you with, too.
References (Where We Got Some of This)
- Garcea, F.E., Almeida, J., & Mahon, B.J. (2012). A right visual field advantage for visual processing of manipulable objects. Cognitive, Affective, & Behavioral Neuroscience, 12, 813-825. https://doi.org/10.3758/s13415-012-0106-x.
- Van der Cruyssen, I., Gerrits, R., & Vingerhoets, G. (2020). The right visual field advantage for word processing is stronger in older adults. Brain and Language, 205, 104786. https://doi.org/10.1016/j.bandl.2020.104786.
- Gonzalez-Blanch, C., & Carral-Fernandez, L. (2017). Cage up dodo, please! The tale of all psychotherapies being equally effective. Psychologist Papers, 38, 94-106. https://doi.org/10.23923/pap.psicol2017.2828.
- Dunkel, C.S., Nedelec, J., & van der Linden, D. (2023). Reevaluating the Dunning-Kruger effect: A response to and replication of Gignac and Zajenkowski (2020). Intelligence, 96, 101717. https://doi.org/10.1016/j.intell.2022.101717.
- Gignac, G.E. (2024). Rethinking the Dunning-Kruger effect: Negligible influence on a limited segment of the population. Intelligence, 104, 101830. https://doi.org/10.1016/j.intell.2024.101830.
- Gignac, G.E., & Zajenkowski, M. (2020). The Dunning-Kruger effect is (mostly) a statistical artefact: Valid approaches to testing the hypothesis with individual difference data. Intelligence, 80, 101449. https://doi.org/10.1016/j.intell.2020.101449.
- Lebuda, I., Hofer, G., Rominger, C., & Benedek, M. (2024). No strong support for a Dunning-Kruger effect in creativity: Analyses of self-assessment in absolute and relative terms. Scientific Reports, 14, 11883. https://doi.org/10.1038/s41598-024-61042-1.
- Magnus, J.R., & Peresetsky, A.A. (2022). A statistical explanation for the Dunning-Kruger effect. Frontiers in Psychology, 13, 840180. https://doi.org/10.3389/fpsyg.2022.840180.
- Harris, C.R., Coburn, N., Rohrer, D., & Pashler, H. (2013). Two failures to replicate high-performance-goal priming effects. PLoS ONE, 8, e72467. https://doi.org/10.1371/journal.pone.0072467.
- McCarthy, R.J., Skowronski, J.J., et al. (2018). Registered replication report on Srull and Wyer (1979). Advances in Methods and Practices in Psychological Science, 1, 321-336. https://doi.org/10.1177/2515245918777487.
- Pashler, H., Coburn, N., & Harris, C.R. (2012). Priming of social distance? Failure to replicate effects on social and food judgments. PLoS One, 7, e42510. https://doi.org/10.1371/journal.pone.0042510.
- Shanks, D.R., Newell, B.R., Lee, E.H., Balakrishnan, D., Ekelund, L., Cenac, Z., Kavvadia, F., & Moore, C. (2013). Priming intelligent behavior: An elusive phenomenon. PLoS One, 8, e56515. https://doi.org/10.1371/journal.pone.0056515.
- Seo, M.G., Goldfarb, B., & Barrett, L.F. (2010). Affect and the framing effect within individuals over time: Risk taking in a dynamic investment simulation. Academy of Management Journal, 53, 411-431. https://doi.org/10.5465/AMJ.2010.49389383.
- Gilovich, T., Kerr, M., & Medvec, V.H. (1993). Effect of temporal perspective on subjective confidence. Journal of Personality and Social Psychology, 64, 552-560. https://doi.org/10.1037/0022-3514.64.4.552.
- Hazlett, A., Molden, D.C., & Sackett, A.M. (2011). Hoping for the best or preparing for the worst? Regulatory focus and preferences for optimism and pessimism in predicting personal outcomes. Social Cognition, 29, 74-96. https://doi.org/10.1521/soco.2011.29.1.74.
- Blanchard-Fields, F., Chen, Y., Horhota, M., & Wang, M. (2007). Cultural differences in the relationship between aging and the correspondence bias. Journal of Gerontology: Psychological Sciences, 62B, P362-P365. https://doi.org/10.1093/geronb/62.6.p362.
- Choi, I., & Nisbett, R.E. (1998). Situational salience and cultural differences in the correspondence bias and actor-observer bias. Personality and Social Psychology Bulletin, 24, 949-960. https://doi.org/10.1177/0146167298249003.
- Miller, J.G. (1984). Culture and the development of everyday social explanation. Journal of Personality and Social Psychology, 46, 961-978. https://doi.org/10.1037/0022-3514.46.5.961.
- Miyamoto, Y., & Kitayama, S. (2002). Cultural variation in correspondence bias: The critical role of attitude diagnosticity of socially constrained behavior. Journal of Personality and Social Psychology, 83, 1239-1248. https://doi.org/10.1037/0022-3514.83.5.1239.
- Johnson, E.J., Haubl, G., & Keinan, A. (2007). Aspects of endowment: A query theory of value construction. Journal of Experimental Psychology: Learning, Memory, and Cognition, 33, 461-474. https://doi.org/10.1037/0278-7393.33.3.461.
- Kahneman, D., Knetsch, J.L., & Thaler, R.H. (1990). Experimental tests of the endowment effect and the Coase Theorem. Journal of Political Economy, 98, 1325-1348. https://doi.org/10.1086/261737.
- Chernev, A. (2004). Goal orientation and consumer preference for the status quo. Journal of Consumer Research, 31, 557-565. https://doi.org/10.1086/425090.
- Crowe, E., & Higgins, E.T. (1997). Regulatory focus and strategic inclinations: Promotion and prevention in decision-making. Organizational Behavior and Human Decision Processes, 69, 117-132. https://doi.org/10.1006/obhd.1996.2675.
- Itzkin, A., Van Dijk, D., & Azar, O.H. (2016). At least I tried: The relationship between regulatory focus and regret following actions vs. inaction. Frontiers in Psychology, 7, 1684. https://doi.org/10.3389/fpsyg.2016.01684.
- Liberman, N., Molden, D.C., Idson, L.C., & Higgins, E.T. (2001). Promotion and prevention focus on alternative hypotheses: Implications for attributional functions. Journal of Personality and Social Psychology, 80, 5-18. https://doi.org/10.1037/0022-3514.80.1.5.
- Liu, H. (2011). Impact of regulatory focus on ambiguity aversion. Journal of Behavioral Decision Making, 24, 412-430. https://doi.org/10.1002/bdm.702.
- Foulsham, T., Gray, A., Nasiopoulos, E., & Kingstone, A. (2013). Leftward biases in picture scanning and attention: A gaze-contingent window study. Vision Research, 78, 14-25. http://dx.doi.org/10.1016/j.visres.2012.12.001.
- Gray, O.J., McFarquhar, M., & Montaldi, D. (2021). A reassessment of the pseudoneglect effect: Attention allocation systems are selectively engaged by semantic and spatial processing. Journal of Experimental Psychology: Human Perception and Performance, 47, 223-237. https://doi.org/10.1037/xhp0000882.
- Li, C., Li, Q., Wang, J., & Cao, X. (2018). Left-side bias is observed in sequential matching paradigm for face processing. Frontiers in Psychology, 9, 2005. https://doi.org/10.3389/fpsyg.2018.02005.
- Meyyappan, S., Rajan, A., Mangun, G.R., & Ding, M. (2023). Top-down control of the left visual field bias in cued visual spatial attention. Cerebral Cortex, 33, 5097-5107. https://doi.org/10.1093/cercor/bhac402.
- Spotorno, S., & Tatler, B.W. (2025). What’s left of the leftward bias in scene viewing? Lateral asymmetries in information processing during early search guidance. Cognition, 254, 106009. https://doi.org/10.1016/j.cognition.2024.106009.
© 2025 Grey Matter Behavioral Sciences, LLC. All rights reserved.