Heuristics are the cognitive shortcuts through which humans make sense of a complex and uncertain world, distilling experience into patterns that allow for rapid judgment without exhaustive analysis. They influence nearly every aspect of daily decision-making, from clinical reasoning and financial choices to interpersonal judgments, and operate as an invisible framework that shapes how information is interpreted and acted upon.
What we often describe as a “gut instinct” or a “hunch” is simply heuristics operating at speed, where experience is compressed into an immediate judgment that feels intuitive rather than analytical.
While these shortcuts can prove efficient and often remarkably accurate, they also constrain thinking: depending on the cognitive distortion at play, we may filter for overly negative or overly positive outcomes. What is gained in speed is frequently lost in depth, particularly when new situations arise.
AI began by imitating human heuristics, abandoned them for statistics, and is now reinventing them in ways that are less visible but increasingly powerful. Humans, by contrast, have remained anchored in heuristic thinking throughout.
Despite this apparent convergence, a critical gap persists: neither humans nor machines are well-equipped to recognize when the patterns guiding their decisions no longer apply. What lies beyond those confined patterns, whether the decision is clinical or managerial, is often what matters most. Judgment becomes most difficult and most important when our familiar signals and go-to patterns no longer map onto reality.
READ: Sreedhar Potarazu and Carin Isabel Knoop | Opening up the AI peephole: Toward not misunderstanding each other (April 8, 2026)
When data meets experience
The evolution of glaucoma management offers a clear illustration of this dynamic unfolding in medical practice. Historically, the diagnosis and treatment of glaucoma were grounded almost entirely in clinical heuristics: elevated intraocular pressure, optic nerve appearance, and visual field loss were interpreted through experience-driven pattern recognition honed over decades, allowing physicians to act decisively to prevent vision loss even in the absence of precise quantification.
As imaging technologies such as Optical Coherence Tomography (OCT) and automated perimetry became more sophisticated, the field shifted toward a more statistical framework, incorporating metrics such as retinal nerve fiber layer thickness, probability deviation plots, and progression analyses that quantify change over time with greater precision.
At first glance, this transition appears to represent a move away from heuristics toward objectivity, yet in practice, decision-making has not become purely statistical, but rather layered. The clinician does not simply follow the numbers; instead, those numbers are interpreted through an existing framework of experience, expectation, and contextual judgment. A borderline thinning on OCT may be dismissed in one patient and treated aggressively in another, not because the data differ, but because the clinician’s internal judgment of risk, adherence, age, and disease trajectory shapes how those data are understood.
What emerges is not a replacement of heuristics by statistics, but a layer cake in which statistical outputs sit atop a foundation of experiential shortcuts, each influencing the other in ways that are often implicit rather than explicit.
This layering is now extended further by tools such as ChatGPT, which patients increasingly use to self-diagnose. The model offers a kind of second opinion, listing every plausible scenario, grounded in statistics but detached from context. It introduces externally generated patterns into the decision process even before the clinical encounter, adding another interpretive layer rather than resolving the tension between data and judgment.
Such a layered approach highlights both the strength and limitations of human reasoning. The ability to integrate data with context allows for flexible and adaptive decision-making, particularly where data are ambiguous or incomplete. However, this ability also introduces variability, bias, and inconsistency, because the underlying heuristics are not always visible or systematically updated.
READ: Sreedhar Potarazu and Carin Isabel Knoop | Our fatal attraction to AI: Basic (or Python) instinct (February 25, 2026)
Machines approach the same problem from the opposite direction, beginning with statistical aggregation and pattern recognition across large datasets, identifying correlations and trends that may escape human perception, particularly in imaging and longitudinal analysis. In doing so, they often miss the same complexities that clinicians grapple with: the data themselves may be insufficient, misleading, or in conflict with the broader clinical picture. A model may detect progression based on numerical thresholds, yet fail to account for factors such as measurement variability, patient adherence, or atypical presentations that do not conform to prior patterns.
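To make the point concrete, here is a minimal sketch of the kind of threshold rule such a model might apply. The thickness values, thresholds, and noise estimate are invented for illustration, not drawn from any particular OCT platform or clinical guideline; the contrast is simply between a rule that reacts to a single number crossing a line and one that asks whether the change exceeds expected measurement noise and persists over time.

```python
# Illustrative sketch only: thickness values, thresholds, and the noise estimate are
# invented; they are not drawn from any specific OCT platform or clinical guideline.

def flags_progression(rnfl_history_um, loss_threshold_um=5.0):
    """Naive rule: flag progression if any follow-up scan shows more than a fixed
    loss of retinal nerve fiber layer thickness relative to baseline."""
    baseline = rnfl_history_um[0]
    return any(baseline - t > loss_threshold_um for t in rnfl_history_um[1:])

def flags_progression_with_variability(rnfl_history_um, loss_threshold_um=5.0,
                                        test_retest_sd_um=2.5, min_confirmations=2):
    """Stricter rule: only count a visit if the apparent loss exceeds the threshold
    plus expected test-retest noise, and require it to recur across visits."""
    baseline = rnfl_history_um[0]
    confirmed = sum(
        1 for t in rnfl_history_um[1:]
        if baseline - t > loss_threshold_um + 2 * test_retest_sd_um
    )
    return confirmed >= min_confirmations

scans_um = [92.0, 90.5, 86.0, 89.5]                  # one noisy dip among stable scans
print(flags_progression(scans_um))                   # True: a single dip crosses the line
print(flags_progression_with_variability(scans_um))  # False: the dip is not confirmed beyond noise
```

Both rules behave sensibly while the assumptions behind them hold, and neither can notice when those assumptions stop holding.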
In this sense, AI does not eliminate this gap; it encounters it in a different form, because while it excels at refining what is known, it struggles to recognize when the known is no longer enough.
When data is not enough
These tensions are salient in many workplaces at the moment.
A manager considering a new project may see a financial analysis that looks strong, showing healthy returns based on costs, market size, and the expected benefits of new supply chain software. The AI-generated analysis suggests it is a good investment. But someone with experience inside the company may know that some risks do not show up in the numbers, such as delays between teams, unreliable vendors, or resistance to change that can slow down execution. These issues, often learned from experience rather than captured in data, can strongly affect whether the project succeeds. In this way, data shows what should happen, while experience helps explain what actually happens.
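A rough back-of-the-envelope sketch makes that gap visible. Every figure below is invented for illustration; the point is only that a payback calculation built from the model’s own numbers can look very different once execution frictions that live in people’s heads, not in the spreadsheet, are priced in.

```python
# Hypothetical figures only; none of these numbers come from a real project.
projected_annual_benefit = 1_200_000     # what the financial model promises per year
implementation_cost = 2_500_000

# Frictions rarely captured in the spreadsheet, estimated from experience:
expected_delay_months = 6                # cross-team handoffs slip the go-live date
vendor_reliability = 0.85                # share of promised vendor capacity actually delivered
adoption_rate_year_one = 0.60            # staff resistance slows uptake in year one

# Payback period straight from the model's numbers
naive_payback_years = implementation_cost / projected_annual_benefit

# Experience-adjusted benefit in the first year: late start, unreliable vendor, partial adoption
adjusted_first_year_benefit = (projected_annual_benefit
                               * ((12 - expected_delay_months) / 12)
                               * vendor_reliability
                               * adoption_rate_year_one)

print(f"Naive payback: {naive_payback_years:.1f} years")                   # ~2.1 years
print(f"Adjusted year-one benefit: ${adjusted_first_year_benefit:,.0f}")   # ~$306,000
```

None of those adjustment factors are knowable from the data the model was given; they are heuristics, carried by people who have watched similar projects stall before.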
The more relevant question, therefore, is not whether heuristics are necessary. They are unavoidable. Between humans and machines, however, which is more capable of addressing what heuristics leave unresolved, particularly in moments when established patterns no longer provide reliable guidance?
Humans bring a form of adaptability that allows for questioning assumptions, reinterpreting the context, and generating new hypotheses even when evidence is incomplete. This enables the recognition that a familiar pattern may no longer apply despite superficial similarities. Machines, by contrast, bring the capacity to incorporate new information at scale, recalibrating probabilities as additional data becomes available, enabling continuous refinement in environments where patterns remain relatively stable over time.
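That machine-side recalibration can be pictured as ordinary probability updating. The sketch below is a textbook Bayesian update with invented numbers, not any particular system’s method; it refines a belief smoothly as results arrive, but nothing in it can signal when the assumed test characteristics no longer describe the patient in front of you.

```python
# Invented prior and test characteristics, for illustration only.
prior_progression = 0.20               # initial belief that disease is progressing

p_abnormal_if_progressing = 0.80       # assumed sensitivity of a follow-up test
p_abnormal_if_stable = 0.10            # assumed false-positive rate

def update(prior, sens, fpr, abnormal_result):
    """One Bayesian recalibration step after a single test result."""
    if abnormal_result:
        num, other = sens * prior, fpr * (1 - prior)
    else:
        num, other = (1 - sens) * prior, (1 - fpr) * (1 - prior)
    return num / (num + other)

belief = prior_progression
for result in [True, True, False]:     # a sequence of follow-up results
    belief = update(belief, p_abnormal_if_progressing, p_abnormal_if_stable, result)

print(f"Recalibrated probability of progression: {belief:.2f}")   # ~0.78
```

The refinement works exactly as intended so long as the underlying patterns stay put; it has no way of asking whether they have.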
Yet the environments in which the most consequential decisions are made rarely offer that stability. It is within these shifting conditions that the limitations of both approaches become most apparent, because neither intuition nor statistical inference alone can fully account for novelty, ambiguity, and change occurring simultaneously.
What ultimately distinguishes the two is not the presence of heuristics, but the ability to recognize when those heuristics have failed and to adapt accordingly, which remains, at least for now, an area in which human cognition retains an advantage.
The horizon of intuition
The intuitive layer of thinking plays a central role in how humans navigate uncertainty, allowing decisions to be made quickly even when information is incomplete or ambiguous, yet it also introduces a degree of opacity because the underlying reasoning is rarely accessible or easily examined. While AI can produce outputs that resemble this kind of rapid judgment by drawing on patterns learned from vast datasets, it does so without the contextual grounding or lived experience that shape human intuition, resulting in a form of approximation that can be highly effective when patterns are stable but less reliable when situations deviate from prior examples. In both cases, the appearance of instinct masks a deeper reliance on compressed knowledge, raising the question not of whether intuition exists, but of how well it adapts when the patterns it depends on begin to break down.
That advantage, however, is not guaranteed to persist, as advances in AI increasingly incorporate adaptive learning, producing systems that adjust their behavior based on new data, feedback, or interaction over time rather than remaining fixed after training. This narrowing gap raises the possibility that the distinction between human and machine reasoning will become progressively less defined.
In that future, the central issue will no longer be whether heuristics or statistics will guide decisions, but how effectively each can learn in the spaces where both fall short. The future of decision-making will depend on how well we navigate moments when patterns fail. As machines grow more capable of pattern recognition, the human role may shift from deciding within patterns to recognizing when they no longer hold. The question for us, then, is not whether we will use these tools but whether we will do so wisely and continue to think beyond them.

