The paradox of prediction in the age of artificial intelligence is that the very tools designed to help us navigate uncertainty may be deepening our dependence on them. AI is transforming a fundamental cognitive function into a continuous behavioral loop, in which the future is never awaited but constantly pursued, one prediction at a time.
Prediction and uncertainty
Prediction is a central function of the human brain, shaped over millions of years as a survival mechanism in an uncertain world. The ability to anticipate what might happen next often determines whether one remains safe and maintains a sense of control.
The brain evolved not to observe reality passively, but to actively construct it, continuously generating expectations about the future based on fragments of sensory input and on inherited biological coding.
Remarkably, even infants demonstrate capabilities suggesting that predictive priors are encoded long before formal learning begins.
A newborn, for example, instinctively tracks faces, responds to human gaze, and expects objects to behave in continuous and coherent ways, reacting with surprise when those expectations are violated. This suggests that the brain starts not from a blank slate but from a highly structured baseline, encoded through generations of adaptation.
These early predictive frameworks allow humans to navigate complexity with remarkable efficiency, enabling rapid inference in situations where data is sparse or incomplete, something that remains difficult for intelligent machines. Much of human prediction, however, is not simply a rational process but an emotional one, shaped by memory, bias, and our deep intolerance for uncertainty.
Faced with ambiguity, the mind does not remain neutral. Instead, it fills the void by projecting narratives, rehearsing possible futures, and revisiting past experiences to construct coherence where none fully exists.
In this way, prediction becomes less about accurately forecasting outcomes and more about managing the discomfort of not knowing, as the brain seeks to transform uncertainty into something that feels stable, even if that stability is illusory.
Prediction paradox
It is here that a paradox emerges, one that the rise of AI has dramatically amplified. While humans evolved to predict to minimize uncertainty, machines have been designed to predict by quantifying it, translating the unknown into mathematical probabilities derived from vast amounts of structured data.
In artificial systems, prediction emerges from optimization: model weights are adjusted through repeated exposure to data so as to minimize the error between expected and observed outcomes. The result is often models that do not seek certainty but operate comfortably within distributions of likelihood. The machine does not need to resolve ambiguity; it simply refines it.
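A minimal sketch of that error-minimization loop, using a one-weight toy model and plain gradient descent (an illustration of the general principle, not any particular production system):

```python
import numpy as np

# Toy data: inputs x and noisy outcomes y the model must learn to predict.
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=100)
y = 3.0 * x + rng.normal(0.0, 0.5, size=100)   # true slope is 3, plus noise

w = 0.0    # a single weight, initially uninformed
lr = 0.1   # learning rate: how strongly each error adjusts the weight

for step in range(200):
    y_hat = w * x                     # the model's current expectation
    error = y_hat - y                 # gap between expected and observed
    grad = 2.0 * np.mean(error * x)   # gradient of the mean squared error
    w -= lr * grad                    # nudge the weight to shrink the error

print(f"learned weight: {w:.2f}")     # converges near 3; the noise never vanishes
```

The loop never reaches zero error; it settles into the best fit the noise allows, operating within a distribution of likelihood rather than certainty.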
Yet as these systems become more sophisticated, they do not remain isolated from human cognition but begin to interact with it in ways that fundamentally reshape how we experience prediction itself.
The more we engage with AI-driven platforms, the more these systems learn about our preferences, behaviors, and emotional triggers, constructing increasingly personalized models that anticipate not only what we might want to see but also what will keep us engaged.
In doing so, they do not eliminate uncertainty but curate it, presenting information in a way that sustains anticipation while withholding full resolution, thereby amplifying the very cognitive processes that evolved to help us cope with the unknown.
When an Instagram user scrolls through Reels, the algorithm learns how long they stay, what they replay, and when they pause. Each swipe feeds the next prediction. Occasionally the algorithm delivers a highly engaging clip that provides a burst of satisfaction, but just as often it inserts less compelling content that reintroduces uncertainty and keeps the user searching.
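A toy feed loop makes the pattern concrete. Everything here is hypothetical, including the function names and the 50/50 mixing rule; it is a sketch of intermittent reward, not Instagram's actual system:

```python
import random

random.seed(42)

def predicted_engagement(history):
    """Hypothetical stand-in for a learned model of watch time, replays, pauses."""
    return min(0.9, 0.4 + 0.05 * len(history))

def pick_next_clip(history):
    """Mix likely hits with uncertain clips: an intermittent reward schedule."""
    score = predicted_engagement(history)
    if random.random() < 0.5:
        return "high_confidence_clip", score                # a likely burst of satisfaction
    return "exploratory_clip", random.uniform(0.0, score)   # reintroduces uncertainty

history = []
for swipe in range(1, 6):
    clip, expected = pick_next_clip(history)
    history.append((clip, expected))   # each swipe feeds the next prediction
    print(f"swipe {swipe}: {clip} (expected engagement {expected:.2f})")
```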
As individuals are exposed to continuously updated forecasts, personalized recommendations, and dynamic probability streams, they are drawn into a cycle of comparison, anticipation, anxiety, and reassessment that deepens their reliance on external signals to interpret the future.
Rather than resolving uncertainty, this constant interaction can heighten insecurity, as each new data point introduces the possibility that prior assumptions were incomplete or incorrect, prompting further engagement to regain certainty.
The result is that uncertainty is not resolved but compounded, as each new input subtly undermines the last, driving the user to keep scrolling in search of consistency and certainty that the system, by design, never fully provides.
This dynamic also occurs when prediction is made explicit, as in sports forecasting or financial markets, where users are not only observing probabilities but actively participating in them. The more frequently these probabilities update, the more they create a rhythm of partial resolution, in which certainty is never fully achieved but is continually approached and deferred.
The brain, attuned to this pattern, begins to seek out these moments of near-resolution, reinforcing the behavior through cycles of anticipation and reward that mirror the mechanisms of addiction.
From watching sports to gambling on outcomes
This dynamic becomes clear in a live sports betting scenario on platforms like Polymarket or Kalshi during a close NFL game, where the win probability for a team might shift from 48% to 61% after a touchdown, only to fall back to 52% after a turnover on the next drive. Each update gives the user a brief sense that the outcome is becoming clearer, as if certainty is just within reach, but that feeling is quickly disrupted by the next play, which reintroduces doubt and forces a reassessment.
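One way to see why such feeds never settle is to simulate the probability stream as a random walk on the log-odds scale, a deliberately crude stand-in for how real markets like these price outcomes:

```python
import math
import random

random.seed(7)

# A win-probability stream as a random walk on the log-odds scale: each play
# nudges the estimate, so certainty is approached and deferred, never granted
# until the game actually ends. Purely illustrative, not real market pricing.
logit = 0.0    # start at 50-50
for play in range(1, 11):
    logit += random.gauss(0.0, 0.4)    # each play shifts the evidence a little
    prob = 1.0 / (1.0 + math.exp(-logit))
    print(f"play {play:2d}: win probability {prob:.0%}")
```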
Dopamine and digital addiction
This convergence between human and machine prediction creates a profound shift. The machine becomes better calibrated to objective outcomes through continuous error correction, while the human becomes more dependent on the machine to manage the emotional experience of uncertainty.
Each improvement in algorithmic accuracy does not reduce engagement but increases it, because more precise predictions lead to more frequent updates, and more updates create more opportunities for the brain to seek resolution, modulated by dopamine.
Dopamine is released not when the outcome is known, but in the moments of anticipation, when the brain detects the possibility of knowing. Dopamine surges signal value: the subjective worth, desirability, or usefulness of an object, action, or outcome.
Over time, with repeated exposure, the brain’s dopamine receptors adapt by becoming less sensitive, so it takes more frequent updates or stronger signals to produce the same level of satisfaction.
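This anticipatory signaling is commonly formalized as a reward-prediction error. A minimal Rescorla-Wagner-style update (a textbook model of phasic dopamine, not the authors' own) shows how the surprise produced by the same reward shrinks as it becomes predictable:

```python
# Minimal reward-prediction-error model: as a cue comes to predict a reward
# reliably, the surprise signal at the reward itself shrinks toward zero.
alpha = 0.2     # learning rate
value = 0.0     # learned expectation of reward following the cue
reward = 1.0    # the reward actually delivered

for trial in range(1, 11):
    rpe = reward - value     # dopamine-like signal: observed minus expected
    value += alpha * rpe     # the expectation catches up with the outcome
    print(f"trial {trial:2d}: prediction error {rpe:.2f}")
# The same reward produces a smaller and smaller signal, so stronger or more
# frequent updates are needed to generate the same response.
```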
Finding the best of biology and math
When we turn to prediction engines designed to reduce error in order to feel safe, we outsource not just our forecasts but our tolerance for not knowing. The more we check, the less we may trust ourselves, and the more dependent we become on external signals.
Searching for certainty where none is available is quixotic and exhausting.
The implication is not to reject these tools but to use AI in decision-making more consciously: to recognize when a prediction informs a decision and when it simply feeds a loop or fills a void.
Small acts of friction (turning off notifications, resisting the urge to refresh, and setting limits on scrolling time) can interrupt the rhythm of anticipation and partial reward, allowing the brain to step out of the loop rather than being continuously pulled back into it.