Why our predictive minds misread the world—and how to slow down for better judgment in the age of AI.
By Sreedhar Potarazu and Carin Isabel Knoop
Could complaining about the shortcomings and risks of ChatGPT help us become better predictive machines ourselves? After all, we are, in a sense, the original or “OG” ChatGPT.
Even before you finish reading this piece, you will have an opinion about it. You might like it, dismiss it, or scroll past it, emotionally or algorithmically.
This is not a personal failing; it is how evolution shaped us and technology trained us.
Like large language models, our brains do not wait for complete data; they predict the next word, the next action, the next feeling based on past patterns and context. The better the training data, we hope, the more solid and sometimes brilliant the predictions. But when the data are sparse, biased, or out of context, both humans and machines “hallucinate.” We fill gaps with stories that feel true but are not, mistaking speed for accuracy. Social media makes this possible, and in fact demands it: we are pushed to constantly respond and opine, a form of filling in the blanks, whether about geopolitics or a Love Island-themed party.
Platforms reward speed over depth, conditioning us to judge instantly, react without context, and believe we “know” when we have only glanced. Closing gaps this quickly gives us no practice at improving accuracy. The result is speed without depth.
We judge a book by its cover because that is how the brain works. It constantly responds to visual cues, intentionally or instinctively, taking fragments of information and completing the story before the facts are all in. That ability keeps us moving in a complicated world, but it also leads us to conclusions that may not be true. How we fill in the blanks is a matter of prediction: sometimes learned from the past, sometimes driven by instinct, and sometimes clouded by bias.
In this article, we draw on our collective experience of science, sports, business, and life to help us become a bit more conscious of, and hopefully a bit better at, understanding how we fill in the blanks.
How the brain sees what isn’t there (yet)
Filling in the blanks, visualizing what we do not yet actually see, depends on two visual processing systems operating at different speeds. The first, called the feedforward sweep, processes what we see in about 120 milliseconds, faster than a blink. It detects patterns and opportunities before we are even aware of them. The second, known as recurrent (or reentrant) processing, brings the information into conscious awareness at around 160 milliseconds, where deliberate decision-making takes place; research suggests it is only this recurrent activity that correlates with actual conscious visual perception.
This split explains why athletes can perform feats that look impossible. Carlos Alcaraz cannot wait to consciously analyze Sinner’s serve; his fast system reads the trajectory before the ball crosses the net, and his conscious system times the swing.
A baseball outfielder does the same when tracking a fly ball, guided by predictive models the brain has built over thousands of catches.
Golf demands a different balance. Scottie Scheffler studies the wind, slope and lie of the course. The feedforward system cannot solve that problem alone, so he relies on slower and conscious processing to calculate a putt or line up a drive.
The same dual systems operate in medicine, but what athletes train into instinct, doctors must sometimes resist. In an emergency code, seconds matter and instinct prevails. In a planned surgery, as in golf, calculation is essential. A surgeon in the middle of a procedure often relies on unconscious pattern recognition, the hands moving before conscious thought. But in a complex case, or in deciding whether to proceed at all, the slower recurrent system must take over. Medicine thus adds another dimension: the need to resist reflex and deliberately slow down. As Amy Herman explains in Visual Intelligence, medical students can sharpen their diagnostic skills not by looking at charts but by observing art. In museums, they are asked to describe paintings in detail, noticing brushstrokes, shadows, and subtle patterns before leaping to conclusions about what they see.
The tabloid in your head: Hallucination
The dual system also explains why we sometimes misfire. The fast feedforward sweep, built to detect threats, is tied closely to the amygdala. It reacts instantly to a frown or a silence, convincing us we are being judged before reason catches up. Unchecked, this process keeps us writing a “tabloid in our heads,” inventing headlines without context. A colleague’s pause becomes anger; a short message becomes an insult. The slower recurrent system eventually corrects us, but not always in time. That is why an emoji or meme can instantly shift the tone of a conversation: our minds leap from symbol to meaning without hesitation.
What helps a batter anticipate a fastball or a surgeon act in a code can, in daily life, turn into the very thinking errors that distort how we judge others. The mind is quick to close gaps, but often it does so with distortion.
And sometimes those distortions do not just change how we (mis)read others; they hijack how we see ourselves. Surgeon James Naples, in a Washington Post op-ed, described developing the “surgical yips,” the same phenomenon athletes and gymnasts call the yips or the twisties. His hands, trained by years of practice, suddenly froze. To supervisors, it looked like incompetence. In truth, they were watching the body act out the false headline the brain had already written: “You can’t do this.” Just as large language models hallucinate fluent but false sentences, the brain can hallucinate a loss of control. The skill remains, but anxiety and prediction errors rewrite the story, and the body obeys.
Psychologists call these distortions thinking errors. As Daniel Kahneman explained in Thinking, Fast and Slow, these “System 1” judgments are fast, intuitive, and effortless, but also prone to error. They show how the brain’s predictive engine, designed for survival, can distort human relationships and decision-making.
The same process is seen in vision loss. When a person loses part of their visual field, as in glaucoma or after a stroke, the brain adapts by filling in the gaps. This “perceptual filling-in” lets someone with blind spots experience a seamless visual scene, even when parts of the image never reach the brain.
These same thinking errors now play out in digital life, where social media, texting, and even AI responses give us fragments without context, leaving our brains to fill in the blanks with bias rather than truth. Generations fill in the blanks differently because the way we communicate has changed. Baby Boomers rely on tone, expression, and personal context, while many Gen Zers read an unplanned call as danger and all caps as shouting.
Changing how we communicate starts with slowing down and considering the other “OG ChatGPT” on the other side of our missives. Before reacting to a fragment of text or an emoji, we can ask whether we have the full picture or whether we are rushing to a headline, like the tabloid in our heads.
From reflex to reflection
To respond better to cues, we might ask:
Am I seeing only what confirms what I expect, or am I noticing what doesn’t fit?
What else could this expression, gesture, or silence mean?
Am I rushing to act on the first thing I see, or have I taken a second look?
Is this true evidence, or just my brain completing the picture too quickly?
Do I trust an algorithm’s version of what I see without asking what was left out or manipulated?
And if my brain is the OG ChatGPT, am I training mine with depth and truth — or just speed and fragments?
These questions are simple, but they create space for a slower, more deliberate form of seeing: the kind that helps athletes track a ball, doctors read a patient, and all of us move from reflex to understanding. In the end, how we choose to see determines how well we understand, because it is not the blanks themselves, but how we fill them, that shapes the truth.
On this path, technology, like an unreliable narrator, can mislead or teach. Working with AI may scare us or help us understand our patterns better and sharpen how we imagine and interpret risks and possibilities.
(Sreedhar Potarazu, MD, MBA, is an ophthalmologist, healthcare entrepreneur and author with more than two decades of experience at the intersection of medicine, business, and technology. Carin Isabel Knoop founded and leads the Case Research & Writing Group at Harvard Business School, an output-driven team of researchers who collaborate with faculty members and organizations worldwide to craft world-class curricular and pedagogical experiences on work and leadership. She is the co-author of Compassionate Management of Mental Health in the Modern Workplace (Springer).)

