For any event, from an altercation in the subway to street protests, we cannot see the full context, especially not in charged moments that are widely doctored and tailored to our social media echo chambers. Neither side sees the moments before the encounter, the conflicting statements about what preceded it, the broader enforcement operation underway, or the subsequent reactions from people present, witnesses, TikTokers, and the authorities. In fact, within seconds, we see a highly stylized and edited version of the context that algorithms think we would like to see.
What is missing from this viral moment is the larger frame in which it unfolded: a contentious regime that had drawn protests, sharply differing official narratives about a particular incident, and an ongoing investigation into the use of force.
Learning Machines: A scaffold for a more coherent year (Part 1): Attention (January 2, 2026)
The viewer’s attention latches onto the fragment that evokes the strongest reaction and confirms their own interpretation of what must have transpired, and the broader context fades.
Algorithms hijack our attention and then narrow our context. This is how the scaffold of coherence, built from attention, context, and purpose, can break down.
In Part 1 of our coherence tripod, we looked at what attention is and how to command it. What we notice depends on what we let ourselves attend to, just as in LLMs, where the prompt directs the model’s focus. Here, we look at context, the next pillar of the scaffold of coherence. The third and final part will be on purpose, the last leg of the coherence tripod.
“Context” comes from the Latin word “contexere,” meaning “to weave together.” It’s not just background; rather it is the thread that turns scattered details we observe into meaning.
Context is what anchors attention to purpose. But context cannot become a cage either: if it is too narrow, it blinds or biases us.
Context needs to be balanced to give us a clear view of the pieces of a puzzle and how they fit together.
Too often, when we look at a situation, we miss the details and subtle cues that are relevant to understanding.
In “Visual Intelligence,” lawyer and art historian Amy Herman shows that seeing clearly is not about spotting isolated details; it is about taking in the whole scene: what’s there, what’s missing, and how it all fits together. For a doctor, this matters because good judgment does not come from a single test or observation; it comes from weaving together what is seen with what is unsaid and easily overlooked. For a case writer and management researcher, the job is to include enough facets of a managerial decision to mimic the context in which a non-obvious choice must be made.
When attention is fragmented, we can lose context of what happened, as in a short video clip on social media. Doctors can also make this mistake if they fail to notice subtle signs when they examine the patient or take their history and conflate signs (objective indicators of a disease like fever) with symptoms (subjective experience felt only by the patient). Both signs and symptoms are crucial clues that doctors use, alongside other tests.
Learning Machines: The ebb and flow of ‘thanks’ and ‘giving’ (November 26, 2025)
The evolution of context
Over millions of years, the human brain has evolved to focus on what matters most for survival, filtering sensory input into a coherent picture of the world. This selective attention, driven by the reticular activating system of the brain, was never meant to erase context but to shape it, as meaning emerges over time in response to our environment. But in a flash of evolution, modern technology has hijacked this process, short-circuiting our wiring.
The prompts on our phones and the algorithms behind them now act as an external reticular activating system, deciding for us what deserves attention; and that decision is not neutral in any way.
Context switching
Our typical day now unfolds as a continuous chain of context switches: we wake up and immediately scan messages, interrupt getting ready to respond to an email, scroll social media over breakfast, answer texts during the commute, then arrive at work already mentally fragmented. We start a task, get pulled into email, take a call, prepare for a meeting, multitask during the meeting, and then struggle to remember where we left off—repeating this cycle dozens of times before lunch.
Early in the pandemic, we joked that dolphins might be better at working from home than humans, since they are able to remain present in two worlds at once: they sleep with only half their brain at a time. What we were really observing was not multitasking, but the strain of maintaining coherence when context collapses and our attention is forced to split. At any given moment, our phones connect us to other worlds.
None of these interruptions feels significant on its own, but each one forces the brain to suspend one mental state and activate another, leaving residue behind. By midday, attention is scattered, memory is strained, fatigue has set in, and productivity quietly erodes—not because the work is hard, but because context has been repeatedly broken without ever being fully restored.
Learning Machines: Reclaiming agency before becoming semi-conscious humans
Context switching drains both productivity and energy because the brain spends more time adjusting than working. Every time attention jumps from one task to another, the mind must reset—remembering what we were doing, figuring out what comes next; this all takes effort but produces nothing. As a result, tasks take longer, decisions slow down, and mistakes creep in.
By the end of the day, we feel tired and unaccomplished, even though we’ve been busy the whole time. Constant switching gives the illusion of getting things done while quietly undermining focus and leaving less room for meaningful work.
Cognitive overload
Cognitive overload sets in when information arrives faster than our attention can process it. In those moments, our mind struggles to focus on what actually matters, context begins to thin, and our decisions quietly shift from deliberate to reactive.
Imagine receiving an email from a colleague questioning a decision we made, copied to others, polite in tone but pointed enough to sting. Before we have fully absorbed the message, our attention narrows, and we begin drafting a response, not to clarify or move the work forward, but to defend ourselves.
We read selectively, lingering on words that feel accusatory. At the same time, the message’s larger purpose slips into the background, past grievances surface, and what began as a collaborative exchange slowly collapses into a single perceived threat. This is not a failure of intelligence or professionalism, but a failure of attention under strain, where overload shrinks context.
Machines falter in the same way. When they are fed rapid inputs or unclear goals, even sophisticated systems can generate responses that sound coherent yet miss the point entirely. This is because overload does not just slow processing; it scatters attention, blurs purpose, and pushes both humans and machines toward reaction rather than understanding.
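A toy sketch can make this concrete. The snippet below (hypothetical, for illustration only; the `effective_context` helper and the messages are invented) simulates a fixed-size context window: as interruptions pile up, the original goal falls out of the window, so whatever the system generates next is coherent with the recent asides but disconnected from the purpose.

```python
from collections import deque

def effective_context(messages, window=3):
    """Keep only the most recent `window` messages, as a
    fixed-size context buffer would (a crude stand-in for
    an LLM's limited context window)."""
    return list(deque(messages, maxlen=window))

history = [
    "GOAL: summarize the quarterly report",   # the original purpose
    "aside: what's a good lunch spot?",       # interruption 1
    "aside: reply to Sam's email",            # interruption 2
    "aside: schedule the 1:1",                # interruption 3
]

# With a window of 3, the GOAL line has already been pushed out:
# only the three asides remain in the effective context.
print(effective_context(history, window=3))
```

The point of the sketch is not the mechanism but the failure mode: nothing in the remaining context is wrong, yet the one line that carried the purpose is gone, which is exactly how overload produces responses that sound coherent but miss the point.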
Learning Machines: Humans vs. AI: Thinking in epochs (Part 1)
Context bias: When the frame distorts judgment
Context is essential for understanding, but it can also narrow perspective and introduce bias—for both humans and machines. In medicine, clinicians know that experience shapes interpretation: two doctors can look at the same test and draw different conclusions simply because of the other patients or cases they have seen before. AI faces a similar challenge. Systems “remember” prior cases, and that memory can influence recommendations that may be irrelevant to the current patient. In glaucoma progression, for example, identical data might produce different outputs simply because the AI recalls prior patients. The system is not being intelligent; it is reflecting context, which can sometimes be misleading. Two humans on a bus with a disruptive passenger will react very differently, influenced by context bias stemming from differences in prior experiences.
And sentencing decisions can shift depending on fatigue or the cases judges handled earlier in the day. Even experts’ preferences can change with immediate circumstances, as highlighted in “The Winner’s Curse”; a patient might delay medication due to cost, even when that delay carries serious risks. Context does not merely provide background; it actively shapes what we notice, what we prioritize, and how we judge.
The cost of losing context
Dwindling context is a hidden cost of modern life. From checking phones at dawn to juggling emails, meetings, and social media, our attention constantly jumps from one task to another. Each switch forces the brain to pause and reset, draining energy and fragmenting focus. This fragmentation breaks context: our understanding blurs, decisions lose coherence, and even simple tasks feel heavier than they should.
Machines face the same challenge: AI systems can carry over remnants of prior tasks into new ones, producing outputs that seem coherent but miss the real goal. In humans and machines alike, fragmented attention erodes context, and without context, purpose, the force that gives meaning to our actions, slowly fades.
Questions we might ask ourselves to enhance context:
- What part of the larger story may I not be seeing right now, and how can I expand the aperture?
- Which details am I overweighing, and what cues might I be missing?
- What is relevant to the greater purpose that I am not considering?
In Part 3, we examine the role of purpose within the coherence scaffold as the point of convergence for attention and context.

