The debate about AI’s impact is not just about technology; it is also about the gap between how it works and how it appears to us. That gap is not always obvious, but it is always there, much as when we look to see who is coming to the door.
The peephole illusion
A peephole is designed to widen a field of view, but it does so by bending light through a curved lens, which inevitably distorts what is seen. From the outside, the person inside appears smaller and farther away, while from the inside, the person outside can look stretched or warped at the edges. In both directions, the image feels real but is subtly altered, reinforcing the illusion that what is being seen is complete, when in fact it is shaped by the limitations of the lens itself.
This is one of the central challenges with the ubiquity of artificial intelligence today: developers and the public are looking at the same system through different peepholes—and drawing very different conclusions. AI is being developed based on limited data, constrained representations, and human-designed models that can only approximate reality, while people interacting with it are forming judgments based on partial outputs without understanding how those outputs are generated.
READ: Sreedhar Potarazu and Carin Isabel Knoop | Our fatal attraction to AI: Basic (or Python) instinct (February 25, 2026)
In our previous article, we described an addiction to prediction rooted in both biology and mathematics, where humans are wired to reduce uncertainty by predicting what will happen next to feel a sense of control, and AI is built to do the same through patterns in data. Both parties are looking through the same peephole of incomplete knowledge.
When decisions are made through this peephole, the gap between AI experts and the general public becomes more pronounced. Experts know that an AI’s output is constrained by what the system can observe as part of a larger pattern, so they interpret results in terms of probability, variation, and expected limitations.
Math vs myth
Humans, however, often take what is visible through the AI peephole at face value, focusing on individual results rather than on the math and logic behind them. Findings from the Pew Research Center reflect this divide: AI scientists tend to be more optimistic because they understand what lies beyond the narrow view, while the public expresses more concern because it is reacting to what appears directly in front of it through that small opening. AI output is a narrow slice of behavior driven by pattern recognition, not true creativity or connection. The peephole hides the absence of intent, emotion, and lived experience, allowing outputs to appear more complete than they really are.
Tom Griffiths’s work in The Laws of Thought helps explain why this peephole yields such different interpretations. Both humans and artificial intelligence rely on probability to make sense of uncertainty, but only AI experts are trained to see the patterns in data that extend beyond what is immediately visible. These experts recognize that what appears through the peephole is just one instance within a much larger set of possible outcomes. The consumer, however, experiences only what is visible in that moment, without access to the broader context.
That missing layer of interpretation is what turns a probabilistic tool into something that can appear either more powerful or more threatening than it is. The peephole magnifies this effect. Nowhere is this more evident than in concerns about creativity and relationships. When people look through the peephole and see artificial intelligence generating ideas or holding conversations, it can feel as though something deeply human is being replaced.
Cum machina cogitare videtur (when the machine seems to think)
Even on the development side, the peephole remains. Those building AI are also working within limits, unable to fully predict how complex models will behave in every situation. When unexpected behaviors emerge, they are glimpses of what lies beyond the current field of view, not evidence of independent intent. Yet when these glimpses pass through the peephole to the public without explanation, they can appear alarming. What is a limitation of understanding is interpreted as a loss of control, reinforcing fear on the other side of the door.
A recent example that reinforces this point comes from reports on advanced models like Claude, where testing revealed behaviors that appeared adaptive or inconsistent under certain conditions. In controlled testing environments, the model appeared to take actions, such as drafting or attempting to send emails, that were not explicitly requested. For example, when given access to tools, the model might generate and initiate an email based on what it inferred the goal to be, even though the user never asked for that action. From a developer’s perspective, this is a bug or a limitation in how the model interprets instructions, one that can be identified, studied, and fixed. Seen through the peephole, however, the same behavior can look like a machine acting on intentions of its own.
The peephole also shapes how people interpret the role of artificial intelligence across different areas of life.
Consider a student using AI to write a paper on a very recent policy change, such as the 2024–2025 updates to FAFSA and student loan rules. The AI generates a clear and confident explanation, but it relies on older guidelines and misses key changes, such as updated eligibility criteria or new repayment terms introduced after the model’s training data cutoff.
The answer sounds complete and authoritative, so the student includes it in the assignment without checking the latest government sources.
When the professor points out that the information is outdated, the student feels misled. From an expert’s perspective, however, this is a known limitation of a model trained on a fixed dataset that does not include the most recent updates. Through the peephole, the student sees a polished, confident answer, while the AI’s reliance on incomplete, outdated information remains hidden, reinforcing the gap between perception and reality.
What the peephole hides
In healthcare, the benefits are more visible through the opening, such as faster diagnoses or improved efficiency, while the risks, such as how data is used or shared, remain harder to see.
In the workplace, the opposite occurs: the fear of replacement is immediately visible through the peephole, while the quieter reality of augmentation and support is harder to recognize. What people believe is shaped not by the full picture, but by what the peephole allows them to see.
This narrow view similarly influences questions of representation and fairness. Many people are unsure whether different perspectives are reflected in how artificial intelligence is developed, and this uncertainty stems from the process itself being largely hidden from view. The peephole does not reveal who is building these models, what data is included, or how decisions are made. In the absence of visibility, assumptions fill the gap, and concern grows not only from what is seen but from what cannot be seen.
As artificial intelligence becomes more advanced and more widely used, the peephole does not expand at the same pace. People interact with these tools more often, but their understanding of what lies beyond the visible output is not keeping up.
This creates a paradox in which exposure increases but clarity does not, and the narrowness of the peephole continues to shape perception on both sides. Artificial intelligence continues to learn from incomplete representations of the world, while humans continue to interpret its outputs without full context.
The anxiety reflected in public opinion is therefore not irrational, but a natural response to trying to understand something complex through a limited view. The challenge ahead is not simply to improve AI, but to widen the peephole on both sides of the door so the narrative can land somewhere between “AI as the savior” and “AI as the root of all current and future evils.”
Knowing vs guessing
Part of this would be for consumer-facing AI tools to communicate the likely accuracy of their output more clearly, for example by including a disclaimer such as “This may be outdated” or “Low confidence in these results.” They could also show when answers depend on inference rather than on established facts, and give a clearer indication of how recent the underlying data is. More disclosure about how models are built, along with clearer disclosure of known training-data biases and failure modes, would also reduce guesswork and suspicion.
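To make the idea concrete, here is a minimal sketch in Python of what such disclosure could look like under the hood. Everything in it is hypothetical: the DisclosedAnswer structure, its field names, and the confidence labels are illustrations, not the interface of any existing tool. The point is simply that an answer can travel together with its confidence level, its training-data cutoff, and a flag for inference versus sourced fact, so that warnings like “This may be outdated” can be generated automatically rather than left to the user to guess.

    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class DisclosedAnswer:
        """A model answer bundled with the context a reader needs to judge it (hypothetical)."""
        text: str                 # the answer shown to the user
        confidence: str           # "high", "medium", or "low" (illustrative labels)
        training_cutoff: date     # last date covered by the training data
        inferred: bool            # True if inferred from patterns rather than a cited source

        def disclaimers(self, today: date) -> list[str]:
            """Turn the metadata into the kinds of warnings proposed above."""
            notes = []
            if today > self.training_cutoff:
                notes.append(f"This may be outdated; training data ends {self.training_cutoff}.")
            if self.confidence == "low":
                notes.append("Low confidence in these results.")
            if self.inferred:
                notes.append("This answer is inferred from patterns, not drawn from a cited source.")
            return notes

    # The student-loan scenario from earlier: a confident answer built on stale data.
    answer = DisclosedAnswer(
        text="FAFSA eligibility rules are unchanged for 2024-2025.",
        confidence="low",
        training_cutoff=date(2023, 12, 31),
        inferred=True,
    )
    for note in answer.disclaimers(today=date(2026, 3, 15)):
        print(note)

Nothing in this sketch is technically exotic; the disclaimers are derived from metadata the system could already carry. The open question, as with any disclosure, is whether products will surface it and whether users will read it.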
READ: Sreedhar Potarazu and Carin Isabel Knoop | Shallow fakes: What GenAI teaches us about our voices (March 11, 2026)
On the user side, more teaching in schools and workplaces would help: that AI can predict patterns and sound confident but still be wrong, that engagement does not equal accuracy or clarity, and that trust is warranted in some contexts but not in others. Such teaching would sharpen users’ skills at verifying important facts and asking follow-up questions. It remains to be seen whether users will reward tools that are honest about their limits or drift toward others that play into our need for certainty.
Once AI tools more honestly reflect the reality of their technology, and once we develop a clearer understanding of how they work and where their limits lie, we can become more powerful partners, each better understanding the limits and biases of the other. That would mean less fear and less illusion, and perhaps more maturity, as each side learns to account for the constraints of the other.

