The recent profile of Sam Altman in The New Yorker reads as a story about power, leadership, and trust. But at a deeper level, it is really about how thinking errors develop when the stakes get high enough. To be clear, I am not claiming to be an expert; I am on my own journey of reflecting on these errors. When you look at the profile through the “telescope model of rationalization,” what stands out is not just what decisions were made, but how they were justified.
At the center of the story around Sam Altman is a subtle but critical distinction between rationality, rationale, and rationalization. Rationality is the disciplined use of facts, evidence, and consistent logic to arrive at decisions, even when the outcome is inconvenient. Rationale is the explanation given for a decision—the story we tell ourselves and others about why something was done.
Rationalization, however, is what happens when that explanation is shaped after the fact to justify a decision that may not fully align with the original facts or principles. In high-stakes environments like the one described in The New Yorker profile, these three begin to blur. Decisions may start with some grounding in rationality, but as pressure builds, the rationale becomes more flexible, and over time, that flexibility can drift into rationalization. The danger is not that this is done intentionally, but that it becomes invisible to the person making the decisions, making it difficult to distinguish between what is true, what is explained, and what is justified.
Rationalization does not start as lying. It starts as small shifts in thinking toward justification. You begin with a clear principle, but over time that principle bends to fit the goal, because the goal and personal identity (the ego) have fused into one. The mission starts to matter more than the method.
In the Altman story, there are moments where positions on safety, transparency, and regulation seem to change depending on the situation. That does not automatically mean intent to mislead. It often reflects something more subtle: the belief that flexibility is necessary because the outcome is so important and that failure is not an option.
That is the first thinking error: the outcome becomes the justification. Once that happens, consistency is no longer the standard. The question quietly shifts from “Is this true?” to “Does this help us get where we need to go?” And once that shift happens, it is very hard to reverse.
A second distortion is the replacement of evidence with personal narrative. The line between fact and fiction blurs, and that is dangerous. In complex fields like AI, very few people fully understand the underlying technology, which makes it easier for narratives to take hold. The New Yorker article raises concerns that certain risks and competitive threats may have been framed in ways that influenced policymakers and stakeholders. Over time, those narratives can stop feeling like strategy and start feeling like reality, because they are repeated so often and with such confidence. That is the danger zone—when you are no longer aware that you are shaping the story because you now believe it yourself.
There is also the issue of moral licensing. When someone believes they are working toward a larger good—especially something as significant as shaping the future of artificial intelligence—it becomes easier to justify decisions that would normally raise concerns. Informally, you might call this the Robin Hood effect: the worthiness of the cause seems to license the means.
The evolution of OpenAI from a nonprofit mission to a structure that includes strong financial incentives creates that exact tension. The thinking becomes “if the mission is right, then the path must be acceptable.” But that is not always true.
What is more concerning is what happens as power increases. Feedback starts to weaken. People may raise concerns, but those concerns do not carry the same weight, and this is where serious issues of governance can creep in. People become afraid to question at all. The events around Altman’s removal and rapid reinstatement show how quickly institutional forces—employees, investors, momentum—can override governance. This is not just about one person. It is what happens when stakeholders begin to protect the narrative instead of testing it.
There is also a pattern of holding multiple positions at once. AI is described as both an existential risk and an economic opportunity, something that needs regulation but also needs freedom to grow. Some of that is normal in a fast-moving field. But some of it reflects unresolved tension. Instead of choosing and defending a clear position, the narrative shifts depending on the audience. Over time, that creates confusion not just externally, but internally as well.
The important point here is that none of this is unique to Sam Altman. These are predictable thinking errors. At each step, the reasoning feels justified. Each decision makes sense in the moment. But the field of view keeps narrowing. You see less of what challenges you and more of what supports you.
In high-stakes environments like AI, this becomes dangerous very quickly. Speed increases pressure, and pressure narrows judgment. Once that happens, rationalization is no longer an exception—it becomes the default way decisions are made.
The real takeaway from the New Yorker piece is not whether Altman can be trusted. It is how difficult it becomes to even answer that question when rationalization takes hold. When narrative replaces evidence, when the mission starts to justify the method, and when feedback loops weaken, you are no longer dealing with clear decision-making. You are dealing with a system that believes its own story.
And that is the real risk—not just in AI, but in any system where the stakes are high enough and the thinking is no longer being challenged.