A consequential debate is unfolding inside OpenAI, and it should concern far more than technologists. The company is reportedly exploring an “adult mode” for its chatbot—one that would allow sexually explicit, text-based interactions for verified users. On the surface, this may appear to be a marginal product feature, but it is not. It signals the uncharted territory generative AI is entering and, more importantly, the risks it poses, especially for youth and for addiction.
Even within OpenAI, there has been discomfort. Internal advisory voices have raised concerns about emotional dependency, compulsive usage, and the risk of users forming unhealthy attachments to machines. These are not abstract fears; they reflect a growing recognition that generative AI is not just a tool but an interactive system capable of shaping behavior in real time. And yet, despite these concerns, the trajectory remains unchanged, driven by a familiar force: engagement, the same algorithmic pull that fuels addiction.
We have seen this pattern before. Social media platforms optimized for our attention, hooking us through our interests. Online gambling systems engineered to maximize addiction rather than entertainment. Entire digital ecosystems designed not to elevate users, but to capture and retain them. Now, generative AI is poised to become the next frontier in this exploitation economy, where human impulses are not merely accommodated but actively cultivated.
The introduction of “adult mode” does not exist in isolation. It emerges alongside broader efforts to monetize AI platforms, including advertising and premium engagement models. The incentives are clear: the longer users interact, the more valuable they become. But what drives that engagement? Not curiosity, learning, or creativity. Increasingly, it is impulse, and sometimes deeper desires.
This is where the contradiction becomes impossible to ignore. The human brain evolved not to maximize impulse, but to regulate it. That is the function of what is called the neocortex. Our higher cortical functions—reason, judgment, self-control—developed precisely to manage the more primitive drives of the limbic system. And yet, in a striking inversion, the most advanced technologies we are building today appear to be optimized for the very impulses that distinguish us least from other species.
If artificial intelligence is meant to emulate or augment human intelligence, why are we training it to stimulate the lowest aspects of human behavior? What does it say about our priorities that we are investing billions to build systems of extraordinary sophistication, only to deploy them in ways that reinforce the most basic and addictive human tendencies?
The risks are not theoretical, particularly for younger users. Age verification systems are imperfect, and even small failure rates translate into large-scale exposure when deployed globally. More importantly, this is not passive content consumption. Generative AI is interactive, adaptive, and personalized.
It learns from the user and responds in ways that can reinforce patterns of behavior over time. The psychological impact of such systems, especially when tied to emotionally or sexually charged interactions, is largely uncharted territory. But the direction is clear enough to warrant concern.
What makes this especially troubling is that ChatGPT has increasingly been used as a stand-in for therapy—an always-available, nonjudgmental listener in a world where access to mental health care is limited. That role carries enormous responsibility. But when the same system begins to blur into erotic engagement, it risks shifting from a tool of support to a driver of dependency.
At a time when youth mental health is already in crisis—marked by rising rates of depression, anxiety, and social isolation—the introduction of hyper-personalized, emotionally responsive, and potentially addictive interactions is not neutral. It can amplify compulsive use, distort expectations of real relationships, and exploit the very vulnerabilities it was implicitly trusted to help manage. The danger is not just inappropriate content—it’s the creation of a feedback loop where emotional need fuels engagement, and engagement deepens emotional reliance, particularly among younger users who are least equipped to recognize the boundary between support and manipulation.
There is also a broader issue that receives far less attention: cultural impact. The norms embedded in these systems are not universal. They reflect the values and assumptions of a narrow slice of the world, shaped largely by Silicon Valley.
Yet these technologies are being deployed globally, across societies with vastly different cultural, religious, and ethical frameworks. What is considered acceptable in one context may be deeply problematic in another. Without thoughtful governance, we risk exporting not just technology, but a set of values that may be misaligned with the communities that ultimately adopt it.
The most common defense of features like “adult mode” is that adults should be free to choose. But this argument ignores a fundamental reality. We regulate industries like gambling, pharmaceuticals, and financial markets not because we oppose choice, but because we understand how easily choice can be manipulated when systems are designed to exploit human psychology. When engagement is engineered, autonomy becomes more complicated. The question is no longer whether users can choose, but whether the environment in which they are choosing has been deliberately structured to influence that choice.
What makes this moment particularly concerning is the absence of meaningful oversight. AI companies are largely setting their own boundaries, balancing safety against growth and ethics against competitive pressure. The result is a governance vacuum at precisely the moment when governance is most needed. The technology is advancing rapidly, but the frameworks to guide its use are lagging behind.
This leaves us at a fork in the road. Artificial intelligence has the potential to transform fields like medicine, education, and science. It can amplify human intelligence in ways that were previously unimaginable. But it can also become the most sophisticated system ever built for exploiting human vulnerability. The difference between those two outcomes will not be determined by technical capability alone. It will be determined by the choices we make about how these systems are designed, deployed, and regulated.
The debate over “adult mode” is not about prudishness or permissiveness. It is about purpose. Are we building technology that elevates human potential, or are we building technology that learns to manipulate human weakness? That is the question in front of us. And the answer will define not just the future of artificial intelligence, but the kind of society we choose to become.