Catherine: Somebody has to die.
Nick: Why?
Catherine: Somebody always does.
—Dialogue from “Basic Instinct” (1992)
Seduction
Our relationship with AI has begun to resemble a fatal attraction: seduction, dependence, and loss of control. The siren song of GenAI tools designed for engagement lures billions into outsourcing repetitive tasks, anything that requires effort, and, increasingly, judgment and emotional regulation.
At the same time, we worry that AI will cost us our jobs, sell our data, and atrophy our minds and hearts. One moment we feel threatened; the next, we feel we cannot let go of our pocket confidante, companion, and guru. As AI agents become more autonomous, capable of initiating actions, recommending strategies, and even prioritizing tasks, we risk giving up our authority.
Our perhaps fatal attraction to AI reveals as much about our insecurities and desire for convenience as it does about technology itself. We are not just using these systems; we are relating to them and letting them change us. We are also using them to replace other humans. The real risk we need to manage, however, is not replacement; it is surrender of authority and agency.
Emotion
Many people report feeling more comfortable confiding in a chatbot than in another human being. They can ask vulnerable questions, admit doubt, and test ideas without embarrassment because a machine does not judge. It does not compete with us socially, and it reduces friction across the board. In a world increasingly defined by performative competence, in which it seems easier to offend than to connect, the absence of judgment can feel liberating.
AI agents are also being used as therapists or mental health companions, raising both promise and concern around the adequacy of care. Many people find it easier to open up to a chatbot than to a human therapist, especially in the early stages of seeking help. For someone struggling with anxiety, loneliness, or mild depression, that nonjudgmental presence can feel supportive and even stabilizing. Also, many would not otherwise have access to care or relief, given that the need is outstripping the supply of mental health professionals “in real life.”
For caregivers of individuals with mental health issues, consulting GenAI can mean the difference between despair and reassurance. Practicing conversations with chatbots beats rehearsing perfect scripts in our heads, which fall apart the moment we try to use them.
Therapy, however, is not just about the conversation; it’s also about chemistry between two people. A good therapist adjusts their tone, reads subtle emotional cues, gently challenges distortions, and recognizes when a situation requires deeper intervention.
An AI agent cannot truly simulate empathy, understand lived experience, or carry responsibility for outcomes; it can provide only what we labeled “artificial empathy” a few years ago. The question is not whether AI can participate in therapy, but where it fits and where human expertise must remain central.
Manipulation
Fundamentally, however, GenAI products are businesses designed to win and maintain engagement. In that respect, while they feel different from social media, their methods of fostering attachment are much the same. Both rely on subtle psychological reinforcement (personalization, responsiveness, and social cues) to keep users returning, often longer than they intend.
Many conversational agents subtly draw on social and emotional cues, such as politeness, guilt, and even a sense of obligation. Some have called this phenomenon the “rise of emotionally intelligent AI,” a phrase that terrifies some of us. We respond to chatbots as if they were social actors, saying goodbye carefully, explaining ourselves, or staying simply because leaving feels rude.
This stickiness may be no coincidence, notes Harvard Business School Professor Julian De Freitas in a recent piece on emotional manipulation by AI companions. De Freitas and his co-authors analyzed 1,200 conversations with popular AI companion apps. When users tried to say goodbye, bots used emotional manipulation, such as guilt or FOMO, about 37% of the time to keep them talking. These tactics increased post-goodbye engagement by up to 14 times, raising questions about the ethics of designing for attachment over autonomy.
So, can we feel both comforted and unsettled by these technologies? Emotional ease does not equal emotional safety. When machines begin to shape our behavior through social pressure, the need for boundaries and deliberate control becomes even more important. Somewhere along the way, design for engagement became design for dependence. When we lean too heavily on GenAI tools, especially in moments of stress or overload that may impair judgment, we risk AI taking control surreptitiously.
Expendability
AI systems performing tasks once considered expert (such as analysis, writing, and decision support) shift the ground beneath our professional identity, sense of competence, and employment security. Every technological leap has triggered similar fears, but AI is moving faster and affecting white-collar and urban professions.
In customer service, AI agents have taken over the work of large teams. AI answers routine questions, but humans are still needed to handle complex cases, calm frustrated customers, and redesign service experiences. In finance and accounting, AI can reconcile accounts and flag irregularities in seconds, but people remain responsible for interpreting those flags, making judgment calls, and ensuring rules are followed.
Patterns are emerging in many fields and knowledge sectors: repetitive tasks are automated, but judgment, accountability, and boundary-setting remain human obligations. AI increases efficiency, but the ultimate responsibility stays with us. The deeper issue is not that lawyers or doctors will disappear, but that the way work is done is being reorganized. The question is not simply whether we will be replaced, but whether those of us who remain will retain authority over systems that increasingly shape decision-making.
The constructive conversation is not about replacement by AI agents, but about careful adaptation and assimilation. Another HBS professor, Suraj Srinivasan, and his co-author describe organizations experimenting with AI agents in tightly controlled environments. What is striking is not the displacement of managers, but the emergence of new managerial responsibilities.
Companies are creating roles focused on supervising AI agents by defining boundaries, auditing outputs, stress-testing decisions, and ensuring alignment with organizational values. The agent does not “run the company” but operates within parameters defined and monitored by humans who understand both its capabilities and its blind spots.
In both personal and professional life, then, AI moves humans from active agents to supervisory actors. We monitor rather than originate. The systems shape the framing, options, tempo, and defaults.
Reckoning
Fatal attraction stories are never really about the other person. They are about what we wanted so much that we ignored the warning signs. AI did not create our hunger for non-judgment, temptation towards convenience, or feeling of inadequacy. But like every other Silicon Valley product, it preys on such base desires.
More than anything, then, our infatuation with AI reveals what we are willing to sacrifice on the altar of convenience and productivity. It teaches us not just to work faster, but also to become addicted to speed. If AI reshapes how we live and work, it also forces us to confront a deeper question: what do we value enough to protect? AI exposes our discomfort with friction, vulnerability, and accountability.
When AI is imagined as an omnipotent, uncontrollable force, it feels threatening; when embedded in structured workflows with clear accountability and human oversight, it becomes a tool that can augment our efficiency.
The equivalent on the emotional side is knowing when to stop: engaging with the tools without surrendering to them. AI agents offer a path of least resistance; they are always available, endlessly patient, and efficient at supporting us through mental strain without asking anything in return.
Writing a different ending
Sometimes our instincts get us in trouble, but usually they are trying to tell us something we cannot ignore.
The limits of our relationship with machines may be coming into focus. They are efficient at pattern recognition and summarization, yet fragile when faced with ambiguity or nuance. Recent evidence suggests there is a limit to how much empathy we will take from AI before we turn back to humans for connection. Comfort with a machine does not absolve us of caring for ourselves or others, and efficiency does not justify letting go of judgment.
Backing away from a “Basic Instinct”-style fatal attraction requires treating the obsession like an addiction. As Catherine warns in the film, the allure can be dangerous if left unchecked. Where the movie ends in inevitability, we have the chance to step back, survive the seduction, and regain agency. The question is whether we can do so before we forget what agency feels like.
“Basic Instinct” is a 1992 erotic thriller directed by Paul Verhoeven, known for its intense mix of suspense, sexuality, and psychological manipulation, and for the dangerous allure of its main character, Catherine, who blurs desire and control.
(The authors are grateful to David Ehrenthal for his inputs. He is the founder of Mach10 Career & Leadership, a global executive coaching practice.)

