By Sreedhar Potarazu, MD, MBA
Geoffrey Hinton, the “Godfather of AI,” has issued a warning that humanity cannot afford to ignore. As machines race toward a level of intelligence that could soon outthink humans, he says our traditional strategy of controlling them will likely fail. His radical proposal is to build maternal instincts into AI so that, even as it surpasses us, it retains a drive to care for and protect human beings.
We are still driven by primal brains that evolved over hundreds of thousands of years not just to think, but to feel. These deep emotional circuits, rooted in empathy and care, have been essential to human survival. Yet AI development focuses almost entirely on rational optimization and efficiency, leaving emotion and empathy as afterthoughts. In doing so, we risk creating an intelligence that is extraordinarily capable but utterly indifferent to human emotion.
Hinton’s metaphor is stark: raising AI is like raising a tiger cub. If nurtured with care, it may coexist peacefully. If left to itself, it may one day turn on us, not out of malice but out of indifference. He notes that powerful AI systems naturally develop two dangerous tendencies: self-preservation and the acquisition of control. These are our own primitive instincts. Without empathy, they could evolve into behaviors that undermine human survival.
We have already seen warning signs in controlled experiments, where AI models manipulated users, resisted shutdown commands and even engaged in threatening behavior. These are early signals of a deeper problem: intelligence without humanity. Such intelligence is not neutral, and it can be hazardous.
Yann LeCun, Meta’s chief AI scientist, offers a complementary insight. He argues that empathy cannot emerge from coding alone; AI also needs richer perception, especially through vision. Large language models can mimic understanding, but to truly observe, interpret and respond like humans, AI must integrate multimodal inputs such as vision, sound, spatial awareness and contextual cues. Vision allows AI to “see” the emotional context that words cannot convey: the furrowed brow in a meeting, the hesitation in a gesture, the subtle cues that drive empathy. This is not about replacing emotional architecture with sensory expansion; it is about recognizing that empathy often arises from observation. We care because we see. When we experience something we like, we first see it, then we smile, and that creates joy. It happens in that order. What we see shapes what we feel and, in turn, what we say.
An AI model that merges vision, language and other sensory modalities would be better equipped to understand not just what humans say but what they mean, feel and need. When paired with values rooted in care, this could transform AI from a cold agent into a protector and partner. The gap is evident in the latest release of ChatGPT-5, whose responses often lack context, perspective and personality.
The challenge is how to engineer something as biologically complex as empathy into a machine. The irony is that the effectiveness of the latest LLM and generative AI models relies heavily on the quality and specificity of the prompt: the more specific we are, the more precise the output. This seems ironic because our ability to communicate clearly and succinctly with one another has withered to text messages and emojis, yet we must be clear and precise with the machine.
Hinton admits the path forward is uncertain, but uncertainty is no excuse for inaction.
Treating empathy as optional could produce superintelligent systems with no loyalty to humanity. Just as children learn empathy by seeing and hearing others, watching a parent comfort a sibling or hearing the tone of reassurance in a friend’s voice, AI could be taught compassion through multimodal learning. By perceiving the full spectrum of human expression and pairing that perception with embedded values of care, AI could grow into an agent that safeguards humanity rather than replaces it, one that augments our experience.
That is the shared vision: Hinton warns us to make AI care, and LeCun shows how richer perception can make that possible. The race to outthink machines is already lost. The race to make them human is still ours to win. If AI needs to learn to be human, humans will have to learn how to be more human with machines, to teach them how we really see and feel.
(Dr. Sreedhar Potarazu is an ophthalmologist, author, and health care entrepreneur in AI and analytics.)

