Mrinank Sharma, who had led Anthropic’s safeguards research team since its launch last year, shared his resignation letter in a post on X on Monday morning. The post quickly garnered attention and was viewed over a million times.
“The world is in peril. And not just from AI, or bioweapons, but from a whole series of interconnected crises unfolding in this very moment,” Sharma wrote. “We appear to be approaching a threshold where our wisdom must grow in equal measure to our capacity to affect the world, lest we face the consequences.”
Sharma suggested that there is a gap between what Anthropic says publicly about AI safety and what the company practices internally. “Throughout my time here, I’ve repeatedly seen how hard it is to truly let our values govern our actions,” he wrote in his note addressed to colleagues. “I’ve seen this within myself, within the organization, where we constantly face pressures to set aside what matters most, and throughout broader society too.”
Sharma added that instead of continuing his work teaching AI to be more transparent with humans and less sycophantic, he feels “called to writing that addresses and engages fully with the place we find ourselves.” With this writing, he wants to place “poetic truth alongside scientific truth as equally valid ways of knowing, both of which I believe have something essential to contribute when developing new technology.”
Sharma’s comments about AI safety and workplace practices are especially notable since Anthropic has long positioned itself as a safe and responsible artificial intelligence company.
Sharma mentioned plans to pursue a poetry degree and “devote myself to the practice of courageous speech,” adding he wants to “contribute in a way that feels fully in my integrity.”
Sharma, who has a PhD in machine learning from the University of Oxford, began working at Anthropic in August 2023, according to his LinkedIn profile. According to his website, the team he formerly led at Anthropic researches how to mitigate risks from AI. Sharma said in his resignation letter that some of his work included developing defenses against AI-assisted bioterrorism and researching AI sycophancy, the phenomenon in which AI chatbots overly praise and flatter a user.
According to a report published in May, the Safeguards Research Team had focused on researching and developing protections against actors using an AI chatbot to seek guidance on how to conduct malicious activities.
A study Sharma published last week revealed that using AI chatbots could cause users to form a distorted perception of reality. The study found that “thousands” of interactions that may produce these distortions “occur daily.” Severe instances of distorted perceptions of reality, which Sharma refers to as disempowerment patterns, are rare, but rates are higher for topics like relationships and wellness. Sharma said his findings “highlight the need for AI systems designed to robustly support human autonomy and flourishing.”
Other high-profile employees of AI companies have quit citing ethical concerns. Tom Cunningham, a former economic researcher at OpenAI, left the company in September and reportedly said in an internal message that he had grown frustrated with the company allegedly becoming more hesitant to publish research critical of AI usage.
Jan Leike, who now leads safety research at Anthropic and was previously part of OpenAI’s since-dissolved Superalignment team, said in a post on X upon his resignation from OpenAI that he had been “disagreeing with OpenAI leadership about the company’s core priorities for quite some time, until we finally reached a breaking point.”
Gretchen Krueger, who left her post as an AI policy researcher at OpenAI shortly after Leike, said in posts on X that the company needs to do more to improve “decision-making processes; accountability; transparency” and “mitigations for impacts on inequality, rights, and the environment.”

