OpenAI is looking to hire a new executive responsible for studying emerging artificial intelligence-related risks in areas ranging from computer security to mental health.
CEO Sam Altman said in an X post that the role of “Head of Preparedness” is a “critical role at an important time.”
“Models are improving quickly and are now capable of many great things, but they are also starting to present some real challenges. The potential impact of models on mental health was something we saw a preview of in 2025; we are just now seeing models get so good at computer security they are beginning to find critical vulnerabilities,” he added.
Altman said we are entering a world that requires a more nuanced understanding and measurement of how those capabilities could be abused, and "how we can limit those downsides both in our products and in the world, in a way that lets us all enjoy the tremendous benefits."
“If you want to help the world figure out how to enable cybersecurity defenders with cutting edge capabilities while ensuring attackers can’t use them for harm, ideally by making all systems more secure, and similarly for how we release biological capabilities and even gain confidence in the safety of running systems that can self-improve, please consider applying,” Altman added, also stating that this will be a “stressful job” and the person hired would “jump into the deep end pretty much immediately.”
OpenAI’s listing for the Head of Preparedness role describes the job as one that’s responsible for executing the company’s preparedness framework, “our framework explaining OpenAI’s approach to tracking and preparing for frontier capabilities that create new risks of severe harm.” Compensation for the role is listed as $555,000 plus equity.
OpenAI first announced the creation of a preparedness team in 2023. The company said the team would be responsible for studying potential "catastrophic risks," whether more immediate, like phishing attacks, or more speculative, such as nuclear threats. Less than a year later, then-Head of Preparedness Aleksander Madry was reassigned to a job focused on AI reasoning. Other safety executives have also left the company or taken on different roles.
This comes at a critical time for AI safety. A new edition of the Future of Life Institute's AI Safety Index, released earlier this month, found that the safety practices of major artificial intelligence companies like OpenAI, Anthropic, xAI and Meta fell "far short of emerging global standards." The institute said the evaluation, conducted by an independent panel of experts, found that none of these companies, all of which are racing to develop superintelligence, had a robust strategy in place for controlling such advanced systems.
There have also been rising concerns about the impact of AI on mental health. OpenAI and its business partner Microsoft are currently facing a lawsuit alleging that the company's chatbot, ChatGPT, validated a user's paranoid delusions, leading him to kill his mother before taking his own life.