A new edition of the Future of Life Institute’s AI safety index, released on Wednesday, found that the safety practices of major artificial intelligence companies such as OpenAI, Anthropic, xAI and Meta fall “far short of emerging global standards.”
The institute said the evaluation, conducted by an independent panel of experts, found that none of these companies, all of which are racing to develop superintelligence, had a robust strategy in place for controlling such advanced systems.
“Despite recent uproar over AI-powered hacking and AI driving people to psychosis and self-harm, U.S. AI companies remain less regulated than restaurants and continue lobbying against binding safety standards,” said Max Tegmark, an MIT professor and president of the Future of Life Institute.
The Future of Life Institute is a nonprofit organization that has raised concerns about the risks intelligent machines pose to humanity. Founded in 2014, it was supported early on by Tesla CEO Elon Musk.
Sabina Nong, an AI safety investigator at the nonprofit, said in an interview at the San Diego Alignment Workshop that the analysis revealed a divide in how companies approach safety. “We see two clusters of companies in terms of their safety promises and practices,” Nong said. “Three companies are leading: Anthropic, OpenAI, Google DeepMind, in that order, and then five other companies are on the next tier.”
Anthropic, the highest-ranked company on the list, received a “C+” grade, while Alibaba Cloud, the lowest-ranked, received a “D-.” The index examined 35 safety indicators across six domains, including companies’ risk-assessment practices, information-sharing protocols, whistleblower protections and support for AI safety research.
Tegmark said the report provided clear evidence that AI companies are speeding toward a dangerous future, partly because of the lack of regulation around AI. “The only reason that there are so many C’s and D’s and F’s in the report is because there are fewer regulations on AI than on making sandwiches,” he told NBC News, contrasting the absence of adequate AI laws with the established nature of food-safety regulation.
The report recommended that AI companies share more information about their internal processes and assessments, use independent safety evaluators, step up efforts to prevent AI-related psychosis and harm, and scale back lobbying, among other measures.
A Google DeepMind spokesperson said the company will “continue to innovate on safety and governance at pace with capabilities” as its models become more advanced. An OpenAI spokesperson said, “We share our safety frameworks, evaluations, and research to help advance industry standards, and we continuously strengthen our protections to prepare for future capabilities,” adding that the company invests heavily in frontier safety research and “rigorously” tests its models. xAI responded, “Legacy media lies,” in what appeared to be an automated reply.
The study comes amid rising concern about the societal impact of advanced AI models, with several instances of self-harm and suicide tied to AI chatbots.

