Microsoft is forming a new team to build artificial intelligence far more capable than humans in certain domains, starting with medical diagnosis. The team, called the MAI Superintelligence Team, follows similar efforts by Meta, Safe Superintelligence, and other companies.
Microsoft’s AI chief, Mustafa Suleyman, said the company plans to invest “lots of money” in the effort. Meta this year offered $100 million signing bonuses to recruit top AI talent. While Suleyman did not say whether such offers were on the table, he said Microsoft AI would continue to recruit from other top labs while staffing the new team with existing researchers, with Karen Simonyan as chief scientist.
According to Suleyman, the company is not chasing “infinitely capable generalist” AI like some of its peers, because he doubts that autonomous, self-improving machines could be controlled, despite ongoing research into how humanity might keep AI in check.
Suleyman said that Microsoft has a vision for “humanist superintelligence,” or technology that could solve defined problems with a real-world benefit. “Humanism requires us to always ask the question: does this technology serve human interests?” Suleyman said.
Suleyman also said he aims to focus the Microsoft team on specialist models that achieve what he called superhuman performance while posing “virtually no existential risk whatsoever.” As examples, he cited AI that solves battery storage problems or designs molecules, a nod to AlphaFold, DeepMind’s AI model that predicts protein structures. Suleyman is himself a DeepMind co-founder.
In a blog post, Suleyman said the new Microsoft AI research group will focus on building useful AI companions that can help people in education and other domains. It will also pursue narrow applications in medicine and renewable energy production.
“We’ll have expert level performance at the full range of diagnostics, alongside highly capable planning and prediction in operational clinical settings,” Suleyman wrote. He also said that he wants to “make clear that we are not building a superintelligence at any cost, with no limits.”
“The project of superintelligence has to be about designing an AI which is subservient to humans, and one that keeps humans at the top of the food chain,” Suleyman told Axios. He also reportedly rejects the narrative of a “race” to AGI, saying results from the new superintelligence team will take time and should be seen as “a wider and deeply human endeavor to improve our lives and future prospects.”

“I think it’s still going to be a good year or two before the superintelligence team is producing frontier models,” he said in the interview.

