Google announced that its AI model achieved gold medal-level performance at a global mathematics competition. Meanwhile, OpenAI said one of its experimental reasoning models reached the same mark.
An advanced version of Google’s Gemini Deep Think model solved five of the six problems at the International Mathematical Olympiad (IMO), scoring 35 points in total to reach gold medal-level performance.
“This year, our advanced Gemini model operated end-to-end in natural language, producing rigorous mathematical proofs directly from the official problem descriptions – all within the 4.5-hour competition time limit,” said DeepMind’s Thang Luong and Edward Lockhart in a blog post.
“We can confirm that Google DeepMind has reached the much-desired milestone, earning 35 out of a possible 42 points — a gold medal score. Their solutions were astonishing in many respects. IMO graders found them to be clear, precise and most of them easy to follow,” said IMO President Prof. Dr. Gregor Dolinar.
OpenAI, for its part, also reached gold medal-level performance with one of its experimental models. “I’m excited to share that our latest @OpenAI experimental reasoning LLM has achieved a longstanding grand challenge in AI: gold medal-level performance on the world’s most prestigious math competition—the International Math Olympiad (IMO),” said OpenAI researcher Alexander Wei on X. Wei clarified that this IMO-level model is still in the research phase, and that OpenAI doesn’t plan to release anything with that level of math capability for several more months.
“In our evaluation, the model solved 5 of the 6 problems on the 2025 IMO. For each problem, three former IMO medalists independently graded the model’s submitted proof, with scores finalized after unanimous consensus. The model earned 35/42 points in total, enough for gold!” he added.
According to Reuters, this marks the first time AI systems have reached gold medal-level scores at the high school-level International Mathematical Olympiad. Both Google and OpenAI hit the same mark, solving five of six problems with general-purpose reasoning models. What sets this apart is that the models worked through the problems entirely in natural language, reading the official problem statements and writing out proofs directly, a significant shift from earlier, specialized approaches that relied on formal mathematical languages.
Last year, Google DeepMind’s AlphaProof and AlphaGeometry 2 systems reached silver medal-level performance, solving four of six IMO problems for 28 points. Now, Google says it plans to share a version of its more advanced Deep Think model with a group of trusted testers, including mathematicians, before eventually making it available to Google AI Ultra subscribers.

