Finance ministers, bankers, and financiers have expressed concerns about Anthropic’s Claude Mythos and its potential to undermine the security of financial systems. Experts say the model may have an unprecedented ability to identify and exploit cybersecurity weaknesses, though others caution that further testing is needed to properly understand its capabilities.
Canadian Finance Minister François-Philippe Champagne told the BBC that Mythos had been discussed extensively at the International Monetary Fund (IMF) meeting in Washington, D.C., this week.
“Certainly, it is serious enough to warrant the attention of all the finance ministers,” he said. He went on to draw parallels to the Strait of Hormuz, which has become a focal point of tensions following the U.S.-Israeli conflict with Iran. “The difference is that the Strait of Hormuz – we know where it is and we know how large it is… the issue that we’re facing with Anthropic is that it’s the unknown, unknown.”
READ: Trump administration may be pushing banks to trial Anthropic Mythos (April 13, 2026)
“This is requiring a lot of attention so that we have safeguards, and we have processes in place to make sure that we ensure the resiliency of our financial systems,” he added.
Mythos is one of Anthropic’s latest models, developed as part of its broader AI system, Claude. Anthropic introduced Mythos as a breakthrough in autonomous cybersecurity, warning that the model could be dangerous if it fell into the wrong hands. Rather than allowing broad access, the company is reportedly testing it through a closed initiative called “Project Glasswing,” granting access only to select researchers and organizations.
On Thursday, Anthropic released a new version of an existing model, Claude Opus, saying it would allow Mythos’ cyber capabilities to be tested in less powerful systems.
READ: US judge gives Anthropic reprieve from Pentagon blacklisting (March 27, 2026)
The UK’s AI Security Institute has been given access to a preview version of Mythos, and has published the only independent report into the model’s cybersecurity skills. Its researchers say it is a powerful tool able to find many security holes in undefended environments, but suggested Mythos was not dramatically better than its predecessor, Claude Opus 4.
“Our testing shows that Mythos Preview can exploit systems with weak security posture, and it is likely that more models with these capabilities will be developed,” the report authors said.
However, some believe Anthropic’s claims about the model are a means to create hype. This isn’t the first time an AI company has claimed that its models’ capabilities mean they should not be released. In February 2019, OpenAI made similar claims when it chose to stagger the release of GPT-2, an earlier version of the models that now power ChatGPT.