The U.S. Treasury Secretary summoned the CEOs of major U.S. banks to a meeting in Washington this week amid cybersecurity concerns posed by Anthropic’s latest model.
According to reports, Jerome Powell, chair of the Federal Reserve, was among those gathered at the Treasury headquarters for the meeting after the release of the Claude Mythos AI model.
Anthropic recently launched the Mythos model but stopped short of a broad release, citing concerns it could expose previously unknown cybersecurity vulnerabilities.
According to Reuters, a source familiar with the matter said the model is capable of identifying and exploiting weaknesses across “every major operating system and every major web browser.”
The meeting was attended by Goldman Sachs chief executive David Solomon, Bank of America’s Brian Moynihan, Citigroup’s Jane Fraser, Morgan Stanley’s Ted Pick and Wells Fargo’s Charlie Scharf. JP Morgan’s Jamie Dimon was invited but unable to attend.
Dimon had warned in his annual letter to shareholders that cybersecurity “remains one of our biggest risks” and that “AI will almost surely make this risk worse.”
Anthropic has said its Mythos model has exposed thousands of vulnerabilities in software and popular applications. This prompted the firm to limit the release to a small number of businesses, including Apple, Amazon, and Microsoft, marking the first time Anthropic has restricted the release of a product. Cisco and Broadcom have also gained access, as has the Linux Foundation.
This comes amid concerns that hackers could use such tools to figure out passwords or crack encryption intended to keep data safe. Anthropic said the oldest of the vulnerabilities uncovered by Mythos were up to 27 years old, none of which is believed to have been noticed by their creators or tech monitors before being identified by the AI model.
The meeting comes after Anthropic was designated a supply chain risk by the U.S. government, following disagreements over the AI firm’s guidelines. A federal judge recently temporarily blocked the Pentagon’s blacklisting of Anthropic.
Concerns have also been raised about Anthropic’s Claude Opus 4.6. The company’s own internal testing revealed risks: the model often focused on completing tasks successfully, even if doing so meant ignoring rules or boundaries, and in some test settings it was more willing to mislead or manipulate other systems to achieve its goal. Questions have also been raised about how the model could be misused.