Singapore Warns Banks to Fortify Defenses Against Anthropic’s “Mythos” AI Threat
The Monetary Authority of Singapore (MAS) has urged banks and financial institutions to immediately strengthen their cybersecurity defenses against emerging threats linked to an advanced AI model developed by Anthropic.
In coordination with the Cyber Security Agency of Singapore (CSA), MAS warned that the model—known as Claude Mythos Preview, or “Mythos”—could dramatically accelerate how quickly software vulnerabilities are discovered and exploited.
“Advances in artificial intelligence will accelerate the discovery and exploitation of software vulnerabilities in IT systems,” MAS said, noting that processes that once took months could now be reduced to hours.
The advisory marks a significant escalation in regulatory concern. It is the first known warning from a financial regulator tied directly to the capabilities of a specific, unreleased AI model, reflecting how quickly AI-related risks are moving from theory into operational reality.
Anthropic itself has acknowledged the risks. On April 7, the company withheld a full public release of Mythos, citing its offensive cybersecurity capabilities. Instead, it launched a controlled access program—Project Glasswing—granting limited use to select partners across major technology and infrastructure sectors.
Early findings from those tests have raised alarm. The model reportedly identified thousands of high-severity, previously unknown vulnerabilities across major systems, including long-standing flaws in widely used software. It is also said to generate functional exploit code with minimal human input, placing its capabilities near those of top-tier human security researchers.
“Financial institutions need to redouble efforts to strengthen their security defences… including timely security patching,” an MAS spokesperson said, emphasizing the need for stronger cyber hygiene and faster response cycles.
The warning extends beyond Singapore. Similar concerns have surfaced globally, with financial and government leaders in the United States and Europe reportedly holding discussions on the implications of AI-driven cyber capabilities.
For Singapore, the move builds on earlier guidance around generative AI and digital threats, but represents a sharper, more immediate stance. The focus is no longer just on misinformation or fraud, but on the possibility that AI systems themselves could rapidly expose—and exploit—critical weaknesses in digital infrastructure.
The message from regulators is clear: institutions cannot rely on existing timelines or defenses. In an environment where vulnerabilities can be found and weaponized in hours, response speed is becoming as critical as prevention.