AI firm Anthropic recently announced that it would not be releasing its latest and most powerful model, Claude Mythos – a decision which has sent ripples through the cyber insurance industry
On 7 April 2027, artificial intelligence (AI) firm Anthropic announced it would not be releasing its latest and most powerful model, Claude Mythos, to the general public.

Alarmingly, Anthropic cited global cyber security concerns as the reason behind the decision, suggesting that the model was so adept at exploiting weaknesses in digital security that a “coordinated effort to reinforce the world’s cyber defences” was required before a public release was possible.
Indeed, Anthropic’s internal safety team stated that the model was capable of “identifying and then exploiting zero-day vulnerabilities in every major operating system and every major web browser”.
The announcement, however, was met with a mixed response. Some lauded the firm’s intent to put safety ahead of speed, while others questioned the validity of the claims, pointing instead towards an over-hyped marketing stunt.
Tristan Fletcher, founder and chief executive of ChAI Protect – a firm that uses AI to create commodities pricing insurance – and honorary UCL and Cambridge machine learning lecturer, told Insurance Times that the truth likely lies somewhere in the middle.
“When a company refuses to release a model and regulators start running stress scenarios, something real has likely shifted. At the same time, it’s undeniably a strong strategic position to be seen as the lab that doesn’t release the dangerous system,” he explained.
“The truth is probably a combination – a genuine capability jump, framed in a way that reinforces Anthropic’s brand as the responsible actor. The key signal isn’t what Anthropic says, it’s that banks and regulators are taking it seriously. That doesn’t happen for marketing copy.”
What then are the most pressing risks that Claude Mythos – and other, rival models – present to digital firms, their customers and their insurers?
The current landscape
Speaking at Zywave’s Cyber Risk Conference in April 2026, Tracey-Lee Kus, chief executive at Aon’s Global Broking Centre, framed the disruption that insurers are currently facing from AI.
She said: “Cyber insurance pricing rests on historical claims data. Our underwriting cycles run on 12-month policy periods and our repricing windows assume time to observe, analyse and then respond [to cyber security developments].
“With AI, the gap between vulnerability discovery and exploitation has collapsed from months to minutes. What happens to historical frequency data as a predictor of forward-looking events? This becomes insufficient. It’s not working.”
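Kus’ point about historical frequency data can be illustrated with a toy calculation. The sketch below assumes a flat loss severity and a simple expense loading – all figures are invented for illustration, not market data – and shows how a premium priced on yesterday’s attack frequency is overwhelmed once that frequency structurally shifts:

```python
import random

random.seed(0)

# Toy portfolio: premium set from a historical attack frequency,
# then the frequency structurally shifts (the "collapse" Kus describes).
# All numbers are illustrative assumptions, not real market figures.
HISTORICAL_FREQ = 0.05   # attacks per policy per year, as observed historically
SEVERITY = 100_000       # flat loss per successful attack, for simplicity
POLICIES = 10_000
LOADING = 1.3            # expense/profit loading on expected loss

premium_per_policy = HISTORICAL_FREQ * SEVERITY * LOADING

def simulate_year(true_freq: float) -> float:
    """Total losses if each policy suffers an attack with probability true_freq."""
    return sum(SEVERITY for _ in range(POLICIES) if random.random() < true_freq)

collected = premium_per_policy * POLICIES
losses_stable = simulate_year(HISTORICAL_FREQ)       # frequency as priced
losses_shifted = simulate_year(HISTORICAL_FREQ * 3)  # AI-driven regime shift

print(f"premium collected:      {collected:,.0f}")
print(f"losses, stable regime:  {losses_stable:,.0f}")
print(f"losses, shifted regime: {losses_shifted:,.0f}")
```

Under the stable regime the book is comfortably profitable; triple the frequency mid-term and the same premium cannot come close to covering losses – precisely the gap between 12-month repricing windows and minute-scale exploitation.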
And short-term cyber attacks are not the only concern. Kus also explained that, regardless of the perceived effort that has been made by Anthropic to prioritise safety in this instance, future efforts might not be so measured, ultimately shifting the balance of technological power towards threat actors.
“Anthropic has done a responsible thing. But this capability will be replicated by other developers within months. Not all of them will be able to do something like [pausing the release of] Mythos. And not all of them will make the same choices that Anthropic has made,” she added.
Fiona Phillips, head of law firm Marks and Clerk’s AI and cyber security legal advisory practice, mirrored Kus’ concerns as to whether other companies will act with “similar restraint”.
However, Phillips also drew attention to the here and now and the plan Anthropic has developed to keep the global cyber security apparatus one step ahead of threat actors – Project Glasswing.
Project Glasswing
Anthropic has described Project Glasswing as an effort to use a preview version of Mythos to help “secure the world’s most critical software and to prepare the industry for the practices we all will need to adopt to keep ahead of cyber attackers”.
The initiative will see Anthropic grant early model access to researchers and major technology companies such as Google, Apple, Microsoft, Nvidia, CrowdStrike and Amazon, in an effort to pre-emptively patch security vulnerabilities before more Machiavellian hands can gain access to the tool.
“The launch of Claude Mythos Preview to selected vendors is a game changer for the cyber industry,” said Phillips.
“Especially for defenders, who will now face a surge in patching demands as the ongoing race between attackers and defenders intensifies in the effort to protect organisations from cyber crime.”
Project Glasswing represents a concerted effort to shore up the world’s digital estates, but the sheer acreage of those estates, and the level of interconnectedness they have developed from relying on a small number of technology providers, have introduced an entirely new problem.
Fletcher explained: “From an insurance perspective, the most interesting shift is toward correlation. The industry is generally built around the assumption that risks are at least partially independent. If AI accelerates exploit discovery or concentrates vulnerabilities across shared systems, you start to see losses cluster rather than diversify.
“That’s a much harder problem to price and manage. If multiple firms share the same software, vendors or models, AI turns one vulnerability into a portfolio-level event.”
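Fletcher’s correlation point can be made concrete with a toy simulation. The sketch below – using invented, illustrative figures only – compares a portfolio where firms are breached independently with one where every firm depends on a single shared vendor. Both have the same expected annual loss, but the shared-vendor book concentrates it into rare, portfolio-wide events:

```python
import random
import statistics

random.seed(1)

# Illustrative assumptions only - not real loss or frequency data.
N_FIRMS = 1_000
LOSS = 250_000          # loss per breached firm
P_BREACH = 0.02         # annual breach probability per firm (independent case)
P_VENDOR_EVENT = 0.02   # annual chance the shared vendor is exploited
N_YEARS = 5_000         # simulated years

def independent_year() -> float:
    # Risks diversify: each firm is breached (or not) on its own.
    return sum(LOSS for _ in range(N_FIRMS) if random.random() < P_BREACH)

def shared_vendor_year() -> float:
    # Same expected frequency per firm, but one exploited vendor
    # breaches every firm at once: one vulnerability, one portfolio event.
    if random.random() < P_VENDOR_EVENT:
        return N_FIRMS * LOSS
    return 0.0

indep = [independent_year() for _ in range(N_YEARS)]
shared = [shared_vendor_year() for _ in range(N_YEARS)]

print(f"mean annual loss - independent: {statistics.mean(indep):,.0f}")
print(f"mean annual loss - shared:      {statistics.mean(shared):,.0f}")
print(f"worst year - independent:       {max(indep):,.0f}")
print(f"worst year - shared:            {max(shared):,.0f}")
```

The average annual loss is similar in both cases, but the worst year in the shared-vendor portfolio is an order of magnitude larger – losses cluster rather than diversify, which is exactly the pricing problem Fletcher describes.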
Furthermore, Fletcher suggested that if cyber attack frequencies or severities are being “structurally altered” by AI, then historical risk data becomes a “weaker guide just as the stakes increase”.
And, to compound the problem, Jason Hart, managing director of proactive and global security services at CFC, told Insurance Times that the rapid uptake of AI in software development may have in fact left traditional security standards by the wayside.
As the internet matured, he explained, cyber security experts preached that open ports – in essence, the entryways for a firm’s digital communications – were a key attack vector, and that keeping their number low and their security high was of the utmost importance.
With rapid AI-assisted development, he warned, that hygiene is being forgotten.
“We’ve got a lot of organisations just going into the cloud and spinning things up and there seem to be a lot of ports open. So, these fundamental basics which were just common knowledge, we seem to be getting worse at, or not doing at all,” he said.
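The basic hygiene check Hart describes can be sketched in a few lines. The snippet below – an illustrative example only, to be run against infrastructure you own – probes a handful of common service ports and flags which ones accept connections; the port list and host are assumptions for demonstration:

```python
import socket

# Common service ports worth auditing - an illustrative, non-exhaustive list.
COMMON_PORTS = {21: "ftp", 22: "ssh", 80: "http", 443: "https",
                3306: "mysql", 3389: "rdp", 6379: "redis"}

def open_ports(host: str, timeout: float = 0.5) -> list:
    """Return the subset of COMMON_PORTS accepting TCP connections on host."""
    found = []
    for port, service in COMMON_PORTS.items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            # connect_ex returns 0 on a successful connection.
            if sock.connect_ex((host, port)) == 0:
                found.append((port, service))
    return found

if __name__ == "__main__":
    # Audit your own machine - each hit is a port that should be justified.
    for port, service in open_ports("127.0.0.1"):
        print(f"port {port} ({service}) is open - is it meant to be?")
```

Even a crude audit like this surfaces the “spinning things up” problem: every port it reports is one that someone should be able to justify keeping open.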
Looking ahead
Whether Project Glasswing succeeds remains to be seen, but it is unlikely to sound the death knell for cyber crime any time soon.
But for Fletcher, the most pertinent risk of AI – and one that cyber insurers need to be acutely aware of – is not simply its use as a hacking tool for threat actors, but the sheer rate and scale of its worldwide adoption.
He concluded: “Looking slightly further out, the bigger risk becomes systemic rather than technical. We’re heading toward a world where many institutions rely on the same small number of models, infrastructure providers and design assumptions.
“That creates the potential for correlated failure. The more subtle issue is that organisations may start embedding systems they don’t fully understand into decisions they can’t afford to get wrong.
“I feel like the next phase of risk isn’t a rogue model – it’s quiet and unconscious overdependence.”
