Anthropic investigates claim of unauthorised access to Mythos AI tool
Anthropic said it is investigating a report that a small group gained unauthorised access to Claude Mythos Preview via a third-party vendor environment. Bloomberg reported that users in a private forum accessed the model without the usual permissions, and Anthropic said there is no evidence its systems were compromised. The incident highlights ongoing questions about how frontier AI tools are controlled and secured.
Why It Matters
The case underscores the difficulty of preventing access to highly capable AI models and raises questions about responsibility and safeguards when frontier AI is deployed by external partners.
Timeline
Context on frontier AI and UK reliance on external developers
The article notes that frontier AI models are developed outside the UK, with top-tier companies based in the US or China, and that the UK relies on companies like Anthropic to gain access to Mythos. It also mentions OpenAI’s cyber-security model GPT 5.4 Cyber and ongoing threats from nation-states and hacktivists.
UK Security Minister Dan Jarvis urges collaboration with government
Security Minister Dan Jarvis called on AI firms to work with the government in a generational effort to ensure AI is used to protect critical networks from attackers.
UK NCSC chief Richard Horne addresses CyberUK
At CyberUK, Richard Horne urged delegates to focus on basics of cyber-security and argued frontier AI can be safer and more secure if underlying practices are strengthened.
Details cited by Bloomberg about the access
Bloomberg reported that the person who accessed Mythos already had permission to view Anthropic’s AI models through work for a third-party contractor, and that the group had continued using the model after first obtaining access.
Mythos released to some firms to help secure systems
Anthropic has released the Mythos model to some tech and financial companies to help them secure their systems, given the model’s reported ability to exploit vulnerabilities, on the condition that those companies tightly control access to it.
Cyber-security expert comments on likely nature of access
Raluca Saceanu, chief executive of Smarttech247, described the incident as most likely arising from misuse of access rather than a traditional hack.
Anthropic says there is no evidence its systems were affected
The company said there is no evidence that its systems were compromised and there is no indication that malicious actors currently hold the model.
Anthropic confirms it is investigating the reported unauthorised access
Anthropic issued a statement saying it is investigating a report that unauthorised access to Claude Mythos Preview occurred through a third-party vendor environment.
Bloomberg reports alleged unauthorised access to Claude Mythos Preview via private forum
Bloomberg reported that a small group of people in a private forum were able to access Claude Mythos Preview without the normal permissions. According to the report, the group had continued using the model after gaining access and tried to avoid detection, and one individual reportedly had permission to view Anthropic’s AI models through work for a third-party contractor.