A leaked draft blog post has revealed that artificial intelligence firm Anthropic is testing a new system called Claude Mythos, described internally as the most powerful AI model the company has developed to date. The discovery came after cybersecurity researchers found the unpublished material in an unsecured, publicly searchable data cache containing nearly 3,000 draft assets.
Capybara Model Tier Shows Major Performance Leap
The leaked draft introduced a new model category known as the Capybara tier, designed to exceed the performance of Anthropic’s earlier Claude Opus 4.6 system. According to internal descriptions, the new model delivers significantly stronger results in software coding, academic reasoning, and cybersecurity testing.
Anthropic later confirmed the model’s existence, describing it as a “step change” in capability. The company attributed the exposure to human error within its content management system and said the model is currently being tested by select early-access users rather than being released publicly.

Cybersecurity Concerns Rise as Leak Highlights AI Risks
The draft also warned that the model could introduce unprecedented cybersecurity risks, particularly in areas such as blockchain security and decentralized finance infrastructure. The incident itself underscored how easily details of advanced systems can be disclosed unintentionally when sensitive development material is stored in unsecured environments.

