Anthropic’s Mythos Under Fire as OpenAI Attacks Its Marketing and Unauthorised Users Breach the Model

Anthropic’s restricted cybersecurity AI model, Mythos, faced a turbulent week as OpenAI chief executive Sam Altman publicly dismissed the company’s safety rationale and Bloomberg reported that unauthorised users had gained access through a third-party vendor.

Altman, speaking on the podcast Core Memory, characterised Anthropic’s decision to withhold Mythos from public release as fear-based marketing — a strategy he suggested was designed to concentrate AI in the hands of a privileged few. He implied that framing a product as dangerously powerful was an effective commercial tactic, though critics noted that Altman himself has previously deployed existential AI rhetoric to promote OpenAI’s work.

Anthropic had announced Mythos earlier this month, limiting access to roughly 40 organisations under an initiative called Project Glasswing, citing concerns that the model’s offensive cybersecurity capabilities could be weaponised by bad actors. Apple was among the named recipients.

Despite those precautions, Bloomberg reported that members of a private online forum gained access to Mythos on the day of its public announcement, allegedly by deducing the model's endpoint from Anthropic's known URL conventions and via access credentials held by an employee at a third-party contractor. Anthropic confirmed it was investigating the report but said it had found no evidence of impact on its own systems.

Separately, OpenAI launched ChatGPT Images 2.0, a significantly upgraded image generation model with web search integration, multi-panel comic creation, improved non-Latin text rendering, and outputs at up to 2K resolution. The model’s architecture was not disclosed, though OpenAI confirmed it includes reasoning capabilities that allow it to verify and refine its own image outputs.
