Unauthorized Access to Anthropic's Mythos AI Model Investigated

Synopsis

Unauthorized users reportedly gained access to Anthropic's new Mythos AI model via a private online forum on the same day the company announced plans for limited testing. Anthropic is investigating the alleged breach, which is said to have occurred through a third-party vendor environment. The powerful AI, intended for defensive cybersecurity, has drawn regulatory scrutiny over its vulnerability detection capabilities.

Reuters
A small group of unauthorized users has accessed Anthropic's new Mythos AI model, Bloomberg News reported on Tuesday, citing documentation and a person familiar with the matter.

A handful of users in a private online forum gained access to Mythos on the same day that Anthropic first announced a plan to release the model to a limited number of companies for testing purposes, the report said.

The group has been using Mythos regularly since then, though not for cybersecurity purposes, according to the report.

"We're investigating a report claiming unauthorised access to Claude Mythos Preview through one of our third-party vendor environments," an Anthropic spokesperson said.

Announced on April 7, Mythos is being deployed as part of Anthropic's "Project Glasswing," a controlled initiative under which select organisations are permitted to use the unreleased Claude Mythos Preview model for defensive cybersecurity.

Mythos is a powerful AI model whose unprecedented ability to identify digital security vulnerabilities, and the attendant potential for misuse, have sparked concern among regulators.

This editorial summary reflects ET Tech and other public reporting on Unauthorized Access to Anthropic's Mythos AI Model Investigated.

Reviewed by WTGuru editorial team.