Banks and financial institutions in America and Europe, Anthropic may have just ‘failed in its promise to protect’ you as hackers claim …




A small group of unauthorized users has gained access to Anthropic’s new Mythos AI model, according to a Bloomberg report. Anthropic announced the model earlier this month, saying in a blog post that it would not release it publicly, citing fears that it could destabilize the cybersecurity world. Instead, the company planned a limited rollout of Mythos to a small set of approved partners because of its advanced capabilities. Now, the Bloomberg report says a group of users in a private online forum accessed the Mythos model on the same day it was announced for limited testing. The access was allegedly linked to a third-party contractor environment.

How a Discord group accessed Anthropic’s Mythos

According to the report, the unauthorized access was carried out by a small group of users in a private Discord channel. The group reportedly used a mix of methods to gain entry, including access linked to a third-party contractor and online tools commonly used in cybersecurity research. The report said the users were able to guess the likely online location of the model based on patterns used in earlier Anthropic systems; some of this information may have come from publicly available data or earlier breaches involving related platforms. The group is said to focus on tracking unreleased AI models and uses bots to scan websites such as GitHub for clues. According to the report, the users have been interacting with the model but have not used it for cybersecurity-related tasks.

What Anthropic said

The Bloomberg report quotes an Anthropic spokesperson who said: “We’re investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments.” The spokesperson added that the company has “no evidence that the access reported by Bloomberg went beyond a third-party vendor’s environment or that it is impacting any of Anthropic’s systems.”

What makes Mythos different from other AI models

Anthropic says that during testing, Mythos detected thousands of critical flaws, including zero-day vulnerabilities that typically take elite human teams months to uncover. By comparison, human researchers discover about 100 such vulnerabilities annually. Experts told Business Insider that Mythos compresses exploit development from weeks to hours, representing a leap in AI’s ability to handle cybersecurity tasks. Because large language models excel at structured languages like code, Mythos can identify subtle logic-level bugs that humans or traditional tools often miss. However, costs remain a concern: Anthropic said finding one decades-old vulnerability required thousands of runs and cost about $20,000.

Growing demand for Mythos access

Anthropic has already allowed several companies, including Amazon, Apple and Cisco Systems, to test Mythos. The model is also being offered through Amazon’s Bedrock platform to a limited set of organisations. The report added that financial institutions and government agencies are seeking early access to better understand and defend against potential risks linked to the technology. The incident underlines the broader challenge facing AI companies as they develop powerful systems while trying to control how they are accessed and used.



