
OpenAI CEO Sam Altman claims Anthropic is using “fear-based marketing” to promote its Claude Mythos AI model, which can identify software vulnerabilities. He argues the strategy aims to restrict AI access to a select group.
OpenAI CEO Sam Altman pushed back against growing alarm over rival Anthropic’s powerful new AI model Claude Mythos, suggesting the company is using “fear” to market the product.
Speaking on the Core Memory podcast hosted by tech journalist Ashlee Vance, Altman argued that the use of “fear-based marketing” was geared towards keeping AI in the hands of a “smaller group of people.”
“You can justify that in a lot of different ways, and some of it’s real, like there are going to be legitimate safety concerns,” Altman said.
“But if what you want is like ‘we need control of AI, just us, because we’re the trustworthy people,’ I think fear-based marketing is probably the most effective way to justify that.”
Altman added that while there are valid concerns about AI safety, “it is clearly incredible marketing to say: ‘We have built a bomb. We are about to drop it on your head. We will sell you a bomb shelter for $100 million. You need it to run across all your stuff, but only if we pick you as a customer.’”
He noted that it was “not always easy” to balance AI’s new capabilities with OpenAI’s belief that the technology should be accessible.
Anthropic’s Claude Mythos model, revealed last month, has drawn intense attention from researchers, governments and the cybersecurity industry, particularly after testing suggested it can autonomously identify software vulnerabilities and execute complex cyber operations. The model is being distributed only to a limited set of organizations through a restricted program.
The rollout reflects a broader divide in the AI industry over how powerful systems should be deployed, with some companies emphasizing controlled access and others arguing for wider distribution to accelerate innovation and understanding of the technology.
Mythos has become a focal point in that debate. The model’s capabilities have been framed by Anthropic as both a defensive breakthrough—allowing faster detection of critical software flaws—and a potential offensive risk if misused. Early this month, it identified hundreds of vulnerabilities in Mozilla’s Firefox browser during testing and has also demonstrated the ability to carry out multi-stage cyberattack simulations.
Anthropic has restricted access to the system via Project Glasswing, granting select companies including Amazon, Apple and Microsoft the ability to test its capabilities. The company has also committed significant resources to supporting open-source security efforts, arguing that defenders should benefit from the technology before it becomes more widely available.
Security experts warn that the same capabilities that allow Mythos to identify vulnerabilities could also be used to exploit them at scale. Tests by the UK’s AI Security Institute found the model could autonomously complete complex cyber operations.
The model has also exposed limitations in existing AI evaluation systems, with Anthropic acknowledging that many current cybersecurity benchmarks are no longer sufficient to measure the capabilities of its latest system.
That said, a group of researchers claimed last week they were able to reproduce Mythos’ findings using publicly available models.
Despite calls within parts of the U.S. government to halt use of the technology over concerns about its potential applications in warfare and surveillance, the National Security Agency has reportedly begun testing a preview version of the model on classified networks. On prediction market Myriad, owned by Decrypt's parent company Dastan, users put a 49% chance on Claude Mythos being released to the wider public by June 30.
Altman suggested that rhetoric around highly dangerous AI systems may increase as capabilities improve, but argued that not all such claims should be taken at face value.
“There will be a lot more rhetoric about models that are too dangerous to release. There will also be very dangerous models that will have to be released in different ways,” he said. “I’m sure Mythos is a great model for cybersecurity but I think we have a plan we feel good about for how we put this kind of capability out into the world.”
Altman also dismissed suggestions that OpenAI is scaling back its infrastructure spending, saying the company would continue expanding its computing capacity despite shifting narratives.
“I don’t know where that’s coming from… people really want to write the story of pulling back,” he said. “But very soon it will be again, like, ‘OpenAI is so reckless. How can they be spending this crazy amount?’”