NSA Reportedly Using Anthropic's Restricted Mythos AI Amid DoD Feud
The National Security Agency (NSA) is reportedly using Anthropic's highly restricted Mythos Preview AI model, despite the Department of Defense (DoD) having previously designated Anthropic a "supply chain risk." The revelation, first reported by Axios, highlights a complex and seemingly contradictory relationship between the U.S. government and a leading artificial intelligence firm.
Anthropic officially unveiled Mythos earlier this month, characterizing it as a cutting-edge AI designed specifically for sophisticated cybersecurity applications. However, the company opted to withhold the model from public release, citing concerns that its advanced capabilities could be misused for offensive cyberattacks. Consequently, access to Mythos has been limited to approximately 40 select organizations, with Anthropic publicly identifying only about a dozen of these recipients.
The NSA, a key intelligence agency within the U.S. government, appears to be among the undisclosed entities granted access to Mythos. According to reports, the agency is leveraging the AI primarily to scan digital environments for exploitable vulnerabilities, a critical function in national security. The United Kingdom’s AI Security Institute has also confirmed its access to the specialized model.
This reported usage by the NSA creates a perplexing dynamic, occurring just weeks after the DoD, the parent agency of the NSA, officially labeled Anthropic a "supply chain risk." This designation stemmed from Anthropic's refusal to grant Pentagon officials unfettered access to the full functionalities of its AI models. The core of the dispute originated from Anthropic's principled stand against making its AI, specifically the Claude model, available for mass domestic surveillance or the development of autonomous weapons systems.
The disagreement escalated to the point where Anthropic initiated a lawsuit against the Defense Department, challenging the "supply chain risk" designation. The lawsuit underscores the tension between the military's demand for advanced technological capabilities and AI developers' ethical considerations and control over their powerful creations.
Yet, even as the legal battle and public dispute unfold, there are signs that Anthropic's relationship with the current administration may be undergoing a notable shift. Last Friday, Anthropic CEO Dario Amodei reportedly engaged in a productive meeting at the White House with Chief of Staff Susie Wiles and Secretary of the Treasury Scott Bessent. This high-level engagement suggests a potential thawing of relations, despite the ongoing friction with the Pentagon.
The contradictory situation — an agency within the Defense Department actively deploying a restricted AI model from a company the department itself has deemed a risk — underscores the challenging landscape of AI integration into national security. It highlights the tension between leveraging powerful new technologies for defense and intelligence, and the ethical and control frameworks that AI developers are attempting to establish.
Anthropic declined to comment on the matter, and TechCrunch has reached out to the NSA for comment.
FAQ
Q: What is Anthropic's Mythos model?
A: Mythos is a frontier AI model developed by Anthropic, designed specifically for advanced cybersecurity tasks. Anthropic chose not to release it publicly due to concerns over its potential misuse for offensive cyberattacks, instead limiting access to a select group of around 40 organizations.
Q: Why did the Pentagon label Anthropic a "supply chain risk"?
A: The Department of Defense (DoD) designated Anthropic as a "supply chain risk" because the AI firm refused to provide Pentagon officials with unrestricted access to the full capabilities of its models. This dispute specifically arose from Anthropic's unwillingness to allow its Claude model to be used for mass domestic surveillance or the development of autonomous weapons.
Q: How is the NSA reportedly using Mythos?
A: The National Security Agency (NSA) is reportedly utilizing Anthropic's Mythos Preview model primarily for scanning digital environments. Its main application is to identify and analyze exploitable vulnerabilities, enhancing the agency's cybersecurity capabilities.