The Deal
On Friday, May 1, the Department of Defense announced a sweeping set of AI agreements with eight major tech companies for use on Impact Level 6 and 7 networks — the most sensitive classified environments in the US military. The signatories: OpenAI, Google, Microsoft, AWS, Nvidia, SpaceX, Reflection, and Oracle.
Conspicuously absent: Anthropic. The maker of Claude was excluded from the round entirely. The Pentagon designated Anthropic a "supply-chain risk to US national security" in March. Anthropic responded with two federal lawsuits.
The story is not just about who won. It is about who refused.
What Impact Level 6 and 7 Means
Impact Levels are the Pentagon's classification system for cloud and AI services. IL 5 is for Controlled Unclassified Information. IL 6 covers Secret-level classified data. IL 7 covers Top Secret information.
Getting cleared to operate at IL 6 and 7 is one of the highest bars in tech. It requires physical security audits, personnel security clearances, isolated infrastructure, audit logging, and continuous monitoring. Most of the cloud and AI vendors in the world cannot operate at these levels.
The eight companies that just got cleared are now permitted to deploy AI for mission planning, intelligence analysis, weapons targeting, and operational logistics across the entire Department of Defense. This is the largest expansion of commercial AI into classified military environments in US history.
Who Got What
OpenAI: Models for intelligence analysis and mission planning. The DoD is reportedly the largest non-Microsoft enterprise customer for OpenAI products.
Google: Gemini for classified workloads, plus access to Google Cloud TPU infrastructure inside DoD environments.
Microsoft: Continuation and expansion of its existing JWCC contracts (the successor to the cancelled JEDI program), with Azure OpenAI integration cleared for IL 6/7.
AWS: Bedrock and SageMaker cleared for classified use. Trainium and Inferentia chips approved for DoD deployments.
Nvidia: H100 and Blackwell GPU access for classified DoD AI training and inference. Plus the entire CUDA ecosystem.
SpaceX: Surprised many observers. Starlink-based AI inference in austere environments. Likely tied to broader Pentagon dependence on SpaceX launch capacity.
Reflection: The newest entrant. Specialized in autonomous reasoning systems. Smaller deal but symbolically important.
Oracle: Added hours after the initial announcement. Government cloud capacity and AI database integration.
Why Anthropic Said No
This is the most important part of the story. Anthropic did not lose the contract because their technology is worse. By most third-party benchmarks, Claude is competitive with OpenAI and Google models on the technical metrics that matter for defense applications.
Anthropic was excluded because they refused to grant the Pentagon unrestricted access to Claude for two specific use cases: fully autonomous weapons systems and mass domestic surveillance.
Read that sentence twice. The Pentagon wanted commercial AI for mass domestic surveillance. Anthropic refused. The other seven (now eight) companies did not.
What Mass Domestic Surveillance Actually Looks Like
Mass domestic surveillance via AI is not science fiction. It is the integration of facial recognition, license plate readers, social media monitoring, financial transaction analysis, location data from cell phones, and pattern-of-life analysis into a single system that can identify, track, and flag individuals based on behavioral signals.
Each of those data streams already exists. The bottleneck has been integration and analysis at scale. AI removes that bottleneck. A model running across all those data sources can flag a person of interest in real time based on combinations of behaviors that no human analyst could identify manually.
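The fusion step described above can be reduced to a simple idea: no single data stream is alarming on its own, but a weighted combination of weak signals can cross a flagging threshold. The sketch below is a deliberately abstract illustration of that scoring logic, not a description of any real system; every signal name, weight, and threshold is hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Profile:
    """One subject's normalized scores, one per data stream (all hypothetical)."""
    subject_id: str
    signals: dict = field(default_factory=dict)  # stream name -> score in [0, 1]

def fused_score(profile: Profile, weights: dict) -> float:
    """Weighted sum across streams; a missing stream contributes zero."""
    return sum(w * profile.signals.get(name, 0.0) for name, w in weights.items())

# Hypothetical weights over the stream types named in the text.
WEIGHTS = {
    "facial_recognition": 0.20,
    "license_plates":     0.15,
    "social_media":       0.20,
    "financial":          0.20,
    "location":           0.25,
}
THRESHOLD = 0.5  # arbitrary illustrative cutoff

# Each individual signal here is unremarkable; the combination is what flags.
p = Profile("subject-001", {
    "license_plates": 0.6,
    "social_media":   0.5,
    "financial":      0.7,
    "location":       0.8,
})
score = fused_score(p, WEIGHTS)
flagged = score > THRESHOLD  # 0.53 > 0.5 -> True
```

The point of the toy model is the asymmetry it exposes: a human analyst reviewing any one stream would see nothing actionable, while the fused score quietly crosses the line.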
The Pentagon has been building toward this capability for two decades. Commercial AI from OpenAI, Google, Microsoft, and AWS just got it most of the way there.
Why This Matters Beyond the Pentagon
Pentagon technology decisions cascade. The clearance standards developed for IL 6/7 environments become the templates for federal civilian agencies, state governments, and eventually international allies. The vendors selected become the de facto standards for sensitive AI work across the public sector.
Anthropic's exclusion is not just about losing one contract. It is about being shut out of an entire vertical of long-term, high-margin work. Their lawsuits against the Trump administration are not really about the supply-chain designation. They are about whether AI labs can refuse government contracts on ethical grounds without facing structural retaliation.
The OpenAI Pivot
OpenAI's position deserves separate attention. The company's charter explicitly committed to ensuring AI is used to "benefit all of humanity." OpenAI's usage policies historically restricted military applications. Both positions were quietly revised in 2024-2025 as the Pentagon contracts approached.
OpenAI is now actively building products specifically for military intelligence and targeting workflows. The shift from "broad benefit to humanity" to "deeply integrated with the world's largest military" is one of the most consequential corporate ethics pivots in recent tech history. The market has rewarded it. The internal employee response has been more divided.
The Bigger Picture
Eight companies have just gained privileged access to one of the largest budgets in the world. The contracts are worth tens of billions over the next decade. Pentagon AI is now a primary growth vector for OpenAI, Google, Microsoft, AWS, and Nvidia.
Anthropic is now the only major frontier AI lab that has explicitly refused to participate in mass surveillance and autonomous weapons applications. They are paying for that refusal with lost revenue and ongoing federal litigation.
Whether that refusal looks like principle or like commercial suicide depends on what the Pentagon AI infrastructure produces in practice. The next 24 months will tell us.
