The dispute between the Pentagon and Anthropic, one of Silicon Valley’s most valuable AI companies, has mostly moved out of the public eye. At its peak last month, the controversy gave the company’s archrival OpenAI an opening, rattled contractors across the industry, roiled Palantir (PLTR), and drew a pair of analysts into a public disagreement.
Most recently, reports have surfaced that Anthropic and the White House are talking again, though the outcome of those meetings remains unclear. (Discussions were expected to cover Anthropic’s powerful new model, Mythos, and its implications for national security and critical infrastructure.)
Investors who see these events as part of a zero-sum battle among AI powerhouses are missing the larger point. With $1.5 trillion of spending proposed for 2027, and seemingly bottomless demand for the most consequential new technology to emerge in decades, today’s defense contracting marketplace has room for many competitors and collaborators.
The controversy flared in February when Anthropic said it would not allow the U.S. defense department to use its AI models for fully autonomous weapons or mass domestic surveillance. The Trump administration responded by designating the company a national-security supply-chain risk. Within hours, OpenAI moved to fill the gap, striking its own Pentagon deal.
As Anthropic awaited clarity from the courts, Michael Burry, the controversial investor featured in The Big Short, disclosed put options on Palantir in a now-deleted April 8 post on X. He argued that the company has no AI engine of its own, leaving it dependent on the same foundation-model providers now competing for its customers, most prominently Anthropic.
Wedbush analyst Dan Ives pushed back two days later, calling Burry’s narrative fictional and pointing to Palantir’s entrenched government relationships as a moat Burry was undervaluing.
Burry and Ives may both have been right—and both may have missed the forest for the trees.
Three years of hiring data from ClearanceJobs.com shows that, whichever AI model wins the Pentagon’s favor, the companies doing the work of integrating, deploying, and operating these systems are growing fast, and doing so by using multiple AI models.
Claude job posts haven’t declined
Despite the Pentagon’s efforts to stop defense contractors from using Anthropic’s models, hiring data shows that many companies are continuing to use Claude.
In the first quarter of 2026, 88 postings seeking classified professionals on ClearanceJobs.com mentioned Anthropic or related skills, including Claude agents. (On an annualized basis, that run-rate suggests nearly fourfold growth in Anthropic mentions in 2026, versus 2025 when there were just 89 mentions for the full year.)
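The run-rate arithmetic behind that growth figure can be checked directly; here is a quick sketch in Python, using only the posting counts cited above:

```python
# Annualizing the 1Q 2026 run rate of ClearanceJobs postings
# that mention Anthropic/Claude, using the counts cited above.
q1_2026_mentions = 88   # postings in the first quarter of 2026
full_year_2025 = 89     # postings in all of 2025

annualized_2026 = q1_2026_mentions * 4           # simple run rate: 4 quarters
growth_multiple = annualized_2026 / full_year_2025

print(annualized_2026)            # 352
print(round(growth_multiple, 2))  # 3.96 -- "nearly fourfold"
```

A simple four-quarter extrapolation like this assumes hiring holds at the first-quarter pace, so it is a rough gauge rather than a forecast.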
Notably, 61 of the 88 ads mentioning Anthropic or Claude in the first quarter of 2026 also mentioned OpenAI—further evidence of the multiplatform ecosystem taking shape.
To be sure, an analysis of the last three years of ClearanceJobs postings shows that OpenAI and its related systems were cited more frequently than Anthropic and its systems. Of 744 job ads that referenced OpenAI, Anthropic, or related keywords, 533 mentioned OpenAI without referencing Claude, 126 mentioned both, and 61 mentioned only Claude.
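Taken at face value, those counts imply overlapping mention shares; a small sketch, using only the figures above (the shares won’t sum to 100% because 126 ads mentioned both providers):

```python
# Shares of the 744 three-year ClearanceJobs ads that referenced
# OpenAI, Anthropic, or related keywords, per the counts above.
total_ads = 744
openai_only, both, claude_only = 533, 126, 61

openai_share = (openai_only + both) / total_ads  # any OpenAI mention
claude_share = (claude_only + both) / total_ads  # any Claude mention

print(f"OpenAI: {openai_share:.1%}")  # OpenAI: 88.6%
print(f"Claude: {claude_share:.1%}")  # Claude: 25.1%
```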
OpenAI’s lead may reflect its first-mover advantage, just as Anthropic’s growth rate may reflect its rapid, more recent gains as an enterprise platform of choice.
The power of platform-agnostic AI
Analysis of the last three years of ClearanceJobs postings indicates that the biggest winners of the defense department’s AI push may be the large systems integrators that enable the implementation of diverse AI agents.
The data show that such integrators generally hire people who can use multiple AI models. Examples include:
- Leidos (LDOS)—a software and services provider with specialization in cybersecurity and intelligence applications—placed 51 ads between 2022 and 2026 seeking candidates with experience using OpenAI, as well as 1,225 ads for candidates experienced in Alphabet’s Google (GOOGL) AI toolset. (Google, whose cloud unit provides AI tools, announced an AI-related deal with the Pentagon on March 10.)
- IT contractor and consultancy Booz Allen (BAH), which does significant business across U.S. government agencies, sought 723 employees with experience in Google’s tools and 295 with Palantir experience.
- KBR (KBR), another major IT services provider to the defense department, sought 291 employees with knowledge of Palantir and 21 candidates with experience in OpenAI.

Don’t forget Palantir
Notwithstanding the back-and-forth between Burry and Ives, Palantir’s position as the darling among the neoprimes seems secure for now. Though Mizuho lowered its price target for PLTR on April 14, citing valuation rather than fundamental concerns, the bank kept its rating at Outperform.
But Burry did strike a nerve when he flagged Palantir’s use of Anthropic tools, not least because its Maven system is widely reported to run on Anthropic’s models. Maven is an AI-based military intelligence and command platform that pulls together data from many sources to identify threats and speed battlefield decision-making. Reuters reported last month that the Pentagon had formalized Maven’s position as a core long-term system for U.S. military operations.
Palantir, which was engulfed this month in a major UK controversy over its NHS application, closed at $145.97 on April 21, about 30% below its 52-week high of $207.52. Its shares have appreciated more than 1,500% over the past three years.
Handicapping Anthropic risk
The battle between the U.S. government and Anthropic, which last month was reported to be considering an IPO “as soon as October,” has moved behind closed doors.
Reports called a recent White House meeting between the parties “productive,” and President Trump said a deal was still possible.
The thaw followed a pair of conflicting federal court rulings: one in March overturned the Defense Department’s order designating the company “a supply-chain risk”; another in April denied Anthropic’s motion for a stay.
An interesting question is whether, as the Anthropic–Pentagon rift peaked, users across the defense department, and the government as a whole, reversed course and started rooting out their Claude implementations. The hiring data suggests that’s unlikely.