Google has signed a classified AI agreement with the US Department of Defense, granting the Pentagon access to its AI models on classified networks for “any lawful government purpose,” according to The Information, citing a person familiar with the matter. The deal was reported on April 28 — less than 24 hours after more than 600 Google employees, including directors and vice presidents, published an open letter to CEO Sundar Pichai urging him to refuse exactly this kind of arrangement. Google did not respond to press inquiries.

The Context: What Anthropic Refused

The Google deal is most clearly understood in contrast to what happened when Anthropic faced the same request. In early 2026, the Pentagon pushed Anthropic to make Claude available on classified networks without the standard restrictions Anthropic applies to all users — specifically, prohibitions on using Claude for domestic mass surveillance and for autonomous weapons systems without appropriate human oversight.

Anthropic refused to remove those guardrails. The Trump administration responded by designating Anthropic a “supply chain risk” — a label normally reserved for foreign adversaries — effectively blacklisting the company from federal procurement. Anthropic sued, and a judge later granted an injunction against the designation while the lawsuit proceeds. President Trump has since said “some very good talks” have occurred and suggested a future agreement is “possible.”

OpenAI and Elon Musk’s xAI both signed Pentagon AI deals with language permitting “any lawful government purpose” — the same terms Anthropic refused. Google is now the third major AI company to do so, filling a classified AI vendor pool from which Anthropic remains excluded.

What the Employee Letter Said — and When It Was Ignored

The timing is the most striking element of this story. The letter from more than 600 employees, including directors and vice presidents, was published Monday, April 27, and made the following argument:

“We want to see AI benefit humanity; not to see it being used in inhumane or extremely harmful ways. This includes lethal autonomous weapons and mass surveillance but extends beyond. The only way to guarantee that Google does not become associated with such harms is to reject any classified workloads.”

By Tuesday morning, Google had signed the deal the letter was explicitly asking it to refuse. The contract was reportedly already in progress when the letter was published.

What the Contract Actually Says — and Doesn’t Say

The agreement includes language stating: “the parties agree that the AI System is not intended for, and should not be used for, domestic mass surveillance or autonomous weapons (including target selection) without appropriate human oversight and control.”

A separate clause states that the agreement does not give Google a right to control or veto “lawful government operational decision-making.” Google is also required to help the government adjust the company’s AI safety settings and filters on request — meaning the Pentagon can ask Google to modify how the system behaves, and Google is contractually obligated to assist.

The employees who signed the letter identified exactly this enforcement gap in their argument. On classified, air-gapped networks, Google cannot monitor what queries are being run, what outputs are being generated, or what operational decisions are being made with those outputs. The “should not be used for” language is advisory, not a contractual prohibition with enforcement mechanisms Google controls. Google’s own engineers pointed out that “appropriate human oversight and control” is undefined in the contract.

The Trajectory From Project Maven to Now

This is not Google’s first encounter with military AI ethics. In 2018, roughly 4,000 Google employees signed a petition protesting the company’s involvement in Project Maven, a Pentagon program analyzing drone footage, and Google let the contract expire. Palantir took it over; its Maven business has since grown to $13 billion.

Since then, Google has systematically expanded its Pentagon relationship: winning a share of the $9 billion Joint Warfighting Cloud contract in 2022, powering the Pentagon’s GenAI.mil platform in December 2025, deploying Gemini agents to the Pentagon’s three-million-person workforce on unclassified systems in March 2026, and now extending to classified networks. Each step was individually defensible. The direction of travel is consistent.

An additional irony: Google is simultaneously one of Anthropic’s largest investors, having committed up to $40 billion to the company. Google is now a classified AI vendor for a government that has blacklisted Anthropic for refusing to be one.

Conclusion

The Google Pentagon deal is the clearest example yet of the divide forming in the AI industry over military AI ethics — between companies that include meaningful guardrails as non-negotiable conditions of government work, and companies that accept the terms governments offer. Anthropic is currently the only major AI lab in the first category. Browse our directory to explore Claude, ChatGPT, and every major AI tool whose policy decisions are now shaping the AI governance landscape.