Tech · Monday, April 27, 2026
What Happens If Washington Nationalizes AI?
From Pentagon contracts to Google staff revolts, the question of state control over frontier AI is no longer hypothetical.
The letter that landed on Sundar Pichai’s desk this month would have been unremarkable a decade ago, when “Don’t be evil” was still on Google’s letterhead and engineers routinely walked off projects they found morally objectionable. More than 560 Google employees signed an open letter urging the chief executive to refuse Pentagon contracts for military AI, following a public clash between the Defense Department and Anthropic over the latter’s reluctance to allow its models to be used in certain national security contexts.
What is unusual is the timing. The letter arrived just as a different conversation was gathering force in Washington: not whether AI labs should be allowed to say no to the Pentagon, but whether the federal government should permit private companies to control frontier AI at all. A recent Atlantic report floated the once-unthinkable possibility that the Trump administration could exert unprecedented control over the labs, up to and including effective nationalization. The two stories are the same story, viewed from opposite ends. Employees want their companies further from the state. The state is quietly considering whether to bring the companies in.
The legal toolkit is already on the shelf
If Washington decided tomorrow that frontier AI was too strategically important to leave in private hands, it would not need new legislation. The mechanisms exist, and most have been used before, often within living memory.
The Defense Production Act, a Korean War-era statute, allows the president to compel companies to prioritize government orders, allocate materials, and even direct industrial output in the name of national defense. The Trump administration invoked it during the pandemic to force ventilator production. The Biden administration cited it in a 2023 executive order requiring AI labs to share safety test results for the most powerful models. That was a relatively gentle use. The Act’s text contemplates much more aggressive intervention, including loans, purchase commitments, and priority ratings that effectively reorder a company’s customer list with the Pentagon at the top.
The International Emergency Economic Powers Act, IEEPA, is the more dramatic instrument. It allows the president, after declaring a national emergency, to block transactions, freeze assets, and regulate property in which any foreign interest is involved. Given that every major AI lab has foreign investors, foreign customers, or foreign cloud infrastructure, the predicate is not hard to construct. IEEPA was the legal basis for the attempted TikTok ban and for sweeping sanctions regimes against Russia and Iran.
Then there is the simplest lever of all: contracting. The federal government is already the AI industry’s most coveted customer. The Pentagon’s recent deals with OpenAI, Anthropic, and Google Cloud have run into the hundreds of millions. When a single buyer looms that large over the addressable market for your most lucrative product line, “voluntary” cooperation begins to look like a term of art.
The precedents are not encouraging for techno-libertarians
The mythology of American capitalism holds that strategic industries are private and that the government is a customer, not a partner. The history is messier.
AT&T spent most of the twentieth century as a regulated monopoly whose research arm, Bell Labs, operated under a 1956 consent decree that required it to license its patents, including the transistor, on reasonable terms. The arrangement was not nationalization, but it was not arm’s-length capitalism either. The federal government decided what the country’s telecommunications backbone would look like, and AT&T executed.
Aerospace tells a similar story. Boeing, Lockheed Martin, and Northrop Grumman are nominally private companies, but their largest customer writes their product specifications, audits their cost structures, and in practice determines which of them survives. When McDonnell Douglas could no longer compete, the Pentagon engineered its absorption into Boeing. The line between “defense contractor” and “instrument of state policy” is, on close inspection, not a line at all.
Even the semiconductor industry, often held up as a triumph of private innovation, was midwifed by Pentagon procurement in the 1960s and is now being substantially re-shored through the CHIPS Act, which conditions billions in subsidies on commitments around hiring, capital expenditure, and stock buybacks. The Trump administration has gone further still, converting part of Intel’s award into a direct federal equity stake, and has signaled it is willing to go further.
If frontier AI is, as its boosters insist, the most strategically consequential technology since nuclear fission, the historical pattern suggests the relevant question is not whether Washington will impose strategic direction but how, and how soon.
The industry’s posture is splintering at exactly the wrong moment
For the labs, the timing is awful. Just as Washington appears to be concluding it cannot let private firms set strategic AI direction, the industry’s own workforce is making it harder for those firms to play the role of reliable national champion.
Anthropic’s reported friction with the Defense Department, which apparently centered on the company’s refusal to allow its models to be used for certain offensive applications, is not an isolated incident. It reflects a worldview, common among AI safety-focused researchers, that some uses of the technology are categorically off-limits regardless of who is asking. The Google letter reflects a different but adjacent impulse: that participating in military AI is morally compromising and that employees should have veto power over their employer’s customer list.
These are coherent positions. They are also, from the perspective of a Pentagon planner watching China’s People’s Liberation Army integrate AI across its command structure, intolerable. A national security establishment that concludes American AI labs are unreliable partners has options. Some of those options look like more aggressive contracting. Others look like Defense Production Act orders. The most extreme, the one The Atlantic raised, looks like the government deciding it would rather own the capability than negotiate with it.
The Google employees and the Trump national security team are, in a sense, on a collision course with each other through the bodies of the labs. The employees want their companies to be less entangled with the state. The state is contemplating becoming so entangled that the question of refusal would no longer arise.
What “nationalization” would actually look like
Full nationalization, in the European sense of the state buying out shareholders and running the firm, is unlikely in the American context. What is more plausible is something like the wartime arrangement that governed aircraft manufacturers in the 1940s, or the consent-decree regime that governed AT&T: nominally private ownership, but with strategic decisions, including who can be a customer, what can be researched, and what must be shared with the government, removed from corporate control.
In practice, this could mean classified compute clusters, mandatory security clearances for researchers working on frontier models, export controls on model weights treated like export controls on enriched uranium, and a Pentagon office with sign-off on which capabilities ship and which do not. None of this requires Congress. Most of it could be assembled from existing authorities within a year.
The labs would, in public, resist. In private, some executives would welcome the cover. “The government made us do it” is a more comfortable answer to give a restive workforce than “we decided the contract was worth it.”
The settlement is ending
The Google letter and the Atlantic report are usually read as separate stories, one about tech worker activism, the other about creeping statism. They are better read together, as evidence that the post-Cold War settlement under which Washington outsourced strategic technology to Silicon Valley and trusted the market to deliver national capability is quietly ending.
Whether what replaces it looks like the AT&T consent decree, the CHIPS Act, or something more coercive remains genuinely open. What is no longer open is the assumption that frontier AI will remain a normal commercial business whose customer relationships, including with the Pentagon, are matters for the companies themselves to decide. That premise died sometime this year. Most of the people involved have not yet noticed.
References
- The Atlantic — https://www.theatlantic.com/technology/2026/04/ai-nationalization-trump-hegseth-anthropic-openai/686943/ (accessed 2026-04-27)
- Financial Times — “Google staff urge chief executive to block US military AI use” (accessed 2026-04-27)