The Battle Over AI Warfare and Surveillance
When frontier AI company Anthropic refused to allow the Pentagon to use its software to conduct mass surveillance or to power autonomous weapons, the government designated it a “supply chain risk” and effectively excluded the company from federal work. Litigation is ongoing.
The underlying question: who decides how powerful AI systems are used in war — governments or the firms that build them? Two use cases in particular, surveillance and targeting, bring the conflict into focus.
AI vastly expands the potential for mass surveillance. Linguists, analysts, and subject-matter experts are finite resources, and their limits force prioritization. Imagine a linguist at the National Security Agency who can fluently interpret all 22 Arabic dialects. Such a capability would be rare and carefully allocated; the scarcity of the skill would itself discipline the scope of surveillance. Analysts would focus on high-value targets.
Now replace that linguist with an AI system capable of processing every dialect simultaneously, at scale, in real time. The constraint disappears. The limiting factor is no longer human expertise, but compute and storage. The marginal cost of additional surveillance approaches zero. The distinction between targeted collection and generalized monitoring blurs.
Anthropic says safeguards must be embedded at the model level. The government believes such safeguards are blunt instruments that interfere with lawful intelligence activities and should not be dictated by private firms.
A similar dynamic is unfolding in military targeting. US military programs such as Project Maven have already demonstrated the value of machine-assisted analysis in identifying objects and patterns in vast streams of imagery. Frontier AI models extend this capability across text, signals, geospatial data — and across time, integrating historical and real-time information.
In a conflict, AI systems could ingest signals intelligence, open-source reporting, and battlefield data to generate targeting recommendations at speed and scale. Even if machines do not make the final decisions, they would shape the menu of options presented to human operators.
Here, too, practical constraints have traditionally played a stabilizing role. Analysts could only process so much information. Targeting cycles were bound by human deliberation. Scarcity imposed limits.
AI erodes that restraint. It expands the set of potential targets and compresses the time available for review. The question shifts from whether a target can be identified to how quickly it can be validated and acted upon.
This raises a sharp issue: the boundary between decision support and decision making. Anthropic has drawn a line against the use of its systems in autonomous weapons. The Department of War resists, arguing that any lawful application should remain available.
As systems become more reliable and more deeply integrated, the human role may narrow from judgment to oversight, and from oversight to exception handling. In such an environment, constraints embedded at the model level function as a brake. From the government’s perspective, they degrade performance in both scale and speed.
The legal dispute masks a deeper disagreement about risk. Anthropic seeks to limit the outer bounds of what its technology can enable. The government seeks to preserve flexibility in how that technology is applied.
Anthropic argues that companies can and should impose safety constraints on their products. The firm has framed government pressure to remove such constraints as coercive and, in legal filings, as violating constitutional protections.
Historically, the Pentagon either built critical technologies itself or exercised tight control over their production. Today, frontier AI systems are developed by private firms with global shareholders, global reach, and their own internal policies. The Anthropic dispute asks whether companies can set binding limits on the use of their technologies, or whether those limits must yield when national security is invoked.
This debate does not occur in isolation. In China, the model is one of alignment rather than negotiation. The integration of commercial and military capabilities proceeds with fewer institutional barriers. Firms are not only expected to support state objectives — Chinese law requires that they do so.
In the US, the Foreign Intelligence Surveillance Act (FISA) and the Defense Production Act impose national security requirements on private companies. But the Defense Production Act is generally invoked only in times of crisis.
FISA is also selective. Section 702 allows the US government to compel electronic communications providers to assist in targeting non-US persons located outside the US in support of counterterrorism, cybersecurity, and foreign intelligence. It is up for renewal this year. Americans’ communications can at times be swept up incidentally in bulk and metadata collection, raising important privacy questions. But even where broader collection is permitted, analysis of that data remains tightly restricted.
This creates asymmetry. In the US, companies may resist or condition participation in defense activities. In China, they are unlikely to do so — and in law, unable to. The question is whether this difference translates into a strategic disadvantage, or whether it reflects a deliberate choice about how power is exercised.
The answer will not be found in a single court case. It will emerge over time, as democracies and firms adjust to a landscape in which AI is both widely distributed and deeply consequential.
For now, the conflict between Anthropic and the Pentagon offers an early indication of the choices ahead. It is a test of whether liberal systems can integrate powerful, privately developed technologies into national security without either ceding control to corporate actors or eroding democratic freedoms.
Elly Rostoum is a Senior Resident Fellow with the Center for European Policy Analysis (CEPA).
Bandwidth is CEPA’s online journal dedicated to advancing transatlantic cooperation on tech policy. All opinions expressed on Bandwidth are those of the author alone and may not represent those of the institutions they represent or the Center for European Policy Analysis. CEPA maintains a strict intellectual independence policy across all its projects and publications.