
Why the Pentagon is pushing back against limits on military AI use


The quiet fight over who controls the future of war

Behind closed doors in Washington, a high-stakes argument is playing out that could shape how wars are fought for decades. The United States Department of War and Silicon Valley AI developer Anthropic are locked in a standoff over how far artificial intelligence should be allowed to go on the battlefield and at home.

At the centre of the dispute is a Pentagon contract reportedly worth up to US$200 million. Negotiations have stalled because Anthropic wants strict limits on how its AI can be used, while military officials are pushing back hard against those boundaries.

Why Anthropic is drawing a hard line

Anthropic has raised concerns that its AI tools could be used for lethal operations without meaningful human oversight. Another red line for the company is domestic surveillance, particularly systems that could be turned on American citizens.

These concerns are not new. Anthropic chief executive Dario Amodei has repeatedly warned that unconstrained AI use risks pushing democratic societies towards the kind of mass surveillance and automated violence seen in authoritarian states. In a recent essay, he argued that AI should support national defence but not at the cost of democratic values.

From Anthropic’s perspective, internal usage rules are not optional ethics statements. They are core safeguards designed to prevent irreversible harm.

The Pentagon’s argument: law is enough

Pentagon officials see the situation very differently. Their view is that if an AI system complies with US law, it should be available for military deployment regardless of a private company’s internal policies.

Defence leaders argue that allowing tech firms to dictate battlefield limits creates strategic weakness. In blunt terms, the military wants tools it can fully control, including AI systems that assist with targeting and intelligence operations.

US Defence Secretary Pete Hegseth has publicly underscored that position, making it clear that the Pentagon will not rely on AI models that restrict its ability to fight wars.

An AI-first military under Trump

This dispute is unfolding against a wider political backdrop. The Trump administration has made rapid AI integration a priority across the armed forces. Earlier this month, the Department of War unveiled a strategy aimed at transforming the US military into what it described as an AI-first fighting force.

The urgency is driven by fears that rivals such as China and Russia are accelerating their own military AI programmes without similar ethical brakes. From the Pentagon’s point of view, hesitation equals vulnerability.

Industry tensions and market pressure

Public reaction online has been divided. Some commentators argue that Silicon Valley firms are right to resist becoming arms manufacturers by another name. Others accuse AI companies of hypocrisy for taking defence money while trying to limit how that technology is used.

The standoff also carries real financial risk for Anthropic. The company has invested heavily in government and national security clients and is reportedly preparing for a future public listing. Losing ground with the Pentagon could send a worrying signal to investors.

Anthropic is not alone in this space. Last year, the Pentagon awarded contracts to several major AI players, including OpenAI, Google, and xAI. How this dispute is resolved could influence how all of them set their own red lines.

What this really means

At its core, this is not just a contract dispute. It is a battle over who gets to decide the moral limits of machines that can analyse, predict and potentially kill at speed.

If the Pentagon gets its way, commercial AI firms may be forced to choose between access to defence budgets and sticking to their ethical frameworks. If companies like Anthropic hold firm, the military may turn to less constrained alternatives.

Either way, the outcome will ripple far beyond Washington. It will shape how much human judgment remains in future conflicts and how comfortable societies are with letting algorithms make life-and-death decisions.

Follow Joburg ETC on Facebook, Twitter, TikTok and Instagram

For more News in Johannesburg, visit joburgetc.com

Source: IOL

Featured Image: ISNA News Agency
