Anthropic’s military test? Claude AI reportedly used by US during Venezuela operation

Anthropic’s Claude AI was reportedly used by the United States during a military operation to capture former Venezuelan President Nicolás Maduro, according to a report by The Wall Street Journal. The deployment of Claude was said to have been facilitated through Anthropic’s partnership with data analytics firm Palantir Technologies, whose tools are widely used by the US Department of Defense and federal law-enforcement agencies.

During the mission, Maduro and his wife were captured and several sites in Caracas were bombed. However, Anthropic’s usage guidelines explicitly prohibit Claude from being used to facilitate violence, develop weapons, or conduct surveillance.

Responding to the report, an Anthropic spokesperson said: “We cannot comment on whether Claude, or any other AI model, was used for any specific operation, classified or otherwise.”

“Any use of Claude—whether in the private sector or across government—is required to comply with our Usage Policies, which govern how Claude can be deployed. We work closely with our partners to ensure compliance,” the spokesperson added.

The report also noted that Anthropic’s concerns about how Claude could be used by the Pentagon have led US administration officials to consider cancelling a contract reportedly worth around $200 million.

Is Anthropic the first AI company used by the US DoD?

Anthropic is said to be the first AI company whose tools were used by the US Department of Defense in a classified operation. The report does not rule out the possibility that other AI systems were used during the Venezuela operation for unclassified tasks, ranging from summarising documents to helping manage autonomous systems.

At an event last month, US Defense Secretary Pete Hegseth said he is creating an “AI-first, war-fighting force”.

“Responsible AI at the War Department means objectively truthful AI capabilities employed securely and within the law,” he said.

“We will not employ AI models that won’t allow you to fight wars,” Hegseth added, in a comment widely seen as referencing ongoing discussions with AI companies, including Anthropic.

Anthropic was awarded the $200 million Pentagon contract last year. However, CEO Dario Amodei has repeatedly expressed concern about the use of AI in lethal operations and surveillance.

“Humanity is about to be handed almost unimaginable power, and it is deeply unclear whether our social, political, and technological systems possess the maturity to wield it,” Amodei said at an event last month.

While many AI companies are building tools for the US military, most of these are available only on unclassified networks. Anthropic’s Claude, however, is the only model available in classified settings through third parties, though the US government is still bound by the company’s usage policies, Reuters reported.
