16 Feb, 2026 22:16

Pentagon considering labeling Claude AI creators ‘supply chain risk’ – Axios

The US military could punish Anthropic for refusing to amend the lab’s ethics code, a report says

The US Department of War is close to cutting ties with key AI partner Anthropic, the company behind the Claude model, and designating it a supply chain risk, Axios reported on Sunday, citing a Pentagon official. This designation is typically reserved for entities linked to states the US considers foreign adversaries.

The San Francisco–based lab has reportedly been clashing with the US government for months over its policy limiting military use of its technology. While the Pentagon has been pushing the company to allow Claude to be used for “all lawful purposes,” Anthropic’s ethics documents prohibit its technology from being used to “facilitate violence, develop weapons, or conduct surveillance.”

“It will be an enormous pain in the ass to disentangle, and we are going to make sure they pay a price for forcing our hand like this,” an unnamed Pentagon official told Axios.

Claude is the only AI model currently deployed on the military’s classified platforms through a partnership with Palantir Technologies.

The potential blacklisting would require Pentagon contractors to demonstrate that they do not use Anthropic’s technology, or risk losing their contracts.

Pentagon spokesman Sean Parnell told Axios that the relationship between the department and Anthropic “is being reviewed.”

“All Pentagon partners must be willing to help our warfighters win in any fight,” Parnell added. An Anthropic spokesperson told the outlet that the company has been having “productive conversations” with the department.

The reported clash follows allegations that the company’s AI model was used during the operation to abduct Venezuelan President Nicolas Maduro in early January. Last week, Axios and The Wall Street Journal reported that Claude was used both in staging and during the raid, although its exact role remains unclear. These allegations surfaced after the company spent weeks publicly touting its safeguard policies and presenting itself as the safety-conscious option within the AI sector.
