Anthropic’s appeal fails as court refuses to block DoD blacklist measure
The U.S. Court of Appeals for the District of Columbia Circuit rejected Anthropic’s motion to suspend the Department of Defense’s “supply chain risk blacklist,” ruling that national security interests take precedence over the company’s financial losses. While the litigation continues, Anthropic will be barred from participating in Pentagon-related procurements.

Artificial intelligence company Anthropic has recently suffered a significant setback in court. The U.S. Court of Appeals for the District of Columbia Circuit rejected its motion to temporarily block the Department of Defense from placing it on a “supply chain risk blacklist.” As a result, the company will be barred from participating in related Pentagon procurement contracts while the litigation continues.

Court: National Security Takes Priority Over Corporate Interests
In its ruling, the appeals court stated that the case essentially involves balancing national security interests against the economic interests of a private company. Amid ongoing military conflicts, the Department of Defense has the authority to oversee access to critical artificial intelligence technologies, and this public interest outweighs the potential economic losses Anthropic may face.
The court acknowledged that being placed on the blacklist could harm Anthropic, but concluded that the harm is primarily financial. As for the company’s claim that its free speech rights were being infringed, the court found that Anthropic had failed to demonstrate any substantial administrative suppression of its expression during the litigation period, so that argument did not justify blocking the blacklist action.
Impact of the “Supply Chain Risk” Designation
The dispute stems from the U.S. Department of Defense designating Anthropic as a supply chain risk, citing concerns that its technology could pose a threat to national security. Under this designation, all defense contractors must certify compliance and ensure that Anthropic’s Claude models are not used in military projects.
This requirement has dealt a direct blow to Anthropic’s government and defense-related business and has drawn widespread attention across the industry.
Core Dispute: Boundaries on Model Usage
At the heart of the disagreement is the permitted scope of use for AI models. The Department of Defense is seeking unrestricted rights to use the technologies in question, while Anthropic insists on clear boundaries, requiring that its models not be deployed in fully autonomous weapons systems or used for large-scale surveillance.
The appeals court has now agreed to expedite proceedings, but the legal standoff over the boundary between technological ethics and national security is expected to continue.