A group of former federal judges has voiced support for AI company Anthropic while raising serious concerns about how the U.S. Department of Defense applies its “supply chain risk” designation. The intervention adds legal weight to an ongoing dispute that could shape how national security policies intersect with emerging technology firms.
In a formal statement, the former judges argued that the Pentagon’s use of the label lacks transparency and risks being applied too broadly, harming companies without clear justification. They warned that such classifications could have far-reaching consequences, including reputational damage, loss of government contracts, and reduced investor confidence.
The case centers on whether Anthropic was unfairly categorized under a framework designed to protect sensitive government systems from foreign interference and cybersecurity threats. While the Defense Department maintains that the designation is necessary to safeguard national security, critics say the criteria and process remain opaque.
Legal experts note that the involvement of former judges underscores growing concern about due process and accountability in national security decisions involving private sector companies. They argue that without clearer standards and oversight, the risk label could be misused or inconsistently applied.
The controversy comes at a time when artificial intelligence companies are increasingly working with government agencies, particularly in the defense and intelligence sectors. As AI becomes more central to national security strategy, balancing the protection of critical infrastructure against the fair treatment of private companies is growing more complex.
Anthropic has not publicly detailed its response but is expected to challenge the designation through legal and regulatory channels. The outcome of this dispute could set an important precedent for how the U.S. government evaluates and partners with AI firms in the future.