Anthropic Blacklisted After Pentagon Dispute; OpenAI Secures Classified AI Deal

Mar 1, 2026

A high-stakes dispute between artificial intelligence firm Anthropic and the U.S. Department of Defense escalated this week into an unprecedented federal blacklist order, after the company refused to remove ethical restrictions from a proposed military AI contract.

Within hours of the breakdown, rival OpenAI announced it had reached a separate agreement with the Pentagon to deploy its models on classified networks — incorporating similar safeguards that had been central to Anthropic’s objections.

The clash underscores intensifying tensions between national security priorities and AI safety principles as frontier AI systems become embedded in military operations.

Contract Breakdown

Negotiations between Anthropic and the Pentagon centered on a contract reportedly valued at around $200 million. According to public statements, Anthropic sought explicit contractual prohibitions on two uses of its AI systems: domestic mass surveillance of U.S. citizens and fully autonomous lethal weapons without human oversight.

The Defense Department insisted on language permitting AI use for “all lawful purposes,” arguing that contractors cannot impose operational restrictions on national security activities.

After Anthropic declined to remove what it described as ethical “red lines,” the administration set a late-afternoon deadline on February 27 for the company to accept revised terms.

Anthropic Chief Executive Dario Amodei rejected the ultimatum, stating the company could not “in good conscience” comply.

Federal Blacklist Order

Hours later, President Donald Trump directed federal agencies to begin phasing out Anthropic technology, with a six-month transition period for critical national security functions. Defense Secretary Pete Hegseth designated Anthropic a “supply chain risk to national security,” a label typically applied to foreign adversaries rather than U.S.-based firms.

The designation effectively bars federal contractors from using Anthropic tools and represents an extraordinary escalation in a contract dispute.

Anthropic said it intends to challenge the action in court, calling the designation “legally unsound.”

OpenAI Agreement

On the same day, OpenAI CEO Sam Altman announced that his company had reached a classified deployment agreement with the Pentagon.

Altman had earlier publicly supported Anthropic’s position, stating that OpenAI also considers domestic mass surveillance and fully autonomous lethal weapons without human control to be unacceptable uses of AI.

Under the finalized OpenAI agreement, the company said safeguards will include restrictions against domestic surveillance use cases, requirements for human responsibility in the use of force, and technical controls such as cloud access limitations and on-site oversight mechanisms.

Altman later described the Pentagon as showing “deep respect for safety” during negotiations, while calling the blacklisting of Anthropic “extremely scary.”

National Security vs. AI Ethics

The dispute reflects a broader policy debate within Washington and Silicon Valley over who defines the boundaries of military AI deployment.

Pentagon officials have emphasized the need for rapid AI adoption amid global competition, particularly with China, arguing that contractors cannot dictate how lawful military tools are used.

AI firms, meanwhile, face pressure from employees, researchers, and advocacy groups to set clear guardrails against misuse.

The incident marks the first known case in which a major U.S. AI company has been labeled a domestic supply chain risk in connection with ethical restrictions.

Broader Implications

Anthropic’s exclusion and OpenAI’s parallel agreement may reshape the competitive landscape for classified AI deployments, potentially consolidating federal AI work among firms willing to negotiate safeguards within government-defined parameters.

Legal challenges could test the limits of executive authority in designating domestic technology providers as national security risks.

For now, Anthropic faces a federal phase-out, while OpenAI appears positioned as a primary partner for classified AI systems—illustrating the delicate balance between corporate ethics commitments and government security demands in the AI era.
