Anthropic Softens Safety Policies Amid Pentagon Pressure and Military AI Race

Feb 25, 2026
The Deep View

Summary

Anthropic is softening its AI safety policies under Pentagon pressure, removing key risk-mitigation pledges and lifting training restrictions. The Defense Department has threatened to invoke the Defense Production Act if the company refuses to permit military use of Claude by Friday.

Key Points

  • Anthropic is loosening its Responsible Scaling Policy, removing its pledge to withhold models that lack guaranteed risk mitigations and lifting restrictions on training models above certain safety thresholds. Chief Science Officer Jared Kaplan cited competitive pressure as justification.
  • The Pentagon has given Anthropic until Friday to drop its ethical guardrails for Claude, threatening to invoke the Defense Production Act and label the company a 'supply chain risk' if it refuses to allow military use. Anthropic continues to hold firm against autonomous targeting and surveillance of U.S. citizens.
  • As xAI signs a deal with the Pentagon to deploy Grok in classified weapons and battlefield systems, Anthropic faces growing scrutiny over whether its softening safety standards signal a retreat from the ethical AI principles that defined its founding identity.
