
A major clash is erupting in the tech world, pitting the Pentagon against rising AI star Anthropic. The conflict began when Defense Secretary Pete Hegseth, backed by President Donald Trump, abruptly cut ties with Anthropic, accusing the company of endangering national security. The reason? Anthropic CEO Dario Amodei refused to back down from his company’s concerns that its AI could be used for mass surveillance or in autonomous armed drones.
In an unprecedented move, the Pentagon designated Anthropic a “supply chain risk”—a label typically reserved for foreign adversaries like Huawei. This decision not only canceled a potential $200 million contract but also aims to prohibit other defense contractors from working with the San Francisco-based firm. Anthropic vows to fight this “legally unsound” action in court, arguing it’s a stand for democratic values and responsible AI development.
This high-stakes dispute has massive implications for the future of AI. It highlights a growing tension between technological innovation, national security demands, and ethical considerations. The battle could redefine the rules for military AI use and the guardrails needed to prevent potential threats to human life.
Meanwhile, competitor OpenAI, maker of ChatGPT, quickly stepped into the void. CEO Sam Altman announced a deal to supply AI to the Pentagon, stating that the partnership would still adhere to “red lines” against mass surveillance and autonomous weapons. The move further fuels the already intense rivalry between Altman and Amodei, who left OpenAI over AI safety concerns.
The outcome of this legal and ethical showdown will significantly impact the balance of power within Big Tech and set crucial precedents for how AI is developed and deployed.