anthropic

6 articles found

Trump Bans Anthropic as OpenAI Wins Pentagon Deal

The Trump administration has ordered every federal agency to immediately cease all use of Anthropic's artificial intelligence technology, marking an extraordinary escalation in tensions between the White House and one of the world's most valuable AI companies. Defense Secretary Pete Hegseth designated Anthropic a "Supply-Chain Risk to National Security" — a label typically reserved for foreign adversaries — making it the first American company to receive such treatment. The dispute centers on Anthropic's insistence on two safeguards for military use of its AI models: no mass surveillance of American citizens, and no fully autonomous weapons systems without human oversight. Hours after the ban was announced, rival OpenAI struck its own deal with the Pentagon — one that CEO Sam Altman said includes the very same restrictions Anthropic had sought. The confrontation has sent shockwaves through the technology sector and raised fundamental questions about the relationship between the federal government and the AI industry, the limits of executive power over private companies, and the ethical guardrails that should govern military applications of artificial intelligence.

Anthropic, OpenAI, Pentagon

News: Trump Bans Anthropic From Government Use and Pentagon

President Trump ordered all federal agencies to immediately cease using Anthropic's artificial intelligence technology on Friday, capping an increasingly bitter dispute between the AI company and the Pentagon over whether military contractors can set limits on how their technology is deployed in warfare. Defense Secretary Pete Hegseth followed through on his threat to designate Anthropic a supply chain risk to national security — a classification traditionally reserved for foreign adversaries like China's Huawei — effectively blacklisting the $380 billion AI company from military work. Within hours of Trump's announcement, rival OpenAI struck a deal with the Defense Department to deploy its own AI models on classified networks, positioning itself as the Pentagon's preferred AI partner. The rapid sequence of events marks the most dramatic confrontation between a U.S. technology company and the federal government since the battles over encryption in the 1990s, with profound implications for the AI industry's relationship with government, the trajectory of military AI adoption, and the valuations of companies preparing for public offerings. At the center of the dispute are two questions that will define AI's role in national defense for decades: whether AI companies can prevent their tools from being used for mass surveillance of American citizens, and whether today's AI models are reliable enough to make lethal targeting decisions without human oversight.

Anthropic, OpenAI, Pentagon

News: Trump Orders Federal Ban on Anthropic AI After

President Donald Trump has ordered every federal agency to immediately stop using technology from AI developer Anthropic, escalating a confrontation between the White House and one of the world's most valuable artificial intelligence companies. In a series of posts on Truth Social on Friday, Trump wrote: "We don't need it, we don't want it, and will not do business with them again!" The ban follows Anthropic CEO Dario Amodei's refusal to grant the Pentagon unrestricted access to the company's AI tools over concerns about their potential use in mass surveillance and fully autonomous weapons systems. Defense Secretary Pete Hegseth had given Anthropic a deadline to comply and threatened to invoke the Defense Production Act and designate the company a "supply chain risk" — what appears to be the first time the US government has applied such a label to a domestic technology company. The confrontation carries significant implications for the broader AI industry, defense contracting, and the relationship between Silicon Valley and the federal government. Anthropic's Pentagon contract is worth approximately $200 million, a small fraction of the company's $380 billion valuation — but the precedent being set could reshape how every major tech company negotiates AI deployment with the US military.

Anthropic, AI regulation, Pentagon

News: Trump Bans Anthropic as OpenAI Takes Its Place

The standoff between the Pentagon and Anthropic over AI safety guardrails has reached its dramatic conclusion: President Trump signed an executive order on February 28 banning all federal agencies from using Anthropic's technology, and within hours, OpenAI announced a new partnership with the Department of Defense to fill the gap. The swift replacement underscores how quickly the AI industry's competitive landscape can shift when government contracts and political alignment are at stake. The confrontation, which began when Defense Secretary Pete Hegseth gave Anthropic CEO Dario Amodei a Friday deadline to grant the military unrestricted access to the company's Claude AI models, escalated rapidly through the week. Anthropic refused to budge on two red lines — autonomous lethal targeting without human oversight and mass domestic surveillance — positions the administration dismissed as "woke AI" restrictions incompatible with national security needs. In a remarkable twist, the ban has generated a backlash effect that appears to have boosted Anthropic's commercial standing. The company's Claude chatbot surged to the number one position on Apple's App Store following the government ban, suggesting that Anthropic's principled stance has resonated with consumer users even as it cost the company its government business. Amodei, in an exclusive CBS News interview on March 1, declared that Anthropic is sticking to its "red lines" and described the company as "patriots" who believe safety and national security are compatible.

Anthropic, Pentagon, Trump AI ban

IBM Analysis: Anthropic's COBOL Bombshell Erases $30 Billion

International Business Machines Corporation (NYSE: IBM) has suffered its worst single-day selloff in over two decades. On February 24, 2026, shares plunged 13.1% to $223.35 from a previous close of $257.16 after AI startup Anthropic announced that its Claude Code tool can analyze, document, and modernize legacy COBOL codebases — the decades-old programming language that still underpins much of IBM's enterprise consulting and infrastructure business. The move wiped roughly $30 billion from IBM's market capitalization in a single session. The panic is not without context. IBM had been on a remarkable run prior to this shock, climbing to a 52-week high of $324.90 on the strength of its own AI narrative — watsonx, hybrid cloud growth, and a string of solid quarterly earnings that saw full-year 2025 revenue reach $67.5 billion with $11.6 billion in free cash flow. The company had positioned itself as a primary enterprise AI beneficiary, not a casualty. Yet Anthropic's demonstration that AI can compress legacy modernization timelines from months to hours has forced investors to reconsider whether IBM's most durable competitive moat — its grip on mission-critical legacy infrastructure — may be far more vulnerable than previously assumed. At $223.35, IBM now trades at roughly 20x trailing earnings, carries a 3.0% dividend yield, and sits 31% below its 52-week high. The question facing investors is clear: is this a generational buying opportunity in a $209 billion enterprise technology franchise, or an early warning that AI disruption is coming for Big Blue's core revenue streams faster than anyone expected?

IBM, COBOL, Anthropic

News: Pentagon and Anthropic Clash Over Military AI

Defense Secretary Pete Hegseth met Tuesday morning with Anthropic CEO Dario Amodei at the Pentagon in a high-stakes showdown over whether the artificial intelligence company will be permitted to maintain its own ethical guardrails on how the U.S. military uses its technology. The meeting comes after weeks of escalating tension between the Department of Defense — now rebranded as the Department of War — and the AI startup, which holds a contract worth up to $200 million and is the only frontier AI company to have deployed models on classified military networks. At the heart of the dispute is a fundamental question that extends far beyond any single contract: Who gets to decide the ethical boundaries of AI in warfare — the companies that build the technology, or the government that wields it? Anthropic has insisted its Claude AI systems must not be used for fully autonomous weapons or domestic surveillance of American citizens. The Pentagon has demanded the right to use all AI tools for "any lawful use" without company-imposed restrictions. The confrontation arrives at a moment when Anthropic is simultaneously shaking multiple industries. In just the past week, the company's new AI tools have triggered massive sell-offs in cybersecurity and legacy software stocks, while its accusations of intellectual property theft by Chinese AI firms have added a geopolitical dimension to its already outsized influence. With a valuation of $380 billion following a $30 billion funding round earlier this month, Anthropic finds itself at the center of nearly every major AI debate in 2026.

Anthropic, Pentagon, artificial intelligence