In a candid internal memo circulated to OpenAI staff, which the BBC has reviewed, Altman expressed solidarity with Anthropic CEO Dario Amodei, confirming that OpenAI adheres to identical ethical boundaries. Altman explicitly stated that any future OpenAI contracts with defense entities would similarly reject applications deemed "unlawful or unsuited to cloud deployments, such as domestic surveillance and autonomous offensive weapons." The declaration marks a rare moment of unity among leading AI developers, one that sets aside competitive rivalries in favor of shared ethical principles.
The deepening row stems from Anthropic’s refusal to allow its AI models, including its flagship Claude, to be used for "mass domestic surveillance" or the development of "fully autonomous weapons." That stance has put the company in direct confrontation with US Secretary of Defense Pete Hegseth, who reportedly threatened Amodei with severe repercussions during a tense meeting on Tuesday. Hegseth’s threats were notably contradictory: he simultaneously suggested he would invoke the Defense Production Act, a wartime measure allowing the government to compel companies to prioritize national defense, and deem Anthropic a "supply chain risk," effectively labeling the company insecure for government use.
The Defense Production Act, last used at scale during the COVID-19 pandemic, grants the President broad authority to direct industrial production for national defense. Invoking it against a private tech company over software access would be unprecedented, signaling a dramatic escalation of government control over the tech sector. Alternatively, branding Anthropic a "supply chain risk" would effectively blacklist the company from lucrative government contracts and could severely damage its reputation in the broader enterprise market, regardless of the merits of its ethical stance. The Department of Defense (DoD) maintains it is not requesting Anthropic’s tools for the prohibited purposes, yet insists the company accept "any lawful use" of its AI. That demand exposes a critical regulatory void: the US currently has few explicit laws governing the ethical deployment and capabilities of AI tools, leaving a wide interpretive gap that Anthropic is unwilling to let the Pentagon fill on its own terms.

The Pentagon’s aggressive posture has been amplified by Undersecretary of Defense Emil Michael, a former Uber executive, who took to X (formerly Twitter) to attack Amodei personally after the Tuesday meeting. Michael accused Amodei of attempting to "override Congress and make his own rules to defy democratically decided laws," portraying Anthropic’s ethical stance as an overreach of corporate power. Such public denunciations from a senior defense official underscore the intensity of the government’s frustration and its determination to assert control over critical AI capabilities.
However, within the broader technology community, support for Amodei and Anthropic’s position is rapidly solidifying. Altman’s internal memo not only declared OpenAI’s shared ethical framework but also offered to mediate the dispute. He expressed concern that the government’s reaction "risks our national security, and also risks the government resorting to actions which could risk American leadership in AI." Altman added, "We would like to try to help de-escalate things," signaling a desire to protect the entire industry from what he perceives as governmental overreach that could stifle innovation and trust.
The history between Altman and Amodei adds a layer of complexity and significance to this alliance. Dario Amodei was an early and influential employee at OpenAI before departing, along with several colleagues, to establish Anthropic over fundamental disagreements with Altman about the direction and safety principles of AI development. Their subsequent competition for users and corporate clients with advanced AI chatbots and agents has been a defining rivalry in the AI landscape. Despite that history, Altman’s endorsement underscores how seriously the industry regards the threat to shared ethical standards.
Altman acknowledged the intricacies of the situation in his memo, stating, "I do not fully understand how things got here; I do not know why Anthropic did their deal with the Pentagon and Palantir in the way they originally did it." This refers to Anthropic’s 2024 partnership with Palantir, a prominent government contractor, which integrated Claude into Palantir’s government-facing products. This initial engagement likely involved contractual language that the Pentagon is now attempting to interpret broadly. However, Altman emphasized, "But regardless of how we got here, this is no longer just an issue between Anthropic and the DoW; this is an issue for the whole industry and it is important to clarify our stance." The "Department of War (DoW)" is a recent re-designation for the Defense Department, implemented under an executive order signed by US President Donald Trump in September, reflecting a more assertive and perhaps less diplomatic approach to national security matters.

OpenAI, too, is exploring avenues for collaboration with the government, with Altman noting that the company is "going to see if there is a deal with the DoW that allows our models to be deployed in classified environments and that fits with our principles." This suggests that while OpenAI is committed to ethical boundaries, it is not entirely averse to working with defense, provided those engagements align with its core values and "red lines."
A former high-ranking DoD official, who requested anonymity, said Anthropic appears to hold a strong position in the standoff. "This is great PR for them and they simply do not need the money," the official told the BBC. That assessment reflects Anthropic’s considerable financial independence: while its Pentagon contract through Palantir is valued at $200 million, the company was valued at an estimated $380 billion earlier this month, based on its current revenue and projected future earnings. That valuation gives Anthropic significant leverage, allowing it to prioritize ethical principles over immediate revenue from government contracts. The former official also characterized the DoD’s legal basis for threatening Anthropic with either the Defense Production Act or the "supply chain risk" label as "extremely flimsy," suggesting that if Hegseth acted on the threats, Anthropic would have strong grounds to sue the Defense Department or individual officials.
The ripple effects of the dispute are already visible across the tech industry. On Friday morning, groups representing approximately 700,000 tech workers at giants such as Amazon, Google, and Microsoft, all of which hold substantial Defense Department contracts of their own, issued an open letter urging their employers to "refuse to comply" with the Pentagon’s demands for unfettered AI access, echoing Anthropic’s ethical concerns. The elected Executive Board of the Alphabet Workers Union reinforced that sentiment in a separate statement, declaring, "Tech workers are united in our stance that our employers should not be in the business of war." The union also voiced explicit concerns that Google, its parent company, might capitulate to the Pentagon’s demands if it found itself in a predicament similar to Anthropic’s. The BBC has contacted Google for a response.
The dispute illuminates a crucial battleground in the future of AI: the tension between national security imperatives and the ethical frameworks that AI developers believe are essential for responsible innovation. The absence of robust, specific legislation governing AI’s military applications leaves a vacuum that both the government and private companies are attempting to fill, often with conflicting agendas. Altman’s unprecedented intervention transforms what might have been a bilateral dispute into an industry-wide push for ethical guardrails, potentially setting a precedent for how AI companies engage with state power. The outcome of the standoff will define not only Anthropic’s and OpenAI’s relationships with the Pentagon but also the global discourse on AI governance, autonomy, and the ethical responsibilities of those who build the technologies of tomorrow.