Anthropic boss rejects Pentagon demand to drop AI safeguards

In a high-stakes standoff that could redefine the ethical boundaries of artificial intelligence in national security, Anthropic CEO Dario Amodei has firmly rejected a demand from the US Department of Defense (DoD) to relinquish crucial safety protocols for its AI technology. Amodei declared on Thursday that his company would rather sever ties with the Pentagon entirely than compromise on the responsible deployment of its AI, asserting that using its tools in ways that "undermine, rather than defend, democratic values" is an unacceptable proposition. This resolute stance follows a tense meeting on Tuesday with US Secretary of Defense Pete Hegseth, where Anthropic was reportedly threatened with removal from the DoD’s supply chain if it did not agree to permit "any lawful use" of its advanced AI capabilities.

"These threats do not change our position: we cannot in good conscience accede to their request," Amodei stated, underscoring the gravity of the situation. The dispute centers on Anthropic’s concerns about two specific potential applications of its AI, notably its language model Claude: "mass domestic surveillance" and "fully autonomous weapons." Amodei emphasized that these use cases have never been part of Anthropic’s existing contracts with the Department of War (the Defense Department’s designation under a recent executive order), and reiterated his belief that they should not be incorporated now.

"Should the Department choose to offboard Anthropic, we will work to enable a smooth transition to another provider," Amodei added, signaling a willingness to face potential business repercussions rather than betray his company’s ethical principles. A spokeswoman for Anthropic further elaborated on Thursday, revealing that despite receiving updated contract language from the DoD on Wednesday night, it represented "virtually no progress on preventing Claude’s use for mass surveillance of Americans or in fully autonomous weapons." She criticized the new wording as a deceptive compromise, stating that "new language framed as compromise was paired with legalese that would allow those safeguards to be disregarded at will." This spokesperson highlighted that these critical safeguards have been the central focus of negotiations for months, despite recent public statements from the Department of War suggesting otherwise. A representative from the Defense Department was unavailable for immediate comment.


The controversy has escalated with personal attacks from within the Pentagon. Emil Michael, the US Undersecretary for Defense, publicly criticized Amodei on X (formerly Twitter) on Thursday night, accusing the CEO of attempting to "personally control the US Military" and being willing to "put our nation’s safety at risk." This rhetoric aligns with previous statements from a Pentagon official who informed the BBC that if Anthropic failed to comply, Secretary Hegseth intended to invoke the Defense Production Act against the company. The Defense Production Act grants the US President the authority to compel companies to meet national defense needs if their products or services are deemed critical.

However, Hegseth also reportedly threatened to designate Anthropic as a "supply chain risk," a label that would effectively render the company too insecure for government utilization. A former DoD official, speaking on condition of anonymity, described the grounds for either of these potential actions as "extremely flimsy." The underlying tensions between Anthropic and the Pentagon, according to a person familiar with the negotiations who also requested anonymity, have been brewing for several months, predating public knowledge of Claude’s alleged involvement in a US operation to apprehend Venezuelan President Nicolás Maduro.

While Amodei did not provide explicit details on how Anthropic’s AI could be or had been utilized for mass surveillance or fully autonomous weapons, he outlined in a company blog post the inherent capabilities of AI for such purposes. He explained that AI can be employed to "assemble scattered, individually innocuous data into a comprehensive picture of any person’s life – automatically and at massive scale." Amodei clarified his company’s position: "We support the use of AI for lawful foreign intelligence and counterintelligence missions. But using these systems for mass domestic surveillance is incompatible with democratic values."

Regarding the deployment of AI in weaponry, Amodei expressed profound skepticism that even the most advanced AI systems currently available are reliable enough to power "fully autonomous weapons." He stated unequivocally, "We will not knowingly provide a product that puts America’s warfighters and civilians at risk." Amodei elaborated that without stringent oversight, fully autonomous weapons cannot be trusted to exercise the nuanced judgment demonstrated daily by highly trained military personnel, and stressed that such systems must be deployed with robust safeguards, which he contends are currently absent.

In a further point of contention, Amodei revealed that Anthropic had "offered to work directly with the Department of War on R&D to improve the reliability of these systems, but they have not accepted this offer." The critical meeting on Tuesday was reportedly initiated by Hegseth himself, underscoring the urgency with which the Pentagon sought to resolve the issue. Anthropic’s refusal to compromise on its AI safeguards sets a precedent for the ongoing debate over the responsible integration of artificial intelligence into military applications, and over the potential conflicts between technological advancement and democratic principles.
