Sharma’s departure from Anthropic, a company founded in 2021 by former OpenAI employees and often lauded for its safety-conscious approach to AI development, marks a significant moment in the debate over the rapid advancement of artificial intelligence. His research at Anthropic spanned critical areas: understanding why generative AI systems can become overly compliant and "suck up" to users, a behavior researchers call sycophancy; mitigating the risks of AI-assisted bioterrorism; and exploring the potential for AI assistants to erode human capabilities and individuality.
Despite expressing a degree of satisfaction with his tenure at Anthropic, Sharma articulated a profound sense of urgency that necessitated his departure. "The world is in peril. And not just from AI, or bioweapons, but from a whole series of interconnected crises unfolding in this very moment," he wrote in his resignation letter. He elaborated on the difficulty of consistently prioritizing core values, even within organizations ostensibly dedicated to ethical AI development, indicating that Anthropic, like many in the industry, frequently grapples with pressures that could compromise deeply held principles.
In a poignant postscript to his resignation, Sharma revealed his intention to pursue a degree in poetry and dedicate himself to writing. He expressed a desire to "become invisible for a period of time" and return to the UK, suggesting a withdrawal from the high-stakes, public-facing arena of AI research. His decision comes at a time when a number of seasoned professionals in the fast-growing generative AI sector are choosing to leave despite the substantial compensation on offer, sometimes having already secured significant financial benefits.

Anthropic positions itself as a "public benefit corporation dedicated to securing [AI’s] benefits and mitigating its risks." A core focus of its mission has been to preempt potential dangers posed by advanced AI systems, including their misalignment with human values, their misuse in conflict scenarios, or their acquisition of excessive power. The company has been transparent about the safety challenges associated with its own products, having previously reported instances where its technology was "weaponised" by hackers for sophisticated cyberattacks.
However, Anthropic has also faced significant scrutiny. In 2025, the company agreed to a substantial $1.5 billion settlement to resolve a class-action lawsuit brought by authors alleging that their copyrighted works were used without permission to train Anthropic’s AI models. This legal challenge underscores the complex ethical and legal landscape surrounding AI training data.
Like its competitor OpenAI, Anthropic is actively seeking to capitalize on the commercial opportunities presented by AI, notably through its Claude chatbot, a direct rival to OpenAI’s ChatGPT. The company recently launched a series of advertisements criticizing OpenAI’s decision to incorporate ads into ChatGPT, a decision that contrasted with OpenAI CEO Sam Altman’s earlier description of ads as a "last resort."
The concerns raised by Sharma echo those of other former AI researchers who have recently voiced disquiet about the industry. Zoe Hitzig, a former OpenAI researcher who also recently resigned, cited fears about the pervasive use of advertising on ChatGPT and the potential psychosocial impacts of new forms of human-AI interaction that remain poorly understood. Hitzig highlighted "early warning signs" suggesting that increasing dependence on AI tools could be "worrisome," potentially exacerbating existing delusions or harming users’ mental well-being.

"Creating an economic engine that profits from encouraging these kinds of new relationships before we understand them is really dangerous," Hitzig stated, drawing a parallel to the societal impacts of social media. She emphasized that there is still a critical window to establish the necessary social institutions and regulatory frameworks to govern AI development and deployment effectively.
In response to the criticisms regarding advertising, an OpenAI spokesperson reiterated the company’s mission to ensure that artificial general intelligence (AGI) benefits all of humanity. The spokesperson asserted that advertising is intended to make AI more accessible, and that user conversations with ChatGPT are kept private from advertisers, with no sale of user data.
The departure of researchers like Sharma and Hitzig, coupled with their public statements, highlights a growing tension within the AI industry between the drive for rapid innovation and the imperative of long-term safety and ethical integrity. As AI technologies become more deeply integrated into society, the debate over their risks and the responsibilities of those building them is intensifying, making the warnings of those at the forefront of AI safety research increasingly consequential. The contrast between cutting-edge AI development and a seemingly unrelated pursuit like poetry, exemplified by Sharma’s choice, underscores the existential questions that the advancement of artificial intelligence is forcing humanity to confront.