AI safety researcher quits with a cryptic warning

11 Feb, 2026 22:09
“The world is in peril,” Anthropic’s Safeguards Research Team lead wrote in his resignation letter

A leading artificial intelligence safety researcher, Mrinank Sharma, has resigned from Anthropic with an enigmatic warning about global “interconnected crises,” announcing his plans to become “invisible for a period of time.”

Sharma, an Oxford graduate who led the Claude chatbot maker’s Safeguards Research Team, posted his resignation letter on X Monday, describing a growing personal reckoning with “our situation.”

“The world is in peril. And not just from AI, or bioweapons, but from a whole series of interconnected crises unfolding in this very moment,” Sharma wrote to colleagues.

The departure comes amid mounting tensions at the San Francisco-based AI lab, which is racing to develop ever more powerful systems even as its own executives warn that those same technologies could harm humanity.

It also follows reports of a widening rift between Anthropic and the Pentagon over the military’s desire to deploy AI for autonomous weapons targeting without the safeguards the company has sought to impose.

Sharma’s resignation, which lands days after Anthropic released Opus 4.6, a more powerful iteration of its flagship Claude tool, hints at internal friction over safety priorities.

“Throughout my time here, I’ve repeatedly seen how hard it is to truly let our values govern our actions,” he wrote. “I’ve seen this within myself, within the organization, where we constantly face pressures to set aside what matters most, and throughout broader society too.”

The researcher’s team was established just over a year ago with a mandate covering AI security threats, including “model misuse and misalignment,” as well as bioterrorism prevention and “catastrophe prevention.”

Sharma noted with pride his work developing defenses against AI-assisted bioweapons and his “final project on understanding how AI assistants could make us less human or distort our humanity.” Now he intends to move back to the UK to “explore a poetry degree” and “become invisible for a period of time.”

Anthropic’s chief executive, Dario Amodei, has repeatedly warned of the dangers posed by the very technology his company is commercializing. In a near-20,000-word essay last month, he cautioned that AI systems of “almost unimaginable power” are “imminent” and will “test who we are as a species.”

Amodei warned of “autonomy risks” where AI could “go rogue and overpower humanity,” and suggested the technology could enable “a global totalitarian dictatorship” through AI-powered surveillance and autonomous weapons.