AI Safety Leader Quits Anthropic to Study Poetry

An AI safety leader has stepped away from the industry with a stark warning about the future. Mrinank Sharma resigned from U.S.-based AI firm Anthropic, writing in a letter posted on X that “the world is in peril” and citing concerns not only about artificial intelligence but also about bioweapons and broader global crises.

Sharma led research on AI safeguards at Anthropic, a company that positions itself as safety-focused and develops the Claude chatbot. His work included examining AI alignment with human values, mitigating AI-assisted bioterrorism risks and studying how AI assistants could influence human behavior.

In his resignation letter, Sharma said he had repeatedly seen how difficult it is for organizations to let their values fully guide their actions amid growing industry pressures. While he described his time at Anthropic positively, he said the moment called for personal reflection.

He announced he would return to the UK to pursue writing and study poetry, adding that he plans to “become invisible” for a period of time.

His departure comes during heightened debate over AI safety, commercialization and regulation, as companies race to deploy increasingly powerful systems while facing scrutiny over ethics and long-term risks.