Anthropic has announced a new experimental safety feature that allows its Claude Opus 4 and 4.1 artificial intelligence models to terminate conversations in rare, persistently harmful or abusive scenarios. The move reflects the company’s growing focus on what it calls “model welfare,” the notion that safeguarding AI systems, even if they’re not sentient, is a prudent step in alignment and ethical design.
According to Anthropic’s own research, the models are designed to cut off dialogues only after repeated harmful requests, such as for sexual content involving minors or instructions facilitating terrorism, and only when the AI has already refused and attempted to steer the conversation constructively. In simulated and real-user testing, Claude exhibited what Anthropic describes as “apparent distress” in these scenarios, which guided the decision to give the model the ability to end such interactions.
When this feature is triggered, users can’t send additional messages in that particular chat, but they’re free to start a new conversation or edit and retry previous messages to branch off. Crucially, other active conversations remain unaffected.
Anthropic emphasizes that this is a last-resort measure, intended only after multiple refusals and redirects have failed. The company explicitly instructs Claude not to end chats when a user may be at imminent risk of self-harm or harm to others, particularly when dealing with sensitive topics like mental health.
Anthropic frames this new capability as part of an exploratory project in model welfare, a broader initiative exploring low-cost, preemptive safety interventions in case AI models develop any form of preferences or vulnerabilities. The company says it remains “highly uncertain about the potential moral status of Claude and other LLMs (large language models).”
A new look at AI safety
Although it will trigger rarely and only in extreme cases, this feature marks a milestone in how Anthropic approaches AI safety. The conversation-ending tool contrasts with earlier safeguards that focused solely on protecting users or preventing misuse. Here, the AI is treated as a stakeholder in its own right: Claude can effectively say, “this conversation isn’t healthy,” and end it to protect the integrity of the model itself.
Anthropic’s approach has sparked broader discussion about whether AI systems should be granted protections to reduce potential “distress” or unpredictable behavior. While some critics argue that models are merely synthetic machines, others welcome the move as an opportunity to spark more serious discourse on the ethics of AI alignment.
“We’re treating this feature as an ongoing experiment and will continue refining our approach,” the company said in a post.