New Anthropic Safety Updates Focus on Claude’s Well-Being

Claude Opus 4 and 4.1 now have the ability to end “abusive” or “harmful” conversations in consumer chat interfaces. Anthropic says the feature was developed as part of its exploratory work on the protection and well-being of its AI models. The company also envisions broader safety uses, although it points out that having a model defensively terminate a chat is an extreme measure, intended for rare cases. “We’re working to identify and implement low-cost interventions to mitigate risks to model welfare,” Anthropic explains, while qualifying that it is unsure whether “such welfare is possible.”

Anthropic Announces Enhanced Claude Enterprise Plan for AI

Anthropic has launched the Claude Enterprise subscription plan to compete with OpenAI’s ChatGPT Enterprise business solution. Focused on security and administrative controls, Claude Enterprise is designed to help organizations collaborate securely with artificial intelligence using proprietary internal data. Pricing will vary with the number of seats and how Claude is used, but it is expected to cost more than Claude Pro and Claude Team ($20 and $25 per month, respectively). Among the advantages Anthropic touts for Claude Enterprise are an expanded 500K-token context window, higher usage capacity, and a native GitHub integration for working on entire codebases.