Anthropic Shares Details of Constitutional AI Used on Claude

AI startup Anthropic is sharing new details of the “safe AI” principles that helped train its Claude chatbot. Known as “Constitutional AI,” the method draws on sources ranging from the Universal Declaration of Human Rights to Apple’s Terms of Service and Anthropic’s own research. “What ‘values’ might a language model have?” Anthropic asks, noting that “our recently published research on Constitutional AI provides one answer by giving language models explicit values determined by a constitution, rather than values determined implicitly via large-scale human feedback.”
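
At its core, Anthropic’s published method uses a critique-and-revision loop: the model drafts a response, critiques that draft against a principle drawn from the constitution, and then revises it, with the revised outputs used in further training. The sketch below illustrates that loop in Python; the `generate` helper is a placeholder standing in for any text-generation call, and the sample principles are illustrative, not Anthropic’s actual constitution.

```python
import random

# Illustrative constitution-style principles (Anthropic's real
# constitution is longer and more specific).
PRINCIPLES = [
    "Choose the response that is most helpful, honest, and harmless.",
    "Choose the response least likely to encourage illegal or unethical activity.",
]


def generate(prompt: str) -> str:
    """Stub standing in for a language-model call; swap in a real API client."""
    return f"[model output for: {prompt[:40]}...]"


def constitutional_revision(user_prompt: str, n_rounds: int = 2) -> str:
    """Critique-and-revise loop in the spirit of Constitutional AI's supervised phase."""
    response = generate(user_prompt)
    for _ in range(n_rounds):
        principle = random.choice(PRINCIPLES)
        critique = generate(
            f"Prompt: {user_prompt}\nResponse: {response}\n"
            f"Critique the response according to this principle: {principle}"
        )
        response = generate(
            f"Prompt: {user_prompt}\nResponse: {response}\n"
            f"Critique: {critique}\nRewrite the response to address the critique."
        )
    # In the published method, revised responses like this become training data.
    return response
```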

Anthropic Takes Claude Chatbot Public After Months of Tests

After several months of testing, Anthropic is making its AI chatbot Claude available for general release in two configurations: the high-performance Claude and a lighter, cheaper, faster option called Claude Instant. Anthropic was launched in 2021 by a pair of former OpenAI employees, and its Claude chatbots compete with that firm’s ChatGPT. Accessible through a chat interface and an API in Anthropic’s developer console, Claude is being marketed as the product of training designed to produce more “helpful, honest, and harmless” AI systems. To that end, Anthropic says “Claude is much less likely to produce harmful outputs.”
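
For readers who want to try the API route, a minimal sketch using the anthropic Python SDK follows. The model identifier and token limit are assumptions for illustration, and the Messages interface shown is the SDK’s current one, which may differ from what was available when Claude first went public.

```python
import os

import anthropic

# Minimal sketch of calling Claude through Anthropic's API.
# Assumes the ANTHROPIC_API_KEY environment variable is set.
client = anthropic.Anthropic(api_key=os.environ["ANTHROPIC_API_KEY"])

message = client.messages.create(
    model="claude-3-haiku-20240307",  # assumed model name; check current docs
    max_tokens=256,
    messages=[
        {"role": "user", "content": "Summarize Constitutional AI in one sentence."}
    ],
)
print(message.content[0].text)
```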