Anthropic and OpenAI Report Findings of Joint AI Safety Tests

OpenAI and Anthropic — rivals in the AI space who guard their proprietary systems — joined forces for a misalignment evaluation, safety testing each other’s models to identify when and how they fall short of human values. Among the findings: reasoning models including Anthropic’s Claude Opus 4 and Sonnet 4, and OpenAI’s o3 and o4-mini resist jailbreaks, while conversational models like GPT-4.1 were susceptible to prompts or techniques intended to bypass safety protocols. Although the test results were unveiled as users complain chatbots have become overly sycophantic, the tests were “primarily interested in understanding model propensities for harmful action,” per OpenAI. Continue reading Anthropic and OpenAI Report Findings of Joint AI Safety Tests

OpenAI’s Affordable GPT-4.1 Models Place Focus on Coding

OpenAI has launched a new series of multimodal models dubbed GPT-4.1 that represent what the company says is a leap in small model performance, including longer context windows and improvements in coding and instruction following. Geared to developers and available exclusively via API (not through ChatGPT), the 4.1 series comes in three variations: in addition to the flagship GPT‑4.1, GPT‑4.1 mini and GPT‑4.1 nano, OpenAI’s first nano model. Unlike Web-connected models (which have “retrieval-augmented generation,” or RAG) and can access up-to-date information, they are static knowledge models. Continue reading OpenAI’s Affordable GPT-4.1 Models Place Focus on Coding