OpenAI Unveils Faster AI Model, Desktop Version of ChatGPT

OpenAI CTO Mira Murati announced during a live-streamed event today that the company is launching an updated version of the GPT-4 model that powers OpenAI’s popular chatbot. The new flagship AI model, GPT-4o, is reportedly “much faster” and offers improved text, voice and vision capabilities. Murati said GPT-4o will be free to all users, while Plus users will enjoy “up to five times the capacity limits” available to free users. According to OpenAI, the new AI model “can respond to audio inputs in as little as 232 milliseconds, with an average of 320 milliseconds, which is similar to human response time in a conversation.”

“It matches GPT-4 Turbo performance on text in English and code, with significant improvement on text in non-English languages, while also being much faster and 50 percent cheaper in the API,” explains the company announcement. “GPT-4o is especially better at vision and audio understanding compared to existing models.”

“OpenAI CEO Sam Altman posted that the model is ‘natively multimodal,’ which means the model could generate content or understand commands in voice, text, or images,” The Verge reports. “Developers who want to tinker with GPT-4o will have access to the API, which is half the price and twice as fast as GPT-4 Turbo, Altman added on X.”

Tom’s Guide listed some of today’s key moments:

  • Free ChatGPT users to get access to custom chatbots for the first time
  • A new, more efficient GPT-4o model will power the free and paid versions
  • GPT-4o is multimodal by design, able to analyze images, video and speech
  • The multimodal model will power a new ChatGPT Voice that is more human-like
  • ChatGPT Desktop app launching with voice and vision capabilities
  • Everything gradually launching over the coming weeks

“Prior to today’s GPT-4o launch, conflicting reports predicted that OpenAI was announcing an AI search engine to rival Google and Perplexity, a voice assistant baked into GPT-4, or a totally new and improved model, GPT-5,” The Verge adds. GPT-4o was announced one day ahead of tomorrow’s Google I/O conference, “where we expect to see the launch of various AI products from the Gemini team.”

There Are Two Things from Our Announcement Today I Wanted to Highlight, Sam Altman, 5/13/24
OpenAI Just Killed Siri, The Atlantic, 5/13/24
OpenAI Unveils New ChatGPT That Listens, Looks and Talks, The New York Times, 5/13/24
OpenAI Says It Can Now Detect Images Spawned by Its Software, The Wall Street Journal, 5/7/24
OpenAI Is Readying a Search Product to Rival Google, Perplexity, Bloomberg, 5/7/24
OpenAI to Steer Content Authentication Group C2PA, Label Sora Videos as AI, VentureBeat, 5/7/24
OpenAI Launches Voice Assistant Inspired by Hollywood Vision of AI, The Wall Street Journal, 5/13/24
OpenAI’s GPT-4o Model Gives ChatGPT a Snappy, Flirty Upgrade, Wired, 5/13/24
GPT-4o First Reactions: ‘Essentially AGI’, VentureBeat, 5/13/24
