Microsoft AI Introduces Proprietary Foundation, Voice Models

Microsoft is rolling out its first internally developed AI models. Branded Microsoft AI (MAI), the two initial releases are MAI-Voice-1, a “highly expressive and natural speech generation model,” and MAI-1-preview, a mixture-of-experts LLM designed for consumer-facing applications. The move signals Microsoft’s intent to move beyond exclusive reliance on OpenAI models to power its Copilot assistant and other applications. By striking out on its own, Microsoft is also paving a smoother road for OpenAI’s transition to a for-profit entity, a restructuring OpenAI is scheduled to initiate by the end of the year. Continue reading Microsoft AI Introduces Proprietary Foundation, Voice Models

ElevenLabs Text-to-Voice AI Tools Now Available for Mobile

ElevenLabs is bringing its powerful AI voice tools to mobile. Previously, the company’s apps and voice libraries were available only via the Web. Now iOS and Android users can tap ElevenLabs tech on the go with a “faster, intuitive, more powerful experience built natively for mobile” rather than awkwardly through a mobile browser. Combining mobility with creativity, the app lets users create realistic voiceovers for social media or narrate video using ElevenLabs’ text-to-speech models — including Eleven v3, now in alpha, which lets users fine-tune vocalizations using inline tags. The company has also introduced a new voice assistant, 11ai. Continue reading ElevenLabs Text-to-Voice AI Tools Now Available for Mobile
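To make the v3 tagging concrete, here is a minimal sketch of how a developer might assemble a text-to-speech request with inline vocalization tags. The endpoint path and the `eleven_v3` model id follow ElevenLabs’ public REST API as commonly documented, but treat them as assumptions; the voice id is a placeholder, and no request is actually sent.

```python
import json

# Assumed ElevenLabs TTS endpoint; {voice_id} is filled per call.
ELEVENLABS_TTS_URL = "https://api.elevenlabs.io/v1/text-to-speech/{voice_id}"

def build_tts_request(voice_id: str, text: str, model_id: str = "eleven_v3"):
    """Return the URL and JSON payload for a text-to-speech call.

    Inline tags such as [whispers] or [excited] are how Eleven v3
    (alpha) lets users fine-tune the vocal delivery of a passage.
    """
    url = ELEVENLABS_TTS_URL.format(voice_id=voice_id)
    payload = {"text": text, "model_id": model_id}
    return url, payload

# Placeholder voice id for illustration only.
url, payload = build_tts_request(
    "PLACEHOLDER_VOICE_ID",
    "[excited] Our new app is live! [whispers] Download it today.",
)
print(json.dumps(payload))
```

In practice the payload would be POSTed with an `xi-api-key` header and the response streamed as audio; the sketch stops at request construction so nothing here depends on an account or network access.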

Hume AI Introduces Voice Control and Claude Interoperability

Artificial voice startup Hume AI has had a busy Q4, introducing Voice Control, a no-code artificial speech interface that gives users control over 10 voice dimensions ranging from “assertiveness” to “buoyancy” and “nasality.” The company also debuted an interface that “creates emotionally intelligent voice interactions” with Anthropic’s foundation model Claude, prompting one observer to suggest that keyboards could become a thing of the past as a way of controlling computers. Both advances build on Hume’s own foundation model, Empathic Voice Interface 2 (EVI 2), which adds emotional timbre to AI voices. Continue reading Hume AI Introduces Voice Control and Claude Interoperability
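A dimension-based voice interface like the one described above can be pictured as a small settings object: each named dimension gets a bounded adjustment. The dimension names below come from Hume’s announcement, but the normalized range, the clamping behavior, and the function names are hypothetical illustrations, not Hume’s actual API.

```python
def clamp(value: float, lo: float = -1.0, hi: float = 1.0) -> float:
    """Keep a dimension adjustment inside an assumed [-1, 1] range."""
    return max(lo, min(hi, value))

def voice_settings(**dimensions: float) -> dict:
    """Build a mapping of voice-dimension names to clamped adjustments.

    Dimension names like "assertiveness", "buoyancy", and "nasality"
    mirror those Hume exposes in Voice Control; the numeric scale is
    an assumption for illustration.
    """
    return {name: clamp(value) for name, value in dimensions.items()}

settings = voice_settings(assertiveness=0.4, buoyancy=1.7, nasality=-0.2)
print(settings)  # buoyancy is clamped to the assumed maximum of 1.0
```

The point of the sketch is the shape of the interface: a no-code slider UI and a programmatic settings dict are two views of the same ten-dimensional control surface.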

Deepgram’s Speech Portfolio Now Includes Human-Like Aura

Deepgram’s new Aura software turns text into generative audio with a “human-like voice.” The nine-year-old voice recognition company has raised nearly $86 million to date on the strength of its Voice AI platform. Aura is an extremely low-latency text-to-speech model built for voice AI agents, the company says. Developers can pair it with Deepgram’s Nova-2 speech-to-text API to “easily (and quickly) exchange real-time information between humans and LLMs to build responsive, high-throughput AI agents and conversational AI applications,” according to Deepgram. Continue reading Deepgram’s Speech Portfolio Now Includes Human-Like Aura
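The Nova-2 plus Aura pairing can be sketched as the two REST calls one turn of a voice agent would compose: transcribe incoming audio, run the text through an LLM, then synthesize the reply. The `/v1/listen` and `/v1/speak` endpoints and model names follow Deepgram’s public API documentation, but the specific Aura voice (`aura-asteria-en`) is one published example and should be treated as an assumption; the sketch builds the request specs without sending anything.

```python
import json

# Assumed Deepgram endpoints per its public API docs.
DEEPGRAM_STT_URL = "https://api.deepgram.com/v1/listen"
DEEPGRAM_TTS_URL = "https://api.deepgram.com/v1/speak"

def stt_request(model: str = "nova-2") -> dict:
    """Request spec for Nova-2 transcription (body would be raw audio)."""
    return {"url": DEEPGRAM_STT_URL, "params": {"model": model}}

def tts_request(text: str, model: str = "aura-asteria-en") -> dict:
    """Request spec for Aura text-to-speech synthesis."""
    return {
        "url": DEEPGRAM_TTS_URL,
        "params": {"model": model},
        "json": {"text": text},
    }

# One agent turn: listen, think (LLM step elided), speak.
listen = stt_request()
speak = tts_request("Sure, I can help with that.")
print(json.dumps(speak["json"]))
```

Both calls would carry a `Token`-style Authorization header in practice; keeping the sketch to request construction makes the low-latency round trip Deepgram describes (audio in, text through an LLM, audio out) easy to see in one place.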