Decart AI’s Mirage Transforms Live-Stream Video in Real Time

Startup Decart AI is showcasing MirageLSD, a “world transformation model” that can change the look of a camera feed, recorded video or game in real time. Built on the company’s Live-Stream Diffusion (LSD) model, Mirage debuted last week as a demo on the company website, with iOS and Android apps scheduled for release this week. Decart says Mirage makes it possible to manipulate video continuously, in real time, with minimal latency. The technology has created buzz as a potential disruptor in the live-streaming space, and it looks like it could be an impactful special effects tool as well.

Wired calls the Mirage results “mind-bending” and provides examples of its use, writing that while “tools like OpenAI’s Sora can conjure increasingly realistic video footage with a text prompt, Mirage now makes it possible to manipulate video in real time.”

The Mirage website makes some default themes available, including “anime,” “cyberpunk,” “Dubai skyline” and “Versailles Palace.”

In a private demo for Wired, Decart co-founder and CEO Dean Leitersdorf “uploads a clip of someone playing ‘Fortnite’ and the scene transforms from the familiar ‘Battle Royale’ world into a version set underwater.”

Imagining “the tool becoming popular on platforms like TikTok or Instagram,” Wired also suggests it holds promise for games, noting that in November 2024 Decart “demoed a game called ‘Oasis’ that used a similar approach to Mirage to generate a playable ‘Minecraft’-like world on the fly.”

Manipulating live video in real time is “computationally taxing,” Wired writes, explaining Decart’s approach, which is also discussed in a technical report issued by the company.

“Mirage generates 20 frames per second at 768×432 resolution and a latency of 100 milliseconds per frame — good enough for a decent-quality TikTok clip,” Wired reports, explaining that Decart is “working toward full HD and 4K output and finding new ways for users to control their videos.”
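A quick back-of-the-envelope calculation, offered here as an illustration rather than anything from Decart’s report, shows how those two figures relate: throughput (frames per second) and latency (delay before a frame appears) are separate budgets, and the 100 ms figure implies roughly two frames are being processed at any moment if the pipeline is kept full.

```python
# Reported figures: 20 fps at 768x432, 100 ms of latency per frame.
fps = 20
frame_interval_ms = 1000 / fps   # time budget between successive frames
latency_ms = 100                 # end-to-end delay before a frame appears

print(frame_interval_ms)              # 50.0 ms of throughput budget per frame
print(latency_ms / frame_interval_ms) # ~2.0 frames "in flight" when pipelined
```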

The Decoder also delves into the technology, writing that “AI video models are often slow and typically only manage to generate short, five- to ten-second clips before the visuals start to degrade.” By contrast, the MirageLSD model “creates each frame individually” rather than generating an entire video sequence at once.
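That frame-at-a-time approach can be sketched as a causal loop. In this sketch, `model_step` is a hypothetical stand-in for the model’s per-frame generation call, not Decart’s actual API; the point is only that each frame depends on the frames already produced, so the stream has no fixed length and the prompt can change mid-stream.

```python
def generate_stream(model_step, first_frame, prompt, num_frames):
    """Generate video one frame at a time.

    Each new frame is conditioned only on the frames produced so far and
    the current prompt, unlike models that generate a whole clip at once.
    """
    frames = [first_frame]
    for _ in range(num_frames - 1):
        frames.append(model_step(frames, prompt))
    return frames

# Toy stand-in: "renders" each frame as a label recording its index.
demo_step = lambda history, prompt: f"{prompt}-frame-{len(history)}"
clip = generate_stream(demo_step, "anime-frame-0", "anime", 4)
```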

Decart has devoted considerable effort to techniques that keep output consistent over longer sessions, preventing the visuals from drifting into incoherence, a common failure mode for frame-by-frame video generation. According to The Decoder, two principal Mirage training techniques are diffusion forcing and history augmentation.
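History augmentation, as described, means training the model on deliberately corrupted context so it learns to correct small errors instead of compounding them. Here is a minimal sketch of building one such training pair; the corruption scheme and function names are illustrative assumptions, not Decart’s implementation.

```python
import random

def corrupt(frame, noise_level):
    # Blend pixel values toward random noise to mimic accumulated drift.
    return [(1 - noise_level) * v + noise_level * random.random()
            for v in frame]

def make_training_pair(clean_frames, t, max_noise=0.5):
    """History-augmentation example: noisy past frames, clean target.

    The model sees a corrupted history up to frame t but must still
    predict the clean frame t, so at inference time its own imperfect
    outputs do not send the stream drifting into incoherence.
    """
    level = random.uniform(0.0, max_noise)
    history = [corrupt(f, level) for f in clean_frames[:t]]
    return history, clean_frames[t]
```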

Unlike prior approaches, Decart’s custom LSD model “supports fully interactive video synthesis — allowing continuous prompting, transformation and editing as video is generated,” the company’s technical report explains.
