By Paula Parisi, July 21, 2025
Adobe’s Firefly Video model has introduced new updates, including Generate Sound Effects, in beta, and a text-to-avatar feature that lets users turn scripts into avatar-led videos “in just a few clicks.” Firefly becomes the second video model to generate audio, joining Veo 3, although unlike Google’s AI video tool, Firefly does not yet generate dialogue. What it can do is output foley-like sound and sound effects, while text-to-avatar can generate speech. As with Firefly’s generative visuals, Adobe says Generate Sound Effects is “commercially safe,” meaning it is trained only on licensed or publicly available material. Continue reading Adobe Adds Generative Audio and Text-to-Avatar to Firefly AI
By Paula Parisi, July 11, 2025
Moonvalley, the AI startup behind Marey, a high-quality video generator trained exclusively on licensed content, has put the product into general release. The credits-based subscription pricing ranges from $15 to $150 per month. In addition to ethical training on 1080p native video, Marey takes a non-traditional approach to its user interface, eschewing prompts for what the company says is a more creatively intuitive process. “Directors need precise control over every creative decision, plus legal confidence for commercial use. Today we’re delivering both,” says Moonvalley CEO and co-founder Naeem Talukdar. Continue reading Moonvalley’s Production-Tailored AI Marey Publicly Released
By Paula Parisi, June 27, 2025
Creative Commons, the non-profit that pioneered sharing content through permissive licensing, is launching CC Signals, a framework to signal permissions for content use by machines in the age of artificial intelligence. “They are both a technical and legal tool and a social proposition: a call for a new pact between those who share data and those who use it to train AI models,” says Creative Commons CEO Anna Tumadóttir, noting the signals are “based on a set of limited but meaningful options shaped in the public interest.” The framework is designed to bridge the openness of the Internet with AI’s insatiable demand for training data, according to Creative Commons. Continue reading Creative Commons Introduces New Licensing Platform for AI
By Paula Parisi, June 26, 2025
Google DeepMind has released a new vision-language-action (VLA) model, Gemini Robotics On-Device, that can operate robots locally, controlling their movements without requiring an Internet connection or the cloud. Google says the software provides “general-purpose dexterity and fast task adaptation,” building on the March release of the first Gemini Robotics VLA model, which brought “Gemini 2.0’s multimodal reasoning and real-world understanding into the physical world.” Since the model operates independently of a data network, it’s useful for latency-sensitive applications as well as low- or no-connectivity environments. Google is also releasing a Gemini Robotics SDK for developers. Continue reading Google Gemini Robotics On-Device Controls Robots Locally
By Paula Parisi, May 20, 2025
Stability AI has released an AI model that generates stereo audio and is quick and lightweight enough to run on mobile devices. Called Stable Audio Open Small, the open-source model is the result of a collaboration between the AI startup and chipmaker Arm. While there are several AI-powered apps that generate audio, Suno and Udio among them, most rely on cloud processing and thus can’t be used offline. Stability says Stable Audio Open Small is also IP safe because it was trained entirely on audio from the royalty-free libraries Free Music Archive and Freesound. Continue reading Stability AI Releases a Fast Stereo Audio-Generator for Mobile
By Paula Parisi, March 7, 2025
Staircase Studios AI, the film, television and gaming studio launched by “Divergent” franchise producer Pouya Shahbazian, has announced its investors and shared plans to produce more than 30 projects at budgets under $500,000 over the next 3-4 years. The company will use a proprietary AI workflow it invented, called ForwardMotion, which it says will revolutionize film and television production. It has acquired multiple pieces of IP, including more than 20 scripts that have appeared on the Black List, which tallies the most popular unproduced scripts. Continue reading Staircase Studios AI Plans 30 Projects Over Next 3 to 4 Years
By Paula Parisi, January 7, 2025
YouTube has partnered with Creative Artists Agency to develop technology that will help celebrities identify and remove deepfake videos created by AI to exploit their images. YouTube announced the tech in September and has now gained CAA’s support in the form of “critical feedback to help us build our detection systems and refine the controls.” In exchange, “several of the world’s most influential figures will have access to early-stage technology designed to identify and manage AI-generated content that features their likeness, including their face, on YouTube at scale,” the streamer announced. CAA’s clients include celebrity talent spanning acting, music and sports. Continue reading CAA to Help YouTube Develop an AI Deepfake Removal Tool
By Paula Parisi, January 6, 2025
Artist-led generative AI film and animation studio Asteria has joined with AI research startup Moonvalley to create Marey, “the entertainment industry’s first clean foundational AI model.” Trained exclusively on “ethically sourced data” owned by Asteria, Marey is named after Étienne-Jules Marey, the French physiologist who helped put the “motion” in motion pictures. Asteria says Marey is being trained for a 2025 debut targeting major Hollywood productions, with a pitch emphasizing that it has not relied and will not rely on scraped data, something Big Tech competitors have taken heat for. Continue reading Asteria and Moonvalley Team Up on Ethical AI for Hollywood
By Paula Parisi, October 25, 2024
Runway is launching Act-One, a motion capture system that uses video and voice recordings to map human facial expressions onto characters using the company’s latest model, Gen-3 Alpha. Runway calls it “a significant step forward in using generative models for expressive live action and animated content.” Compared with past facial capture techniques, which typically require complex rigging, Act-One is driven directly and solely by the performance of an actor, requiring “no extra equipment,” making it more likely to capture and preserve an authentic, nuanced performance, according to the company. Continue reading Runway’s Act-One Facial Capture Could Be a ‘Game Changer’
By Paula Parisi, October 18, 2024
Anthropic, maker of the popular Claude AI chatbot, has updated its Responsible Scaling Policy (RSP), designed and implemented to mitigate the risks of advanced AI systems. The policy was introduced last year and has since been improved, with new protocols added to ensure AI models are developed and deployed safely as they grow more powerful. This latest update offers “a more flexible and nuanced approach to assessing and managing AI risks while maintaining our commitment not to train or deploy models unless we have implemented adequate safeguards,” according to Anthropic. Continue reading Anthropic Updates ‘Responsible Scaling’ to Minimize AI Risks
By Paula Parisi, October 16, 2024
OpenAI has announced Swarm, an experimental framework that coordinates networks of AI agents, and, true to its name, the news has kicked over a hornet’s nest of contentious debate about the ethics of artificial intelligence and the future of enterprise automation. OpenAI emphasizes that Swarm is not an official product and says that, though it has shared the code publicly, it has no intention of maintaining it. “Think of it more like a cookbook,” OpenAI engineer Shyamal Anadkat said in a social media post, calling it “code for building simple agents.” Continue reading OpenAI Tests Open-Source Framework for Autonomous Agents
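For readers wondering what the “cookbook” looks like in practice, the published Swarm repository centers on lightweight agents that can hand a conversation off to one another. The sketch below follows that quickstart pattern; it assumes the experimental `swarm` Python package and uses illustrative agent names, and since OpenAI says it will not maintain the code, details may have drifted.

```python
# Minimal sketch of Swarm's agent-handoff pattern (assumes the experimental
# `swarm` package from OpenAI's public repo; API may change or be removed).
from swarm import Swarm, Agent

spanish_agent = Agent(
    name="Spanish Agent",
    instructions="You only speak Spanish.",
)

def transfer_to_spanish_agent():
    """Tool whose return value tells Swarm which agent should answer next."""
    return spanish_agent

english_agent = Agent(
    name="English Agent",
    instructions="You only speak English.",
    functions=[transfer_to_spanish_agent],  # tools this agent may call
)

client = Swarm()  # wraps an OpenAI chat client; needs an API key in the environment
response = client.run(
    agent=english_agent,
    messages=[{"role": "user", "content": "Hola. ¿Cómo estás?"}],
)
print(response.messages[-1]["content"])
```

The framework itself adds little beyond this orchestration loop, which is why Anadkat frames it as sample code rather than a product.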
By Paula Parisi, September 27, 2024
The European Commission has released a list of more than 100 companies that have become signatories to the EU’s AI Pact. While Google, Microsoft and OpenAI are among them, Apple and Meta are not. The voluntary AI Pact is aimed at eliciting policies on AI deployment during the period before the legally binding AI Act takes full effect. The EU AI Pact focuses on transparency in three core areas: internal AI governance, mapping of high-risk AI systems, and promoting AI literacy and awareness among staff to support ethical development. It is aimed at “relevant stakeholders” across industry, civil society and academia. Continue reading Amazon, Google, Microsoft and OpenAI Join the EU’s AI Pact
By Paula Parisi, September 18, 2024
The OpenAI board’s Safety and Security Committee will become an independent board oversight committee, chaired by Zico Kolter, machine learning department chair at Carnegie Mellon University. The committee will be responsible for “the safety and security processes guiding OpenAI’s model deployment and development.” Three OpenAI board members segue from their current SSC roles to the new committee: Quora founder Adam D’Angelo, former Sony Corporation EVP Nicole Seligman and erstwhile NSA chief Paul Nakasone. OpenAI is currently putting together a new funding round that reportedly aims to value the company at $150 billion. Continue reading OpenAI Bestows Independent Oversight on Safety Committee
By Paula Parisi, September 6, 2024
OpenAI co-founder and former chief scientist Ilya Sutskever, who exited the company in May after a power struggle with CEO Sam Altman, has raised $1 billion for his new venture, Safe Superintelligence (SSI). The cash infusion from major Silicon Valley venture capital firms including Andreessen Horowitz, Sequoia Capital, DST Global, SV Angel and NFDG has resulted in a $5 billion valuation for the startup. As its name implies, SSI is focused on developing artificial intelligence that does not pose a threat to humanity, a goal that will be pursued “in a straight shot” with “one product,” Sutskever has stated. Continue reading Safe Superintelligence Raises $1 Billion to Develop Ethical AI
By Paula Parisi, August 29, 2024
In a move toward increased transparency, San Francisco-based AI startup Anthropic has published the system prompts for three of its most recent large language models: Claude 3 Opus, Claude 3.5 Sonnet and Claude 3 Haiku. The information is now available on the web and in the Claude iOS and Android apps. The prompts are instruction sets that reveal what the models can and cannot do. Anthropic says it will regularly update the information, emphasizing that evolving system prompts do not affect the API. Examples of Claude’s prompts include “Claude cannot open URLs, links, or videos” and, when dealing with images, “avoid identifying or naming any humans.” Continue reading Anthropic Publishes Claude Prompts, Sharing How AI ‘Thinks’