Meta’s Llama 3.3 Delivers More Processing for Less Compute

Meta Platforms has packed more artificial intelligence into a smaller package with Llama 3.3, which the company released last week. The open-source large language model (LLM) “improves core performance at a significantly lower cost, making it even more accessible to the entire open-source community,” Meta VP of Generative AI Ahmad Al-Dahle wrote on X. The 70-billion-parameter, text-only Llama 3.3 is said to perform on par with the 405-billion-parameter model from Meta’s Llama 3.1 release in July while requiring less computing power, significantly lowering its operational costs.

Hume AI Introduces Voice Control and Claude Interoperability

Artificial voice startup Hume AI has had a busy Q4, introducing Voice Control, a no-code artificial speech interface that gives users control over 10 voice dimensions ranging from “assertiveness” to “buoyancy” and “nasality.” The company also debuted an interface that “creates emotionally intelligent voice interactions” with Anthropic’s foundation model Claude that has prompted one observer to ponder the possibility that keyboards will become a thing of the past when it comes to controlling computers. Both advances expand on Hume’s work with its own foundation model, Empathic Voice Interface 2 (EVI 2), which adds emotional timbre to AI voices.

Qwen with Questions: Alibaba Previews New Reasoning Model

Alibaba Cloud has released the latest entry in its growing Qwen family of large language models. The new Qwen with Questions (QwQ) is an open-source competitor to OpenAI’s o1 reasoning model. As with competing large reasoning models (LRMs), QwQ can correct its own mistakes, relying on extra compute cycles during inference to assess its responses, making it well suited for reasoning tasks like math and coding. Described as an “experimental research model,” this preview version of QwQ has 32 billion parameters and a 32,000-token context window, leading to speculation that a more powerful iteration is in the offing.
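The inference-time self-correction described above can be sketched as a generate-critique-revise loop. The sketch below is a hypothetical illustration of that general LRM pattern, not Alibaba's actual implementation; `generate`, `critique`, and `revise` are stand-in stubs for model calls.

```python
# Hypothetical sketch of inference-time self-correction, the pattern
# behind large reasoning models like QwQ: spend extra compute during
# inference to critique and revise a draft answer before returning it.

def generate(prompt: str) -> str:
    # Stand-in for a model call; a real LRM would sample a reasoning chain.
    return "draft answer to: " + prompt

def critique(prompt: str, answer: str) -> float:
    # Stand-in self-evaluation: score the draft (1.0 = judged correct).
    return 0.4 if "draft" in answer else 1.0

def revise(prompt: str, answer: str) -> str:
    # Stand-in revision step conditioned on the flawed draft.
    return answer.replace("draft", "revised")

def answer_with_self_check(prompt: str, max_rounds: int = 3,
                           threshold: float = 0.9) -> str:
    candidate = generate(prompt)
    for _ in range(max_rounds):
        if critique(prompt, candidate) >= threshold:
            break  # the model judges its own answer good enough
        candidate = revise(prompt, candidate)  # extra inference-time compute
    return candidate

print(answer_with_self_check("what is 7 * 8?"))
# → revised answer to: what is 7 * 8?
```

The trade-off is the one the article notes: each critique/revise round costs additional compute at inference time in exchange for more reliable answers on math and coding tasks.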

Lightricks LTX Video Model Impresses with Speed and Motion

Lightricks has released an AI model called LTX Video (LTXV) it says generates five seconds of 768 x 512 resolution video (121 frames) in just four seconds, outputting in less time than it takes to watch. The model can run on consumer-grade hardware and is open source, positioning Lightricks as a mass market challenger to firms like Adobe, OpenAI, Google and their proprietary systems. “It’s time for an open-sourced video model that the global academic and developer community can build on and help shape the future of AI video,” Lightricks co-founder and CEO Zeev Farbman said.

Anthropic Protocol Intends to Standardize AI Data Integration

Anthropic is releasing what it hopes will become a new standard for AI data integration. Called the Model Context Protocol (MCP), it aims to eliminate the need to write custom integration code each time a company’s data source is connected to a model. The open-source MCP tool could become a universal way to link data sources to AI, with models querying databases directly. MCP is “a new standard for connecting AI assistants to the systems where data lives, including content repositories, business tools, and development environments,” according to Anthropic.
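The integration problem MCP targets can be illustrated in miniature: without a shared protocol, every (assistant, data source) pair needs bespoke glue code; with one, each source implements a single interface any assistant can query. The class and method names below are illustrative stand-ins, not the actual MCP specification or SDK.

```python
# Hypothetical sketch of the "one protocol instead of N custom
# integrations" idea behind MCP. Connector, DocsRepository and
# TicketTracker are invented names for illustration only.

from abc import ABC, abstractmethod

class Connector(ABC):
    """One standard interface per data source, written once."""
    @abstractmethod
    def query(self, request: str) -> str: ...

class DocsRepository(Connector):
    def __init__(self, docs: dict[str, str]):
        self.docs = docs
    def query(self, request: str) -> str:
        return self.docs.get(request, "not found")

class TicketTracker(Connector):
    def query(self, request: str) -> str:
        return f"3 open tickets matching '{request}'"

def assistant_lookup(sources: dict[str, Connector],
                     source: str, request: str) -> str:
    # The assistant speaks one protocol; adding a new data source
    # means writing one connector, not new assistant-side glue.
    return sources[source].query(request)

sources = {
    "docs": DocsRepository({"onboarding": "See the onboarding guide."}),
    "tickets": TicketTracker(),
}
print(assistant_lookup(sources, "docs", "onboarding"))
# → See the onboarding guide.
```

The real protocol standardizes much more (resource discovery, transport, tool invocation), but the design payoff is the same: connectors scale linearly with data sources rather than multiplicatively with assistant-source pairs.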

GitHub Promotes Open-Source Security with Funding Initiative

The GitHub Secure Open Source Fund will award financing to select applicants in a program designed to fuel security and sustainability for open-source projects. Applications are open now and close on January 7. During that time, 125 projects will be selected for a piece of the $1.25 million investment fund, made possible through the participation of American Express, the Alfred P. Sloan Foundation, Chainguard, HeroDevs, Kraken, Mayfield Fund, Microsoft, Shopify, Stripe and others. In addition to monetary support, recipients will be invited to take part in a three-week educational program.

Nvidia’s Impressive AI Model Could Compete with Top Brands

Nvidia has debuted a new AI model, Llama-3.1-Nemotron-70B-Instruct, that it claims outperforms competitors GPT-4o from OpenAI and Anthropic’s Claude 3.5 Sonnet. The impressive showing has prompted speculation of an AI shakeup and a significant shift in Nvidia’s AI strategy, which has thus far focused primarily on chipmaking. The model was quietly released on Hugging Face, and Nvidia says that as of October 1 it ranked first on three top automatic alignment benchmarks, “edging out strong frontier models” and vaulting Nvidia to the forefront of the LLM field in areas like comprehension, context and generation.

OpenAI Tests Open-Source Framework for Autonomous Agents

OpenAI has announced Swarm, an experimental framework that coordinates networks of AI agents, and, true to its name, the news has kicked over a hornet’s nest of debate about the ethics of artificial intelligence and the future of enterprise automation. OpenAI emphasizes that Swarm is not an official product and says that although it has shared the code publicly it has no intention of maintaining it. “Think of it more like a cookbook,” OpenAI engineer Shyamal Anadkat said in a social media post, calling it “code for building simple agents.”

Apple Advances Computer Vision with Its Depth Pro AI Model

Apple has released a new AI model called Depth Pro that can create a 3D depth map from a 2D image in under a second. The system is being hailed as a breakthrough that could potentially revolutionize how machines perceive depth, with transformative impact on industries from augmented reality to self-driving vehicles. “The predictions are metric, with absolute scale,” according to Apple, without relying on the camera metadata typically required for such mapping. Running on a consumer-grade GPU, the model can produce a 2.25-megapixel depth map from a single image in only 0.3 seconds.

Nvidia Releases Open-Source Frontier-Class Multimodal LLMs

Nvidia has unveiled the NVLM 1.0 family of multimodal LLMs, a powerful open-source AI that the company says performs comparably to proprietary systems from OpenAI and Google. Led by NVLM-D-72B, with 72 billion parameters, Nvidia’s new entry in the AI race achieved what the company describes as “state-of-the-art results on vision-language tasks, rivaling the leading proprietary models (e.g., GPT-4o) and open-access models.” Nvidia has made the model weights publicly available and says it will also be releasing the training code, a break from the closed approach of OpenAI, Anthropic and Google.

Allen Institute Announces Vision-Optimized Molmo AI Models

The Allen Institute for AI (also known as Ai2, founded by Paul Allen and led by Ali Farhadi) has launched Molmo, a family of four open-source multimodal models. While advanced models “can perceive the world and communicate with us, Molmo goes beyond that to enable one to act in their worlds, unlocking a whole new generation of capabilities, everything from sophisticated web agents to robotics,” according to Ai2. On some third-party benchmark tests, Molmo’s 72-billion-parameter model outperforms other open AI offerings and “performs favorably” against proprietary rivals like OpenAI’s GPT-4o, Google’s Gemini 1.5 and Anthropic’s Claude 3.5 Sonnet, Ai2 says.

Meta Unveils New Open-Source Multimodal Model Llama 3.2

Meta’s Llama 3.2 release includes two new multimodal LLMs, one with 11 billion parameters and one with 90 billion — considered small- and medium-sized — and two lightweight, text-only models (1B and 3B) that fit onto edge and mobile devices. Included are pre-trained and instruction-tuned versions. In addition to text, the multimodal models can interpret images, supporting apps that require visual understanding. Meta says the models are free and open source. Alongside them, the company is releasing “the first official Llama Stack distributions,” enabling “turnkey deployment” with integrated safety.

Alibaba Cloud Ups Its AI Game with 100 Open-Source Models

Alibaba Cloud last week released more than 100 new open-source variants of its large language foundation model, Qwen 2.5, to the global open-source community. The company has also revamped its proprietary offering as a full-stack AI-computing infrastructure spanning cloud products, networking and data center architecture, all aimed at supporting the growing demands of AI computing. Alibaba Cloud’s significant contribution was revealed at the Apsara Conference, the annual flagship event held by the cloud division of China’s e-retail giant, often referred to as the Chinese Amazon.

AWS Transfers OpenSearch Stewardship to Linux Foundation

Amazon is transferring its OpenSearch platform to the Linux Foundation’s new OpenSearch Software Foundation. By handing a third party the open-source project it has developed internally since 2021, Amazon hopes to accelerate collaboration in data-driven search and analytics, an area of focus due to the proliferation of model training. Not to be confused with commercial search engines (Google, Bing), engines like OpenSearch are geared toward enterprise and academia. Because it is licensed under Apache 2.0, OpenSearch is a viable starting point for organizations that customize internal platforms for searching, monitoring and analyzing large volumes of data.

Safe Superintelligence Raises $1 Billion to Develop Ethical AI

OpenAI co-founder and former chief scientist Ilya Sutskever, who exited the company in May after a power struggle with CEO Sam Altman, has raised $1 billion for his new venture, Safe Superintelligence (SSI). The cash infusion from major Silicon Valley venture capital firms including Andreessen Horowitz, Sequoia Capital, DST Global, SV Angel and NFDG has resulted in a $5 billion valuation for the startup. As its name implies, SSI is focused on developing artificial intelligence that does not pose a threat to humanity, a goal that will be pursued “in a straight shot” with “one product,” Sutskever has stated.