By Paula Parisi, May 16, 2024
Google has infused search with more Gemini AI, adding expanded AI Overviews and more planning and research capabilities. “Ask whatever’s on your mind or whatever you need to get done — from researching to planning to brainstorming — and Google will take care of the legwork” culling from “a knowledge base of billions of facts about people, places and things,” explained Google and Alphabet CEO Sundar Pichai at the Google I/O developer conference. AI Overviews will roll out to all U.S. users this week. Coming soon are customizable AI Overview options that can simplify language or add more detail. Continue reading Google Ups AI Quotient with Search-Optimized Gemini Model
Anthropic has launched a paid tier catering to business customers as well as a free mobile app for iOS users featuring its chatbot Claude. The generative AI startup — which has backing from Amazon, Google and Salesforce — is positioning itself to compete with companies like OpenAI, Google and Microsoft that focus on enterprise plans for revenue while also offering individual plans. Anthropic’s Team plan starts at $30 per user per month, on par with competing enterprise products, and requires a minimum of five seats. Anthropic has been beta testing Team over the past few quarters in industries including legal, tech and healthcare. Continue reading Anthropic Debuts Enterprise Plan, Free Claude App for iPhone
By ETCentric Staff, April 26, 2024
The trend toward small language models that can efficiently run on a single device instead of requiring cloud connectivity has emerged as a focus for Big Tech companies involved in artificial intelligence. Apple has released the OpenELM family of open-source models as its entry in that field. OpenELM uses “a layer-wise scaling strategy” to efficiently allocate parameters within each layer of the transformer model, resulting in what Apple claims is “enhanced accuracy.” The “ELM” stands for “Efficient Language Models,” and one media outlet couches it as “the future of AI on the iPhone.” Continue reading Apple Unveils OpenELM Tech Optimized for Local Applications
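To picture the layer-wise scaling idea, consider the toy sketch below: instead of giving every transformer layer identical dimensions, attention heads and feed-forward width ramp up with depth. The ranges and the linear ramp are illustrative assumptions, not Apple's published configuration.

```python
# Toy sketch of layer-wise scaling: per-layer widths grow with depth
# rather than staying uniform. All values here are hypothetical.
def layer_configs(num_layers=16, min_heads=4, max_heads=16,
                  min_ffn_mult=1.0, max_ffn_mult=4.0):
    configs = []
    for i in range(num_layers):
        t = i / max(num_layers - 1, 1)  # 0.0 at the first layer, 1.0 at the last
        configs.append({
            "layer": i,
            "num_heads": round(min_heads + t * (max_heads - min_heads)),
            "ffn_multiplier": round(min_ffn_mult + t * (max_ffn_mult - min_ffn_mult), 2),
        })
    return configs

for cfg in layer_configs(num_layers=4):
    print(cfg)
```

The payoff is budget-shaped capacity: the same total parameter count buys width where it is most useful, which is the kind of reallocation Apple credits for the claimed accuracy gains.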
By ETCentric Staff, April 11, 2024
Google is moving its most powerful artificial intelligence model, Gemini 1.5 Pro, into public preview for developers and Google Cloud customers. Gemini 1.5 Pro includes what Google claims is a breakthrough in long context understanding, with the ability to process a context window of 1 million tokens, “opening up new possibilities for enterprises to create, discover and build using AI.” Gemini’s multimodal capabilities allow it to process audio, video, text, code and more, which when combined with long context “enables enterprises to do things that just weren’t possible with AI before,” according to Google. Continue reading Google Offers Public Preview of Gemini Pro for Cloud Clients
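For developers, trying the preview comes down to a short script. Below is a minimal sketch using the google-generativeai Python SDK; the model identifier, file name and prompt are illustrative assumptions rather than prescribed values.

```python
# Minimal sketch: one-shot generation with a long document in context.
# Model name, file and prompt are illustrative, not prescriptive.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # key issued via Google AI Studio
model = genai.GenerativeModel("gemini-1.5-pro-latest")

long_document = open("contract.txt").read()  # hypothetical large text input
print(model.count_tokens(long_document))     # sanity-check against the 1M-token window
response = model.generate_content(["Summarize the key obligations:", long_document])
print(response.text)
```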
By ETCentric Staff, April 4, 2024
Apple has developed a large language model it says has advanced screen-reading and comprehension capabilities. ReALM (Reference Resolution as Language Modeling) is artificial intelligence that can see and read computer screens in context, according to Apple, which says it advances technology essential for a true AI assistant “that aims to allow a user to naturally communicate their requirements to an agent, or to have a conversation with it.” Apple claims that in benchmarks against GPT-3.5 and GPT-4, the smallest ReALM model performed comparably to GPT-4, with its larger models “substantially outperforming” it. Continue reading Apple’s ReALM AI Advances the Science of Digital Assistants
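The core trick, reference resolution as language modeling, can be pictured as flattening on-screen entities into tagged text that an ordinary LLM can reason over. The sketch below is a hypothetical illustration of that framing, not Apple's implementation; all entity names and formats are invented.

```python
# Hypothetical sketch: represent screen entities as numbered text lines so
# a plain language model can resolve "that number" to a concrete entity.
entities = [
    {"id": 1, "type": "phone_number", "text": "555-0123"},
    {"id": 2, "type": "address", "text": "1 Infinite Loop, Cupertino"},
]

def screen_as_text(entities):
    return "\n".join(f"[{e['id']}] ({e['type']}) {e['text']}" for e in entities)

prompt = (
    "Screen entities:\n" + screen_as_text(entities) + "\n"
    "User request: call that number\n"
    "Which entity id does 'that number' refer to?"
)
print(prompt)  # the LLM consuming this prompt should answer: 1
```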
By ETCentric Staff, February 9, 2024
Apple has released MGIE, an open-source AI model that edits images using natural language instructions. MGIE, short for MLLM-Guided Image Editing, can also modify and optimize images. Developed in conjunction with the University of California, Santa Barbara, MGIE is Apple’s first AI model. The multimodal MGIE, which understands both text and image input, can crop, resize, flip and add filters based on text instructions. Apple says its instruction set is easier than those of other AI editing programs, and simpler and faster to learn than a traditional application such as Apple’s own Final Cut Pro. Continue reading Apple Launches Open-Source Language-Based Image Editor
By Paula Parisi, January 25, 2024
Google’s multimodal Gemini large language model will offer chat capabilities that help advertisers build and scale Search campaigns within the Google Ads platform using natural language prompts. “We’ve been actively testing Gemini to further enhance our ads solutions, and we’re pleased to share that Gemini is now powering the conversational experience,” Google said, explaining the functionality is now available in beta to English-language advertisers in the U.S. and UK, and will roll out globally to all English-language advertisers over the next few weeks, with additional languages to follow in the months ahead. Continue reading Conversational Chatbot Optimizes Google Ads, Search Results
By Paula Parisi, January 19, 2024
During this week’s Unpacked event, Samsung introduced Galaxy AI, a suite of artificial intelligence tools designed for the new Galaxy S series smartphones — the Galaxy S24, Galaxy S24+, and Galaxy S24 Ultra. “AI amplifies nearly every experience on the Galaxy S24 series,” including real-time text and call translations, a powerful suite of creative tools in the ProVisual Engine and a new kind of “gestural search that lets users circle, highlight, scribble on or tap anything onscreen” to see related search results. The AI enhancements are largely enabled by a multiyear deal with Google and Qualcomm. Samsung also debuted a wearable accessory, the Galaxy Ring. Continue reading Unpacked: Samsung Intros Galaxy AI with Next Gen S Phones
By Paula Parisi, December 22, 2023
Google has unveiled a new large language model designed to advance video generation. VideoPoet is capable of text-to-video, image-to-video, video stylization, video inpainting and outpainting, and video-to-audio. “The leading video generation models are almost exclusively diffusion-based,” Google says, citing Imagen Video as an example. Google finds this counterintuitive, since “LLMs are widely recognized as the de facto standard due to their exceptional learning capabilities across various modalities.” VideoPoet eschews the diffusion approach of relying on separately trained components in favor of integrating many video generation capabilities in a single LLM. Continue reading VideoPoet: Google Launches a Multimodal AI Video Generator
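One way to picture the single-LLM design is a shared discrete token space: text, video and audio codes live in one vocabulary, so a single autoregressive model predicts across modalities. The offsets and vocabulary sizes below are invented for illustration and are not VideoPoet's actual codebooks.

```python
# Toy illustration of a unified multimodal vocabulary for one LLM.
TEXT_VOCAB, VIDEO_VOCAB, AUDIO_VOCAB = 32_000, 8_192, 4_096  # invented sizes
VIDEO_OFFSET = TEXT_VOCAB
AUDIO_OFFSET = TEXT_VOCAB + VIDEO_VOCAB

def video_token(code: int) -> int:
    return VIDEO_OFFSET + code  # discrete code from a video tokenizer

def audio_token(code: int) -> int:
    return AUDIO_OFFSET + code  # discrete code from an audio tokenizer

# Text-to-video as next-token prediction: prompt tokens and generated
# clip tokens form a single sequence.
sequence = [101, 2045, 2003] + [video_token(c) for c in (17, 522, 301)]
print(sequence)
```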
By Paula Parisi, December 20, 2023
Microsoft has expanded its Models as a Service (MaaS) catalog for Azure AI Studio, building beyond the 40 models announced at the Microsoft Ignite event last month with the addition of the Llama 2 code generation model from Meta Platforms in public preview. In addition, GPT-4 Turbo with Vision has been added to accelerate generative AI and multimodal application development. Like Software as a Service (SaaS) and Infrastructure as a Service (IaaS), MaaS lets customers use AI models on demand over the web with easy setup and technical support. Continue reading Microsoft Brings Meta’s Llama 2 to Azure Models as a Service
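In practice, consuming a MaaS model means calling a hosted inference endpoint instead of provisioning GPUs. The sketch below is generic; the URL, payload shape and header names are hypothetical rather than Azure's documented schema.

```python
# Generic sketch of on-demand model consumption over the web.
# Endpoint, payload fields and auth header are hypothetical.
import requests

ENDPOINT = "https://example-llama2.inference.example.com/v1/completions"
headers = {"Authorization": "Bearer YOUR_KEY", "Content-Type": "application/json"}
payload = {"prompt": "Write a Python function that reverses a string.",
           "max_tokens": 128}

resp = requests.post(ENDPOINT, json=payload, headers=headers, timeout=30)
resp.raise_for_status()
print(resp.json())
```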
By Paula Parisi, December 15, 2023
Google is rolling out Gemini to developers, enticing them with tools including AI Studio, an easy-to-navigate web-based platform that will serve as a portal to the multi-tiered Gemini ecosystem, beginning with Gemini Pro, with Gemini Ultra to come next year. The service aims to let developers quickly create prompts and Gemini-powered chatbots, providing API keys to integrate them into apps. They’ll also be able to access code, should projects require a full-featured IDE. The site is essentially a revamped version of what was formerly Google’s MakerSuite. Continue reading Google Debuts Turnkey Gemini AI Studio for Developing Apps
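With an API key from AI Studio, a Gemini-powered chatbot reduces to a few lines. Here is a minimal sketch using the google-generativeai Python SDK; the prompts are illustrative.

```python
# Minimal sketch: a multi-turn chat backed by Gemini Pro, using an
# API key issued by AI Studio. Prompts are illustrative.
import google.generativeai as genai

genai.configure(api_key="KEY_FROM_AI_STUDIO")
model = genai.GenerativeModel("gemini-pro")

chat = model.start_chat()
print(chat.send_message("Draft three taglines for a travel app.").text)
print(chat.send_message("Make the second one shorter.").text)  # history carries over
```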
By Paula Parisi, December 8, 2023
Google is closing the year by heralding 2024 as the “Gemini era,” with the introduction of its “most capable and general AI model yet,” Gemini 1.0. The new foundation model comes in three sizes optimized for different use cases: Ultra, Pro and Nano. Alongside the release, Google is rolling out a new, Gemini-powered version of its Bard chatbot, available to English speakers in the U.S. and 170 global regions. Google touts Gemini as built from the ground up for multimodality, reasoning across text, images, video, audio and code. However, Bard does not yet incorporate Gemini’s ability to analyze sound and images. Continue reading Google Announces the Launch of Gemini, Its Largest AI Model
By Paula Parisi, October 27, 2023
The University of Science and Technology of China (USTC) and Tencent YouTu Lab have released a research paper on a new framework called Woodpecker, designed to correct hallucinations in multimodal large language models (MLLMs). “Hallucination is a big shadow hanging over the rapidly evolving MLLMs,” writes the group, describing the phenomenon as when MLLMs “output descriptions that are inconsistent with the input image.” Solutions to date focus mainly on “instruction-tuning,” a form of retraining that is data- and computation-intensive. Woodpecker takes a training-free approach that purports to correct hallucinations by diagnosing and amending the generated text itself. Continue reading Woodpecker: Chinese Researchers Combat AI Hallucinations
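The training-free pipeline can be pictured as diagnose-then-rewrite: extract the claims in a generated description, check each against visual evidence, and keep or amend them. The sketch below is a hypothetical simplification of that loop; the helper functions stand in for the paper's key-concept extraction and visual validation stages and are not the Woodpecker code.

```python
# Hypothetical sketch of training-free hallucination correction:
# split a caption into claims, validate each against detected evidence,
# and drop the unsupported ones.
def extract_claims(description: str) -> list[str]:
    return [s.strip() for s in description.split(".") if s.strip()]

def is_supported(claim: str, image_evidence: set[str]) -> bool:
    # Stand-in for visual validation (e.g., object detection or VQA).
    return any(obj in claim.lower() for obj in image_evidence)

def correct(description: str, image_evidence: set[str]) -> str:
    kept = [c for c in extract_claims(description) if is_supported(c, image_evidence)]
    return ". ".join(kept) + ("." if kept else "")

evidence = {"dog", "ball"}  # objects a detector found in the image
print(correct("A dog chases a ball. A frisbee flies overhead.", evidence))
# -> "A dog chases a ball."
```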
By Paula Parisi, October 11, 2023
OpenAI began previewing vision capabilities for GPT-4 in March, and the company is now starting to roll out image input to users of its popular ChatGPT. The multimodal expansion also includes audio functionality, with OpenAI proclaiming late last month that “ChatGPT can now see, hear and speak.” The upgrade vaults GPT-4 into the multimodal category with what OpenAI is apparently calling GPT-4V (for “Vision,” though equally applicable to “Voice”). “We’re rolling out voice and images in ChatGPT to Plus and Enterprise users,” OpenAI announced. Continue reading ChatGPT Goes Multimodal: OpenAI Adds Vision, Voice Ability
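On the developer side, multimodal input is expressed as mixed text-and-image message content. Below is a minimal sketch with OpenAI's Python client, where the model name and image URL are illustrative assumptions.

```python
# Minimal sketch of an image-input request via the chat completions API.
# Model name and image URL are illustrative.
from openai import OpenAI

client = OpenAI(api_key="YOUR_API_KEY")
resp = client.chat.completions.create(
    model="gpt-4-vision-preview",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What is shown in this photo?"},
            {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
        ],
    }],
    max_tokens=200,
)
print(resp.choices[0].message.content)
```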
By Paula Parisi, October 11, 2023
Startup Reka AI is releasing its first artificial intelligence assistant, Yasa-1, in preview. The multimodal AI is described as “a language assistant with visual and auditory sensors.” The year-old company says it “trained Yasa-1 from scratch,” including pretraining foundation models “from ground zero,” then aligning them and optimizing them for its training and serving infrastructure. “Yasa-1 is not just a text assistant, it also understands images, short videos and audio (yes, sounds too),” said Reka AI co-founder and Chief Scientist Yi Tay. Yasa-1 is available via Reka’s APIs and as Docker containers for on-site or virtual private cloud deployment. Continue reading Yasa-1: Startup Reka Launches New AI Multimodal Assistant
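Since the announcement doesn't detail Reka's API, the sketch below is purely hypothetical: a request to a self-hosted container endpoint, with every name, port and field invented for illustration.

```python
# Hypothetical sketch of querying a multimodal assistant served from a
# Docker container on-premises. Endpoint, port and payload are invented;
# Reka's actual API will differ.
import requests

resp = requests.post(
    "http://localhost:8080/v1/chat",  # hypothetical container endpoint
    json={
        "messages": [{"role": "user", "content": "What sound is in this clip?"}],
        "audio_url": "https://example.com/clip.wav",  # multimodal attachment
    },
    timeout=30,
)
print(resp.json())
```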