Google Merges Android and Hardware Units for AI Efficiency

Google is implementing an internal reorganization that combines its Android and hardware teams. Google CEO Sundar Pichai announced a new Platforms & Devices team headed by Rick Osterloh, which includes Android, Chrome, ChromeOS, Photos and all Pixel products. Pichai says the move will help speed development. Osterloh’s mandate is full-stack platform development that smoothly incorporates AI across all Google platforms, including smartphones, TVs and anything with Android OS. Hiroshi Lockheimer, who previously ran ops for Android, Chrome and ChromeOS, moves on to other projects at Google and Alphabet.

Microsoft’s VASA-1 Can Generate Talking Faces in Real Time

Microsoft has developed VASA, a framework for generating lifelike virtual characters with vocal capabilities including speaking and singing. The premiere model, VASA-1, can perform the feat in real time from a single static image and a vocalization clip. The research demo showcases realistic audio-enhanced faces that can be fine-tuned to look in different directions or change expression in video clips of up to one minute at 512 x 512 pixels and up to 40fps “with negligible starting latency,” according to Microsoft, which says “it paves the way for real-time engagements with lifelike avatars that emulate human conversational behaviors.”

UN Adopts Global AI Resolution Backed by U.S., 122 Others

The United Nations General Assembly on Thursday adopted a U.S.-led resolution to promote “safe, secure and trustworthy” artificial intelligence systems and their sustainable development for the benefit of all. The non-binding proposal, which was adopted without a formal vote, drew support from more than 122 co-sponsors, including China and India. It emphasizes “the respect, protection and promotion of human rights in the design, development, deployment and use” of responsible and inclusive AI. “The same rights that people have offline must also be protected online, including throughout the life cycle of artificial intelligence systems,” the resolution affirms.

Semafor Teams with Microsoft on AI-Driven Newsfeed Signals

News site Semafor has teamed with Microsoft to create a new breaking news product called Signals it says is a template for “the newsroom of the future.” Using AI tools from Microsoft and OpenAI to assist its journalists, the multi-source Signals will offer “perspectives and insights on the biggest stories in the world as they develop,” Semafor says. Microsoft simultaneously announced deals with the Craig Newmark Graduate School of Journalism at CUNY and the Online News Association. “In a year where billions of people will vote in democratic elections worldwide, journalism is critical to creating healthy information ecosystems,” Microsoft says.

U.S. AI Safety Institute Consortium Debuts with 200 Members

The U.S. has established the AI Safety Institute Consortium (AISIC), uniting artificial intelligence researchers, creators, academics and other users across government, industry and civil society organizations to support the development and deployment of safe and trustworthy AI. The group launches with more than 200 member entities ranging from tech giants Google, Microsoft and Amazon to AI-first firms OpenAI, Cohere and Anthropic. Secretary of Commerce Gina Raimondo announced the move the day after naming Elizabeth Kelly director of the new U.S. AI Safety Institute, housed at the National Institute of Standards and Technology (NIST).

Big Tech Onboard to Advance President Biden’s NSF AI Pilot

The National Science Foundation (NSF) is launching a pilot program to create the National Artificial Intelligence Research Resource (NAIRR), a shared U.S. AI research infrastructure. The move fulfills part of President Biden’s October executive order on the responsible development of artificial intelligence. Ten other federal agencies have joined the NSF in launching the program, while tech giants Microsoft, Nvidia and Google have already pledged their support, along with more than 20 other private organizations across the industry, academic and non-profit sectors. The goal is to give researchers shared access to data and resources such as cloud computing.

Meta Combines Research Units to Develop Open-Source AGI

In a Threads post last week, Meta Platforms CEO Mark Zuckerberg announced that the company’s new frontier is open-source artificial general intelligence (AGI). Meta has united its FAIR and GenAI research teams behind the goal of developing such a platform, which Zuckerberg described as part of the company’s “long-term vision.” “The next generation of services required is building full general intelligence, building the best AI assistants, AIs for creators, AIs for businesses and more,” Zuckerberg said, explaining that will require “advances in every area of AI from reasoning to planning to coding to memory and other cognitive abilities.”

IBM and Meta Debut AI Alliance for Safe Artificial Intelligence

IBM and Meta Platforms have launched the AI Alliance, a coalition of companies and educational institutions committed to responsible, transparent development of artificial intelligence. The group launched this week with more than 50 global founding participants from industry, startup, academia, research and government. Among the members and collaborators: AMD, CERN, Cerebras, Cornell University, Dell Technologies, Hugging Face, Intel, Linux Foundation, NASA, Oracle, Red Hat, Sony Group, Stability AI, the University of Tokyo and Yale Engineering. The group’s stated purpose is “to support open innovation and open science in AI.”

U.S., Britain and 16 Nations Aim to Make AI Secure by Design

The United States, Britain and 16 other countries have signed a 20-page agreement on working together to keep artificial intelligence safe from bad actors, mandating collaborative efforts for creating AI systems that are “secure by design.” The 18 countries said they will aim to ensure companies that design and utilize AI develop and deploy it in a way that protects their customers and the public from abuse. The U.S. Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (CISA) and the United Kingdom’s National Cyber Security Centre (NCSC) jointly released the Guidelines for Secure AI System Development.

TikTok Creates New Tools for Labeling Content Created by AI

As creators embrace artificial intelligence to juice creativity, TikTok is launching a tool that helps them label their AI-generated content while also beginning to test “ways to label AI-generated content automatically.” “AI enables incredible creative opportunities, but can potentially confuse or mislead viewers,” TikTok said in announcing labels that can apply to “any content that has been completely generated or significantly edited by AI,” including video, photographs, music and more. The platform also touted a policy that “requires people to label AI-generated content that contains realistic images, audio or video, in order to help viewers contextualize.”

UK’s Competition Office Issues Principles for Responsible AI

The UK’s Competition and Markets Authority has issued a report featuring seven proposed principles that aim to “ensure consumer protection and healthy competition are at the heart of responsible development and use of foundation models,” or FMs. Ranging from “accountability” and “diversity” to “transparency,” the principles aim to “spur innovation and growth” while implementing social safety measures amidst rapid adoption of apps including OpenAI’s ChatGPT, Microsoft 365 Copilot and Stability AI’s Stable Diffusion. The transformative properties of FMs can “have a significant impact on people, businesses, and the UK economy,” according to the CMA.

DHS Moves to ‘Master’ AI While Keeping It Safe, Trustworthy

The Department of Homeland Security is harnessing artificial intelligence, according to a memo by Secretary Alejandro Mayorkas explaining the department will use AI to keep Americans safe while implementing safeguards to ensure civil rights, privacy rights and the U.S. Constitution are not violated. The DHS appointed Eric Hysen as chief AI officer, moving him into the role from his previous post as CIO. “DHS must master this technology, applying it effectively and building a world class workforce that can reap the benefits of AI, while meeting the threats posed by adversaries that wield AI,” Mayorkas wrote.

Major Tech Players Launch Frontier Model Forum for Safe AI

Advancing President Biden’s push for responsible development of artificial intelligence, top AI firms including Anthropic, Google, Microsoft and OpenAI have launched the Frontier Model Forum, an industry forum that will work collaboratively with outside researchers and policymakers to implement best practices. The new group will focus on AI safety, research into its risks, and disseminating information to the public, governments and civil society. Other companies involved in building bleeding-edge AI models will also be invited to join and participate in technical evaluations and benchmarks.

Top Tech Firms Support Government’s Planned AI Safeguards

President Biden has secured voluntary commitments from seven leading AI companies who say they will support the executive branch goal of advancing safe, secure and transparent development of artificial intelligence. Executives from Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI convened at the White House on Friday to support the accord, which some criticized as a half measure on the grounds that the companies have already embraced independent security testing and a commitment to collaborating with each other and the government. Biden stressed the need to deploy AI altruistically, “to help address society’s greatest challenges.”

Adobe Pursues Ethical, Responsible AI in the Creative Space

As a next step in its advances in ethical AI, Adobe has announced its Firefly generative AI platform now supports text prompts in more than 100 international languages. The company says Firefly has generated over one billion images across the standalone app and Photoshop since its launch in March. Adobe has also deployed artificial intelligence in Express, Illustrator and the Creative Cloud. Positioning its latest news as an expansion of global proportions, Adobe says its generative AI products will now support text prompts in native languages in the standalone Firefly web service, with localization coming to more than 20 additional languages.