YouTube Adds GenAI Labeling Requirement for Realistic Video

YouTube has added new rules requiring uploaders of realistic-looking videos that are “made with altered or synthetic media, including generative AI” to label them using a new tool in Creator Studio. The labeling “is meant to strengthen transparency with viewers and build trust between creators and their audience,” YouTube says, listing examples of content that requires disclosure, including the “likeness of a realistic person” in voice as well as image, “altering footage of real events or places” and “generating realistic scenes” of fictional major events, “like a tornado moving toward a real town.”

EU Lawmakers Pass AI Act, World’s First Major AI Regulation

The European Union has passed the Artificial Intelligence Act, becoming the first global entity to enact a comprehensive law regulating AI’s development and use. Member states agreed on the framework in December 2023, and the European Parliament adopted it Wednesday with 523 votes in favor, 46 against and 49 abstentions. The legislation establishes what are being called “sweeping rules” for those building AI as well as those who deploy it. The rules, which will take effect gradually, implement new risk assessments, ban AI uses deemed an unacceptable risk, and mandate transparency requirements.

Researchers Call for Safe Harbor for the Evaluation of AI Tools

Artificial intelligence stakeholders are calling for safe harbor legal and technical protections that would allow them to conduct “good-faith” evaluations of various AI products and services without fear of reprisal. As of last week, more than 300 researchers, academics, creatives, journalists and legal professionals had signed an open letter calling on companies including Meta Platforms, OpenAI and Google to allow access for safety testing and red teaming of systems they say are shrouded in opaque rules and secrecy even though millions of consumers are already using them.

EU Makes Provisional Agreement on Artificial Intelligence Act

The EU has reached a provisional agreement on the Artificial Intelligence Act, making it the first Western jurisdiction to establish comprehensive AI regulations. The sweeping new law predominantly focuses on so-called “high-risk AI,” establishing parameters — largely in the form of reporting and third-party monitoring — “based on its potential risks and level of impact.” Parliament and the 27-country European Council must still hold final votes before the AI Act is finalized and takes effect, but the agreement, reached Friday in Brussels after three days of negotiations, means the main points are set.

Altman Reinstated as CEO of OpenAI, Microsoft Joins Board

Sam Altman has wasted no time since being rehired as CEO of OpenAI on November 22, four days after being fired. This week, the 38-year-old leader of one of the most influential artificial intelligence firms outlined his “immediate priorities” and announced a newly constituted “initial board” that includes a non-voting seat for investor Microsoft. The three voting members are former Salesforce co-CEO Bret Taylor as chairman and former U.S. Treasury Secretary Larry Summers — both newcomers — and returning member Adam D’Angelo, CEO of Quora. Mira Murati, interim CEO during Altman’s brief absence, returns to her role as CTO.

Newsom Report Examines Use of AI by California Government

California Governor Gavin Newsom has released a report examining the beneficial uses and potential harms of artificial intelligence in state government. Potential benefits include improving access to government services by identifying groups hindered by language barriers or other obstacles, while the risks underscore the need to prepare residents with next-generation skills so they are not left behind in the GenAI economy. “This is an important first step in our efforts to fully understand the scope of GenAI and the state’s role in deploying it,” Newsom said, calling California’s strategy “a nuanced, measured approach.”

CBS News Confirmed: New Fact-Checking Unit Examining AI

CBS is launching a unit charged with identifying misinformation and deepfakes. Called CBS News Confirmed, it will operate out of the news-and-stations division, ferreting out false information generated by artificial intelligence. Claudia Milne, senior VP of CBS News and Stations and its standards and practices chief, will run the new group with Ross Dagan, EVP and head of news operations and transformation at CBS News and Stations. CBS plans to hire forensic journalists, expand training and invest in technologies to assist them in their roles. In addition to flagging deepfakes, CBS News Confirmed will also report on them.

TikTok Creates New Tools for Labeling Content Created by AI

As creators embrace artificial intelligence to juice creativity, TikTok is launching a tool that helps them label their AI-generated content while also beginning to test “ways to label AI-generated content automatically.” “AI enables incredible creative opportunities, but can potentially confuse or mislead viewers,” TikTok said in announcing labels that can apply to “any content that has been completely generated or significantly edited by AI,” including video, photographs, music and more. The platform also touted a policy that “requires people to label AI-generated content that contains realistic images, audio or video, in order to help viewers contextualize.”

UK’s Competition Office Issues Principles for Responsible AI

The UK’s Competition and Markets Authority has issued a report featuring seven proposed principles that aim to “ensure consumer protection and healthy competition are at the heart of responsible development and use of foundation models,” or FMs. Ranging from “accountability” and “diversity” to “transparency,” the principles aim to “spur innovation and growth” while implementing social safety measures amid rapid adoption of apps including OpenAI’s ChatGPT, Microsoft 365 Copilot and Stability AI’s Stable Diffusion. The transformative properties of FMs can “have a significant impact on people, businesses, and the UK economy,” according to the CMA.

Google Updates Policies Regarding Blockchain in Play Store

Google has updated its transaction policies to allow blockchain-based digital content, such as NFTs, in apps and games distributed through its mobile software marketplace Google Play. Google has been slow to warm to blockchain integration, and the new approach comes with strict transparency requirements. If tokenized digital assets are part of an app or game, “developers must declare this clearly,” Google explains, adding that “developers may not promote or glamorize any potential earning from playing or trading activities.” These stipulations intend to prevent the hype that has attached itself to so much blockchain activity from infiltrating Google Play.

Biden Supports FCC Plan for Multichannel Price Disclosures

The Federal Communications Commission proposed a rule that would require cable TV and multichannel satellite services to disclose full pricing for programming plans in consumer promotional materials and invoicing, a plan President Biden quickly endorsed. The intent is to clearly convey “all-in” costs as a prominent single line item, including taxes and surcharges that are often excluded from sales pitches and can be difficult to decipher on bills. “Too often, these companies hide additional junk fees on customer bills disguised as ‘broadcast TV’ or ‘regional sports’ fees that in reality pay for no additional services,” Biden said.

Schumer Shares Plan for SAFE AI Senate Listening Sessions

Senate Majority Leader Chuck Schumer unveiled his approach toward regulating artificial intelligence, beginning with nine listening sessions to explore topics including AI’s impact on the job market, copyright, national security and “doomsday scenarios.” Schumer’s plan — the SAFE (Security, Accountability, Foundations, Explainability) Innovation framework — isn’t proposed legislation, but a discovery roadmap. Set to begin in September, the panels will draw on members of industry, academia and civil society. “Experts aren’t even sure which questions policymakers should be asking,” said Schumer of the learning curve. “In many ways, we’re starting from scratch.”

IBM and Adobe Advance AI Content Workflow for Enterprise

IBM and Adobe are expanding their partnership to help enterprise clients accelerate their content supply chains using artificial intelligence including Adobe Sensei GenAI, released in March, and Adobe Firefly, now in beta, as well as Adobe’s generative AI models. IBM says it will create a portfolio of Adobe-specific consulting services. Leveraging Adobe’s AI solutions and IBM Consulting services, the companies aim to “help clients build an integrated content supply chain ecosystem that drives collaboration, optimizes creativity, increases speed, automates tasks and enhances stakeholders’ visibility across design and creative projects.”

European Union Takes Steps to Regulate Artificial Intelligence

The European Parliament on Wednesday took a major step to legislate artificial intelligence, passing a draft of the AI Act, which puts restrictions on many of what are believed to be the technology’s riskiest uses. The EU has been leading the world in advancing AI regulation, and observers are already citing this developing law as a model framework for global policymakers eager to place guardrails on this rapidly advancing technology. Among the Act’s key tenets: it will dramatically curtail use of facial recognition software and require AI firms such as OpenAI to disclose more about their training data.

Big Tech Braces for Potential Impact of EU Digital Markets Act

The European Union’s Digital Markets Act, applicable as of May 1, has tech giants scrambling to anticipate regional compliance. The regulatory framework aims to ensure that dominant platforms don’t abuse their clout by taking advantage of consumers and smaller companies. Within two months, companies providing core platform services must notify the European Commission and provide all relevant information. The Commission will then have two months to identify companies that fit the DMA definition of “gatekeeper.” Those that do will be subject to DMA rules and have six months to comply.