By Paula Parisi, July 14, 2025
The European Union has published a General Purpose AI (GPAI) Code of Practice designed to help companies comply with the AI Act, which includes copyright protections and transparency requirements for advanced models. The Code of Practice bans training models on unauthorized materials and says companies must comply with copyright-holder requests to omit work from datasets. Developers are required to provide documentation describing the features of their AI models. The AI Act began taking effect in August 2024 and is being implemented gradually, with key transparency, governance and privacy provisions coming into force next month. Continue reading EU Releases AI Practices Code to Help with Legal Compliance
By Paula Parisi, June 27, 2025
Creative Commons, the non-profit that pioneered sharing content through permissive licensing, is launching CC Signals, a framework to signal permissions for content use by machines in the age of artificial intelligence. “They are both a technical and legal tool and a social proposition: a call for a new pact between those who share data and those who use it to train AI models,” says Creative Commons CEO Anna Tumadóttir, noting the signals are “based on a set of limited but meaningful options shaped in the public interest.” The framework is designed to bridge the openness of the Internet with AI’s insatiable demand for training data, according to Creative Commons. Continue reading Creative Commons Introduces New Licensing Platform for AI
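As a purely hypothetical illustration of what a machine-readable usage signal could look like in practice, the sketch below attaches a signal to served content via an HTTP response header. The header name “CC-Signal” and its value vocabulary are invented for this example; the actual CC Signals framework defines its own mechanisms.

```python
# Hypothetical sketch: a site declaring a machine-readable usage signal
# alongside its content. The "CC-Signal" header and its value are invented
# for illustration; they are not the published CC Signals mechanism.
from wsgiref.simple_server import make_server

def app(environ, start_response):
    start_response("200 OK", [
        ("Content-Type", "text/html; charset=utf-8"),
        # Invented signal meaning "AI training allowed if credit is given"
        ("CC-Signal", "train-genai=credit-required"),
    ])
    return [b"<p>Openly shared content</p>"]

if __name__ == "__main__":
    make_server("", 8000, app).serve_forever()
```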
By Paula Parisi, June 12, 2025
YouTube has loosened its rules regarding content moderation, instructing its moderators to prioritize “freedom of expression” over perceived risks of harm when deciding what to remove from the popular video platform. The move hasn’t been widely publicized; it came to light through moderator training material leaked to the media. YouTube becomes the latest in a string of social platforms to relax content moderation standards. Unlike Elon Musk’s X (formerly Twitter) and Facebook and Instagram owner Meta, YouTube and its parent Google have refrained from public comment on the move. Continue reading YouTube Latest Social Platform to Loosen Content Moderation
By Paula Parisi, April 8, 2025
Sentient, a year-old non-profit backed by Peter Thiel’s Founders Fund, has released Open Deep Search (ODS), an open-source framework that leverages existing LLMs to enhance search and reasoning capabilities. Essentially a system of custom plugins and tools, ODS works with DeepSeek’s open-source R1 model as well as proprietary systems like OpenAI’s GPT-4o and Anthropic’s Claude to deliver advanced search functionality. That modular aspect is in fact ODS’s main innovation, its creators say, claiming it beats Perplexity and OpenAI’s GPT-4o Search Preview on benchmarks for accuracy and transparency. Continue reading Non-Profit Sentient Launches New ‘Open Deep Search’ Model
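The modularity claim is easy to picture in code. The sketch below is an illustrative reduction of a search-augmented pipeline in which the backing LLM is a swappable parameter; the names and interfaces here are assumptions for illustration, not the actual ODS API.

```python
# Illustrative sketch of the modular idea behind ODS: one search-and-reason
# loop, any backing LLM. All names here are hypothetical, not the ODS API.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class SearchResult:
    title: str
    snippet: str

def open_deep_search(query: str,
                     search: Callable[[str], List[SearchResult]],
                     llm: Callable[[str], str]) -> str:
    """Retrieve web results, then have the chosen LLM reason over them."""
    results = search(query)
    context = "\n".join(f"- {r.title}: {r.snippet}" for r in results)
    prompt = (
        "Answer the question using only these search results.\n"
        f"Results:\n{context}\n\n"
        f"Question: {query}"
    )
    # Any backing model can be plugged in: DeepSeek R1, GPT-4o, Claude, etc.
    return llm(prompt)
```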
By Paula Parisi, September 27, 2024
The European Commission has released a list of more than 100 companies that have become signatories to the EU’s AI Pact. While Google, Microsoft and OpenAI are among them, Apple and Meta are not. The voluntary AI Pact is intended to elicit policies on AI deployment during the period before the legally binding AI Act takes full effect. It focuses on transparency in three core areas: internal AI governance, mapping of high-risk AI systems, and promoting AI literacy and awareness among staff to support ethical development. The pact is aimed at “relevant stakeholders” across industry, civil society and academia. Continue reading Amazon, Google, Microsoft and OpenAI Join the EU’s AI Pact
By Paula Parisi, September 26, 2024
Microsoft has released a suite of “Trustworthy AI” features that address concerns about AI security and reliability. The four new capabilities include Correction, a content detection upgrade in Microsoft Azure that “helps fix hallucination issues in real time before users see them.” Embedded Content Safety allows customers to embed Azure AI Content Safety on devices where cloud connectivity is intermittent or unavailable, while two new filters flag AI output of protected material. Additionally, a transparency safeguard providing the company’s AI assistant, Microsoft 365 Copilot, with specific “web search query citations” is coming soon. Continue reading New Microsoft Safety Tools Fix AI Flubs, Detect Proprietary IP
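For context, Azure AI Content Safety, the service these capabilities build on, is exposed through a REST endpoint for screening text. The sketch below shows a representative call; the resource endpoint and key are placeholders, and the API version shown is an assumption that may lag the current release.

```python
# Minimal sketch of calling the Azure AI Content Safety text-analysis
# endpoint. Endpoint and key are placeholders; the API version is an
# assumption -- consult the service docs for current values.
import requests

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
KEY = "<your-key>"

resp = requests.post(
    f"{ENDPOINT}/contentsafety/text:analyze",
    params={"api-version": "2023-10-01"},  # assumed GA version
    headers={
        "Ocp-Apim-Subscription-Key": KEY,
        "Content-Type": "application/json",
    },
    json={"text": "Text to screen before showing it to users."},
)
print(resp.json())  # per-category severity scores
```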
By Paula Parisi, September 25, 2024
Cloudflare has released AI Audit, a free set of new tools designed to help websites analyze and control how their content is used by artificial intelligence models. Cloudflare describes the tool as offering “one-click blocking” to prevent unauthorized AI scraping, and says it will also make it easier to identify the content bots scan most, so site owners can wall it off and negotiate payment in exchange for access. Helping its clients toward a sustainable future, Cloudflare is also creating a marketplace where sites can negotiate fees based on AI audits that trace bot activity across their server files. Continue reading Cloudflare Tool Can Prevent AI Bots from Scraping Websites
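The underlying mechanism is user-agent-based filtering of known AI crawlers. As a minimal sketch of the idea (not Cloudflare’s implementation), a WSGI middleware might look like this; the crawler tokens listed are real AI bot user agents, while the middleware itself is illustrative.

```python
# Minimal sketch of user-agent-based AI-crawler blocking as WSGI middleware.
# The crawler names are real AI bot user agents; the middleware is an
# illustration of the mechanism, not Cloudflare's implementation.
AI_CRAWLER_TOKENS = ("GPTBot", "CCBot", "ClaudeBot", "Google-Extended")

def block_ai_bots(app):
    """Wrap a WSGI app, returning 403 for requests from known AI crawlers."""
    def middleware(environ, start_response):
        user_agent = environ.get("HTTP_USER_AGENT", "")
        if any(token in user_agent for token in AI_CRAWLER_TOKENS):
            start_response("403 Forbidden", [("Content-Type", "text/plain")])
            return [b"AI scraping not permitted"]
        return app(environ, start_response)
    return middleware
```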
By Paula Parisi, September 19, 2024
AI-powered ad campaigns “are continuing to deliver big results for businesses large and small,” according to Google, which has put Gemini to work for Google Ads. At the DMEXCO digital marketing event in Cologne, the company announced a new suite of Gemini-powered tools aimed at improving the experience with additional insights and more control over where and how marketing assets are deployed globally through Google Ads. For starters, Gemini’s “conversational experience” for search campaigns will expand its language palette, making auto-generated headlines and images available in German, French and Spanish in the months ahead. Continue reading Google Unveils Gemini-Powered Ad Features and AI Image ID
By Paula Parisi, September 18, 2024
The OpenAI board’s Safety and Security Committee will become an independent board oversight committee, chaired by Zico Kolter, machine learning department chair at Carnegie Mellon University. The committee will be responsible for “the safety and security processes guiding OpenAI’s model deployment and development.” Three OpenAI board members will move from their current SSC roles to the new committee: Quora founder Adam D’Angelo, former Sony Corporation EVP Nicole Seligman and erstwhile NSA chief Paul Nakasone. OpenAI is currently putting together a new funding round that reportedly aims to value the company at $150 billion. Continue reading OpenAI Bestows Independent Oversight on Safety Committee
By Paula Parisi, September 9, 2024
The first legally binding international treaty on artificial intelligence was signed last week by the countries that negotiated it, including the United States, United Kingdom and European Union members. The Council of Europe Framework Convention on Artificial Intelligence is “aimed at ensuring that the use of AI systems is fully consistent with human rights, democracy and the rule of law.” Drawn up by the Council of Europe (COE), an international human rights organization, the treaty was signed at the COE’s Conference of Ministers of Justice in Lithuania. Other signatories include Israel, Iceland, Norway, the Republic of Moldova and Georgia. Continue reading U.S. and Europe Sign the First Legally Binding Global AI Treaty
By Paula Parisi, August 29, 2024
In a move toward increased transparency, San Francisco-based AI startup Anthropic has published the system prompts for three of its most recent large language models: Claude 3 Opus, Claude 3.5 Sonnet and Claude 3 Haiku. The information is now available on the web and in the Claude iOS and Android apps. The prompts are instruction sets that reveal what the models can and cannot do. Anthropic says it will regularly update the information, emphasizing that these evolving system prompts govern its consumer apps and do not affect the API, where developers set their own. Examples of Claude’s prompts include “Claude cannot open URLs, links, or videos” and, when dealing with images, “avoid identifying or naming any humans.” Continue reading Anthropic Publishes Claude Prompts, Sharing How AI ‘Thinks’
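The reason app-side prompt changes don’t touch the API is that API callers supply their own system prompt with each request. A minimal sketch using the Anthropic Python SDK follows; the model ID shown was current around this announcement and may need updating.

```python
# Minimal sketch: an API caller supplying its own system prompt via the
# Anthropic Messages API. The system string reuses a line from Anthropic's
# published prompts purely as an example.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

message = client.messages.create(
    model="claude-3-5-sonnet-20240620",  # era-appropriate model ID
    max_tokens=256,
    system="Claude cannot open URLs, links, or videos.",
    messages=[
        {"role": "user", "content": "Summarize this page: https://example.com"}
    ],
)
print(message.content[0].text)
```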
By Paula Parisi, August 14, 2024
YouTube, which began testing crowdsourced fact-checking in June, is now expanding the experiment by inviting users to try the feature. Likened to the Community Notes accountability method introduced by Twitter and continued under X, YouTube’s as-yet-unnamed feature lets users provide context and corrections to posts that might be misleading or false. “You can sign up to submit notes on videos you find inaccurate or unclear,” YouTube explains, adding that “after submission, your note is reviewed and rated by others.” Notes widely rated as helpful “may be published and appear below the video.” Continue reading YouTube Tests Expanded Community Fact-Checking for Video
By Paula Parisi, August 7, 2024
Google has unveiled three additions to its Gemma 2 family of compact yet powerful open-source AI models, emphasizing safety and transparency. The company’s Gemma 2 2B is a lightweight 2.6-billion-parameter addition to the family, with built-in improvements in safety and performance. Built on Gemma 2, ShieldGemma is a suite of safety content classifier models that “filter the input and outputs of AI models and keep the user safe.” Interpretability tool Gemma Scope offers what Google calls “unparalleled insight into our models’ inner workings.” Continue reading Latest Gemma 2 Models Emphasize Security and Performance
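Because the Gemma 2 weights are openly available, the 2B model can be loaded with standard tooling. A minimal sketch using Hugging Face Transformers follows; it assumes you have accepted Google’s license for the google/gemma-2-2b-it checkpoint on the Hub and have a recent Transformers release installed.

```python
# Minimal sketch: loading the instruction-tuned Gemma 2 2B checkpoint and
# generating a short completion with Hugging Face Transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-2-2b-it"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer(
    "Explain what a safety content classifier does.", return_tensors="pt"
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=80)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```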
By ETCentric Staff, April 1, 2024
The White House is rolling out a new AI policy across the federal government, to be implemented by the Office of Management and Budget (OMB). Vice President Kamala Harris announced the new rules, which require every federal agency to have a senior leader overseeing its use of AI systems, in an effort to ensure that AI deployed in public service remains safe and unbiased. The move was positioned as making good on “a core component” of President Biden’s AI Executive Order (EO), issued in October. Federal agencies reported completing the 150-day actions tasked by the EO. Continue reading Federal Policy Specifies Guidelines for Risk Management of AI
By ETCentric Staff, March 20, 2024
YouTube has added new rules requiring those who upload realistic-looking videos “made with altered or synthetic media, including generative AI” to label them using a new tool in Creator Studio. The new labeling “is meant to strengthen transparency with viewers and build trust between creators and their audience,” YouTube says, listing examples of content that requires disclosure: using the “likeness of a realistic person,” including voice as well as image; “altering footage of real events or places”; and “generating realistic scenes” of fictional major events, “like a tornado moving toward a real town.” Continue reading YouTube Adds GenAI Labeling Requirement for Realistic Video