By Paula Parisi, November 29, 2023
California Governor Gavin Newsom has released a report examining the beneficial uses and potential harms of artificial intelligence in state government. Potential benefits include improving access to government services by identifying groups hindered by language barriers or other obstacles; among the risks, the report highlights the need to prepare citizens with next-generation skills so they are not left behind in the GenAI economy. “This is an important first step in our efforts to fully understand the scope of GenAI and the state’s role in deploying it,” Newsom said, calling California’s strategy “a nuanced, measured approach.” Continue reading Newsom Report Examines Use of AI by California Government
By Paula Parisi, November 27, 2023
Stability AI has opened a research preview of its first foundation model for generative video, Stable Video Diffusion, offering text-to-video and image-to-video generation. Based on the company’s Stable Diffusion text-to-image model, the new open-source model generates video by animating existing still frames, including “multi-view synthesis.” While the company plans to enhance and extend the model’s capabilities, it currently comes in two versions: SVD, which transforms stills into 576×1024 videos of 14 frames, and SVD-XT, which generates up to 25 frames, each at frame rates between three and 30 frames per second. Continue reading Stability Introduces GenAI Video Model: Stable Video Diffusion
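For illustration, the image-to-video model can be driven from Python through the Hugging Face diffusers library. The following is a minimal sketch, assuming the publicly hosted “stabilityai/stable-video-diffusion-img2vid-xt” checkpoint and a CUDA GPU; it is not code from the Stability announcement:

    import torch
    from diffusers import StableVideoDiffusionPipeline
    from diffusers.utils import export_to_video, load_image

    # Load the image-to-video pipeline (SVD-XT checkpoint on the Hugging Face Hub).
    pipe = StableVideoDiffusionPipeline.from_pretrained(
        "stabilityai/stable-video-diffusion-img2vid-xt",
        torch_dtype=torch.float16,
        variant="fp16",
    )
    pipe.to("cuda")

    # Animate a single still frame; the model works at roughly 1024x576 input.
    image = load_image("input_frame.png").resize((1024, 576))
    frames = pipe(image, decode_chunk_size=8).frames[0]

    # Write the generated frames out at 7 fps, within the 3-30 fps range.
    export_to_video(frames, "generated.mp4", fps=7)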
By Paula Parisi, November 8, 2023
CBS is launching a unit charged with identifying misinformation and deepfakes. Called CBS News Confirmed, it will operate out of the news-and-stations division, ferreting out false information generated by artificial intelligence. Claudia Milne, senior VP of CBS News and Stations and its standards and practices chief, will run the new group with Ross Dagan, EVP and head of news operations and transformation at CBS News and Stations. CBS plans to hire forensic journalists, expand training and invest in technologies to assist them in their role. In addition to flagging deepfakes, CBS News Confirmed will also report on them. Continue reading CBS News Confirmed: New Fact-Checking Unit Examining AI
By Paula Parisi, November 2, 2023
OpenAI recently announced it is developing formal AI risk guidelines and assembling a team dedicated to monitoring and studying threats posed by imminent “superintelligence” AI, also called frontier models. Topics under review include the parameters required for a robust monitoring and prediction framework and how malicious actors might seek to leverage stolen AI model weights. The announcement came shortly before the Biden administration issued an executive order requiring the major players in artificial intelligence to submit reports to the federal government assessing potential risks associated with their models. Continue reading OpenAI Creates a Team to Examine Catastrophic Risks of AI
By Paula Parisi, October 30, 2023
Google is rolling out three new tools that verify images and add context to search results: “About this image,” Fact Check Explorer and Search Generative Experience (SGE). “About this image” is rolling out globally to English-language users as part of the Google Search UI. Available in beta since summer, Fact Check Explorer lets journalists and professional fact-checkers delve more deeply into an image or topic, including via API. Search Generative Experience uses GenAI to generate source descriptions for some websites, which appear under “more about this page.” Continue reading Google Taps AI for Tools to Help Authenticate Search Results
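The API access mentioned for Fact Check Explorer sits alongside Google’s existing public Fact Check Tools API, whose claims:search endpoint returns published fact checks for a search term. The sketch below illustrates that style of programmatic access, assuming a Google Cloud API key:

    import requests

    # Query Google's public Fact Check Tools API for claims matching a term.
    API_KEY = "YOUR_API_KEY"  # placeholder; requires a Google Cloud API key
    url = "https://factchecktools.googleapis.com/v1alpha1/claims:search"
    params = {"query": "moon landing", "languageCode": "en", "key": API_KEY}

    response = requests.get(url, params=params, timeout=10)
    response.raise_for_status()

    # Each claim carries one or more reviews with a publisher rating and URL.
    for claim in response.json().get("claims", []):
        review = claim["claimReview"][0]
        print(claim.get("text"), "->", review.get("textualRating"), review.get("url"))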
By Paula Parisi, October 27, 2023
The University of Science and Technology of China (USTC) and Tencent YouTu Lab have released a research paper on a new framework called Woodpecker, designed to correct hallucinations in multimodal large language models (MLLMs). “Hallucination is a big shadow hanging over the rapidly evolving MLLMs,” writes the group, describing the phenomenon as occurring when MLLMs “output descriptions that are inconsistent with the input image.” Solutions to date have focused mainly on “instruction-tuning,” a form of retraining that is data- and computation-intensive. Woodpecker takes a training-free approach that purports to correct hallucinations by diagnosing and revising the generated text itself. Continue reading Woodpecker: Chinese Researchers Combat AI Hallucinations
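The paper describes a five-stage pipeline: key concept extraction, question formulation, visual knowledge validation, visual claim generation and hallucination correction. The outline below is an illustrative sketch of that flow; the helpers passed in are hypothetical stand-ins, not the authors’ code:

    from typing import Callable, List

    def correct_hallucinations(
        image,                                    # the input image
        generated_text: str,                      # the MLLM's possibly flawed answer
        extract_concepts: Callable[[str], List[str]],
        formulate_questions: Callable[[str, List[str]], List[str]],
        answer_visually: Callable[[object, str], str],  # detector / VQA model
        rewrite: Callable[[str, List[str]], str],       # evidence-grounded rewriter
    ) -> str:
        # 1. Key concept extraction: find the objects the answer mentions.
        concepts = extract_concepts(generated_text)

        # 2. Question formulation: turn each concept into verification questions.
        questions = formulate_questions(generated_text, concepts)

        # 3. Visual knowledge validation: answer each question against the image.
        answers = [answer_visually(image, q) for q in questions]

        # 4. Visual claim generation: pair questions with validated answers to
        #    form a small knowledge base of image-grounded claims.
        claims = [f"Q: {q} A: {a}" for q, a in zip(questions, answers)]

        # 5. Hallucination correction: rewrite the original answer so every
        #    statement is consistent with the validated claims.
        return rewrite(generated_text, claims)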
By Paula Parisi, September 22, 2023
OpenAI has released the DALL-E 3 generative AI imaging platform in research preview. The latest iteration features more safety options and integrates with OpenAI’s ChatGPT, currently driven by the now seasoned large language model GPT-4. That is the ChatGPT version available to Plus subscribers and enterprise customers, who will also be able to preview DALL-E 3; the free chatbot is built around GPT-3.5. OpenAI says GPT-4 gives DALL-E better contextual understanding, an area where even version 2 exhibited some glaring comprehension glitches. Continue reading OpenAI’s Latest Version of DALL-E Integrates with ChatGPT
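DALL-E 3 subsequently became reachable through OpenAI’s Images API as well. A minimal sketch using the openai Python SDK, assuming an API key in the OPENAI_API_KEY environment variable:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Request a single DALL-E 3 image; the service expands the prompt with
    # added context before generation.
    result = client.images.generate(
        model="dall-e-3",
        prompt="A watercolor map of California with a golden bear",
        size="1024x1024",
        n=1,
    )

    print(result.data[0].url)             # temporary URL of the generated image
    print(result.data[0].revised_prompt)  # the prompt as expanded by the model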
By Paula Parisi, September 8, 2023
California Governor Gavin Newsom signed an executive order for state agencies to study artificial intelligence and its impact on society and the economy. “We’re only scratching the surface of understanding what GenAI is capable of,” Newsom suggested. Recognizing “both the potential benefits and risks these tools enable,” he said his administration is “neither frozen by the fears nor hypnotized by the upside.” The move was couched as a “measured approach” that will help California “focus on shaping the future of ethical, transparent, and trustworthy AI, while remaining the world’s AI leader.” Continue reading Governor Newsom Orders Study of GenAI Benefits and Risks
By Paula Parisi, August 18, 2023
After announcing a partnership with OpenAI last month, the Associated Press has issued guidelines urging caution in the use of generative AI in news reporting. The news agency has also added a new chapter to its widely used AP Stylebook pertaining to coverage of AI, a story that “goes far beyond business and technology” and is “also about politics, entertainment, education, sports, human rights, the economy, equality and inequality, international law, and many other issues,” according to AP, which says stories about AI should “show how these tools are affecting many areas of our lives.” Continue reading AP Is Latest Org to Issue Guidelines for AI in News Reporting
By Paula Parisi, July 17, 2023
The Federal Trade Commission has opened a civil investigation into OpenAI to determine the extent to which its data policies are harmful to consumers as well as the potentially deleterious effects of misinformation spread through “hallucinations” by its ChatGPT chatbot. The FTC sent OpenAI dozens of questions last week in a 20-page letter instructing the company to contact FTC counsel “as soon as possible to schedule a telephonic meeting within 14 days.” The questions deal with everything from how the company trains its models to the handling of personal data. Continue reading FTC Investigates OpenAI Over Data Policies, Misinformation
By Paula Parisi, July 10, 2023
OpenAI believes artificial intelligence exceeding human intelligence “could arrive this decade.” The company uses the term “superintelligence rather than AGI to stress a much higher capability level,” and warns that even though this new cognition holds great promise, it will not necessarily be benevolent. Preparing for the worst, OpenAI has formed an internal unit charged with developing ways to keep superintelligent AI in check. Led by OpenAI’s Ilya Sutskever and Jan Leike, the Superalignment Team will work toward “steering or controlling a potentially superintelligent AI and preventing it from going rogue.” Continue reading OpenAI Launches a Task Force to Control Superintelligent AI
By Paula Parisi, May 23, 2023
Leaders at the G7 Summit in Hiroshima, Japan, are calling for discussions that could lead to global standards and regulations for generative AI, with the aim of ensuring responsible use of the technology. The leaders of the world’s largest economies (in addition to host nation Japan: Canada, France, Germany, Italy, the UK and the U.S., plus the EU) expressed the goal of forming a G7 working group to establish, by the end of the year, a “Hiroshima AI process” for discussion of uniform policies for dealing with AI technologies including chatbots and image generators. Continue reading G7 Leaders Call for Global AI Standards at Hiroshima Summit
By Paula Parisi, May 3, 2023
Websites spewing misinformation produced by chatbot-powered “content farms” are proliferating, creating growing concern. Misinformation tracker NewsGuard has identified 49 websites publishing falsehoods authored by generative AI, a discovery raising questions about the technology’s role in turbocharging existing fraud techniques. Several of the offending websites sprang up this year, just as AI tools became widely available to the public. Some masquerade as breaking-news sites, while others adopt tactics such as using generic-sounding names. Continue reading AI Content Farms Spreading Fake Stories and Misinformation
By Paula Parisi, March 13, 2023
The European Union’s implementation of the Digital Services Act (DSA) and the Digital Markets Act (DMA) is poised to trigger worldwide changes on familiar platforms like Google, Instagram, Wikipedia and YouTube. The DSA addresses consumer safety while the DMA deals with antitrust issues. Proponents say the new laws will help end the era of self-regulating tech companies. As in the U.S., the DSA makes clear that platforms aren’t liable for illegal user-generated content. Unlike U.S. law, however, the DSA allows users to sue when tech firms are made aware of harmful content but fail to remove it. Continue reading Changes Ahead for Big Tech When EU Regulations Enforced
By Paula Parisi, February 24, 2023
Meta Platforms is reforming its penalty system for Facebook policy violations. Based on recommendations from its Oversight Board, the company will focus more on educating users and less on punitive measures like suspending accounts or limiting posts. “While we are still removing violating content just as before,” explains Meta VP of content policy Monika Bickert, “under our new system we will focus more on helping people understand why we have removed their content, which is shown to help prevent re-offending, rather than so quickly restricting their ability to post.” The goal is fairer and more effective content moderation on Facebook. Continue reading Meta’s Penalty Reforms Designed to Be More Effective, Fair