OpenAI: GPT-4 Can Help with Content Moderation Workload

OpenAI has shared instructions for training GPT-4 to handle content moderation at scale. Some customers are already using the process, which OpenAI says can reduce the time needed to fine-tune content moderation policies from weeks or months to mere hours. The company suggests its customization technique can also save money by having GPT-4 do the work of tens of thousands of human moderators. Properly trained, GPT-4 could perform moderation tasks more consistently than people in that it would be free of human bias, OpenAI says. While AI can incorporate biases from training data, technologists view AI bias as more correctable than human predisposition.
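
The workflow OpenAI describes amounts to prompting GPT-4 with a written policy, asking it to label content against that policy, and refining the policy text where the model and human reviewers disagree. Below is a minimal sketch of that idea using the OpenAI Python client; the policy wording, label set and model name are illustrative assumptions, not OpenAI's published prompts.

```python
# Minimal sketch: asking GPT-4 to label content against a written policy.
# The policy text and label set here are illustrative, not OpenAI's own.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

POLICY = """Label the user content with exactly one of:
ALLOW   - content that does not violate the policy
FLAG    - content that may violate the policy and needs human review
REMOVE  - content that clearly violates the policy (e.g. direct threats)
Respond with the label only."""

def moderate(text: str) -> str:
    """Return a policy label for a single piece of user content."""
    response = client.chat.completions.create(
        model="gpt-4",   # assumed model name
        temperature=0,   # deterministic labels for consistency
        messages=[
            {"role": "system", "content": POLICY},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content.strip()

print(moderate("Example user comment to be checked against the policy."))
```

In OpenAI's telling, it is the policy text that gets iterated when the model's labels diverge from expert judgments, which is why adapting a policy takes hours rather than the weeks needed to retrain human moderators.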

Viewers Choose Episode Order in Netflix Heist Series ‘Jigsaw’

Netflix is exploring another interactive storytelling approach with its upcoming eight-part series “Jigsaw,” currently in production. The heist thriller will allow viewers to watch the first seven episodes in any order, culminating in a designated finale that ties things up no matter which path a viewer has taken. Branching story structures are nothing new to fans of video games, and Netflix previously experimented with the format in the “Black Mirror” special “Bandersnatch” and “Love, Death + Robots,” but “Jigsaw” shakes things up a bit in that the series arc can be assembled in random order.

USC Researchers Find Bias in Deepfake Detectors’ Datasets

The advent of deepfakes, which replace a person in a video or photo with the likeness of someone else, has sparked concern that the machine learning tools used to create them are readily available to criminals and provocateurs. In response, Amazon, Facebook and Microsoft sponsored the Deepfake Detection Challenge, which resulted in several potential detection tools. But now, researchers at the University of Southern California have found that the datasets used to train some of these detection systems exhibit racial and gender bias.
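
The kind of bias the USC team describes surfaces when a detector's error rates are broken down by demographic group rather than averaged over the whole test set. Below is a hedged sketch of such an audit; the data file and column names are hypothetical placeholders, not the researchers' actual pipeline.

```python
# Sketch of a per-group audit of a deepfake detector's predictions.
# The CSV file and column names are hypothetical placeholders.
import pandas as pd

df = pd.read_csv("detector_predictions.csv")
# expected columns: is_fake (ground truth), predicted_fake (model output),
# and a demographic "group" label for the person depicted

for group, rows in df.groupby("group"):
    real = rows[rows["is_fake"] == 0]
    fake = rows[rows["is_fake"] == 1]
    fpr = (real["predicted_fake"] == 1).mean()  # real videos flagged as fake
    fnr = (fake["predicted_fake"] == 0).mean()  # fakes that slip through
    print(f"{group}: FPR={fpr:.3f}  FNR={fnr:.3f}")
```

Large gaps between groups in either rate point to an imbalance in the underlying training data, which is the problem the USC researchers traced back to the datasets themselves.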

New Twitter Policy Aims to Combat Fake Photos and Video

Twitter announced yesterday that it would be more assertive in identifying fake and manipulated content on its platform. Beginning next month, the company plans to label or remove tweets that feature manipulated images and video. While short of an outright ban, the new policy is meant to address growing frustration among users over disinformation spread via social platforms. However, it also highlights the challenge social media companies face in balancing freedom of speech, parody and satire, and false or manipulated content. On Monday, YouTube announced its own plans to better manage misleading political content on its site.

Google Offers Deepfakes for Researching Detection Methods

Google, in cooperation with its internal tech incubator Jigsaw, has released a large collection of deepfakes, which have been added to the FaceForensics benchmark run by the Technical University of Munich and the University of Naples Federico II. The deepfakes are available free of charge to researchers developing detection techniques. Previously, Google released a dataset of synthetic speech produced by its text-to-speech models as part of the ASVspoof 2019 challenge to develop systems that distinguish between real and computer-generated speech.
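
Benchmarks like FaceForensics are typically consumed as directories of original and manipulated videos with binary labels, which researchers then use to train and score detectors. The sketch below shows that conventional layout; the directory names are assumptions for illustration, not the benchmark's actual structure.

```python
# Sketch of gathering real/fake labels from a FaceForensics-style layout.
# Directory names are assumed for illustration; consult the benchmark's
# documentation for its actual structure and access terms.
from pathlib import Path

def collect_samples(root: str):
    """Yield (video_path, label) pairs, where label 1 = manipulated, 0 = original."""
    for label, subdir in ((0, "original"), (1, "manipulated")):
        for path in sorted(Path(root, subdir).glob("*.mp4")):
            yield path, label

samples = list(collect_samples("faceforensics_data"))
print(f"{len(samples)} videos, {sum(label for _, label in samples)} manipulated")
```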

Fact Check: Google Takes on Fake News with Search Feature

Facebook is not the only tech giant looking to address the growing problem of fake news. Alphabet-owned Google, the world’s biggest search engine, is introducing a feature that offers users a new layer of fact checking in their search results. The move follows criticism that Google and other Internet companies are helping to spread misinformation. After limited testing, Google rolled out the feature to its News pages and search results on Friday. “Fact Check” tags will appear in News and search results, but the fact checks themselves will not be performed by Google. Instead, the feature relies on fact-checking organizations such as PolitiFact and Snopes, as well as reputable publishers including The New York Times and The Washington Post.
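
The tags are driven by structured data that fact-checking publishers embed in their pages using the schema.org ClaimReview type, which is how Google can surface verdicts it did not produce itself. Below is a hedged sketch of such markup assembled in Python; the claim, rating, organization and URLs are invented examples.

```python
# Sketch of the schema.org ClaimReview markup a fact-checking publisher
# embeds so search engines can surface a "Fact Check" tag.
# The claim, rating, organization and URLs are invented examples.
import json

claim_review = {
    "@context": "https://schema.org",
    "@type": "ClaimReview",
    "url": "https://example-factchecker.org/checks/example-claim",
    "claimReviewed": "Example claim being checked.",
    "author": {"@type": "Organization", "name": "Example Fact Checker"},
    "datePublished": "2017-04-07",
    "reviewRating": {
        "@type": "Rating",
        "ratingValue": 1,
        "bestRating": 5,
        "worstRating": 1,
        "alternateName": "False",
    },
}

# Publishers place this JSON-LD inside a <script type="application/ld+json"> tag.
print(json.dumps(claim_review, indent=2))
```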

Google Develops AI That Can Detect Hateful Internet Speech

Google technology incubator Jigsaw has released software designed to help Web publishers moderate unruly comments on their sites. The software, called Perspective, is available free of charge to publishers that apply for access. Jigsaw used machine learning to train Perspective to identify toxic comments. Each comment is assigned a score, so that human moderators, or even readers, can filter out responses that score above a certain toxicity level. Perspective is part of Jigsaw’s Conversation AI initiative, and the team wants to help foster more civil discourse and rein in Internet trolls.
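
Perspective is exposed as a web API that returns a toxicity score between 0 and 1 for a piece of text, which is the score the article describes moderators filtering on. A minimal sketch of a request is below, using the publicly documented commentanalyzer endpoint; the API key and the threshold value are placeholders.

```python
# Minimal sketch of scoring a comment with Jigsaw's Perspective API and
# filtering on the returned toxicity score. API key and threshold are placeholders.
import requests

API_KEY = "YOUR_PERSPECTIVE_API_KEY"  # placeholder
URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"

def toxicity(text: str) -> float:
    """Return Perspective's TOXICITY summary score (0.0 to 1.0) for text."""
    body = {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    resp = requests.post(URL, params={"key": API_KEY}, json=body, timeout=10)
    resp.raise_for_status()
    return resp.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

THRESHOLD = 0.8  # placeholder cutoff a moderator might choose
comment = "Example reader comment."
if toxicity(comment) > THRESHOLD:
    print("Hold for human review")
else:
    print("Publish")
```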