New Twitter Policy Aims to Combat Fake Photos and Video

Twitter announced yesterday that it would be more assertive in identifying fake and manipulated content on its platform. Beginning next month, the company plans to label or remove tweets that feature manipulated images and video. While short of an outright ban, the new policy is meant to address the growing frustration of users over disinformation spread via social platforms. It also highlights the challenge social media companies face in balancing freedom of speech, parody and satire, and false or manipulated content. On Monday, YouTube announced its own plans to better manage misleading political content on its site.

According to the Twitter Blog, “You may not deceptively share synthetic or manipulated media that are likely to cause harm. In addition, we may label tweets containing synthetic and manipulated media to help people understand the media’s authenticity and to provide additional context.”

The company will use the following criteria in evaluating content (one illustrative reading of how they combine is sketched after the list):
• Are the media synthetic or manipulated?
• Are the media shared in a deceptive manner?
• Is the content likely to impact public safety or cause serious harm?
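
Read alongside the blog post’s language quoted above, the criteria suggest a simple escalation: manipulated media may be labeled, and it is removed when it is also shared deceptively and likely to cause harm. The Python sketch below encodes only that reading; the function name and return strings are illustrative, not Twitter’s actual enforcement logic.

```python
def policy_action(is_manipulated: bool, shared_deceptively: bool,
                  likely_to_harm: bool) -> str:
    """Illustrative reading of Twitter's three criteria (hypothetical
    helper, not Twitter's real enforcement code)."""
    if is_manipulated and shared_deceptively and likely_to_harm:
        return "remove tweet"           # deceptive, harmful manipulation
    if is_manipulated:
        return "may label for context"  # manipulated but not clearly harmful
    return "no action"
```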

Twitter gathered feedback from more than 6,500 global users to formulate its new policy. The survey indicated that “more than 70 percent of people who use Twitter said ‘taking no action’ on misleading altered media would be unacceptable,” while “nearly 9 out of 10 individuals said placing warning labels next to significantly altered content would be acceptable.”

Additionally, “more than 90 percent of people who shared feedback support Twitter removing this content when it’s clear that it is intended to cause certain types of harm,” and “more than 75 percent of people believe accounts that share misleading altered media should face enforcement action.”

“In January, Facebook banned ‘deepfake’ videos from its platform,” reports The New York Times. However, doctored videos of politicians such as Nancy Pelosi and Joe Biden “would not be removed under the policy because they had been edited with video editing software, not artificial intelligence.”

“Our approach does not focus on the specific technologies used to manipulate or fabricate media,” explained Yoel Roth, Twitter’s head of site integrity. “Whether you’re using advanced machine learning tools or just slowing down a video using a 99-cent app on your phone, our focus under this policy is to look at the outcome, not how it was achieved.” 
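
Roth’s “99-cent app” point is easy to make concrete: no machine learning is needed to produce a misleading slowed-down clip. As a rough illustration (not any specific app’s method; the function name and factor here are made up), a few lines of Python with OpenCV can re-encode a video at a reduced frame rate:

```python
import cv2

def slow_down(src: str, dst: str, factor: float = 0.75) -> None:
    """Write the same frames at a reduced frame rate so the clip
    plays back slower -- a trivial, non-AI "cheap fake" edit."""
    cap = cv2.VideoCapture(src)
    fps = cap.get(cv2.CAP_PROP_FPS)
    size = (int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)),
            int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)))
    out = cv2.VideoWriter(dst, cv2.VideoWriter_fourcc(*"mp4v"),
                          fps * factor, size)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        out.write(frame)  # frames are untouched; only the clock changes
    cap.release()
    out.release()
```

Every frame is left intact and only the playback clock changes, which is precisely why an outcome-focused policy reaches further than one keyed to detecting specific tools.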

Earlier this week, on the day of the Iowa caucuses, YouTube announced it would ban misleading political content on its video platform ahead of the upcoming presidential election. The ban targets election-related videos that pose a “serious risk of egregious harm.”

According to NYT, this marks “the first time the video platform has comprehensively laid out how it will handle such political videos and viral falsehoods.”

“Over the last few years, we’ve increased our efforts to make YouTube a more reliable source for news and information, as well as an open platform for healthy political discourse,” said Leslie Miller, VP of government affairs and public policy at YouTube. She added that policies would be enforced “without regard to a video’s political viewpoint.”

As social media companies struggle to navigate these waters, a new tool has emerged that could help journalists identify doctored images. This week, a Google-owned company named Jigsaw “unveiled a free tool that researchers said could help journalists spot doctored photographs — even ones created with the help of artificial intelligence,” explains NYT. “Jigsaw, known as Google Ideas when it was founded, said it was testing the tool, called Assembler, with more than a dozen news and fact-checking organizations around the world.”

Journalists can feed an image into Assembler, and the program will analyze it for traces of earlier manipulation. These could include “color pattern anomalies, areas of an image that have been copied and pasted several times over, and whether more than one camera model was used to create an image.”
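
Assembler’s detectors are proprietary, but one of the signals quoted above (regions copied and pasted within a single image) corresponds to a well-known forensic technique called copy-move detection. The sketch below is a generic illustration of that technique using OpenCV, not Assembler’s actual code, and its thresholds are assumptions chosen for readability. It matches an image’s ORB keypoints against themselves and flags strong matches between distant regions:

```python
import cv2
import numpy as np

def copy_move_candidates(image_path: str,
                         min_pixel_gap: float = 40.0,
                         max_hamming: int = 30):
    """Return pairs of points whose local appearance is nearly
    identical despite being far apart -- a hallmark of copy-paste
    edits. Thresholds are illustrative, not tuned."""
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    orb = cv2.ORB_create(nfeatures=5000)
    keypoints, descriptors = orb.detectAndCompute(img, None)
    if descriptors is None:
        return []

    # Match descriptors against themselves: the best match is the
    # keypoint itself, so the second-best is the clone candidate.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    pairs = matcher.knnMatch(descriptors, descriptors, k=2)

    suspicious = []
    for pair in pairs:
        if len(pair) < 2:
            continue
        _, candidate = pair
        p1 = np.array(keypoints[candidate.queryIdx].pt)
        p2 = np.array(keypoints[candidate.trainIdx].pt)
        if (candidate.distance < max_hamming
                and np.linalg.norm(p1 - p2) > min_pixel_gap):
            suspicious.append((tuple(p1), tuple(p2)))
    return suspicious
```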

“We observed an evolution in how disinformation was being used to manipulate elections, wage war and disrupt civil society,” said Jigsaw CEO Jared Cohen. “But as the tactics of disinformation were evolving, so too were the technologies used to detect and ultimately stop disinformation.” 

Assembler may eventually be used to verify the authenticity of photographs and video (even deepfakes), making it a vital tool in the fight against disinformation.
