DARPA Attempts to Stop Automated Disinformation Attacks

The Defense Advanced Research Projects Agency (DARPA) published a concept document for its Semantic Forensics (SemaFor) program, which aims to stop “large-scale, automated disinformation attacks” by detecting fakes among thousands of audio clips, photos, stories and videos. As the 2020 presidential election approaches, U.S. officials are working to prevent hackers from spreading disinformation on social platforms, although Senate Majority Leader Mitch McConnell has declined to take up election security legislation.

Bloomberg quotes Syracuse University assistant professor Jennifer Grygiel, who said the DARPA system won’t be “perfect” until there is legislative oversight. “The risk factor is social media being abused and used to influence the elections,” she added. “There’s a huge gap and that’s a concern.”

At Stanford University’s Center for International Security and Cooperation, Andrew Grotto notes that the technology has advanced so quickly that “a decade ago, today’s state-of-the-art would have registered as sci-fi.” He added, “There is no reason to think the pace of innovation will slow any time soon.”

Although Facebook chief executive Mark Zuckerberg “played down fake news as a challenge” in the wake of the 2016 election, he has since backpedaled on that assessment. Even so, Facebook made what it called an “execution mistake” when a doctored video of House Speaker Nancy Pelosi was posted and not immediately removed.

“Researchers can already produce convincing fake videos, generate persuasively realistic text, and deploy chatbots to interact with people,” said Grotto. “Imagine the potential persuasive impact on vulnerable people that integrating these technologies could have.”

DARPA hopes that it can “spot fake news with malicious intent before [it goes] viral … [via] a comprehensive suite of semantic inconsistency detectors [that] would dramatically increase the burden on media falsifiers, requiring the creators of falsified media to get every semantic detail correct, while defenders only need to find one, or a very few, inconsistencies.” Current systems, said the agency, are prone to “semantic errors,” such as “software not noticing mismatched earrings … weird teeth, messy hair and unusual backgrounds.”
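SemaFor’s detector designs are not public, but the asymmetry DARPA describes (a forger must get every semantic detail right, while a defender needs only one inconsistency) maps naturally onto an ensemble that flags media as soon as any single check fires. The Python sketch below illustrates that idea; every detector and field name is a hypothetical placeholder, not SemaFor code.

```python
# Minimal sketch of the detection asymmetry DARPA describes: a forger must
# pass every semantic check, while a defender needs only one check to fire.
# All detectors and fields here are hypothetical placeholders, not SemaFor code.
from typing import Callable, List

# Each detector inspects a media item's metadata and returns True when it
# finds a semantic inconsistency (mismatched earrings, odd backgrounds, etc.).
Detector = Callable[[dict], bool]

def flag_if_any_inconsistency(item: dict, detectors: List[Detector]) -> bool:
    """Flag the item as suspect as soon as any single detector fires."""
    return any(detector(item) for detector in detectors)

# Toy detectors operating on precomputed, assumed item fields.
def mismatched_accessories(item: dict) -> bool:
    return item.get("left_earring") != item.get("right_earring")

def implausible_background(item: dict) -> bool:
    return item.get("background_score", 1.0) < 0.2

detectors: List[Detector] = [mismatched_accessories, implausible_background]

item = {"left_earring": "hoop", "right_earring": "stud", "background_score": 0.9}
print(flag_if_any_inconsistency(item, detectors))  # True: one inconsistency suffices
```

The “any” aggregation is what shifts the burden: adding more independent detectors costs the defender little, while each one is another detail the falsifier must get exactly right.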

The agency will test the algorithm’s ability to “scan and evaluate 250,000 news articles and 250,000 social media posts, with 5,000 fake items in the mix” in a three-phase program over the next four years. Program manager Matt Turek said the project “will also include week-long hackathons.”
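Those test numbers imply a prevalence of just one fake per hundred items, a regime where false positives can easily swamp true detections. The back-of-the-envelope calculation below, using an assumed detector rather than any DARPA-stated metric, shows why precision matters at this base rate.

```python
# Illustration only: 5,000 fakes among 500,000 items is ~1% prevalence,
# so even a small false-positive rate buries the true detections.
total_items = 250_000 + 250_000   # news articles + social media posts
fakes = 5_000
genuine = total_items - fakes

# Assume a hypothetical detector with 90% recall and a 2% false-positive rate.
true_positives = 0.90 * fakes      # 4,500 fakes caught
false_positives = 0.02 * genuine   # 9,900 genuine items wrongly flagged

precision = true_positives / (true_positives + false_positives)
print(f"flagged: {true_positives + false_positives:,.0f}, precision: {precision:.2f}")
# ~14,400 flags, precision ~0.31: most flags are false alarms at this base rate.
```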

DARPA is also running the MediFor (Media Forensics) program, which is “trying to plug a technological gap in image authentication, as no end-to-end system can verify manipulation of images taken by digital cameras and smartphones.” “While many manipulations are benign, performed for fun or for artistic value, others are for adversarial purposes, such as propaganda or misinformation campaigns,” said DARPA.
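MediFor’s internals are likewise unpublished, but one classic heuristic in image forensics is error level analysis (ELA): recompress a JPEG and diff it against the original, since regions that were edited and saved at a different compression level recompress unevenly. A minimal sketch using Pillow (an assumed dependency; the input path is hypothetical):

```python
# Toy error-level-analysis check, one classic image-forensics heuristic.
# This is not MediFor code. Requires Pillow (pip install Pillow).
from io import BytesIO
from PIL import Image, ImageChops

def error_level_analysis(path: str, quality: int = 90) -> float:
    """Return the mean pixel difference after one JPEG recompression pass."""
    original = Image.open(path).convert("RGB")
    buffer = BytesIO()
    original.save(buffer, format="JPEG", quality=quality)
    buffer.seek(0)
    recompressed = Image.open(buffer).convert("RGB")
    diff = ImageChops.difference(original, recompressed)
    pixels = list(diff.getdata())
    return sum(sum(px) for px in pixels) / (len(pixels) * 3)

score = error_level_analysis("photo.jpg")  # "photo.jpg" is a hypothetical input
print(f"mean error level: {score:.2f}")
```

Unusually high or spatially uneven error levels can hint at manipulation, though benign edits trigger them too, which is why production systems combine many such signals rather than relying on any single check.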

Grygiel said the timeline for SemaFor is “too slow” and called the program perhaps “a bit of PR.” “Educating the public on media literacy, along with legislation, is what is important,” she said. “But elected officials lack motivation themselves for change, and there is a conflict of interest as they are using these very platforms to get elected.”
