Big Tech, DARPA Ramp Up Deepfake Research, Detection

Movies and TV shows have combined real and CG imagery for decades, for the purpose of entertainment. But now we’re seeing the rise of deepfakes, which blend fake and real elements in still images or videos with a malicious or harmful aim. Many Big Tech companies that have profited from letting users post and share photos are now turning their attention to battling deepfakes. According to cybersecurity startup Deeptrace, the number of deepfakes online nearly doubled between December 2018 and August 2019, reaching 14,678. Continue reading Big Tech, DARPA Ramp Up Deepfake Research, Detection

DARPA Attempts to Stop Automated Disinformation Attacks

The Defense Advanced Research Projects Agency (DARPA) published a concept document for the Semantic Forensics (SemaFor) program, aimed at stopping “large-scale, automated disinformation attacks” by detecting fakes among thousands of audio clips, photos, stories and videos. As the 2020 presidential election approaches, U.S. officials are working to prevent hackers from spreading disinformation on social platforms, but Senate Majority Leader Mitch McConnell won’t consider any election security laws. Continue reading DARPA Attempts to Stop Automated Disinformation Attacks

Scientists and Military Look for Key to Identifying Deepfakes

The term “deepfakes” describes the use of artificial intelligence and computer-generated imagery to make a person (usually a well-known celebrity or politician) appear to do or say “fake” things. For example, actor Alden Ehrenreich’s face was recently replaced with Harrison Ford’s in footage from “Solo: A Star Wars Story.” The technique can be intended simply for entertainment or for more sinister purposes. The more convincing deepfakes become, the more unease they create among AI scientists and the military and intelligence communities. As a result, new methods are being developed to help combat the technology. Continue reading Scientists and Military Look for Key to Identifying Deepfakes