Since 2016, Facebook has referred questionable posts to fact-checking teams at news organizations to determine whether they contain misinformation. Now Instagram (owned by Facebook) has adopted a similar policy, using image recognition to identify posts that may contain misinformation. Those posts are sent to Facebook’s fact-checkers for review; if deemed problematic, they are no longer recommended on the Explore tab or hashtag pages. The posts are not removed and remain in users’ main feeds and Stories carousels, but Instagram is also introducing a new policy to remove accounts after repeated violations.
“An Instagram spokesperson said the company is focused on making it harder for new users to be algorithmically exposed to misinformation, rather than stemming the reach of misinformation,” reports Wired.
The same spokesperson indicated that Facebook’s and Instagram’s approaches differ, at least in part, because Instagram lacks the reshare button that Facebook has. Additionally, unlike on Facebook, content in a user’s Instagram feed comes entirely from accounts they’ve chosen to follow.
But do these measures go far enough to curb the use of Instagram for the spread of misinformation and inflammatory content?
“In 2017, the platform was the go-to social network for the Russian propaganda arm known as the Internet Research Agency, which used more than 130 fake Instagram accounts to spread polluted information and reinforce cultural divisions among Americans,” according to Wired.
Part of Instagram’s approach will be integrated with Facebook’s efforts. When an image is identified as misinformation on Facebook, Instagram will use image recognition technology to identify similar posts on its own platform, catching them and reducing their reach on the Explore feed and beyond. Instagram will monitor the effectiveness of these new efforts in the coming weeks.
Instagram is also working on new rules for banning accounts, a move that follows its decision last week to ban far-right extremist Alex Jones, reports Engadget.
Under its current system, Instagram tolerates a certain percentage of violations within a given time frame before banning an account. But, as the company has found, that percentage-based threshold is more lenient toward accounts that post frequently. For that reason, the rules are about to change.
“With its new policy, Instagram said, accounts will be removed after an undisclosed amount of violations and an undisclosed window of time. And it’ll be the same bar for every user, regardless of how often (or not) they post on Instagram. The company said it doesn’t want to share the exact number of strikes it will put in place, or the timeframe for them, because it doesn’t want bad actors to take advantage of the system,” explains Engadget.
“Instagram Is Trying to Curb Bullying. First, It Needs to Define Bullying.,” The New York Times, 5/9/19