Facebook Debates Self-Regulation to Combat Misinformation

Facebook is facing many challenges, none more pressing than the posts and memes covertly created by Russian government-linked organizations seeking to influence the 2016 U.S. election. According to a dozen current and former employees, debate rages inside Facebook over how to deal with the issue. One side, championing free speech, believes that nothing should be censored; the other is worried about the problems created by this laissez-faire approach. Meanwhile, the company is reportedly in full-on defense mode.

The New York Times reports that, “next week, Facebook’s general counsel will be among several tech industry executives expected to testify at a series of Congressional hearings about the role the technology industry played in Russian interference of last year’s election.”

The Internet Research Agency, a “so-called troll farm” linked to the Kremlin, “amassed enormous followings for various Facebook Pages that masqueraded as destinations for discussion about all sorts of issues, from the Black Lives Matter movement to gun ownership.” These promoted posts led to a viral tsunami of fake news.


“The algorithms Facebook sets up that prioritize and control what’s shown in its powerful news feed are a lot more consequential than the ads,” said University of North Carolina associate professor Zeynep Tufekci. “The company seems stuck between exercising this massive power as it wishes, but also terrified about the conversation about its power and how to handle it.”

Elsewhere, NYT reports that, “Facebook has been much less vocal about the abuse of its services in other parts of the world,” in particular the issue of ethnic cleansing of the Rohingya Muslims of Myanmar, “fueled, in part, by misinformation and anti-Rohingya propaganda spread on Facebook.” In Myanmar, Facebook is “a primary news source” for many in the country, and “doctored photos and unfounded rumors have gone viral” there.

“In a lot of these countries, Facebook is the de facto public square,” said Human Rights Watch senior Internet researcher Cynthia Wong. “Because of that, it raises really strong questions about Facebook needing to take on more responsibility for the harms their platform has contributed to.”

Although Facebook is not alone in being an outlet for “viral misinformation,” its immense reach, and the speed of its growth in the developing world, “has made it an especially potent force among first-time Internet users, who may not be appropriately skeptical of what they see online.”

The Wall Street Journal reports that, in light of these issues, “many outside Facebook refuse to wait for the company to solve these problems.” Facebook, Google and Twitter are “relatively unregulated by federal and state law,” and now legislators in the U.S. and Europe are paying attention. “Sens. John McCain (R., Ariz.), Amy Klobuchar (D., Minn.) and Mark Warner (D., Va.) [are] proposing the Honest Ads Act, which would force Internet companies to tell users who funded political ads,” something “most forms of mass media are required” to do already.

Regulations may also come from state regulators, such as state attorneys general. In Europe, the EU’s General Data Protection Regulation, which will go “live” in May 2018, “opens a Pandora’s box of potential liabilities for all tech companies around how they handle and exploit individuals’ data, guard against breaches and transfer information across national borders.”
