Twitter Earns Praise for Transparency in Its Research Findings

Twitter has earned praise for transparency after it published “unflattering” research findings. The company analyzed “millions of Tweets” in an attempt to measure how its recommendation algorithms handle political content, and subsequently reported that the platform amplifies more content from right-wing politicians and media outlets than from left-wing sources. The findings, which were released in late October, were well received at a time when social platforms are quick to tout positive findings but equally quick to discredit critical data, as was the case with Facebook and whistleblower Frances Haugen.

Results of the Twitter study, which evaluated the tweets of elected officials in seven countries — Canada, France, Germany, Japan, Spain, the UK and the U.S. — were reported in The Verge, which had previously accused Facebook of attempting to “smear and discredit Haugen.”

Twitter earned compliments as a tech platform willing to publish controversial findings “for the world to see.” Twitter’s blog post on the matter “was accompanied by a 27-page paper that further describes the study’s findings and research and methodology,” The Verge points out.

“In six out of seven countries — all but Germany — Tweets posted by accounts from the political right receive more algorithmic amplification than the political left when studied as a group,” Twitter’s findings reveal. Although more pronounced on the right, the algorithmic amplification of political content took place across the board when compared to the chronological timeline, “regardless of party or whether the party is in power.”

Since 2016, Twitter users have had the option of viewing tweets either in reverse chronological order or in an algorithmically ranked order that is influenced by recent interactions and adds automated content recommendations. Facebook, by contrast, sparked debate over its resistance to giving users the option of a purely chronological News Feed.

In addition, The Verge wrote over the summer that Twitter “hosted an open competition to find bias in its photo-cropping algorithms.” The winning entry revealed Twitter’s cropping algorithm “favors faces that are ‘slim, young, of light or warm skin color and smooth skin texture, and with stereotypically feminine facial traits.’”

The second- and third-ranked submissions showed that the photo-cropping algorithm exhibited ageism, disfavoring people with white or grey hair, and demonstrated a preference for English over Arabic script in images.