Weekly Roundup 11/12/2020

By Elizabeth Sutterlin | November 12, 2020

Header image: silhouettes of human faces overlaid with computer-drawn polygons. Image credit: Alex Castro, The Verge

In the days leading up to the U.S. general elections, Facebook, Twitter, and other social media platforms rolled out a series of features meant to combat misinformation, including warning labels on false claims, bans on political ads, additional content moderators, and friction features that slowed the resharing of viral posts. While these emergency measures may not last forever, they could prove to be valuable tools for curbing election misinformation in other country contexts and for slowing the spread of gendered disinformation campaigns online.


Despite these recent measures, social media platforms have faced criticism for being ultimately ineffective at preventing the spread of misinformation. Some critics say the disclaimers and fact-checks were too little, too late: content flagged on Twitter, for example, can still be amplified on other media channels.


Top tech headlines curated for you:


Global Tech Policy:

  • The European Commission has charged Amazon with violating antitrust regulations, accusing the tech giant of using its dual role as both retailer and marketplace to stifle competition. The company allegedly harvests non-public data from third-party vendors on its platform to identify popular products, which it then copies and sells to boost its own retail business.
  • The Center for a New American Security released a report outlining a transatlantic strategy to address China, which included recommendations to curb digital authoritarianism and limit the spread of illiberal technology that enables human rights abuses.


Open Internet:

  • In an article for the East Asia Forum, a researcher from the University of Toronto's Citizen Lab argues that China's internet censorship and content moderation model has allowed coronavirus misinformation and extreme nationalist content to flourish online. In attempting to control information that could be critical of or destabilizing to the government, Chinese social media platforms often took down even neutral references to COVID-19.
  • The Ethiopian government has shut down internet and phone communications in its northern Tigray region as it conducts a military offensive against the Tigray People's Liberation Front (TPLF). Internet rights group Access Now has called on Ethiopia's prime minister to restore connectivity.
  • A recent blog post for Global Voices delves into the decline of digital rights and internet freedom in Africa, with a focused analysis of Ethiopia and Algeria. The authors discuss the growing use of internet shutdowns and throttling as tools of political control across the continent, often imposed under the pretext of administering national exams or preventing violent protest.
  • An article for The Guardian examines Telegram's role in organizing protesters in Belarus and around the world. The messaging app's combination of usability and encryption made it popular with protesters in Belarus this summer, as it has been in Hong Kong and Iran.
  • A statement from the Freedom Network group noted regressions in Pakistan's digital policies in 2020, which have shrunk the space for free expression and dissent online as hate speech and censorship became more common. Journalists and activists have faced scrutiny for their online activity, and female journalists in particular have endured harassment and vitriol from members of Pakistan's ruling party.


ICT4D:

  • A guest blog post for ICTWorks highlights five tips for inclusivity in human-centered design when working with people with disabilities and their communities. The author recommends considering accessibility at every stage of a project and ensuring that caregivers' voices are included in a way that preserves the autonomy of people with disabilities.


Disinformation:

  • A Facebook employee shared internally that the company's metric tracking "violence and incitement trends" increased by 45% between October 31 and November 5, as disinformation and calls to violence surrounding the U.S. election propagated online. After the platform's role in enabling genocide in Myanmar and fomenting political unrest in the Philippines, critics in both government and civil society are calling on Facebook not only to track metrics of violence, but also to take a more active stance in preventing it.
  • Facebook said on Friday that it removed seven separate networks of inauthentic accounts and pages active in Iran, Afghanistan, Egypt, Turkey, Morocco, Myanmar, Georgia, and Ukraine. Many of the networks were engaged in deceptive political influence campaigns, while others spread extremist content on the platform.
  • The Washington Post's tech columnist argues that Twitter's and Facebook's efforts to label misinformation around the 2020 U.S. elections were more successful as a PR move than as a safeguard for democracy and election integrity. While such labeling is a step in the right direction, recent studies of content labeling's effectiveness have yielded mixed results.


Cybersecurity:

  • Eva Galperin, the head of cybersecurity at the Electronic Frontier Foundation, told SC Magazine that until the tech industry does more to welcome marginalized voices and underrepresented groups, it will "continue to make the same mistakes." She calls on Silicon Valley companies to give impacted communities meaningful input into how tech products are designed, marketed, and sold.


Games for Good:

  • Four tech activists based in Beijing and the United States have created a browser game that teaches users about AI-powered censorship by letting them play as a content moderation engineer at a fictional tech company. At the end of the game, players can access a toolkit for countering algorithmic censorship in real life.


Countering Violent Extremism:

  • A recent paper from the Resonant Voices Initiative analyzes efforts to counter extremist messaging online. The author suggests ways to go beyond content moderation in order to promote digital counternarratives that deflate violent rhetoric and reinforce societal efforts to prevent violent behavior.


AI:

  • Portland, Maine, has passed legislation banning the use of facial recognition technology by law enforcement. Citizens surveilled in violation of the law are eligible for up to $1,000 in compensation, though the ban does not affect the use of facial recognition and surveillance software by private-sector companies.
  • Politico covered the failures of automated content moderation to make judgments about appropriate content on social media platforms after COVID-19 forced many companies to send their human content moderators home. Journalists using social media to report on war crimes in conflict zones found their accounts closed overnight with no option to appeal, while more content containing hate speech and child exploitation remained online than when human moderators were responsible for flagging it.
