This week, the European Commission is unveiling its European Democracy Action Plan, which will include potential new limits on "microtargeting" of political advertising and partisan messages across social media platforms. The plan will also outline revisions to the Commission's approach to disinformation, which officials say include suggestions for better access to data from social media companies on who buys political ads and how messaging is targeted at voters.
These EU actions follow the examples set in the United States and UK, where online political advertisements are highly regulated, a regime that has caught even non-political groups in the "social issues or politics" advertising ban. Facebook's top lobbyist has told reporters that advertisements on Facebook are now far more transparent than those in print and television media. Any potential legislation will also need to address the limits of the European Union's authority, since elections remain under the oversight of individual national governments.
Top weekly tech headlines curated for you:
Global Tech Policy:
- France will require big tech companies to pay its digital services tax, a 3% tax on revenue from digital services in the country. Google, Facebook, and Amazon are among the US tech firms that will have to pay the tax, which applies to companies with global revenue of more than €750 million ($894 million).
- Many of the most prominent COVID-19 contact tracing apps are changing their approaches to privacy and transparency. MIT Technology Review's COVID tracing tracker compares which services have strong protections for user data and which have lackluster safeguards in place.
- A new report from Amnesty International explores how Facebook and YouTube have increased their compliance with content takedown requests from censors in Vietnam, placing a harsh curb on Vietnamese users' rights to free online expression. Pro-democracy activists have lost faith in the platforms and often have no recourse once their content is "geo-blocked" from appearing in the country.
- Telecommunications company MTN has released its first transparency report in response to mounting pressure from civil society, which outlines how it handles personal data of users across the Middle East and Africa. In addition to what it has laid out so far, Access Now has called for MTN to provide greater transparency on internet shutdowns and suspension of services, and to clarify which authorities have a mandate to request data.
- Facebook’s independent Oversight Board announced the first six cases where it could overrule the social media company’s decisions to remove certain pieces of content from its platforms. The global board, which Facebook created in response to criticism of its handling of problematic content, said it had received 20,000 cases since it opened its doors in October.
- The Danish Institute for Human Rights has developed practical guidelines for businesses and private-sector actors seeking to conduct human rights impact assessments of their digital activities. The guide will allow businesses to take the steps necessary to mitigate the risks to human rights in our increasingly digitized world.
- A digital divide affecting Kyrgyzstan's remote rural regions has prevented children from accessing remote learning resources during the COVID-19 pandemic. But a team of volunteers from Kyrgyzstan's Open Internet Foundation carried a large solar panel on foot across a narrow mountain path to provide electricity and internet to children in the village of Zardali in the Batken region.
- In the days after the U.S. election, Facebook altered its algorithm to give greater prominence to authoritative news sources in users' feeds. Despite employee requests to keep the calmer, less polarized news feed, executives have decided to roll back these changes, which could affect how much time people spend on Facebook. The incident highlights the tension between Facebook's desire to curb disinformation and its desire to remain a dominant internet platform.
- Hosts of a Spanish-language radio station in the U.S. use their knowledge of the indigenous languages Mixteco, Zapoteco and Purépecha to debunk social media misinformation and rumors about COVID-19 among Mexican farm workers living around Santa Barbara. Radio producers had to find ways to describe the coronavirus in Mixteco, a 2,000-year-old language that lacks modern medical terminology.
- Since the outbreak of the COVID-19 pandemic, increased internet usage has been radicalizing more people in Bangladesh. There are concerns that the uptick in consumption of online extremist content and disinformation will lead to a new wave of intergroup violence.
- Twitter’s rollout of a new feature, disappearing Fleets, allows users to share content that automatically deletes after 24 hours. Ephemeral posts, such as Instagram Stories and now Fleets, raise concerns about lack of moderation and the spread of misinformation.
- Russian cryptocurrency miners have set up bitcoin mining operations in Georgia's breakaway region of Abkhazia, putting immense strain on its outdated electricity grid. Due to the low-regulation environment, the high energy consumption of Abkhazia's 150 "crypto-farms" has led to regular electricity blackouts for the residents, who report losing access to water and electricity more frequently than ever.
- A scaled-back version of Facebook's Libra cryptocurrency could be launched as early as January. The Libra Association plans to launch one stable cryptocurrency backed by the U.S. dollar after it gets the go-ahead from watchdogs in Switzerland.
- Recent research on AI photo-recognition services from Google, Amazon, and Microsoft found that the algorithms applied annotations related to physical appearance to three times as many photos of women as of men. While photographs of male officials were tagged as "businessperson" and "official," top labels for women were words like "smile" and "chin." The study is the latest addition to a body of literature suggesting that AI replicates, rather than removes, human biases.
- Researchers at Google have found that the way we train and test AI and neural networks on data is a poor predictor of how these programs will perform in real life. Across NLP systems and medical AIs, researchers found that models that should have been equally accurate according to the testing process performed very differently when tested with real-world data.
- Silicon Valley firm OpenAI has created GPT-3, an artificial intelligence model trained on a vast corpus of natural human language that can now write poetry, argue, and write code with some help from humans. However, it has drawn criticism during beta testing for producing toxic language when asked to discuss minority racial and religious groups.
Other Tech News:
- After Snapchat introduced a TikTok-esque video feature, Axios observed that all social media platforms have started to look the same. Where platforms once built distinct products to attract audiences, they now share similar features and differentiate themselves instead by their philosophies and values.