Google's parent company, Alphabet, reportedly launched a "sensitive topics" review of research papers earlier this year, adding a layer of scrutiny to papers written by its AI scientists on topics including gender, race, and political ideology. The new review procedure asks researchers to consult with legal, public relations, and policy teams before publishing evaluations of Google's services that address bias. In one case, Google told employees to "strike a positive tone" in a paper on content recommendation algorithms, encouraging the authors to remove references to YouTube and other Google products when discussing algorithmic flaws that can lead users down rabbit holes toward more obscure and potentially radicalizing content.
This news of a separate sensitive-topics review process, which employees told Reuters began in June, comes shortly after one of Google's leading AI scientists, Timnit Gebru, was fired earlier this month for refusing to retract her paper studying the negative social and environmental consequences of artificial intelligence trained on massive datasets. Senior scientists at the company feel that Google has begun to interfere with the study of potential technological harms in order to preserve its image. The tech giant not only funds its internal AI research but also provides massive amounts of grant money to academic institutions, and some now worry those grants carry strings attached that could prevent computer scientists from sharing the truth of their work.
Top weekly headlines curated for you:
Global Tech Policy:
- China has begun taking a harder line toward big internet companies, opening an antitrust investigation into the e-commerce group Alibaba and most recently ordering its sister company, the financial-technology giant Ant Group, to fix what regulators describe as a litany of business failings. The antitrust probe centers on practices such as preventing merchants who sell on Alibaba from also selling on other platforms.
- The government of Spain has announced its plan to prepare a list of "safe" 5G mobile providers after conducting comprehensive risk assessments, avoiding an outright ban on Huawei or other 5G providers that have come under scrutiny. The Spanish 5G Cybersecurity Act will require companies to conduct risk assessments every two years.
- The BBC reported that Tanzania has been using Twitter's copyright policy to target and silence activists who expose government corruption. The social media company cracked down on one human rights activist after receiving hundreds of complaints that the account had violated the US Digital Millennium Copyright Act shortly before Tanzania's elections on October 28th.
- A group of NGOs has come together to condemn the sale of surveillance technology to repressive governments in the Middle East and North Africa, citing the risks the technology poses to journalists and human rights activists. A representative of Amnesty Germany has called on the EU to impose export controls on cyber-surveillance tools and to require companies to conduct due diligence to avoid selling to customers with poor human rights track records.
- Paradigm Initiative, a digital rights organization based in West Africa, recently released Ayeta, a digital rights toolkit for civil society groups and human rights defenders whose work frequently makes them a target of online attacks and harassment. The toolkit provides numerous cybersecurity tips, case studies, and links to resources to assist in countering these threats.
- In a piece for the Atlantic, Evelyn Douek describes how an onslaught of COVID-19 misinformation forced tech companies like Facebook, Twitter, and YouTube to take on an unwanted role as "arbiters of truth." More aggressive content moderation decisions had a domino effect, as smaller platforms followed the lead of tech giants in taking down accounts linked to the QAnon conspiracy in 2020.
- Twitter plans to create labels for bot accounts next year, after years of calls from misinformation experts to disclose more information about automated accounts on the platform. The company will also update its policy for "verified" accounts in 2021 and roll out new memorial features for accounts whose owners have passed away.
- The most recent issue of Logic Magazine published an interview with an anonymous Amazon employee about the company's business ventures, cybersecurity services, and partnerships with law enforcement. The article reveals the callous attitude emerging at Amazon and other companies about data breaches and cybercrime: while they can be embarrassing for corporations, breaches of people's data are "not that bad of a problem unless Congress comes calling."
- The EU's Agency for Fundamental Rights has warned against the use of AI in predictive policing, medical diagnoses, and targeted advertising. The rights watchdog urged policymakers to consider how current legislation applies to AI, as well as to ensure that future laws governing AI will protect human rights.
Other Tech News:
- A man in New Jersey was arrested and jailed based on a faulty facial recognition match last year. He is now suing the police department that arrested him for violation of his civil rights, false arrest, and false imprisonment. His is the third known case of police arresting an innocent person after a facial recognition error; all three of those arrested have been Black men.
- A small business has filed a lawsuit against Facebook, claiming that the tech giant substantially misrepresented the effectiveness of its advertisement targeting. Internal Facebook documents released as part of the legal process revealed managers discussing the platform's low interest precision for advertisements and their "abysmal" targeting data accuracy outside the United States. This lawsuit comes as Facebook has tried to position itself as an ally to small businesses.