In preparation for DisInformation Week, NDI hosted Philip Howard and Samuel Woolley from the Computational Propaganda Lab at the Oxford Internet Institute to discuss their new report on the effects of online disinformation. NDI President Ken Wollack opened by noting that the vision of social media as a means of creating open societies around the world has given way to an acknowledgement that social media has instead become yet another battleground for democratic practices. Ken also quoted the Oxford report on the need for tech companies to “design for democracies” and ensure their platforms are built in a way that discourages disinformation. Phil spoke about the need for public policy to help address the problem rather than relying solely on tech companies to self-regulate, though he cautioned that such policy must not be overly burdensome or restrictive.
A recurring theme was how disinformation campaigns often push out multiple contradictory false narratives to muddy the waters and obscure what is true. As Sam put it, “The goal is to confuse. It’s not to necessarily sell a fake story, it’s to make people so apathetic about politics and policy in general that then they don’t really want to engage anymore.” Another point was that bots come in multiple types: those that try to act like real users and share stories with real people, and those that are clearly not real people and exist purely to like or retweet stories to game a site’s metrics and get a story trending. One interesting finding was that many bot networks are run not by organizations but by individuals looking to push a certain view or advance a cause. Other notable trends included bots being used in conversations to attack individuals, targeting prominent women in particular, and organizations using bots to test and refine their messaging.