Social Media Algorithms and the Infrastructure of Disinformation

Irene Pasquetto, PhD, joins Michigan Minds for the Social Responsibility Series to discuss her research on misinformation, disinformation, and social media algorithms. Pasquetto is an Assistant Professor of Information at the U-M School of Information, where her research focuses on issues of scientific disinformation and the ethics of information technologies and digital curation.

Pasquetto’s most recent research studies how misinformation and disinformation work, exploring how various groups on the Internet produce and share what they consider to be information, but which may be better described as “alternative facts.” Her work aims to understand both the rhetorical and the media tactics these groups use to justify their arguments online.

In a recent paper titled “Disinformation as Infrastructure,” Pasquetto examines how Italian QAnon supporters designed and maintained a distributed, multi-layered “infrastructure of disinformation” spanning multiple social media platforms, messaging apps, online forums, and alternative media channels. Pasquetto explains that during the first year of the pandemic, this group built a fast-growing network of interconnected websites and platforms, extending beyond Facebook and Twitter by creating new databases.

“They use this infrastructure—what we call disinformation infrastructure—to present and share evidence of their theories, but also to recruit new followers and expand their networks.”

The most important observation to emerge from this work, Pasquetto says, was that it took a long time for social media platforms to intervene and remove these accounts to stop the spread of disinformation. This delayed response gave these groups the opportunity to fall back on established infrastructure like forums, websites, and messaging apps.

“The main idea here is that social media platforms work as springboards for disinformation campaigns, which means that deplatforming operations are very time-sensitive. So the more disinformation infrastructures grow over the Internet, and the longer we wait to deplatform these groups, the harder they are to eradicate.”

In relation to her work, Pasquetto expands on the dangers posed by the spread of misinformation, which she says can lead digital users to make decisions that harm themselves or others. “The main danger, the theme that kind of worries me and motivates my research, is the fact that these groups who are creating and spreading misinformation want to present all information as equal—as if anyone can provide useful, reliable information that can actually guide key societal decisions, which is actually super hard to do, especially in times of crisis.”

She encourages social media users to be critical of what they read on the internet, warning that not all information is equally valid and reliable. Research and factual information are slow to produce and require substantial work and expertise to spread, whereas disinformation is easy and inexpensive to create and distribute.

Pasquetto also discusses her conversation with the Columbia Journalism Review about how social media algorithms accelerate the spread of disinformation, and how platforms profit from that spread. She describes how social media platforms predict what a user will want to see or interact with based on the personal data in the user’s account.

“Platforms can try to reduce the harmful content by kind of tweaking this algorithm over time. For example, Facebook, at some point, made sure that posts that receive a lot of angry emojis are recommended less by the algorithm than posts that receive a lot of likes in an attempt to reduce the circulation of upsetting or sensationalist content. However, at the moment, users cannot really influence the way the algorithm works—at least not directly.”
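The tweak Pasquetto describes can be pictured as adjusting weights in an engagement-based ranker. As a toy illustration only (Facebook’s actual ranking system is proprietary and uses far more signals; all weights and field names here are hypothetical), a feed might score posts by a weighted sum of reaction counts, with “angry” reactions weighted well below likes:

```python
# Toy illustration of reaction-weighted feed ranking.
# All weights are hypothetical; real platform ranking systems are
# proprietary and use many more signals than reaction counts.

REACTION_WEIGHTS = {
    "like": 1.0,
    "love": 1.5,
    "angry": 0.2,  # hypothetical down-weighting of "angry" reactions
}

def score(post: dict) -> float:
    """Score a post by the weighted sum of its reaction counts."""
    return sum(
        REACTION_WEIGHTS.get(reaction, 1.0) * count
        for reaction, count in post["reactions"].items()
    )

def rank_feed(posts: list[dict]) -> list[dict]:
    """Order posts from highest to lowest engagement score."""
    return sorted(posts, key=score, reverse=True)

feed = [
    {"id": "sensationalist", "reactions": {"angry": 100, "like": 5}},
    {"id": "ordinary", "reactions": {"like": 40}},
]
ranked = rank_feed(feed)
# With these weights, the post with 40 likes (score 40.0) outranks
# the post with 100 angry reactions and 5 likes (score 25.0).
```

Under these made-up weights, a sensationalist post that draws mostly angry reactions ranks below an ordinary post with fewer but more positive reactions, which is the intended effect of the change Pasquetto describes. Note that, as she points out, the weights live entirely on the platform side: nothing a user does directly alters them.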