Song Mi Lee joined Michigan Minds as part of a Social Responsibility Special Series, which aims to inform digital users about current issues regarding social media. Lee is a PhD student at the University of Michigan School of Information. Her research focuses on online harassment, disinformation, and misinformation on social media. She studies information literacy and how designing platform environments that better support it can be part of a long-term solution.
Lee recently co-authored a study with colleagues from the U-M School of Information and U-M Law School regarding online harassment. This research examined which psychological characteristics predicted how internet users behaved in aggressive online conflict, finding that virtually anyone can be a cyberbully, not just those with extreme antisocial traits. Lee shares insight from this work, in which American adults were asked to self-report whether they had ever engaged in online harassing behaviors during conflicts with others.
“We had a list of 16 different behaviors in the survey that are commonly bundled under the term online harassment. We avoided using the term harassment in our survey questions, because it is controversial and stigmatizing. We also asked participants to respond to seven different psychological scales that measure tendencies… These scales were selected based on previous studies on aggression and conflicts. And finally we asked some demographic questions,” she explains.
The survey found that over 50% of the 300 participants had engaged in at least one harassing behavior. The research also found that self-reported perpetration was strongly predicted by the psychological characteristics the team measured.
“Our findings indicate that online trolls, bullies, or harassers are not some special species, nor are they pathologically anti-social. Online harassment may occur when someone may be having a terrible day and snap—some may feel like they are entitled to punish wrongdoers or people with ‘bad ideas,’ or someone harasses you and you harass them back,” Lee says. “So yes, any of us can make bad choices and harm others in online interactions.”
Lee explains that social media platforms rely on after-the-fact remediation of online harassment, such as deleting posts or banning reported users, intervening only after the harm has already been done. Drawing on her research findings, she shares ways that platforms could prevent online harassment rather than relying on remediation alone.
She explains that instead of focusing solely on content moderation, social media platforms should focus on harassment prevention. In relation to her research, Lee expands on how platforms should actively take users’ psychological characteristics into account, and shares ways this could be done, for example by using machine learning to infer individuals’ psychological tendencies from their behavior patterns.
“Analyzing how quickly someone clicks on something, or what kind of tone their language has when commenting to others, or when the topic is xyz—that data could be used to identify who has tendencies to be at higher risk of perpetrating harassment,” she says.
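The kind of behavioral-signal scoring Lee describes could, in principle, be sketched as a simple logistic model over logged user signals. The feature names, weights, and threshold below are purely illustrative assumptions for the sketch, not the study’s actual method or any platform’s real system:

```python
import math

# Hypothetical behavioral features a platform might log for a user.
# Feature names and weights are illustrative assumptions, not from the study.
FEATURE_WEIGHTS = {
    "rapid_click_rate": 1.2,      # how quickly the user reacts/clicks (0..1)
    "negative_tone_score": 2.0,   # tone of the user's comment language (0..1)
    "hot_topic_engagement": 0.8,  # activity on contentious topics (0..1)
}
BIAS = -2.5  # keeps the baseline risk low for neutral behavior


def harassment_risk(features: dict) -> float:
    """Logistic-regression-style score in [0, 1]; higher = higher inferred risk."""
    z = BIAS + sum(
        FEATURE_WEIGHTS[name] * features.get(name, 0.0)
        for name in FEATURE_WEIGHTS
    )
    return 1.0 / (1.0 + math.exp(-z))


calm_user = {"rapid_click_rate": 0.1, "negative_tone_score": 0.1,
             "hot_topic_engagement": 0.2}
heated_user = {"rapid_click_rate": 0.9, "negative_tone_score": 0.9,
               "hot_topic_engagement": 0.9}

print(round(harassment_risk(calm_user), 3))    # low inferred risk
print(round(harassment_risk(heated_user), 3))  # higher inferred risk
```

In a real deployment the weights would be learned from labeled data rather than hand-set, and such scoring raises obvious privacy and fairness questions, which is part of why Lee frames it as one possible prevention direction rather than a ready-made fix.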
Lee provides advice for those experiencing or witnessing online harassment, and shares that reporting an inappropriate comment can help the platform understand and respond better to harassment. She warns digital users to stay alert, and encourages those who see harassment online to intervene if they are comfortable, and de-escalate the aggression.
“Even small actions like clicking the report button or downvoting hateful comments can be helpful. It not only helps the platforms to understand and respond to harassment, but more importantly, it can contribute to setting the social norm that harassment is not acceptable.”