Over the past few years, there have been extensive changes in content moderation policy on social media platforms. As companies like X and Meta loosen these policies, users have reported an increase in hate speech. So what caused this shift?
Professor Cuihua (Cindy) Shen, who specializes in computational communication, provided insight into the evolving policies of major social media platforms, the rise of hate speech, and the role of algorithms in its spread.
“My research has been focused on social media. More recently I’ve focused on misinformation, specifically multimodal misinformation in media ecosystems,” Shen explained.
The Evolution of Hate Speech Online
While there are many personal anecdotes regarding a perceived increase in hate speech, concrete data on the matter is hard to come by.
“Hate speech is a very fuzzy concept,” Shen explained. “Different platforms define it differently, and those definitions evolve over time.”
This definitional drift is especially visible on platforms like X, formerly known as Twitter. After Elon Musk’s takeover of the company in 2022, X significantly reduced its trust and safety team, leading to what users describe as a more hostile environment. In January 2025, Meta followed X’s lead, loosening its rules on hate speech and rolling back its fact-checking programs.
Algorithms and Hate Speech
The design of social media itself plays a major role in the spread of hate speech. Shen explained that people naturally gravitate toward others with similar opinions in a phenomenon known as homophily. This strengthens online echo chambers where extreme views go unchallenged.
“Humans have a tendency to bond with those who are similar to them,” Shen said. “We can just unfollow, mute, or block anyone whose views don’t align with ours, making it much easier to isolate ourselves from opposing perspectives.”
At the same time, social media platforms are designed to maximize engagement, and emotionally charged content tends to perform best. The result is a cycle in which contentious and hateful content spreads rapidly.
“Unfortunately, outrage and hostility tend to drive engagement,” said Shen. “The more people interact with extreme content, the more it gets promoted.”
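To make the dynamic Shen describes concrete, here is a minimal, hypothetical sketch of engagement-based feed ranking in Python. The Post fields and scoring weights are illustrative assumptions, not any platform’s actual system, but they show why content that provokes reactions tends to surface first.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    replies: int
    reshares: int

def engagement_score(post: Post) -> float:
    """Hypothetical score: the weights are illustrative assumptions,
    not any platform's real values. Replies and reshares count more
    than likes because they tend to generate follow-on activity."""
    return post.likes + 3 * post.replies + 5 * post.reshares

def rank_feed(posts: list[Post]) -> list[Post]:
    # Sorting purely by predicted engagement means posts that provoke
    # strong reactions rise to the top, regardless of their content.
    return sorted(posts, key=engagement_score, reverse=True)
```

Under a scheme like this, a post that angers readers into replying can outrank a post that is merely liked, which is the feedback loop Shen points to.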
Hate Speech and Misinformation
Hate speech and misinformation are closely intertwined. Shen pointed to instances in which conspiracy theories and false narratives have fueled hate speech online.
“Think about the misinformation that spread during the COVID-19 pandemic,” Shen explained. “Many false claims about the origins of the virus had xenophobic undertones, and that led to real-world consequences like a rise in hate crimes.”
The rise of AI-generated images and other synthetic content has further complicated the picture. Shen noted that as these tools improve, distinguishing real content from fake will become increasingly difficult.
Looking Forward
As digital platforms continue to evolve, the challenge of keeping hate speech at bay while protecting free speech remains complex. Shen emphasized the need for algorithmic changes that de-emphasize engagement metrics that reward outrage. The central challenge for the future of social media regulation is cultivating environments where productive conversation, not hostility, thrives.
“We need a shift in how we design these platforms,” Shen explained. “Instead of amplifying outrage for engagement, companies should prioritize meaningful interactions that foster dialogue, not division.”
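One way to read Shen’s suggestion, sketched under the same illustrative assumptions as the ranking example above, is a score that discounts predicted engagement by an estimate of hostility, so outrage no longer translates directly into reach. The weights and the penalty term are hypothetical, not a documented platform formula.

```python
def quality_adjusted_score(likes: int, replies: int, reshares: int,
                           toxicity: float) -> float:
    """Illustrative alternative to pure engagement ranking: the same
    hypothetical engagement weights, discounted by a toxicity estimate
    (0.0 = benign, 1.0 = highly hostile). Both the weights and the
    linear penalty are assumptions, not any platform's real system."""
    engagement = likes + 3 * replies + 5 * reshares
    return engagement * (1.0 - toxicity)
```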