Actually, the internet's always been this bad
A new study considers 30 years of comments, with surprising conclusions
The internet is a festering, antisocial hellscape that has only gotten worse with time. That claim is held so widely — and so unequivocally — that it's become a ground truth of life online.
But a fascinating new study in Nature complicates the presumption that online discourse is bad and worsening. A team of Italian researchers evaluated more than half a billion comments spanning 30 years, and in a March paper concluded that online discourse is no more "toxic" today than it was in the early 1990s.
"The toxicity level in online conversations has been relatively consistent over time, challenging the perception of a continual decline in the quality of discourse," said Walter Quattrociocchi, one of the study's authors, in an email. "While the platforms and how we use them have evolved, human behaviors in these spaces have remained surprisingly stable."
Given the noxious virtual muck we all flail around in, some of you may well be wondering if Quattrociocchi — the head of the Data and Complexity for Society Lab at Rome's Sapienza University — is, in fact, for real. Does he know about QAnon? Kiwifarms? LibsofTikTok? True crime fanatics or Star Wars stans? Has he never seen Nextdoor after a bike lane installation? Has he logged into Twitter since it became X?!
Reader, he has. (Or, well — I assume; I did not specifically ask.) Quattrociocchi and his team aren't denying the toxicity of the social internet.
Instead, they find that early online conversations — far from avoiding the negativity and combativeness that we associate with modern social media — actually followed similar patterns.
It's not the internet, babe. It's human nature.
You don't have to go home, but…
To come to that rather dispiriting conclusion, Quattrociocchi and his team analyzed half a billion comments from eight platforms between 1992 and 2022. To determine the toxicity of any given comment, they ran it through Perspective API, an automated classifier also used by a number of social platforms and major publishers.
Perspective API claims to identify speech that is "rude, disrespectful or unreasonable" and "likely to make someone leave a discussion" — a squishy definition, admittedly, but one that encompasses a broad range of objectionable speech and behavior. After scoring the comments, the researchers looked at how toxicity accumulated across conversations, from '90s Usenet to 2010s YouTube and Twitter.
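(For the curious, the scoring step looks roughly like this. Below is a minimal Python sketch against Perspective API's public comments:analyze endpoint, not the researchers' actual pipeline; the API key is a placeholder, and the 0.6 "toxic" cutoff is an illustrative choice of ours rather than a figure from the paper.)

```python
import requests  # pip install requests

API_KEY = "YOUR_PERSPECTIVE_API_KEY"  # placeholder; keys come from Google
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       f"comments:analyze?key={API_KEY}")

def toxicity_score(comment: str) -> float:
    """Return Perspective API's TOXICITY score (0 to 1) for one comment."""
    payload = {
        "comment": {"text": comment},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    resp = requests.post(URL, json=payload, timeout=10)
    resp.raise_for_status()
    return resp.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

# Illustrative cutoff: treat scores above 0.6 as "toxic." (Our assumption,
# not necessarily the threshold the study used.)
for text in ["Thanks, that was a genuinely helpful explanation.",
             "You're an idiot and everyone here hates you."]:
    score = toxicity_score(text)
    print(f"{score:.2f}  {'TOXIC' if score > 0.6 else 'ok'}  {text!r}")
```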
Those patterns proved surprisingly consistent across time and platforms: Overall, the study found that the prevalence of both toxic speech and highly toxic users was extremely low. But the longer any conversation goes on, on virtually any platform, the more toxic it becomes. At the same time, conversations tend to involve fewer, more active participants as they stretch on.
It's like a certain kind of bar around last call: Most people have gone home for the night, but the stragglers are loud, uninhibited and prone to a certain level of mischief and/or aggression.
"Longer conversations potentially offer more opportunities for disagreements to escalate into toxic exchanges, especially as more participants join the conversation," Quattrociocchi said.
Everything in moderation
That finding held true across seven of the eight platforms the team researched. By and large, those platforms also exhibited similar shares of toxic comments. On Facebook, for instance, roughly 4 to 6% of the sampled comments failed Perspective API's toxicity test, depending on the community and subject matter. On YouTube, by comparison, the share was 4 to 7%; on Usenet, 5 to 9%.
Even infamously lawless, undermoderated communities like Gab and Voat didn't fall far from the norm for more mainstream platforms: About 13% of Gab's comments were toxic, the researchers found, as were between 10 and 19% of Voat's.
There's something deeply unfashionable and counterintuitive about all of this. The suggestion that online platforms have not single-handedly poisoned public life is entirely out of step with the very political discourse the internet is said to have polluted.
In recent years, U.S. and European lawmakers have sought to regulate online speech by compelling platforms to take action on dangerous, false or otherwise toxic content, ranging from hate speech to vaccine disinformation.
That narrative reached something of a high point in the U.S. last week, when President Biden signed legislation that could ban TikTok in part over concerns that the app fosters misinformation and prejudice. There is something unique to the design and ownership of this specific app, the logic goes, that makes it especially toxic.
Quattrociocchi said it would be a mistake to assume his team's findings suggest that moderation policies or other platform dynamics don't matter — they absolutely "influence the visibility and spread of toxic content," he said. But if "the root behaviors driving toxicity are more deeply ingrained in human interaction," then effective moderation might involve both removing toxic content and implementing larger strategies to "encourage positive discourse," he added.
Toward less toxic spaces
Consider Reddit's science communities, which contained the least toxicity (a mere 1%!) of all the communities this study considered. The site's largest science subreddit, r/science, is a masterclass in proactive moderation. More than 1,000 volunteer mods, many of them scientists, filter out untrustworthy or outdated studies, personal anecdotes and off-topic or offensive comments, among other things.
But the virtuous culture of r/science also goes beyond the content mods remove: The group also invites a certain standard of conversation by verifying credentialed scientists, reinforcing its norms often (and automatically), and taking firm positions against things like climate change denial and anti-vax conspiracies.
It's worth noting two limitations of this paper: By its authors' own admission, it doesn't quite capture the disruptive effect of coordinated trolling campaigns, which can be far less predictable than individual behavior. The study also considered only text-based media, at a time when a great deal of online discourse plays out in video.
If its findings still feel off to you, there might also be another culprit, Quattrociocchi said: People feel that online discourse has gotten worse because the growth of social media platforms, and the ease of commenting and sharing there, has increased the visibility of toxic content.
That creates "a bias in our perception of overall discourse quality," he said.
So: Happy posting! It's not all that bad.
Caitlin Dewey is a reporter and essayist based in Buffalo, N.Y. She was the first digital culture critic at the Washington Post and has hired fake boyfriends, mucked out cow barns and braved online mobs in pursuit of stories for outlets including The New York Times, The Atlantic, The Guardian, The Cut, Elle, Slate and Cosmopolitan.