Can Artificial Intelligence Solve Our Problems With Harassment Online?

The right kind of A.I. can respond to any threat of violence, thereby encouraging bystanders to take action to maintain responsible online communities.

Imagine a teenager who’s about to distribute nude photos online, or send a racist or bullying comment about someone at school. Then imagine a guardian angel who could swoop down to block these online missiles before they strike—and then warn the kid about the hazards of sharing things like that.

That guardian angel can be artificial intelligence (AI). As our children’s world becomes alarmingly rife with bullying and other forms of harassment, even the most vigilant parents may not be able to shield them from a range of destructive behaviors and the trauma that can follow, up to and including suicide.

A recent study by the Pew Research Center found that an astonishing 59 percent of teenagers in the United States have experienced bullying and harassment online. This includes everything from offensive name-calling and spreading of false rumors to sharing explicit images and making outright physical threats.

At the same time, no other generation has been as concerned about issues like anxiety, depression, and bullying. According to another Pew study published this year, 70 percent of teens see anxiety and depression as a major problem among their peers, more than any other issue, followed by bullying (55 percent).

In addressing this epidemic, every one of us needs to take action: owners of online services and social networks, schools, and parents. Parents are the first line of defense. There is no replacement for good parenting when it comes to vigilance in identifying the signs that a child is being subjected to online harassment—or carrying it out. A parent's efforts at education and prevention can go a long way.

But good parenting alone is not always enough. Children from poorer or troubled backgrounds sometimes lack the necessary parental oversight, and the Pew study found that teenagers from lower-income families are roughly twice as likely to experience online bullying.

A lot of responsibility, though, also lies with the online platforms. By allowing anonymity, they empower bullies who do not feel accountable. As a result, young and hyper-connected members of Generation Z are often targeted by fake accounts that purvey hate and abuse.

This prompts the question of how online platforms can better police their forums. We are all aware of the struggles that Facebook, Twitter, and others have encountered in stopping bots and third-party accounts that spread false information and rumors. The same challenge arises in efforts to identify bullies, and thereby to punish and deter them—all of which poses major difficulties for the platforms' moderators.

Social media, online sites, and games can do a lot to protect their users. After all, we protect our homes and cars with alarms and other technology. We accept security guards at parties, clubs, and public spaces. It’s time to implement such solutions in the online world. In the near future, AI will protect us against stalkers and violence in a world where a majority of teenagers are not willing to report cyberbullying to parents or educators.

At Samurai Labs, we work on AI that can distinguish among various types of cyber-violence—from bullying, to personal attacks and blackmail, to sexual harassment—and react in real time according to community guidelines, within specific social contexts.

Our system could block, and send a warning to, that teenager distributing nude photos or bullying a classmate, while still respecting the teenager’s privacy.
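
To make that concrete, here is a minimal sketch, in Python, of how a detect-and-intervene pipeline of this kind can be structured. It is illustrative only: the categories, the keyword-matching classifier stub, and the blocking threshold are all invented for this example, and a real system would rely on trained models rather than keyword rules.

```python
# Hypothetical sketch of a detect-and-intervene moderation pipeline.
# Categories, thresholds, and actions are invented for illustration;
# they do not describe Samurai Labs' production system.

from dataclasses import dataclass

@dataclass
class Detection:
    category: str      # e.g. "bullying", "blackmail", "sexual_harassment", or "none"
    confidence: float  # 0.0 to 1.0

def classify_message(text: str) -> Detection:
    """Stand-in for a trained classifier. A real system would use a
    language model trained on labeled examples of each kind of
    cyber-violence, not keyword matching."""
    lowered = text.lower()
    if "send me photos or else" in lowered:
        return Detection("blackmail", 0.9)
    if any(word in lowered for word in ("loser", "idiot")):
        return Detection("personal_attack", 0.7)
    return Detection("none", 0.0)

def moderate(text: str, threshold: float = 0.6) -> str:
    """Withhold the message and warn the sender when a detection
    exceeds the community's threshold; otherwise deliver it."""
    detection = classify_message(text)
    if detection.category != "none" and detection.confidence >= threshold:
        return (f"BLOCKED ({detection.category}): message withheld; "
                f"sender warned about the consequences of posting it.")
    return "DELIVERED"

print(moderate("you are such a loser"))         # BLOCKED (personal_attack): ...
print(moderate("see you at practice tonight"))  # DELIVERED
```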

Leaving rude behavior, hate speech, or harassment to pass without consequence can legitimize it and encourage others to follow. By contrast, acting when an algorithm perceives a threat can be beneficial. It is in people's nature not to help strangers as long as they assume that others will, a phenomenon psychologists call diffusion of responsibility. Yet if one good Samaritan chooses to engage, others tend to follow. Countering that diffusion sits at the center of our interventions at Samurai: our AI is trained to respond to every detection of violence, thereby encouraging bystanders to take action and creating momentum for responsible online stewardship.

Let’s say that AI detects a culprit who repeatedly sends harassing messages to women in a video game. The first time, a bot disguised as a player can step in and say, “Hey, that was rude, let’s not address each other this way here.” Another bot, posing as another player, could step in and say, “Yeah, I agree with you—stay cool, man, and be nice,” to reinforce the first reaction. That would encourage others to follow suit and stand up to bullies.
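
The escalation behind such staged responses can be sketched in a few lines of Python. Again, everything here is illustrative: the canned messages and the offense-counting scheme are invented for the example, not drawn from any deployed system.

```python
# Hypothetical sketch of staged bot interventions in a game chat.
# The messages and escalation order are invented for illustration.

INTERVENTIONS = [
    "Hey, that was rude, let's not address each other this way here.",
    "Yeah, I agree, stay cool and be nice.",
    "Continued harassment here will be reported to the moderators.",
]

class InterventionBot:
    """Tracks offenses per player and escalates the response each time,
    so the first objection is gentle and later ones carry more weight."""

    def __init__(self) -> None:
        self.offense_counts: dict[str, int] = {}

    def respond(self, player: str) -> str:
        count = self.offense_counts.get(player, 0)
        self.offense_counts[player] = count + 1
        # Reuse the strongest message once the escalation list runs out.
        return INTERVENTIONS[min(count, len(INTERVENTIONS) - 1)]

bot = InterventionBot()
print(bot.respond("player42"))  # first offense: gentle correction
print(bot.respond("player42"))  # second offense: reinforcing voice
print(bot.respond("player42"))  # third offense: warning of a report
```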

In one experiment, Kevin Munger, then at New York University, used bot accounts to reply to white Twitter users who had directed a racist slur at someone: “Hey man, just remember that there are real people who are hurt when you harass them with that kind of language.” This simple intervention was enough to measurably reduce the offenders' subsequent use of racist language, he found.

We believe that such ways of using human monitoring and AI for good—rather than for censorship—are the best means to decrease online violence. Combined with good parenting, advocacy, and education, AI can add a necessary layer of protection, making the Internet a safer and more enjoyable space for all.

Generation Z. (Photo: Pacific Standard)

Understanding Gen Z, a collaboration between Pacific Standard and Stanford’s Center for Advanced Study in the Behavioral Sciences, investigates the historical context and social science research that helps explain the next generation. Join our newsletter to see new stories, and let us know your thoughts on Twitter, Facebook, and Instagram.

Understanding Gen Z was made possible by Stanford University’s Center for Advanced Study in the Behavioral Sciences (CASBS) and its director, Margaret Levi, who hosted the iGen Project. Further support came from the Knight Foundation.

See more in this series:

Is Generation Z More Scared Than Earlier Generations?

The messages they’re getting in the media are terrifying—and the sustained sense of real threats could leave this generation with psychological scars.

Have Headphones Made Gen Z More Insular?

Gen Z kids spend an average of four hours a day listening to audio with headphones. Watching my children consume their music with plugs stuffed in their ears, I wonder if we’re all missing out on opportunities to engage.

There’s a Crisis of Reading Among Generation Z

As young people read less and less, they may be short-circuiting their reading brains.

Stop Associating Video Games With Youth Gun Violence

Separating our conversations about school shootings from our conversations about video games will improve our approach to both.

I Helped Create the Internet, and I’m Worried About What It’s Doing to Young People

At some level, we are all experiencing the Web’s toxic possibilities. But as with other toxins, young developing bodies and brains are more susceptible.

Tumblr Helped Me Plan My Eating Disorder. Then It Helped Me Heal.

There are upsides and downsides to social media—and I’m proud to be part of a generation tackling these issues to create a healthier future.
