Fighting Online Trolls With Bots
Saiph Savage, 13 Jan 17

The wonder of internet connectivity can turn into a horror show if the people who use online platforms decide that instead of connecting and communicating, they want to mock, insult, abuse, harass and even threaten each other. In online communities since at least the early 1990s, this has been called “trolling.” More recently it has been called cyberbullying. It happens on many different websites and social media systems. Users have been fighting back for a while, and now the owners and managers of those online services are joining in.

The most recent addition to this effort comes from Twitch, one of a few increasingly popular platforms that allow gamers to play video games, stream their gameplay live online and type back and forth with people who want to watch them play. Players do this to show off their prowess (and in some cases make money). Game fans do this for entertainment or to learn new tips and tricks that can improve their own play.

When spectators get involved, they can help a player out. Saiph Savage, CC BY-

Large, diverse groups of people engaging with each other online can yield interesting cooperation. For example, in one video game I helped build, people watching a stream could make comments that gave the player tangible in-game help, such as slowing down or attacking enemies. But among the thousands of people tuning in daily to watch gamer Sebastian “Forsen” Fors play, for instance, at least some try to overwhelm the chat or hijack it away from the game itself. This can be a mere nuisance, but can also become a serious problem, with racism, sexism and other prejudices coming to the fore in toxic and abusive comment threads.

In an effort to help its users fight trolling, Twitch has developed bots – software programs that can run automatically on its platform – to monitor discussions in its chats. At present, Twitch’s bots alert the game’s host, called the streamer, that someone has posted an offensive word. The streamer can then decide what action to take, such as blocking the user from the channel.
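Twitch has not published its implementation, but the core mechanism – scan each chat message for flagged terms and alert the streamer rather than acting autonomously – can be sketched in a few lines of Python. Everything below (the function name, the placeholder word list, the message format) is a hypothetical illustration, not Twitch’s actual code:

```python
# Minimal sketch of a chat-moderation bot: scan each message against a
# blocklist and alert the streamer, leaving the decision to a human.
# The blocklist below is a placeholder, not a real list of offensive terms.

OFFENSIVE_WORDS = {"badword1", "badword2"}

def check_message(username, message):
    """Return an alert string for the streamer if the message is flagged, else None."""
    words = {w.strip(".,!?").lower() for w in message.split()}
    flagged = words & OFFENSIVE_WORDS
    if flagged:
        # The bot only notifies; the streamer decides whether to block the user.
        return f"ALERT: '{username}' used flagged word(s): {', '.join(sorted(flagged))}"
    return None

# Hypothetical usage with a fake chat message:
alert = check_message("viewer123", "wow, badword1!")
if alert:
    print(alert)  # the streamer sees this and chooses what action to take
```

The design choice worth noticing is that the bot never bans anyone itself: it surfaces a possible problem and keeps the human streamer in the loop.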

Trolls can share pornographic images in a chat channel, instead of having conversations about the game. Chelly Con Carne/YouTube, CC BY-ND

Beyond just helping individual streamers manage their audiences’ behavior, this approach may capitalize on the fact that online bots can help change people’s behavior, as my own research has documented. For instance, a bot could approach people who use racist language, question them about it and suggest other ways of interacting.

Using bots to affect humans

In 2015 I was part of a team that created a system that uses Twitter bots to do the activist work of recruiting humans to do social good for their community. We called it Botivist.

We used Botivist in an experiment to find out whether bots could recruit people to contribute ideas about tackling corruption, rather than just complaining about it. We set up the system to watch Twitter for people complaining about corruption in Latin America by tracking the keywords “corrupcion” and “impunidad,” the Spanish words for “corruption” and “impunity.”

When it noticed relevant tweets, Botivist would reply, asking questions like “How do we fight corruption in our cities?” and “What should we change personally to fight corruption?” Then it waited to see whether people replied, and what they said. Of those who engaged, Botivist asked follow-up questions and invited them to volunteer to help fight the problem they were complaining about.
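In outline, Botivist’s recruitment loop can be sketched as follows. This is a simplified, hypothetical reconstruction: the real system ran against the Twitter API, which is stubbed out here, and the function names are mine, not Botivist’s.

```python
# Simplified sketch of the Botivist flow: watch for complaint keywords and
# reply with a call-to-action question. fetch_recent_tweets() and
# post_reply() are stand-ins for real Twitter API calls.
import itertools

KEYWORDS = {"corrupcion", "impunidad"}
QUESTIONS = itertools.cycle([
    "How do we fight corruption in our cities?",
    "What should we change personally to fight corruption?",
])

def fetch_recent_tweets():
    # Placeholder: a real implementation would query Twitter's search/stream API.
    return [{"user": "ana", "text": "Tanta corrupcion en mi ciudad..."}]

def post_reply(user, text):
    # Placeholder: a real implementation would post the reply via the API.
    print(f"@{user} {text}")

for tweet in fetch_recent_tweets():
    # Reply only to tweets complaining about corruption or impunity.
    if any(keyword in tweet["text"].lower() for keyword in KEYWORDS):
        post_reply(tweet["user"], next(QUESTIONS))
```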

We found that Botivist encouraged people to go beyond simply complaining about corruption, pushing them to offer ideas and engage with others who shared their concerns. Bots could change people’s behavior! We also found, however, that some individuals began debating whether – and how – bots should be involved in activism at all. Still, the experiment suggests that people who are comfortable engaging with bots online can be mobilized to work toward solutions, rather than just complaining about problems.

Humans’ reactions to bots’ interventions matter, and they inform how we design bots and what we tell them to do. In research at New York University in 2016, doctoral student Kevin Munger used Twitter bots to engage with people expressing racist views online. Calling out Twitter users for racist behavior reduced those users’ racist communications over time – but only if the bot doing the chastising appeared to be a white man with a large number of followers, two factors that confer social status and power. If the bot had relatively few followers or appeared to be a black man, its interventions were not measurably successful.

Raising additional questions

Bots’ ability to affect how people act toward each other online raises important issues our society needs to address. A key question is: What types of behaviors should bots encourage or discourage?

It’s relatively benign for bots to notify humans about specifically hateful or dangerous words – and let the humans decide what to do about it. Twitch lets streamers decide for themselves whether they want to use the bots, as well as what (if anything) to do if a bot alerts them to a problem. Streamers’ decisions not to use the bots reflect both technological factors and concerns about chat participation. In conversations I have seen among Twitch streamers, some described disabling the bots because they interfered with the browser add-ons those streamers already use to manage their chat spaces. Others have disabled the bots because they feel the bots hinder audience participation.
