Well-Meaning Bots Duke It Out on Wikipedia

Just like Wikipedia’s human editors, automated computer programs designed to combat vandalism and create new pages sometimes end up fighting each other.

By Nathan Collins

(Photo: Wikimedia Commons)

If humans sometimes get into fights, whether on Twitter or with fists, might artificial intelligence (AI) get into scuffles as well? Yes, according to new research: Even the relatively dumb Web robots that help maintain Wikipedia get into frequent, often long-lasting disputes with each other—a finding that doesn’t bode well for the future of AI.

It might come as a surprise that Wikipedia actually employs bots—automated computer programs that perform simple, repetitive tasks—but given the massive number of pages the encyclopedia hosts, it makes a lot of sense. One of these bots’ major tasks is policing obvious vandalism (think: words related to the male anatomy appearing in articles not related to the male anatomy). Some bots even create wiki pages. Early on, for example, “rambot” created around 30,000 entries on American towns and cities. There are now several thousand Wikipedia-approved bots operating on the site.

With all those different programs running, Oxford Internet Institute researchers Milena Tsvetkova, Ruth García-Gavilanes, Luciano Floridi, and Taha Yasseri started to wonder: Might those bots ever start arguing—so to speak—over changes to a wiki page?

Yes, as it turns out. “We find that, although Wikipedia bots are intended to support the encyclopedia, they often undo each other’s edits and these sterile ‘fights’ may sometimes continue for years,” the researchers write in a paper posted at arXiv.org. “Further, just like humans, Wikipedia bots exhibit cultural differences. Our research suggests that even relatively ‘dumb’ bots may give rise to complex interactions, and this provides a warning to the Artificial Intelligence research community.”


To measure conflicts between bots, the researchers focused on “reverts,” when an editor—human or bot—undoes the changes another editor has made. Long chains of reverts, essentially two or more editors going back and forth ad nauseam over the content of an entry, are a sign of conflict.
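The paper’s exact detection pipeline isn’t spelled out here, but the underlying idea is simple: an edit counts as a revert if it restores the exact content of an earlier revision of the page. The Python sketch below illustrates that idea on a toy edit history; the editor names and article text are made up, and this is an illustration rather than the researchers’ actual code.

```python
# Minimal sketch (not the researchers' pipeline): flag an edit as a "revert"
# when it restores the exact text of an earlier revision, then record who
# reverted whom. Editor names and article text below are illustrative.

import hashlib
from collections import Counter

def find_reverts(revisions):
    """revisions: list of (editor, text) tuples in chronological order.
    Returns a list of (reverting_editor, reverted_editor) pairs."""
    seen = {}      # content hash -> index of the earliest revision with that text
    reverts = []
    for i, (editor, text) in enumerate(revisions):
        digest = hashlib.sha1(text.encode("utf-8")).hexdigest()
        if digest in seen:
            # This edit restores an earlier state, so the editors of the
            # intervening revisions have been reverted by the current editor.
            for reverted_editor, _ in revisions[seen[digest] + 1 : i]:
                if reverted_editor != editor:
                    reverts.append((editor, reverted_editor))
        else:
            seen[digest] = i
    return reverts

# Toy example: two bots undoing each other's changes on one article.
history = [
    ("HumanA", "Springfield is a city."),
    ("BotX",   "Springfield is a city in the United States."),
    ("BotY",   "Springfield is a city."),                       # reverts BotX
    ("BotX",   "Springfield is a city in the United States."),  # reverts BotY
]
print(Counter(find_reverts(history)))
```

Chains of such pairs, with the same two editors reverting each other back and forth, are what the study treats as a conflict.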

Going over every edit made on Wikipedia from 2001 to 2010, the researchers separated bot edits from human edits, revealing in the process that, while growth in the number of bots leveled off somewhat, the number of bot-to-bot reverts continued to grow.

“In general, bots revert each other a lot,” the team writes. On average, each bot reverted another 105 times over 10 years, compared to just three times for humans. Bot conflicts also tended to unfold over longer periods of time than human ones, likely because bots review many pages in sequence, while humans tend to continuously monitor a small number of entries.

Finally, bots were considerably more likely than humans to end up in long-lasting, reciprocal revert battles. “In contrast, humans tend to have highly unbalanced interactions,” where one editor reverts another and everyone leaves it at that.

“Our analysis shows that a system of simple bots may produce complex dynamics and unintended consequences,” the researchers write, including conflict between otherwise benevolent autonomous programs. “Although such disagreements represent a small proportion of the bots’ editorial activity, they nevertheless bring attention to the complexity of designing artificially intelligent agents.”
