Artificial Intelligence Takes a Trip on the London Underground

Researchers have devised a system that can learn sophisticated tasks, like figuring out where in the heck you are.

By Nathan Collins

(Photo: Dan Kitwood/Getty Images)

It’s a bit overwhelming, figuring out how to get from one place to another on the London Underground. So imagine how hard that navigation must be for a simple computer, especially one that has neither a map nor any clear instructions on how to get around. Yet researchers have done exactly that: They let an artificial intelligence loose on a (computer-simulated) Underground and watched as it learned to navigate the third-largest subway system in the world.

The key, Alex Graves, Greg Wayne, and their colleagues at Google DeepMind write in Nature, is adding a computer-like memory to a staple of artificial intelligence research, the neural network.

Neural networks first emerged in the 1950s in the form of the perceptron, which in a modern incarnation might take the individual pixels in a digital image and use them to answer simple questions—for example, is this a dog or a human? The idea is to train the perceptron by using feedback. First, show it some dogs and humans, then have it compute an answer based on a weighted sum of the inputs (that is, the pixels). If the perceptron answers incorrectly, adjust how each pixel is weighted, and repeat until it gets the answers right.
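For readers who want to see that training loop in code, here is a minimal sketch of a perceptron and its weight-update rule in Python. The tiny "pixel" data, the labels, and the learning rate are all invented for illustration, not taken from any real system.

```python
import numpy as np

# Toy "images": each row is a handful of pixel values, and each label
# is 1 for "dog" or 0 for "human" (all invented for this sketch).
images = np.array([[0.9, 0.1, 0.8],
                   [0.2, 0.7, 0.1],
                   [0.8, 0.2, 0.9],
                   [0.1, 0.9, 0.2]])
labels = np.array([1, 0, 1, 0])

weights = np.zeros(images.shape[1])  # one weight per pixel
bias = 0.0
learning_rate = 0.1

for epoch in range(20):
    for pixels, target in zip(images, labels):
        # Weighted sum of the inputs, thresholded to a yes/no answer.
        prediction = 1 if pixels @ weights + bias > 0 else 0
        # If the answer is wrong, nudge each pixel's weight toward
        # the correct answer and try again on the next pass.
        error = target - prediction
        weights += learning_rate * error * pixels
        bias += learning_rate * error

print(weights, bias)
```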

The original perceptron couldn’t answer anything but the simplest either/or questions. Over the decades, researchers developed more capable neural networks, often by layering perceptrons on top of one another in what’s called a feedforward network—so named because each layer takes a set of inputs, performs a set of simple computations, and feeds its results forward to the next layer. Feedforward networks and other neural network varieties have proved useful for a variety of tasks in computer science and psychology.
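The layering idea can be sketched just as briefly. In the toy example below, each layer computes weighted sums of its inputs and passes the results forward to the next layer; the layer sizes and random weights are invented and stand in for values a real network would learn during training.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(inputs, weights, biases):
    # Each layer takes its inputs, computes weighted sums, applies a
    # simple nonlinearity, and feeds the result forward.
    return np.maximum(0.0, inputs @ weights + biases)

# Invented sizes: 4 input values, a 3-unit hidden layer, 2 outputs.
w1, b1 = rng.normal(size=(4, 3)), np.zeros(3)
w2, b2 = rng.normal(size=(3, 2)), np.zeros(2)

x = rng.normal(size=4)          # some input (say, pixel features)
hidden = layer(x, w1, b1)       # first layer's results...
output = layer(hidden, w2, b2)  # ...fed forward to the next layer
print(output)
```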


Still, neural networks have their limits, in part because they don’t possess good memories—whatever memory they do have is encoded in the computations they perform, and those computations depend in turn on their (ongoing) training.

A simple way around that, the DeepMind team figured, was to add a memory module, where the network could store and retrieve information related to the situation it found itself in. They trained the resulting model, called a differentiable neural computer (DNC), by leading it down random paths on the Underground and letting it see where it ended up. (To be clear, the DNC is not a robot let loose on the actual Underground—everything in these experiments happened inside a computer.)
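DeepMind’s actual architecture uses differentiable read and write heads with several addressing mechanisms, so the sketch below is only a rough illustration of the general idea: a network paired with an external memory that it can write to and read from by similarity, separate from the weights it learns during training. Every size and value here is invented, and this is not the DNC itself.

```python
import numpy as np

rng = np.random.default_rng(1)

# External memory: a slate of slots the network can store things in,
# separate from the weights adjusted during training.
memory = np.zeros((16, 8))   # 16 slots, 8 numbers per slot (invented sizes)

def write(memory, key, value):
    # Soft addressing: slots whose contents resemble the key receive
    # more of the written value, keeping the operation differentiable.
    scores = memory @ key
    weights = np.exp(scores) / np.exp(scores).sum()
    return memory + np.outer(weights, value)

def read(memory, key):
    # Reading blends slot contents according to the same similarity.
    scores = memory @ key
    weights = np.exp(scores) / np.exp(scores).sum()
    return weights @ memory

# A controller network would produce these keys and values; here they
# are just random placeholders.
key, value = rng.normal(size=8), rng.normal(size=8)
memory = write(memory, key, value)
print(read(memory, key))
```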

To test the DNC, the team gave it a starting point and a sequence of instructions, such as “take the Victoria line north one stop.” They then asked the DNC to identify its location.
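The shape of those test queries is easy to picture as a tiny route-following program. The station graph below is a made-up fragment, not the data the researchers used; it only shows what the DNC had to work out for itself, without ever being handed the map.

```python
# A made-up fragment of a transit graph, just to show the shape of the
# task (the DNC had to learn its map from experience, not from a table).
graph = {
    ("Oxford Circus", "Victoria line", "north"): "Warren Street",
    ("Warren Street", "Victoria line", "north"): "Euston",
}

def follow(start, instructions):
    # Follow each instruction one hop at a time, as in the test queries.
    station = start
    for line, direction in instructions:
        station = graph[(station, line, direction)]
    return station

print(follow("Oxford Circus", [("Victoria line", "north"),
                               ("Victoria line", "north")]))
```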

After enough training, the DNC answered correctly 98.8 percent of the time—pretty good considering it was essentially asked to trace out a random, convoluted path using vague instructions and a mental map of the Underground it had learned entirely from experience. It was a versatile learner too: In additional experiments, the DNC learned to solve block puzzles and answer simple reading comprehension questions—for example, after hearing, “John is in the playground. John picked up the football,” determining where the football lies (it’s in the playground).

Those tasks required relatively little memory—just 512 individual memories in all. By expanding memory, the researchers hope to tackle more involved tasks, including scene comprehension and sophisticated language processing.
