Watch the two simulated robots above, and you’ll notice a big difference. Even though both of their “brains” have evolved over 300 generations to allow them to walk, only one succeeds; the other falls flat on its back.
That’s because only the bot on the left has learned to adapt to new circumstances. Artificial intelligence (AI) often relies on so-called neural networks, algorithms inspired by the human brain. But unlike ours, AI brains usually don’t learn new things once they’ve been trained and deployed; they’re stuck with the same thinking they’re born with.
So, in a new study, researchers created nets with “Hebbian rules”—mathematical formulas that allow AI brains to keep learning. Instead of remaining static, the synaptic weights—the values dictating how activity spreads from one neuron to another—change based on experience. Then, the team partially removed the left front leg of both bots, forcing them to compensate for the injury. Both bots struggled at first, but the Hebbian bot walked nearly seven times as far, the researchers report this month at the Conference on Neural Information Processing Systems.
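The core idea is simple enough to sketch in a few lines. The study’s actual plasticity rules are more elaborate, but a minimal, hypothetical version of Hebbian learning—“neurons that fire together, wire together”—looks like this: after each forward pass, every weight is nudged in proportion to the activity of the neurons it connects, so the network keeps changing with experience instead of staying frozen after training.

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_out = 4, 2
weights = rng.normal(scale=0.1, size=(n_out, n_in))  # synaptic weights
eta = 0.01  # learning rate (illustrative value, not from the study)

def hebbian_step(x, weights):
    """One forward pass plus a basic Hebbian update: dw = eta * post * pre."""
    post = np.tanh(weights @ x)           # post-synaptic activity
    weights = weights + eta * np.outer(post, x)  # strengthen co-active connections
    return post, weights

x = rng.normal(size=n_in)                 # one "experience" (an input)
before = weights.copy()
activity, weights = hebbian_step(x, weights)
changed = not np.allclose(before, weights)  # the weights moved with experience
```

A conventional net would leave `weights` untouched at this point; here every input reshapes them slightly, which is what lets a Hebbian bot keep adapting after an injury.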
Hebbian learning could someday improve algorithms used to recognize images, translate languages, or drive. In another test, a Hebbian net drove a video game race car about 20% better than its non-Hebbian counterpart. Even if you never plan on owning a robotic crawler like the one above, we can all benefit from AI that learns on its feet (or wheels).