When scientists or special effects wizards want to simulate a flood or visualize an asteroid impact, they turn to programs called physics engines. But handcrafting such software to match nature requires time and expertise. Now, researchers have found a way for artificial intelligence (AI) to learn to simulate complex physical phenomena simply by watching the real thing.
This week at the International Conference on Machine Learning, the AI company DeepMind presented a new type of model called a Graph Network-based Simulator (GNS). The program can realistically recreate interactions among tens of thousands of particles of different materials over thousands of animation frames. What happens when you throw a pile of sand, a glob of “goop,” and a torrent of water into a box? Take a look at the video above.
The system uses “graph networks,” representing a scene as a network of interacting particles (each particle much bigger than a molecule—some clips above were later rendered in high resolution) that pass “messages” to one another about their positions, velocities, and material properties. The message passing, and how each particle responds to it, are learned through trial and error, by comparing the system’s forecasts with those of traditional physics engines. Once trained, the system can generalize to never-before-seen situations—predicting the behavior of many times more particles, or what would happen if you added obstacles like ramps, or shook up the box.
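The mechanics described above can be sketched in a few lines of code. This is a minimal, illustrative toy, not DeepMind’s implementation: particles within a cutoff radius are connected into a graph, and each “message” is just a neighbor’s relative position and velocity, aggregated with a fixed weight where the real system would use learned neural networks. The function names and the `weight` parameter are invented for this sketch.

```python
import numpy as np

def build_edges(positions, radius):
    """Connect every ordered pair of particles closer than `radius`."""
    n = len(positions)
    return [(i, j) for i in range(n) for j in range(n)
            if i != j and np.linalg.norm(positions[i] - positions[j]) < radius]

def message_passing_step(positions, velocities, edges, weight=0.1):
    """One round of message passing: each particle sums 'messages'
    (neighbors' relative positions and velocities), then updates its
    velocity and position. The fixed `weight` stands in for the
    learned message and update networks of a real graph network."""
    accel = np.zeros_like(positions)
    for sender, receiver in edges:
        rel_pos = positions[sender] - positions[receiver]
        rel_vel = velocities[sender] - velocities[receiver]
        accel[receiver] += weight * (rel_pos + rel_vel)
    new_velocities = velocities + accel
    new_positions = positions + new_velocities
    return new_positions, new_velocities

# Two stationary particles one unit apart drift toward each other
# once messages start flowing between them.
pos = np.array([[0.0, 0.0], [1.0, 0.0]])
vel = np.zeros_like(pos)
edges = build_edges(pos, radius=2.0)
pos, vel = message_passing_step(pos, vel, edges)
```

In the trained system, the message and update functions are neural networks whose parameters are fit so that rolled-out trajectories match those of a conventional simulator; only the graph structure and the simulation loop resemble this toy.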
In experiments, the new system was more accurate, and better at generalizing across various phenomena, than competing AI approaches, despite its simplicity. Beyond cool visuals, the researchers hope the method can help machines reason about the world around them. You know, in case a robot ever needs to dodge a gelatinous hippo.