People have no trouble looking at a photo and understanding the 3D shapes of the objects within—people, cars, Shiba Inus. But computers, with little experience in the real world, aren’t so smart—yet. Now, scientists have created a new “unwrapping” method that brings machines much closer to that ability.

They started by teaching an algorithm to treat 3D objects as 2D surfaces. Imagine, for example, hollowing out a mountainous globe and flattening it into a rectangular map, with each point on the surface recording latitude, longitude, and altitude. After much practice, the new machine-learning algorithm learned to translate photos of 3D objects (like the first row of planes, above) into 2D surfaces, which can then be “stitched” back into 3D forms.

The researchers trained it to reconstruct cars, airplanes, and hands in almost any posture. Whereas an earlier method warped sedans into hatchbacks and rendered planes birdlike (see the second row of airplanes, above), the new method could more accurately infer 3D shapes from photos, the authors reported this week at the Institute of Electrical and Electronics Engineers Conference on Computer Vision and Pattern Recognition in Honolulu.

The new program, called SurfNet (after the word “surface”), could also invent brand-new, realistic-looking 3D shapes for cars, planes, and hands. Future applications might include designing objects for virtual and augmented reality, creating 3D maps of rooms for robot navigation, and building computer interfaces controlled with hand gestures. Thumbs up.
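The globe analogy above can be sketched in code: flatten a surface into a 2D grid whose “pixels” store 3D coordinates, so that an image-processing network can work with it. This is only a minimal illustration of the general unwrapping idea, not the authors’ actual SurfNet pipeline; for simplicity it assumes a smooth unit sphere rather than a mountainous globe.

```python
import numpy as np

def unwrap_sphere(height=64, width=128):
    """Flatten a unit sphere's surface into a height x width x 3 grid.

    Rows correspond to latitude, columns to longitude; each cell stores
    the (x, y, z) position of that surface point -- an "image" whose
    colors are 3D coordinates.
    """
    lat = np.linspace(-np.pi / 2, np.pi / 2, height)   # rows: latitude
    lon = np.linspace(-np.pi, np.pi, width)            # cols: longitude
    lon_grid, lat_grid = np.meshgrid(lon, lat)
    x = np.cos(lat_grid) * np.cos(lon_grid)
    y = np.cos(lat_grid) * np.sin(lon_grid)
    z = np.sin(lat_grid)
    return np.stack([x, y, z], axis=-1)                # shape (H, W, 3)

geom_image = unwrap_sphere()
print(geom_image.shape)  # a flat 2D grid encoding the 3D surface

# Every stored point still lies on the original sphere, so connecting
# neighboring grid cells into quads "stitches" the map back into 3D.
radii = np.linalg.norm(geom_image, axis=-1)
print(np.allclose(radii, 1.0))
```

Because the unwrapped surface is just a regular 2D array, standard image-style convolutional networks can learn to predict it from a photo, which is the core trick the article describes.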