Artificial neural networks, computer algorithms that take inspiration from the human brain, have demonstrated impressive feats such as detecting lies, recognizing faces, and predicting heart attacks. But most computers can’t run them efficiently. Now, a team of engineers has designed a computer chip that uses beams of light to mimic neurons. Such “optical neural networks” could make any application of so-called deep learning, from virtual assistants to language translators, many times faster and more efficient.
“It works brilliantly,” says Daniel Brunner, a physicist at the FEMTO-ST Institute in Besançon, France, who was not involved in the work. “But I think the really interesting things are yet to come.”
Most computers work by using a series of transistors, gates that allow electricity to pass or not pass. But decades ago, physicists realized that light might make certain processes more efficient—for example, building neural networks. That’s because light waves can travel and interact in parallel, allowing them to perform lots of functions simultaneously. Scientists have used optical equipment to build simple neural nets, but these setups required tabletops full of sensitive mirrors and lenses, and for years photonic processing was dismissed as impractical.
Now, researchers at the Massachusetts Institute of Technology (MIT) in Cambridge have managed to condense much of that equipment to a microchip just a few millimeters across.
The new chip is made of silicon, and it simulates a network of 16 neurons in four “layers” of four. Data enters the chip in the form of a laser beam split into four smaller beams. The brightness of each entering beam signifies a different number, or piece of information, and the brightness of each exiting beam represents a new number, the “solution” after the information has been processed. In between, the paths of light cross and interact in ways that can amplify or weaken their individual intensities, the same way ocean waves can add or subtract from each other when they cross. These crossings simulate the way a signal from one neuron to another in the brain can be intensified or dampened based on the strength of the connection. The beams also pass through simulated neurons that further adjust their intensities.
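The flow described above—beams enter, interfere, pass through intensity-adjusting “neurons,” and exit as detected intensities—can be sketched numerically. The unitary matrices, the tanh-style nonlinearity, and the random values below are illustrative assumptions for a minimal simulation, not the actual parameters or physics of the MIT chip (which uses programmable on-chip interferometers):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_unitary(n):
    # The QR decomposition of a random complex matrix yields a unitary Q,
    # a lossless linear mixing that stands in for crossing, interfering beams.
    q, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
    return q

def neuron(amplitudes):
    # Hypothetical saturating nonlinearity standing in for the simulated
    # neurons that "further adjust" beam intensities; phase is preserved.
    return np.tanh(np.abs(amplitudes)) * np.exp(1j * np.angle(amplitudes))

def forward(beams, layers):
    # Each layer: interference (unitary mix) then a per-beam nonlinearity.
    for u in layers:
        beams = neuron(u @ beams)
    return np.abs(beams) ** 2  # detectors measure output intensities

layers = [random_unitary(4) for _ in range(4)]  # four "layers" of four
inputs = np.array([1.0, 0.5, 0.2, 0.8])        # input beam amplitudes
print(forward(inputs, layers))
```

In this picture, “training” the network would mean tuning the entries of the mixing matrices, which on the physical chip corresponds to adjusting the interferometer settings.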
Optical computation is efficient because once light beams are generated, they travel and interact on their own. You can guide them, with no added energy, using regular glass lenses, whereas transistors require electricity to operate.
The researchers then tested their optical neural network on a real-world task: recognizing vowel sounds. When trained on recordings of 90 people making four vowel sounds, conventional computers performed the task with relative ease: A computer simulating a 16-neuron network was right 92% of the time. When the scientists ran the same data set through the new chip, it came surprisingly close, with a success rate of 77%, while operating faster and more efficiently, they report this month in Nature Photonics. The researchers say they can improve performance with future adjustments.
“Part of why this is new and exciting is that it uses silicon photonics, which is this new platform for doing optics on a chip,” says Alex Tait, an electrical engineer at Princeton University who was not involved in the work. “Because it uses silicon, it’s potentially low cost. They’re able to use existing foundries to scale up.” Tait and colleagues have also developed a partially optical neural net on a chip, which they plan to publish soon in Scientific Reports.
Once the system includes more neurons and the kinks are worked out, it could supply data centers, autonomous cars, and national security services with neural nets that are orders of magnitude faster than existing designs, while using orders of magnitude less power, according to the study’s two primary authors, Yichen Shen, a physicist, and Nicholas Harris, an electrical engineer, both at MIT. The two are starting a company and hope to have a product ready in 2 years.