Self-driving cars navigating a street must be able to interpret traffic signs.

ANGELO MERENDINO/AFP/Getty Images

Researchers teach self-driving cars to ‘see’ better at night

Today’s autonomous cars can already harness artificial intelligence (AI) software to drive from Los Angeles, California, to New York City without any human input, as long as it’s a sunny day. But they still struggle to spot a stop sign in the rain. Now, researchers say they are on the cusp of giving self-driving cars the ability to read road signs in all sorts of weather and light conditions, an AI advance that brings the vehicles one step closer to being safe enough for everyday use.

Self-driving cars usually identify traffic signs, such as those indicating stops or speed limits, by detecting their distinctive shape, color, or other features with a camera. But rain, darkness, and even trees can obscure these signs, often making it too difficult for an autonomous car to confidently read them. That forces drivers to step in and take manual control when approaching an obscured sign, or to stick to driving during the day.

To overcome these obstacles, researchers at Sookmyung Women’s University and Yonsei University in Seoul focused on the relative reflectiveness of road signs. Their approach requires autonomous cars to continuously capture images of their surroundings. Each image is evaluated by a machine learning algorithm, a computer program that can quickly scan an image and decide whether it matches a known pattern; here, the algorithm looks for sections of the image likely to contain a sign. It evaluates multiple sections of the image simultaneously, a departure from previous systems that considered parts of an image one by one. At this stage, it may also pick up irrelevant signs posted along the road.
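For a concrete picture of that first stage, here is a minimal sketch in Python. It is not the authors' code: the window slicing and the hand-written "sign-likeness" score are stand-ins for the learned model, but scoring the whole batch at once mirrors the paper's simultaneous evaluation of image sections.

```python
# Minimal sketch of the detection stage (illustrative, not the paper's code):
# slice a frame into overlapping windows and score them all in one batched,
# vectorized pass instead of looping over them one at a time.
import numpy as np

def candidate_windows(frame, size=32, stride=16):
    """Slice a frame into overlapping square windows."""
    h, w = frame.shape[:2]
    wins, boxes = [], []
    for y in range(0, h - size + 1, stride):
        for x in range(0, w - size + 1, stride):
            wins.append(frame[y:y + size, x:x + size])
            boxes.append((x, y, size, size))
    return np.stack(wins), boxes

def sign_likeness(windows):
    """Toy stand-in score favoring high-contrast, strongly red patches.

    The whole batch is scored at once (vectorized over axis 0), which is
    what makes simultaneous evaluation cheap; the real system uses a
    learned model rather than these hand-picked cues.
    """
    contrast = windows.std(axis=(1, 2, 3))               # texture/edge proxy
    redness = (windows[..., 0].mean(axis=(1, 2))         # red channel...
               - windows[..., 1:].mean(axis=(1, 2, 3)))  # ...vs. the rest
    return contrast + redness.clip(min=0)

frame = np.random.randint(0, 256, (240, 320, 3)).astype(np.float32)
windows, boxes = candidate_windows(frame)
scores = sign_likeness(windows)
top = np.argsort(scores)[::-1][:5]  # keep the most sign-like regions
print([boxes[i] for i in top])      # candidates handed to the classifier
```

As the article notes, some of these top-scoring regions may turn out to be irrelevant signs; sorting those out is the job of the next stage.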

The section of the image flagged as a possible sign then passes through what’s known as a convolutional neural network. Inspired by how humans see, this network picks up on specific features in the image, such as shapes, symbols, and numbers, to decide which type of sign it most likely depicts. For example, it learns that in some countries a circular sign depicts a traffic rule, whereas a triangular one indicates a warning. From there, it can look for a number that indicates a speed limit or a symbol that clarifies the warning. If the region turns out not to contain a traffic sign, it is discarded; if it does, the result is passed along so the car can decide what to do with the information.
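The classification stage can be sketched in the same hedged spirit. The tiny convolutional network below is written with PyTorch; the architecture and the class list are illustrative assumptions rather than the paper's design, but it shows the shape of the computation: a cropped candidate region goes in, and scores over sign classes, plus a "not a sign" class used to discard false detections, come out.

```python
# Illustrative sign classifier (assumed architecture, not the paper's):
# convolutional layers pick up local features such as edges, symbols, and
# digits; the final layer scores sign classes plus a reject class.
import torch
import torch.nn as nn

CLASSES = ["not_a_sign", "stop", "speed_limit", "warning", "rule"]  # assumed

class SignClassifier(nn.Module):
    def __init__(self, n_classes=len(CLASSES)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),   # 32x32 -> 16x16: later layers see shapes
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),   # 16x16 -> 8x8
        )
        self.classifier = nn.Linear(32 * 8 * 8, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = SignClassifier().eval()
crop = torch.rand(1, 3, 32, 32)          # one cropped candidate region
with torch.no_grad():
    probs = torch.softmax(model(crop), dim=1)[0]
label = CLASSES[probs.argmax()]
if label == "not_a_sign":
    pass                                  # discard the false detection
else:
    print(label, float(probs.max()))      # hand the reading to the planner
```

With random weights the output here is meaningless; in the published system the network is trained on labeled images of signs from the countries being tested.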

Sample analysis of traffic signs.

K. Lim et al., PLOS ONE 12, 3 (6 March 2017) PLOS

The method, which was tested on previously captured images of roads in the United States, Germany, and South Korea, does all this quickly and with a relatively modest amount of computing power, Yeongwoo Choi, an artificial intelligence and computer graphics researcher at Sookmyung, and his colleagues report this month in PLOS ONE. This is made possible by a computing platform called the DRIVE PX 2. Built by the California-based firm NVIDIA specifically for autonomous vehicles, it’s a small but powerful computer that can combine data from multiple sensors and cameras to help the car make sense of its surroundings. The boost in computing power means the system is able to evaluate high-definition images that contain multiple signs while still being speedy enough to give the car timely information.
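What "speedy enough" means can be made concrete with a toy timing harness. Everything below is hypothetical (stubbed stages, an assumed 30-frames-per-second camera), but it captures the constraint the DRIVE PX 2 helps the system meet: detection plus classification must fit inside a per-frame time budget for the result to be useful.

```python
# Toy real-time harness (all names and numbers are assumptions): each frame
# must be fully processed within the camera's frame period, or the sign
# reading arrives too late for the car to act on it.
import time

FRAME_BUDGET_S = 1 / 30                  # assumed 30 fps camera

def detect_regions(frame):
    return [(10, 20, 32, 32)]            # stub for the detection stage

def classify_region(region):
    return ("stop", 0.97)                # stub for the CNN classifier

def process_frame(frame):
    start = time.perf_counter()
    results = [classify_region(r) for r in detect_regions(frame)]
    elapsed = time.perf_counter() - start
    # A late result is useless to the planner; a real system would degrade
    # gracefully (fewer regions, lower resolution) rather than return None.
    return results if elapsed <= FRAME_BUDGET_S else None

print(process_frame(frame=None))
```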

In the real world, this should mean that an autonomous car can drive down the street and accurately pinpoint and decipher every sign it passes. It would take a picture of a road scene, find the octagonal sign, and decide it’s a stop sign, with enough time left for the car to stop at the intersection. Other self-driving systems might miss signs because of bad weather, or have to devote less processing power to each sign, making them less accurate.

Kang-Hyun Jo, a self-driving car researcher at the University of Ulsan in South Korea who was not involved in the research, says it would be impossible for a self-driving car to safely navigate a complex road environment without a strong traffic sign recognition system. “Autonomous cars should see and recognize arbitrary objects because we can’t guarantee what happens outside ourselves,” he says. “To perform this task, it is so important to figure out and identify the information that directly endows the car with safe navigation.”

As fully autonomous vehicles are not yet ready for the real world, carmakers are experimenting with hybrid features that delegate some tasks to the car and some to the human driver. Semiautonomous cars that can recognize street signs could correct for human mistakes by automatically stopping at a stop sign or warning drivers when they exceed the speed limit.

Choi says his team will continue to improve the method, with a particular focus on country-specific signs. The team is also working on recognizing general road features like lane markers, though this has yet to be tested in the real world.