
Quantum computers rely on superconducting chips like this one from Rigetti Computing in Berkeley, California.


How to evaluate computers that don’t quite exist

To gauge the performance of a supercomputer, computer scientists turn to a standard tool: a set of algorithms called LINPACK that tests how fast the machine solves problems with huge numbers of variables. For quantum computers, which might one day solve certain problems that overwhelm conventional computers, no such benchmarking standard exists.

One reason is that the computers, which aim to harness the laws of quantum mechanics to accelerate certain computations, are still rudimentary, with radically different designs contending. In some, the quantum bits, or qubits, needed for computation are embodied in the spin of strings of trapped ions, whereas others rely on patches of superconducting metal resonating with microwaves. Comparing the embryonic architectures “is sort of like visiting a nursery school to decide which of the toddlers will become basketball stars,” says Scott Aaronson, a computer scientist at the University of Texas in Austin.

Yet researchers are making some of their first attempts to take the measure of quantum computers. Last week, Margaret Martonosi, a computer scientist at Princeton University, and colleagues presented a head-to-head comparison of quantum computers from IBM, Rigetti Computing in Berkeley, California, and the University of Maryland (UMD) in College Park. The UMD machine, which uses trapped ions, ran a majority of 12 test algorithms more accurately than the other two machines, which use superconducting qubits, the team reported at the International Symposium on Computer Architecture in Phoenix. Christopher Monroe, a UMD physicist and founder of the company IonQ, predicts such comparisons will become the standard. “These toy algorithms give you a simple answer—did it work or not?” But even Martonosi warns against making too much of the tests. In fact, the analysis underscores how hard it is to compare quantum computers—which leaves room for designers to choose metrics that put their machines in a favorable light.

A conventional computer manipulates bits of information, encoded in transistors that can be switched on or off to represent zero or one. A qubit, however, can be set to zero and one simultaneously, say, by encoding it in an ion that can spin down for zero, up for one, or both ways at once. Qubits enable the machine to process many inputs simultaneously instead of one at a time. But the machine’s real power comes not from that massive parallelism alone, but from problems in which possible solutions can be encoded in quantum waves that slosh among the qubits. The waves then interfere so that wrong solutions wash out and the right one emerges.
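That interference can be seen in a few lines of linear algebra. The sketch below (an illustrative toy, not any company's hardware) applies the standard Hadamard gate to a single qubit twice: the first application creates an equal superposition, and the second makes the two computational paths interfere so the amplitude for the "wrong" outcome cancels.

```python
import numpy as np

# Single-qubit state vector: amplitudes for |0> and |1>.
ket0 = np.array([1.0, 0.0], dtype=complex)

# Hadamard gate: puts a definite qubit into an equal superposition.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

superposed = H @ ket0
print(np.abs(superposed) ** 2)   # both outcomes equally likely: [0.5 0.5]

# Applying H again makes the two paths interfere: the |1> amplitudes
# cancel (the wrong answer washes out) and |0> re-emerges with certainty.
interfered = H @ superposed
print(np.abs(interfered) ** 2)   # approximately [1. 0.]
```

The cancellation in the last step is the mechanism quantum algorithms exploit: they choreograph interference so that only amplitudes for correct answers survive measurement.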

A quantum computer would be able to, for example, crack internet encryption schemes based on the factoring of huge numbers—a tough problem for a classical computer. But solving such problems would require 100,000 qubits and the means to correct errors in the delicate quantum waves. Such machines are decades away, researchers say. But quantum computers with even a few dozen noisy qubits will soon best conventional computers at certain tasks, developers say, and they’re searching for the metrics to prove it.

A quantum leap

With a quantum computer that relies on a superconducting chip, Rigetti Computing is seeking an application that will give it a practical advantage over conventional computers. Other companies are pushing other metrics to gauge progress.

72 qubits (Google, superconducting): quantum supremacy
20 qubits (IBM, superconducting): quantum volume
16 qubits (Rigetti Computing, superconducting): quantum advantage
5 qubits (University of Maryland, trapped ions): benchmark comparison

Solving a problem that a conventional computer cannot—so-called quantum supremacy—is one well-publicized metric. “It’s a ‘Hello world!’ project that shows your quantum computer works,” says John Martinis, a physicist in Santa Barbara, California, who leads Google’s efforts to achieve supremacy on a machine with 72 superconducting qubits.

The problem Google researchers have chosen is exceedingly abstract. Essentially, they program the quantum computer to repeatedly perform a random set of operations on the qubits. Thanks to quantum interference, the machine should spit out certain strings of zeros and ones with greater probability than others, instead of producing all strings with equal probabilities, as it would if there were no interference. What’s more, predicting this exact distribution of outcomes overwhelms classical computers once the number of qubits climbs too high. So if Google researchers can measure that telltale distribution for their 72-qubit machine, then, in a sense, it will have achieved quantum supremacy by calculating something a conventional computer cannot. However, the arcane exercise won’t usher in practical quantum computers, says Greg Kuperberg, a mathematician at the University of California, Davis. “It’s supremacy to do something completely useless.”
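The sampling test can be sketched with a tiny state-vector simulator. This toy (the gate choices and circuit depth here are assumptions, not Google's actual circuits) runs a few layers of random rotations and entangling gates on four qubits and shows that interference skews the output distribution away from uniform; simulating this classically requires storing 2^n amplitudes, which is why the task overwhelms conventional computers at large n.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4                        # qubits (Google targets 72; past ~50, 2**n amplitudes won't fit)
dim = 2 ** n
state = np.zeros(dim, dtype=complex)
state[0] = 1.0               # start in |00..0>

def apply_1q(state, gate, q):
    """Apply a 2x2 gate to qubit q of an n-qubit state vector."""
    s = np.moveaxis(state.reshape([2] * n), q, 0)
    s = np.tensordot(gate, s, axes=([1], [0]))
    return np.moveaxis(s, 0, q).reshape(dim)

def apply_cz(state, q1, q2):
    """Controlled-Z: flip the sign of amplitudes where both qubits are 1."""
    s = state.reshape([2] * n).copy()
    idx = [slice(None)] * n
    idx[q1] = 1; idx[q2] = 1
    s[tuple(idx)] *= -1
    return s.reshape(dim)

# A few layers of random single-qubit rotations plus entangling CZ gates.
for layer in range(8):
    for q in range(n):
        theta, phi = rng.uniform(0, 2 * np.pi, 2)
        gate = np.array([[np.cos(theta), -np.exp(1j * phi) * np.sin(theta)],
                         [np.exp(-1j * phi) * np.sin(theta), np.cos(theta)]])
        state = apply_1q(state, gate, q)
    for q in range(n - 1):
        state = apply_cz(state, q, q + 1)

probs = np.abs(state) ** 2
# Interference makes some bit strings much more likely than the uniform 1/16.
print(f"max p = {probs.max():.3f}, uniform would be {1 / dim:.3f}")
```

Measuring the real hardware many times and checking the sampled bit strings against this kind of classically predicted distribution is, in essence, the supremacy experiment.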

In contrast, researchers at Rigetti aim to show that a quantum computer can perform some useful task more accurately, faster, or more cheaply than conventional computers—a metric they call quantum advantage. “What we want are things that put us on the shortest path to commercial value,” says Chad Rigetti, a physicist and founder of the startup. For example, he says, a quantum computer might be ideal for modeling the complex interplay of financial assets in a hedge fund.

In September 2018, Rigetti pledged $1 million to the first user who achieves quantum advantage on its publicly available machines. The current version has 16 superconducting qubits. Because the measure includes factors like cost, quantum advantage is not so tightly defined, says Aram Harrow, a physicist at the Massachusetts Institute of Technology in Cambridge. “If it’s a little vague, that’s not bad for Rigetti,” Harrow says.

IBM researchers have defined a metric, called quantum volume, that measures a quantum computer’s performance without comparing it to a conventional machine. It involves testing a quantum computer using random calculations like those Google is using. And it depends on both the number of qubits and the number of computational cycles a machine can handle before its quantum states fuzz out.
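IBM's published definition sets quantum volume to 2^n for the largest n at which random width-n, depth-n "model circuits" still pass a statistical test (sampling so-called heavy outputs more than two-thirds of the time). A minimal sketch, with the device's pass/fail behavior mocked as an assumption:

```python
def quantum_volume(max_width, passes_heavy_output_test):
    """Quantum volume sketch: QV = 2**n for the largest n at which random
    square circuits (width = depth = n) still pass the heavy-output test."""
    best = 0
    for n in range(1, max_width + 1):
        if passes_heavy_output_test(n):
            best = n
    return 2 ** best

# Hypothetical device: 20 qubits, but noise limits reliable square circuits
# to depth 4 -- giving a quantum volume of 16, as IBM reports for its machine.
device_passes = lambda n: n <= 4
print(quantum_volume(20, device_passes))   # -> 16
```

The point of the square-circuit requirement is that adding qubits alone doesn't raise the score; the machine must also sustain enough error-free cycles to use them, which is why a 20-qubit machine can have a quantum volume of only 16.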

Using a machine with 20 superconducting qubits, IBM scientists have reached a quantum volume of 16 and aim to double it every year, says Jay Gambetta, a physicist at IBM’s Thomas J. Watson Research Center in Yorktown Heights, New York. Breakthrough applications will follow naturally, he says. “I don’t think that supremacy is something you shoot for. It’s something we’ll recognize once we’ve passed on to bigger and bigger things.”

Then there are head-to-head comparisons like Martonosi’s. In her test, the 5-qubit ion-based machine solved most test problems correctly 90% of the time, compared with 50% or less for superconducting-qubit machines. That difference reflects the current states of the technologies and not their potential, Martonosi says. For example, in a superconducting machine each qubit interacts only with its neighbors, but every ion in the UMD machine interacts with all the others, giving it an edge. Bigger ion-based machines won’t share that advantage, however.

Martonosi says such comparisons show that all the quantum computers performed significantly better when programmed to account for differences in qubit noise and connectivity. “Across quite a wide range of [hardware] implementations, this appears to work,” she says. “That’s quite exciting.”

Harrow questions how useful any of the current metrics will prove in the long run. The main challenge in quantum computing remains finding a technology that will scale up to thousands of qubits, he says. “These metrics are only loosely related to the scaling question.”