# The biggest flipping challenge in quantum computing

In October 2019, researchers at Google announced to great fanfare that their embryonic quantum computer had solved a problem that would overwhelm the best supercomputers. Some said the milestone, known as quantum supremacy, marked the dawn of the age of quantum computing. However, Greg Kuperberg, a mathematician at the University of California, Davis, who specializes in quantum computing, wasn’t so impressed. He had expected Google to aim for a goal that is less flashy but, he says, far more important.

Whether it’s calculating your taxes or making Mario jump a canyon, your computer works its magic by manipulating long strings of bits that can be set to 0 or 1. In contrast, a quantum computer employs quantum bits, or qubits, that can be both 0 and 1 at the same time, the equivalent of you sitting at both ends of your couch at once. Embodied in ions, photons, or tiny superconducting circuits, such two-way states give a quantum computer its power. But they’re also fragile, and the slightest interaction with their surroundings can distort them. So scientists must learn to correct such errors, and Kuperberg had expected Google to take a key step toward that goal. “I consider it a more relevant benchmark,” he says.

If some experts question the significance of Google’s quantum supremacy experiment, all stress the importance of quantum error correction. “It is really the difference between a $100 million, 10,000-qubit quantum computer being a random noise generator or the most powerful computer in the world,” says Chad Rigetti, a physicist and co-founder of Rigetti Computing. And all agree with Kuperberg on the first step: spreading the information ordinarily encoded in a single jittery qubit among many of them in a way that maintains the information even as noise rattles the underlying qubits. “You’re trying to build a ship that remains the same ship, even as every plank in it rots and has to be replaced,” explains Scott Aaronson, a computer scientist at the University of Texas, Austin.

The early leaders in quantum computing—Google, Rigetti, and IBM—have all trained their sights on that target. “That’s very explicitly the next big milestone,” says Hartmut Neven, who leads Google’s Quantum Artificial Intelligence lab. Jay Gambetta, who leads IBM’s quantum computing efforts, says, “In the next couple of years, you’ll see a series of results that will come out from us to deal with error correction.”

Physicists have begun to test their theoretical schemes in small experiments, but the challenge is grand. To demonstrate quantum supremacy, Google scientists had to wrangle 53 qubits. To encode the data in a single qubit with sufficient fidelity, they may need to master 1000 of them.

The quest for quantum computers took off in 1994 when Peter Shor, a mathematician at the Massachusetts Institute of Technology, showed that such a machine—then hypothetical—should be able to quickly factor huge numbers. Shor’s algorithm represents the possible factorizations of a number as quantum waves that can slosh simultaneously through the computer’s qubits, thanks to the qubits’ two-way states. The waves interfere so that the wrong factorizations cancel one another and the right one pops out. A machine running Shor’s algorithm could, among other things, crack the encryption systems that now secure internet communications, which rely on the fact that searching for the factors of a huge number overwhelms any ordinary computer.

However, Shor assumed each qubit would maintain its state so the quantum waves could slosh around as long as necessary. Real qubits are far less stable. Google, IBM, and Rigetti use qubits made of tiny resonating circuits of superconducting metal etched into microchips, which so far have proved easier to control and integrate into circuits than other types of qubits. Each circuit has two distinct energy states, which can denote 0 or 1. By plying a circuit with microwaves, researchers can ease it into either state or any combination of the two—say, 30% 0 and 70% 1. But those in-between states will fuzz out or “decohere” in a fraction of a second. Even before that happens, noise can jostle the state and alter it, potentially derailing a calculation.

Such noise nearly drowned out the signal in Google’s quantum supremacy experiment. Researchers began by setting the 53 qubits to encode all possible outputs, which ranged from zero to 2^53 − 1. They implemented a set of randomly chosen interactions among the qubits that in repeated trials made some outputs more likely than others. Given the complexity of the interactions, a supercomputer would need thousands of years to calculate the pattern of outputs, the researchers said. So by measuring it, the quantum computer did something that no ordinary computer could match. But the pattern was barely distinguishable from the random flipping of qubits caused by noise. “Their demonstration is 99% noise and only 1% signal,” Kuperberg says.

To realize their ultimate dreams, developers want qubits that are as reliable as the bits in an ordinary computer. “You want to have a qubit that stays coherent until you switch off the machine,” Neven says.

Scientists’ approach of spreading the information of one qubit—a “logical qubit”—among many physical ones traces its roots to the early days of ordinary computers in the 1950s. The bits of early computers consisted of vacuum tubes or mechanical relays, which were prone to flip unexpectedly. To overcome the problem, famed mathematician John von Neumann pioneered the field of error correction.

Von Neumann’s approach relied on redundancy. Suppose a computer makes three copies of each bit. Then, even if one of the three flips, the majority of the bits will preserve the correct setting. The computer can find and fix the flipped bit by comparing the bits in pairs, in so-called parity checks. If the first and third bits match, but the first and second and second and third differ, then most likely, the second bit flipped, and the computer can flip it back. Greater redundancy means greater ability to correct errors. Ironically, the transistors etched into microchips that modern computers use to encode their bits are so reliable that error correction isn’t much used.
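The scheme above can be sketched in a few lines of code. This is an illustrative toy, not any particular historical implementation; the function names are invented for clarity.

```python
# Von Neumann-style redundancy: a classical three-bit repetition code.
# Two pairwise parity checks locate a single flipped bit, which can
# then be flipped back.

def encode(bit):
    """Copy one bit into three."""
    return [bit, bit, bit]

def parity_checks(bits):
    """Compare pairs of bits: 0 means 'same', 1 means 'different'."""
    b1, b2, b3 = bits
    return (b1 ^ b2, b2 ^ b3)

def correct(bits):
    """Use the two parity checks to find and fix a single flipped bit."""
    s12, s23 = parity_checks(bits)
    fixed = list(bits)
    if s12 and not s23:      # first bit disagrees with the other two
        fixed[0] ^= 1
    elif s12 and s23:        # middle bit disagrees
        fixed[1] ^= 1
    elif s23 and not s12:    # third bit disagrees
        fixed[2] ^= 1
    return fixed

word = encode(1)        # [1, 1, 1]
word[1] ^= 1            # noise flips the second bit -> [1, 0, 1]
print(correct(word))    # [1, 1, 1]
```

Note that the parity checks never read any single bit in isolation; they only compare pairs. That detail is what carries over to the quantum case, where looking at a qubit directly would destroy its state.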

But a quantum computer will depend on it, at least if it’s made of superconducting qubits. (Qubits made of individual ions suffer less from noise, but are harder to integrate.) Unfortunately for developers, quantum mechanics itself makes their task much harder by depriving them of their simplest error-correcting tool, copying. In quantum mechanics, a no-cloning theorem says it’s not possible to copy the state of one qubit onto another without altering the state of the first one. “This means that it’s not possible to directly translate our classical error correction codes to quantum error correction codes,” says Joschka Roffe, a theorist at the University of Sheffield.

## An easy fix

In a conventional computer, a bit is a switch that can be set to either 0 or 1. To protect a bit, a computer can copy it. If noise then flips a copy, the machine can find the error by making parity measurements: comparing pairs of bits to see whether they’re the same or different.

Even worse, quantum mechanics requires researchers to find errors blindfolded. Although a qubit can have a state that is both 0 and 1 at the same time, according to quantum theory, experimenters can’t measure that two-way state without collapsing it into either 0 or 1. Checking a state obliterates it. “The simplest [classical error] correction is that you look at all the bits to see what’s gone wrong,” Kuperberg says. “But if it’s qubits then you have to find the error without looking.”

Those hurdles may sound insurmountable, but quantum mechanics points to a potential solution. Researchers cannot copy a qubit’s state, but they can extend it to other qubits using a mysterious quantum connection called entanglement.

How the entangling is done shows just how subtle quantum computing is. Prodded with microwaves, the original qubit interacts through a “controlled-NOT” (CNOT) operation with a second qubit that must start in the 0 state. The CNOT will change the state of the second qubit if the state of the first is 1 and leave it unchanged if the first qubit is 0. However, the maneuver doesn’t actually measure the first qubit and collapse its state. Instead, it maintains the both-ways state of the first qubit while both changing and not changing the second qubit at the same time. It leaves the two qubits in a state in which, simultaneously, they are both 0 and both 1.
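That CNOT step can be checked with a small state-vector calculation. The sketch below, assuming NumPy and the standard two-qubit basis ordering |00⟩, |01⟩, |10⟩, |11⟩, shows that applying CNOT to a 30%/70% qubit paired with a fresh 0-state qubit leaves them in exactly the “both 0 and both 1” state described above.

```python
# State-vector sketch of the CNOT entangling step, using NumPy.
# Basis order for two qubits: |00>, |01>, |10>, |11>.
import numpy as np

# Original qubit in a 30% 0, 70% 1 state: amplitudes are the
# square roots of the probabilities.
alpha, beta = np.sqrt(0.3), np.sqrt(0.7)
first = np.array([alpha, beta])      # alpha|0> + beta|1>
second = np.array([1.0, 0.0])        # second qubit starts in |0>

# Joint state of the pair before the gate.
state = np.kron(first, second)

# CNOT: flip the second qubit only when the first is 1.
cnot = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

entangled = cnot @ state
# Result: alpha|00> + beta|11> -- no |01> or |10> component, so the
# two qubits are now perfectly correlated without either having been
# measured.
print(np.round(entangled, 3))
```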

If the original qubit is in, for example, a 30% 0 and 70% 1 state, physicists can link it to other qubits to make a chain of, say, three qubits that share an entangled state that’s 30% all three are 0 and 70% all three are 1. That state is distinct from three copies of the original qubit. In fact, none of the three entangled qubits in the string possesses a well-defined quantum state of its own. But now, the three qubits are completely correlated: If you measure the first one and it collapses to 1, then the other two must also instantly collapse to 1. If the first collapses to 0, the others must also. That correlation is the essence of entanglement.

With that bigger entangled state, scientists can now keep an eye out for errors. To do that, they entangle still other “ancillary” qubits with the chain of three, one with the first and second qubits in the string and another with the second and third. They then use measurements on the ancillas to make the quantum mechanical equivalent of parity checks. For example, without breaking the entanglement, noise can flip any one of the three coding qubits so that its 0 and 1 parts get switched, changing the latent correlations among all three. If researchers set things up right, they can make “stabilizer” measurements on the ancillary qubits to probe those correlations.

Although measuring the ancillary qubits collapses their states, it leaves the coding qubits unperturbed. “These are specially designed parity measurements that don’t collapse the information encoded in the logical state,” Roffe says. For example, if the measurement shows the first ancilla is 0, it reveals only that the first and second coding qubits must be in the same state, but not which state that is. If the ancilla is 1, then the measurement reveals only that the coding qubits must be in opposite states. If researchers can find a flipped qubit more quickly than the qubits tend to fuzz out, they can use microwaves to flip it back to its original state and restore its coherence.
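The parity checks described above can be simulated classically for this small example. The sketch below, assuming NumPy, holds the logical state α|000⟩ + β|111⟩ as an eight-amplitude state vector and models each ancilla measurement by the pairwise parity it would report; as in the text, the syndrome pinpoints which qubit flipped without revealing α or β.

```python
# Simulating the three-qubit bit-flip code's parity checks with a
# NumPy state vector. The two "syndrome" values stand in for the
# ancilla measurements: 0 = 'same', 1 = 'different'.
import numpy as np

alpha, beta = np.sqrt(0.3), np.sqrt(0.7)

def basis(i):
    """Basis state |i> for three qubits (indices |q1 q2 q3>)."""
    v = np.zeros(8)
    v[i] = 1.0
    return v

# Logical qubit spread over three physical qubits.
state = alpha * basis(0b000) + beta * basis(0b111)

def flip(state, qubit):
    """A bit-flip (X) error on one qubit: swap amplitudes whose
    basis strings differ only in that bit (qubit 0 is leftmost)."""
    out = np.zeros_like(state)
    for i in range(8):
        out[i ^ (1 << (2 - qubit))] = state[i]
    return out

def syndrome(state):
    """Parities of (qubits 1,2) and (qubits 2,3). Both branches of
    the superposition give the same answer, so this reveals which
    qubit flipped -- but nothing about alpha or beta."""
    i = int(np.argmax(np.abs(state)))   # any basis string with support
    b = [(i >> 2) & 1, (i >> 1) & 1, i & 1]
    return (b[0] ^ b[1], b[1] ^ b[2])

noisy = flip(state, 1)                 # noise flips the middle qubit
print(syndrome(noisy))                 # (1, 1): middle qubit flipped
recovered = flip(noisy, 1)             # flip it back
print(np.allclose(recovered, state))   # True
```

Each of the three possible single-qubit flips produces a distinct syndrome, which is why two ancillas suffice for a chain of three.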

## Quantum fixes are harder

The rules of quantum mechanics make it impossible to watch for errors by copying and measuring qubits (top). Instead, physicists want to spread the qubit’s state to other qubits through “entanglement” (middle) and monitor those to detect errors; then nudge an errant bit back to the correct state (bottom).

That’s just the basic idea. The state of a qubit is more complex than just a combination of 0 and 1. It also depends on exactly how those two parts mesh, which, in turn, depends on an abstract angle called the phase. The phase can range from 0° to 360° and is key to the wavelike interference effects that give a quantum computer its power. Quantum mechanically, any error in a qubit’s state can be thought of as some combination of a bit-flip error that swaps 0 and 1 and a phase flip that changes the phase by 180°.
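A short calculation, assuming NumPy, makes the phase flip concrete: the Pauli Z operation leaves the 0/1 probabilities untouched but reverses the sign of the |1⟩ amplitude, and in the rotated |+⟩, |−⟩ basis it acts exactly like a bit flip.

```python
# A phase flip (180-degree phase error, the Pauli Z operation) on the
# 30%/70% qubit from the text.
import numpy as np

state = np.array([np.sqrt(0.3), np.sqrt(0.7)])   # 30% 0, 70% 1
Z = np.array([[1, 0], [0, -1]])                  # phase flip
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)     # basis change (Hadamard)

flipped = Z @ state
# The probabilities are unchanged, so no measurement of 0s and 1s
# can catch this error...
print(np.round(np.abs(flipped)**2, 3))           # [0.3 0.7]

# ...but in the |+>, |-> basis, H Z H is the bit-flip operation X,
# which is why a second set of stabilizer checks along the other
# grid dimension can detect it.
print(np.round(H @ Z @ H, 3))                    # [[0. 1.] [1. 0.]]
```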

To correct both types, researchers can expand into another dimension—literally. Whereas a string of three entangled qubits, with two ancillas woven between them, is the smallest array that can detect and correct a bit-flip error, a three-by-three grid of qubits, with eight interspersed ancillas, is the simplest one that can detect and correct both bit-flip and phase-flip errors. The logical qubit now resides in an entangled state of the nine qubits—be thankful you don’t have to write it out mathematically! Stabilizer measurements along one dimension of the grid check for bit-flip errors, while slightly different stabilizer measurements along the other dimension check for phase-flip errors.

Schemes for pushing into two dimensions vary, depending on the geometric arrangement of the qubits and the details of the stabilizer measurements. Nevertheless, researchers’ road to error correction is now clear: Encode a single logical qubit in a grid of physical qubits and show that the fidelity of the logical qubit gets better as the size of the grid increases.

Experimenters have already made a start. For example, in a *Nature Physics* study published on 8 June, Andreas Wallraff at ETH Zurich and colleagues demonstrated that they could detect—but not correct—errors in a logical qubit encoded in a square of four qubits with three ancillary qubits.

But experimenters face a daunting challenge. Manipulating individual qubits can introduce errors, and unless that error rate falls below a certain level, entangling more qubits with the original one only adds more noise to the system, says Maika Takita, a physicist at IBM. “To demonstrate anything you have to get below that threshold,” she says. The ancillary qubits and other error-correction machinery add even more noise, and once those effects are included, the necessary error threshold plummets further. To make the scheme work, physicists must lower their error rate to less than 1%. “When I heard we achieved a 3% error rate, I thought that was great,” Takita says. “Now, it needs to be much lower.”

Error correction also requires twiddling with qubits repeatedly. That makes the process more demanding than quantum supremacy, which involved measuring all the qubits just once, says Marissa Giustina, a physicist with Google. Error correction “requires you to measure and measure and measure over and over again in a cycle, and that has to be done quickly and reliably,” she says.

Although a handful of qubits would suffice to demonstrate the principle of quantum error correction, in practice physicists will have to control huge numbers of them. To run Shor’s algorithm well enough to factor, say, a number 1000 bits long—roughly the size used in some internet encryption schemes—they’ll need to maintain logical qubits with a part-in-1-billion error rate. That may require entangling a grid of 1000 physical qubits to safeguard a single logical qubit, researchers say, a prospect that will take generations of bigger and better quantum computing chips.

Ironically, overcoming that challenge would put developers back where they were 20 years ago, when they were just setting out to make pairs of physical qubits interact to perform the various logical operations, or “gates,” needed for computation. Once scientists have begun to master error correction, they’ll have to repeat nearly every development so far in quantum computing with the more robust but highly complex logical qubits. “People say that error correction is the next step in quantum computing; it’s the next 25 steps,” Giustina quips.

Retracing those steps won’t be easy. It’s not just that any logical gate currently involving two qubits will require thousands of them. Worse, another theorem from quantum mechanics states that, no matter what scheme researchers use, not all of the logical gates can be easily translated from individual physical qubits to diffuse logical ones.

Researchers think they can sidestep that problem if they can initialize all the qubits in their computer in particular “magic states” that, more or less, do half the work of the problematic gates. Unfortunately, still more qubits may be needed to produce those magic states. “If you want to perform something like Shor’s algorithm, probably 90% of the qubits would have to be dedicated to preparing these magic states,” Roffe says. So a full-fledged quantum computer, with 1000 logical qubits, might end up containing many millions of physical qubits.

Google has a plan to build just such a machine within 10 years. At first blush, that sounds preposterous. Superconducting qubits need to be cooled to near absolute zero, in a device called a cryostat that fills a small room. A million-qubit machine conjures visions of a thousand cryostats in a huge factory. But Google researchers think they can keep their device compact. “I don’t want to tip my hand, but we believe we figured this out,” Neven says.

Others are taking different tacks. Google’s scheme would require 1000 physical qubits to encode a single logical qubit because its chip allows only neighboring qubits to interact. If more distant qubits can be made to interact, too, the number of physical qubits could be much smaller, Gambetta says. “If I can achieve that, then these ridiculously scary numbers for the overhead of error correction can come crashing down,” he says. So IBM researchers are exploring a scheme with more distant interconnections among the qubits.

Nobody is willing to predict how long it will take researchers to master error correction. But it is time to turn to the problem in earnest, Rigetti says. “Thus far, substantially all the researchers who would identify themselves as error correction researchers are theorists,” he says. “We have to make this an empirical field with real feedback on real data generated with real machines.” Quantum supremacy is so 2019. In quantum computing, error correction is the next hot thing.