In 2019, Google proudly announced that it had achieved what quantum computing researchers had been seeking for years: proof that the esoteric technique could surpass the traditional one. But this demonstration of "quantum supremacy" is being challenged by researchers who claim to have matched the feat on a relatively ordinary supercomputer.
To be clear, no one is saying that Google lied or misrepresented its work; the painstaking and groundbreaking research behind the 2019 quantum supremacy announcement is still hugely significant. But if this new paper is correct, the contest between classical and quantum computing is still anyone's game.
You can read the full story of how Google brought quantum supremacy from theory to reality in the original article, but here's the very short version. Quantum computers like Sycamore are, so far, no better than classical computers at anything, with a single exception: simulating a quantum computer.
It sounds like a dodge, but the point of quantum supremacy is to prove the method's viability by finding even one very specific and weird task that a quantum computer can do better than even the fastest supercomputer, because that gets the quantum foot in the door to expand the library of such tasks. Maybe eventually quantum computers will be faster at everything, but for Google's purposes in 2019 there was only one task, and the company showed how and why in great detail.
Now a team from the Chinese Academy of Sciences led by Pan Zhang has published a paper describing a new technique for simulating a quantum computer (specifically, the noise patterns it emits) that appears to take a tiny fraction of the time the 2019 estimate said classical computation would need.
Not being an expert on quantum computers or a professor of statistical physics myself, I can only give a general idea of the technique Zhang et al. used. They cast the problem as a large 3D network of tensors, with Sycamore's 53 qubits represented by a grid of nodes, extruded 20 times to represent the 20 cycles the Sycamore gates went through in the simulated process. The mathematical relationships between these tensors (each its own set of interrelated vectors) were then calculated using a cluster of 512 GPUs.
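To give a flavor of what "contracting a tensor network" means, here is a toy sketch in Python. This is not the paper's method or code, and the two-qubit circuit (a Hadamard gate followed by a CNOT) is an invented minimal example; it only illustrates the general idea that gates are small tensors and that output amplitudes fall out of one big contraction, here done with NumPy's `einsum`.

```python
import numpy as np

# Toy tensor-network view of a tiny circuit (illustrative only):
# each qubit starts in |0>, each gate is a small tensor, and the
# amplitudes of the output bitstrings come from one contraction.

ket0 = np.array([1.0, 0.0])                   # |0> state vector
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate
CNOT = np.zeros((2, 2, 2, 2))                 # indices: out0, out1, in0, in1
for a in range(2):
    for b in range(2):
        CNOT[a, a ^ b, a, b] = 1.0            # flips target when control is 1

# Contract the network: apply H to qubit 0, then CNOT across both qubits.
# Subscripts: i, j are the two inputs; H maps i -> k; CNOT maps (k, j) -> (m, n).
amplitudes = np.einsum('mnkj,ki,i,j->mn', CNOT, H, ket0, ket0)

# The contraction yields the Bell state: |00> and |11> each at 1/sqrt(2).
print(amplitudes)
```

A real simulation of Sycamore works with a vastly larger network (53 qubits over 20 cycles), where finding a cheap order in which to contract the tensors is the hard part; that optimization is what libraries and GPU clusters are for.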
Google's original paper estimated that running a simulation at this scale on the most powerful supercomputer then available (Summit at Oak Ridge National Laboratory) would take about 10,000 years, though to be clear, that was its estimate for 54 qubits running 25 cycles. Simulating 53 qubits over 20 cycles is significantly less complex, but by the same estimate would still take on the order of several years.
Zhang's group claims to have done it in 15 hours. And with access to a true supercomputer like Summit, it might be done in a handful of seconds, faster than Sycamore itself. Their paper will be published in the journal Physical Review Letters; you can read it here (PDF).
These results have yet to be fully vetted and replicated by those knowledgeable in such matters, but there is no reason to think they stem from some error or trickery. Google itself admitted that the baton might be passed back and forth a few times before supremacy is firmly established, since quantum computers are incredibly difficult to build and program while classical computers and their software are constantly improving. (Others in the quantum world were skeptical of Google's claims at the time as well, though some are direct competitors.)
As quantum scientist Dominik Hangleiter of the University of Maryland told Science, this is in no way a black eye for Google or a knockout blow for quantum in general: "The Google experiment did what it was supposed to do, start this race."
Google may well hit back with new claims of its own (it hasn't been sitting idle either), and I've reached out to the company for comment. But the fact that the race is even competitive is good news for everyone involved; this is an exciting corner of computing, and work like Google's and Zhang's keeps raising the bar for everyone.