2026-04-15

Quantum Advantage Refuted: GPUs Outpace Google’s Sycamore

A new classical simulation using 1,432 GPUs generates uncorrelated samples seven times faster than Google's 2019 landmark quantum experiment.


— BrunoSan Quantum Intelligence · 2026-04-15
quantum computing · arxiv · research · 2024

In 2019, the physics world shifted. Google announced that its 53-qubit Sycamore processor had achieved a milestone known as quantum advantage, performing a specific calculation in 200 seconds that would supposedly take a classical supercomputer 10,000 years. The claim rested on a task called random circuit sampling: generating bitstrings that follow the probability distribution defined by a randomly chosen quantum circuit. For five years, this benchmark stood as the primary evidence that quantum hardware had finally outstripped classical silicon. However, the gap between quantum promise and classical reality has been closing steadily, leaving researchers to wonder whether the advantage was declared too soon. [arXiv:2406.18889]
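To make the sampling task concrete, here is a toy sketch of how a quantum circuit defines the probability distribution being sampled, written in plain NumPy. The two-qubit circuit and helper functions are illustrative inventions, not the paper's code; Sycamore's random circuits act on 53 qubits and are far deeper.

```python
import numpy as np

# The n-qubit state is a rank-n tensor; applying a gate is a tensor
# contraction onto the relevant qubit axes. This is the core operation
# that tensor-network simulators scale up across many GPUs.

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)         # Hadamard gate
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]]).reshape(2, 2, 2, 2)  # rank-4: out0,out1,in0,in1

def apply_1q(state, gate, q):
    """Contract a single-qubit gate onto axis q of the state tensor."""
    return np.moveaxis(np.tensordot(gate, state, axes=([1], [q])), 0, q)

def apply_2q(state, gate, q0, q1):
    """Contract a two-qubit gate onto axes q0 and q1."""
    out = np.tensordot(gate, state, axes=([2, 3], [q0, q1]))
    return np.moveaxis(out, [0, 1], [q0, q1])

# A tiny circuit: |00> -> H on qubit 0 -> CNOT(0, 1) gives a Bell state.
state = np.zeros((2, 2))
state[0, 0] = 1.0
state = apply_1q(state, H, 0)
state = apply_2q(state, CNOT, 0, 1)

# The circuit's output distribution: sampling it means drawing bitstrings
# with these probabilities (here, 00 and 11 each with probability 1/2).
probs = np.abs(state) ** 2
```

Random circuit sampling replaces this fixed two-qubit circuit with a deep, random 53-qubit one, which makes the output distribution astronomically expensive to compute classically.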

The Core Finding

Researchers from the Institute of Computing Technology at the Chinese Academy of Sciences have now delivered what they describe as the first unambiguous experimental evidence to refute Sycamore’s claim. By harnessing the collective power of 1,432 high-performance GPUs, the team executed a classical simulation that did not just match Google’s quantum processor—it dominated it. The simulation generated uncorrelated samples with a higher linear cross-entropy score than the original experiment and did so at a rate seven times faster than the Sycamore hardware. This was achieved through a sophisticated combination of tensor network methods and a novel post-processing algorithm designed to slash the overall computational complexity of the simulation.
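The linear cross-entropy score used to grade samplers has a simple form: F = 2^n * mean(p(x_i)) - 1, where p(x_i) is the ideal probability of each sampled bitstring. A faithful sampler scores well above 0; uniform random guessing scores about 0. A minimal sketch, with a toy distribution invented for illustration:

```python
import numpy as np

def linear_xeb(ideal_probs, samples, n_qubits):
    """Linear cross-entropy benchmark: F = 2**n * mean(p(x_i)) - 1."""
    mean_p = np.mean([ideal_probs[s] for s in samples])
    return (2 ** n_qubits) * mean_p - 1

# Toy 2-qubit example with a skewed "ideal" distribution.
probs = {"00": 0.4, "01": 0.3, "10": 0.2, "11": 0.1}

# Samples that favor high-probability bitstrings score above 0 ...
score = linear_xeb(probs, ["00", "00", "01", "10"], n_qubits=2)     # ~0.3

# ... while an even spread over all bitstrings scores about 0.
baseline = linear_xeb(probs, ["00", "01", "10", "11"], n_qubits=2)  # ~0.0
```

The paper's claim of a "higher linear cross entropy score" means its classical samples score better on exactly this kind of metric than Sycamore's hardware samples did.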

Think of it like a high-stakes race where the quantum processor took a shortcut across a field, while the classical computer was forced to stay on the winding road; the researchers have now built a vehicle so fast that it finishes the road course before the quantum runner reaches the finish line. The authors state their achievement is an

"energy-efficient classical simulation algorithm... which generates uncorrelated samples with higher linear cross entropy score and is 7 times faster than Sycamore 53 qubits experiment."
This leap represents a 7x improvement in time-to-solution while maintaining two orders of magnitude lower energy consumption than previous classical attempts.
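The role of the post-processing step can be illustrated with a toy experiment. This is an invented sketch of the general idea that favoring high-probability bitstrings inflates the XEB score, not the paper's actual algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a circuit's ideal output distribution over 2**n bitstrings.
n = 10
probs = rng.dirichlet(np.ones(2 ** n))

# Faithful sampling: draw bitstrings with their ideal probabilities.
candidates = rng.choice(2 ** n, size=5000, p=probs)
xeb_faithful = (2 ** n) * probs[candidates].mean() - 1

# Post-process: keep only the half of the batch with the largest ideal
# probability. The surviving samples score higher on linear XEB.
kept = candidates[np.argsort(probs[candidates])][-2500:]
xeb_boosted = (2 ** n) * probs[kept].mean() - 1
```

This is why a classical method that can compute ideal probabilities for batches of bitstrings can report an XEB score above the quantum hardware's.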

The State of the Field

The quest to simulate quantum circuits classically is not new. Since the 2019 Sycamore announcement, various groups have chipped away at the 10,000-year estimate. In 2021, Pan Zhang and colleagues used tensor network contraction to reduce the simulation time to days. Later, researchers at Alibaba and other institutions utilized massive clusters to bring the time down further. However, these previous attempts often struggled with "uncorrelated sampling"—the ability to produce truly random-looking data points—or they consumed astronomical amounts of electricity, leaving Google’s claim of an "advantage" technically intact based on efficiency and sampling quality.

The broader quantum computing landscape is currently in a state of recalibration. We are moving away from the era of "noisy intermediate-scale quantum" (NISQ) devices being judged by abstract benchmarks like random circuit sampling. The realization that classical algorithms can keep pace with 50-to-60 qubit systems suggests that the threshold for true, unassailable quantum advantage is likely much higher—perhaps in the range of hundreds or thousands of high-fidelity, error-corrected qubits. This paper effectively closes the chapter on the first generation of quantum advantage claims, forcing the industry to look toward more practical and harder-to-simulate applications.

From Lab to Reality

For the scientific community, this breakthrough unlocks a powerful new tool for verifying future quantum hardware. If we can simulate 53-qubit systems on a GPU cluster in minutes, we can use these simulations to debug and calibrate the next generation of 100-qubit processors. For engineers, this work demonstrates the untapped potential of general-purpose GPUs (GPGPUs) in handling tensor network contractions, which are vital for everything from condensed matter physics to machine learning. The integration of state-of-the-art GPU architectures with optimized post-processing suggests that classical hardware still has significant headroom to compete with early quantum devices.

For investors and industry strategists, this research recalibrates the timeline for quantum disruption in fields like quantum cryptography and materials science. It highlights that the "quantum threat" to current encryption—often cited as a driver for the post-quantum cryptography market—may be further off than the 2019 headlines suggested. If classical algorithms can be optimized this aggressively, the bar for quantum computers to provide a meaningful return on investment (ROI) in commercial settings is raised. The focus must now shift from "can a quantum computer do this?" to "can a quantum computer do this better than a highly optimized GPU cluster?"

What Still Needs to Happen

Despite this classical victory, two major technical hurdles remain. First, classical simulation complexity still scales exponentially with the depth and width of the quantum circuit. While 53 qubits are now manageable, a 100-qubit system with high circuit depth would still likely break even the most optimized GPU clusters. Teams at Google and IBM are already pushing toward these higher qubit counts. Second, the current simulation relies on the fact that NISQ devices like Sycamore have a non-zero error rate. If quantum hardware achieves significant error correction, the "noise" that classical algorithms sometimes exploit to simplify simulations will vanish, making the classical task much harder.
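The exponential wall is easy to quantify for the brute-force state-vector approach, where the full n-qubit state must be held in memory. The arithmetic below is standard back-of-the-envelope, not a figure from the paper; tensor-network methods avoid storing this vector outright, but their cost still grows exponentially with circuit width and depth:

```python
# Storing n qubits as a full state vector takes 2**n complex amplitudes,
# 16 bytes each in double precision.
def statevector_bytes(n_qubits):
    return (2 ** n_qubits) * 16

for n in (30, 53, 100):
    gib = statevector_bytes(n) / 2 ** 30
    print(f"{n} qubits -> {gib:,.0f} GiB")
# 30 qubits fit on a workstation (16 GiB); 53 qubits need ~128 PiB;
# 100 qubits exceed any conceivable classical memory.
```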

We are likely at least a decade away from a quantum computer that can perform a task of actual economic value, such as nitrogen fixation simulation or drug discovery, that is truly beyond classical reach. The researchers at the Chinese Academy of Sciences have shown that for now, the silicon empire is striking back. The boundary of quantum advantage has been redefined, and it now sits much further out than we previously believed.

Conclusion

This study demonstrates that the first claimed instance of quantum advantage has been overtaken by classical innovation, setting a new, higher bar for the quantum industry. In short: this research refutes the 2019 quantum advantage claim by using 1,432 GPUs to simulate the Sycamore processor's sampling task seven times faster than the original hardware.

Frequently Asked Questions

What is quantum advantage?
Quantum advantage is the point where a quantum computer can perform a calculation that is practically impossible for any classical supercomputer to complete in a reasonable timeframe. It was first claimed by Google in 2019 using its 53-qubit Sycamore processor. This new research shows that classical computers have caught up, meaning that specific advantage no longer exists.
How does the new classical simulation work?
The simulation uses a method called tensor network contraction, which breaks down the complex quantum states into smaller, manageable mathematical structures. The researchers combined this with 1,432 high-performance GPUs and a new post-processing algorithm to speed up the calculation. This allowed them to generate the same type of random data the quantum computer produced but at a much higher speed.
How does this compare to Google's Sycamore experiment?
Google's Sycamore took 200 seconds to perform the random circuit sampling task, while the new classical method is seven times faster. Additionally, the classical simulation achieved a higher linear cross-entropy score, meaning its results were closer to the mathematical ideal than the quantum hardware's. It also proved far more energy-efficient than previous attempts to simulate the circuit classically.
When could this be commercially relevant?
The simulation techniques themselves are relevant today for researchers designing and testing new quantum hardware. However, the fact that classical computers can still beat quantum ones suggests that commercial quantum applications are still years, if not a decade, away. Industries will continue to rely on classical GPU clusters for complex simulations for the foreseeable future.
Which industries would benefit most from this research?
The high-performance computing (HPC) and semiconductor industries benefit from the demonstration that GPU-based systems remain the gold standard for complex simulations. It also impacts the cybersecurity sector by providing a more realistic timeline for when quantum computers might actually threaten current encryption. Finally, it aids quantum hardware startups by providing better classical tools to verify their progress.
What are the current limitations of this research?
The main limitation is that classical simulation still faces exponential growth in difficulty as more qubits are added to a quantum processor. While 53 qubits are now beatable, a system with 100 or 200 high-quality qubits would still likely be impossible to simulate classically. The research also focuses on a specific benchmark that does not have a direct real-world application like chemistry or finance.

Follow quantum advantage Intelligence

BrunoSan Quantum Intelligence tracks quantum advantage and 44+ quantum computing signals daily — ArXiv papers, Nature, APS, IonQ, IBM, Rigetti and more. Updated every cycle.

Explore Quantum MCP →