In the emerging world of quantum computing, we just broke a record with big implications, and we're making our software available so anyone can do this work.
Quantum computing will propel a new wave of advances in climate research, drug discovery, finance and more. By simulating tomorrow's quantum computers on today's classical systems, researchers can develop and test quantum algorithms more quickly and at scales not otherwise possible.
Driving toward that future, NVIDIA created the largest-ever simulation of a quantum algorithm for solving the MaxCut problem, using cuQuantum, our SDK for accelerating quantum circuit simulations on GPUs.
In the math world, MaxCut is often cited as an example of an optimization problem no known computer can solve efficiently. MaxCut algorithms are used to design large computer networks, find the optimal layout of chips with billions of silicon pathways and explore the field of statistical physics.
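To make the problem concrete, here's a minimal brute-force sketch in Python (the graph is just an illustrative example, not anything from our work): it tries every possible way to split the vertices into two groups and keeps the split that cuts the most edges. Because an n-vertex graph has 2^n possible splits, this exact approach quickly becomes infeasible as graphs grow.

```python
from itertools import product

# Toy brute-force MaxCut: enumerate all 2**n ways to split the vertices into two
# sides and count the edges that cross the split. Purely illustrative example graph.
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 4), (3, 4)]
n = 5

best_cut, best_split = -1, None
for split in product((0, 1), repeat=n):          # assign each vertex to side 0 or 1
    cut = sum(split[u] != split[v] for u, v in edges)
    if cut > best_cut:
        best_cut, best_split = cut, split

print(f"max cut = {best_cut}, partition = {best_split}")
```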
MaxCut is a key problem in the quantum community because it's one of the leading candidates for demonstrating an advantage from using a quantum algorithm.
We used the cuTensorNet library in cuQuantum, running on NVIDIA's in-house supercomputer, Selene, to simulate a quantum algorithm for solving the MaxCut problem. Using 896 GPUs to simulate 1,688 qubits, we were able to solve a MaxCut problem on a graph with a whopping 3,375 vertices. That's 8x more qubits than the previous largest quantum simulation.
Our solution was also highly accurate, reaching 96% of the best-known answer. We set this new record with an algorithm developed by NVIDIA researchers and an open-source framework. (Editor's note: Since publishing this post, we've announced larger simulations of up to 10,000 vertices with 5,000 qubits, using 20 NVIDIA DGX A100 nodes and achieving 93% accuracy.)
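This post doesn't spell out the algorithm itself, but one widely studied quantum approach to MaxCut is the Quantum Approximate Optimization Algorithm (QAOA). The sketch below, a plain-NumPy, depth-1 QAOA on a tiny four-vertex ring graph, is purely illustrative: it is not NVIDIA's algorithm, not the cuQuantum API, and nowhere near the scales described above, but it shows the basic shape of the approach, alternating a cost layer with a mixing layer and tuning two angles to maximize the expected cut.

```python
import numpy as np

# Illustrative depth-1 QAOA for MaxCut on a 4-vertex ring, simulated with a dense
# state vector in plain NumPy (not NVIDIA's code or the cuQuantum API).
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
n = 4

# Diagonal of the MaxCut cost function: cost[z] = number of edges cut by bitstring z.
cost = np.array([sum(((z >> u) & 1) != ((z >> v) & 1) for u, v in edges)
                 for z in range(2 ** n)], dtype=float)

def qaoa_state(gamma, beta):
    """Depth-1 QAOA state: a diagonal cost layer followed by an Rx mixer on every qubit."""
    psi = np.full(2 ** n, 2 ** (-n / 2), dtype=complex)      # uniform superposition
    psi = psi * np.exp(-1j * gamma * cost)                    # cost layer (diagonal phases)
    psi = psi.reshape([2] * n)
    rx = np.array([[np.cos(beta), -1j * np.sin(beta)],
                   [-1j * np.sin(beta), np.cos(beta)]])       # Rx(2*beta)
    for q in range(n):                                        # mixer layer
        psi = np.moveaxis(np.tensordot(rx, psi, axes=([1], [q])), 0, q)
    return psi.reshape(-1)

def expected_cut(gamma, beta):
    psi = qaoa_state(gamma, beta)
    return np.vdot(psi, cost * psi).real

# Coarse grid search over the two angles; report the best expected cut found.
grid = np.linspace(0, np.pi, 40)
g, b = max(((g, b) for g in grid for b in grid), key=lambda gb: expected_cut(*gb))
print(f"gamma={g:.2f}, beta={b:.2f}, expected cut={expected_cut(g, b):.2f} (optimum: 4)")
```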

Our breakthrough opens the door for using cuQuantum on NVIDIA DGX systems to research quantum algorithms at a previously impossible scale, accelerating the path to tomorrow's quantum computers.
Keys to the Quantum World
You can test drive the same software that set this world record.
Starting today, the first library from cuQuantum, cuStateVec, is in public beta and available to download. It uses state vectors to accelerate simulations with tens of qubits.
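As a rough, back-of-the-envelope illustration of why state-vector simulation tops out around tens of qubits: an n-qubit state vector holds 2^n complex amplitudes, 16 bytes each in double precision, so memory doubles with every added qubit.

```python
# Memory needed just to hold an n-qubit state vector in double precision (16 B per amplitude).
for n in (30, 34, 40, 50):
    print(f"{n} qubits -> {16 * 2 ** n / 2 ** 30:,.0f} GiB")
```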
The cuTensorNet library that helped us set the world record uses tensor networks to simulate up to hundreds or even thousands of qubits on some promising near-term algorithms. It will be available in December.
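For a flavor of what simulating a circuit as a tensor network means, the toy sketch below contracts the gate tensors of a two-qubit circuit directly with NumPy's einsum instead of materializing the full state vector. It's only an illustration of the general idea and does not use the cuTensorNet API.

```python
import numpy as np

# Toy tensor-network contraction: amplitudes of CNOT * (H on qubit 0) applied to |00>,
# computed by contracting gate tensors with einsum rather than building the full state.
zero = np.array([1.0, 0.0])                                  # |0> on each input wire
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)       # Hadamard
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float).reshape(2, 2, 2, 2)  # (c_out, t_out, c_in, t_in)

# Indices: a, b label the input wires; c is the Hadamard output; e, f are the circuit outputs.
amps = np.einsum("a,b,ca,efcb->ef", zero, zero, H, CNOT)
print(amps[0, 0], amps[1, 1])   # both 1/sqrt(2): the Bell state (|00> + |11>)/sqrt(2)
```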
Get the Latest News at GTC
We invite you to try cuQuantum, get dramatically accelerated performance on your simulations and go break some big records.
Learn more about cuQuantum's partner ecosystem here.
To get the big picture, attend NVIDIA GTC, taking place online through Nov. 11. Watch NVIDIA CEO Jensen Huang's GTC keynote address below.