DistributedEstimator: Distributed Training of Quantum Neural Networks via Circuit Cutting

arXiv:2602.16233v2 Announce Type: replace-cross

Abstract: Circuit cutting decomposes a large quantum circuit into smaller subcircuits whose outputs are classically reconstructed to recover original expectation values. While prior work characterises cutting overhead via subcircuit counts and sampling complexity, its end-to-end impact on iterative, estimator-driven training pipelines remains insufficiently measured from a systems perspective. We propose DistributedEstimator, a cut-aware estimator execution pipeline that treats circuit cutting as a staged distributed workload, instrumenting each query across four phases: partitioning, subexperiment generation, parallel execution, and classical reconstruction. Using logged runtime traces and learning outcomes on two binary classification workloads (Iris and MNIST), we quantify cutting overheads, scaling limits, and sensitivity to injected stragglers, and evaluate whether accuracy and robustness are preserved under matched training budgets. Reconstruction dominates per-query time -- reaching a median of 53% and 95th percentile of 58% at three cuts -- bounding achievable speed-up under parallelism. Despite these overheads, test accuracy is fully preserved on Iris and maintained without systematic degradation on MNIST across all cut configurations. Robustness under Gaussian noise and FGSM perturbations is similarly preserved, with several cut configurations exhibiting comparable or improved robustness relative to the uncut baseline. Exponential growth of subexperiment counts ($O(9^c)$ for CNOT-based decomposition) is a fundamental barrier limiting practical experimentation to small qubit counts. These results establish that practical scaling for learning workloads requires reducing and overlapping reconstruction, scheduling policies for barrier-dominated critical paths, and computationally efficient reconstruction strategies for larger qubit counts.
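The $O(9^c)$ barrier the abstract describes can be made concrete with a short sketch. This is not the paper's implementation; it only assumes, per the stated scaling, that each CNOT cut multiplies the number of subexperiments by a factor of 9, so the total count grows exponentially in the number of cuts $c$.

```python
# Illustrative sketch (not the paper's code): subexperiment growth under
# O(9^c) scaling for CNOT-based circuit cutting, where each gate cut is
# assumed to contribute a multiplicative factor of 9.

def subexperiment_count(num_cuts: int, factor_per_cut: int = 9) -> int:
    """Total subexperiments needed for `num_cuts` CNOT cuts."""
    if num_cuts < 0:
        raise ValueError("num_cuts must be non-negative")
    return factor_per_cut ** num_cuts

# Even a handful of cuts makes the classical workload explode:
for c in range(1, 6):
    print(f"{c} cut(s): {subexperiment_count(c)} subexperiments")
```

At three cuts (the configuration where the abstract reports reconstruction taking a median 53% of per-query time) this already yields 729 subexperiments, which is why the paper confines practical experimentation to small qubit counts.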
