ABSTRACT
The impressive pace of advance of quantum technology calls for robust and scalable techniques for the characterization and validation of quantum hardware. Quantum process tomography, the reconstruction of an unknown quantum channel from measurement data, remains the quintessential primitive for completely characterizing quantum devices. However, due to the exponential scaling of the required data and classical post-processing, its range of applicability is typically restricted to one- and two-qubit gates. Here, we present a technique for quantum process tomography that addresses these issues by combining a tensor-network representation of the channel with a data-driven optimization inspired by unsupervised machine learning. We demonstrate our technique on synthetically generated data for ideal one- and two-dimensional random quantum circuits of up to 10 qubits, and on a noisy 5-qubit circuit, reaching process fidelities above 0.99 using several orders of magnitude fewer (single-qubit) measurement shots than traditional tomographic techniques. Our results go far beyond the state of the art, providing a practical and timely tool for benchmarking quantum circuits in current and near-term quantum computers.
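The data-driven optimization described above amounts to maximum-likelihood fitting of a parameterized channel to measurement outcomes. The following is a minimal one-qubit sketch of that training principle only; the abstract's actual method uses a tensor-network ansatz over many qubits, which is not reproduced here. The rotation angle `theta_true` and the grid-search fit are illustrative assumptions, not the paper's procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def ry(theta):
    """Single-qubit Y-rotation unitary."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def p_outcome(theta, ket_in):
    """Z-basis outcome probabilities after applying RY(theta)."""
    out = ry(theta) @ ket_in
    return np.abs(out) ** 2

# Synthetic data: prepare |0>, apply the "unknown" channel RY(0.7),
# measure in the computational basis.
theta_true = 0.7
ket0 = np.array([1.0, 0.0])
shots = rng.choice(2, size=5000, p=p_outcome(theta_true, ket0))
counts = np.bincount(shots, minlength=2)

def nll(theta):
    """Negative log-likelihood of the observed outcome counts."""
    p = p_outcome(theta, ket0)
    return -np.sum(counts * np.log(p + 1e-12))

# Data-driven fit: a coarse grid search stands in for the
# gradient-based tensor-network optimization of the abstract.
grid = np.linspace(0, np.pi, 2001)
theta_hat = grid[np.argmin([nll(t) for t in grid])]
print(abs(theta_hat - theta_true))  # small estimation error
```

With enough shots, the minimizer of the negative log-likelihood converges to the true channel parameter; the tensor-network version scales this idea to many qubits by keeping the channel representation compact.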
ABSTRACT
Classical machine learning (ML) provides a potentially powerful approach to solving challenging quantum many-body problems in physics and chemistry. However, the advantages of ML over traditional methods have not been firmly established. In this work, we prove that classical ML algorithms can efficiently predict ground-state properties of gapped Hamiltonians after learning from other Hamiltonians in the same quantum phase of matter. By contrast, under a widely accepted conjecture, classical algorithms that do not learn from data cannot achieve the same guarantee. We also prove that classical ML algorithms can efficiently classify a wide range of quantum phases. Extensive numerical experiments corroborate our theoretical results in a variety of scenarios, including Rydberg atom systems, two-dimensional random Heisenberg models, symmetry-protected topological phases, and topologically ordered phases.
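The claim that classical ML can predict ground-state properties after learning from related Hamiltonians can be illustrated with a toy regression task. The sketch below, which is an assumption-laden stand-in for the paper's method (the paper uses kernel methods on classical-shadow features, not polynomial regression on a single coupling), fits the ground-state transverse magnetization of a small transverse-field Ising chain from a handful of training Hamiltonians:

```python
import numpy as np

def tfim_ground_obs(h, n=6):
    """Ground state of an open transverse-field Ising chain via exact
    diagonalization; returns the site-averaged <X> expectation value."""
    I = np.eye(2); X = np.array([[0., 1.], [1., 0.]]); Z = np.diag([1., -1.])
    def kron_at(op, i):
        ops = [I] * n; ops[i] = op
        m = ops[0]
        for o in ops[1:]:
            m = np.kron(m, o)
        return m
    H = sum(-kron_at(Z, i) @ kron_at(Z, i + 1) for i in range(n - 1))
    H = H + sum(-h * kron_at(X, i) for i in range(n))
    w, v = np.linalg.eigh(H)
    g = v[:, 0]
    return np.mean([g @ kron_at(X, i) @ g for i in range(n)])

# Training data: ground-state observables for Hamiltonians in the
# same phase diagram, indexed by the field strength h.
hs_train = np.linspace(0.2, 2.0, 15)
ys = np.array([tfim_ground_obs(h) for h in hs_train])

# Polynomial features + ridge regression (toy stand-in for the
# paper's shadow-kernel learning algorithm).
def feats(h):
    return np.array([h ** k for k in range(6)])

A = np.stack([feats(h) for h in hs_train])
w = np.linalg.solve(A.T @ A + 1e-6 * np.eye(6), A.T @ ys)

h_test = 1.1  # held-out Hamiltonian
pred, exact = feats(h_test) @ w, tfim_ground_obs(h_test)
print(abs(pred - exact))  # prediction error on the unseen Hamiltonian
```

The key point mirrored here is structural: ground-state observables vary smoothly within a phase, so a learner trained on sampled Hamiltonians generalizes to unseen ones in that phase.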
ABSTRACT
We demonstrate quantum many-body state reconstruction from experimental data generated by a programmable quantum simulator by means of a neural-network model incorporating known experimental errors. Specifically, we extract restricted Boltzmann machine wave functions from data produced by a Rydberg quantum simulator with eight and nine atoms in a single measurement basis and apply a novel regularization technique to mitigate the effects of measurement errors in the training data. Reconstructions of modest complexity are able to capture one- and two-body observables not accessible to experimentalists, as well as more sophisticated observables such as the Rényi mutual information. Our results open the door to integration of machine learning architectures with intermediate-scale quantum hardware.
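A restricted Boltzmann machine wave function, as used above, assigns an amplitude to each measured spin configuration with the hidden units traced out in closed form. The following minimal sketch (random parameters for illustration only; the reconstruction trains them against experimental data, which is not shown here) evaluates such amplitudes and normalizes them into Born probabilities:

```python
import numpy as np

def rbm_psi(v, a, b, W):
    """Unnormalized RBM wave-function amplitude for a visible
    configuration v in {0,1}^n, with hidden units summed out:
    psi(v) = exp(a.v) * prod_j 2*cosh(b_j + (W^T v)_j)."""
    theta = b + W.T @ v
    return np.exp(a @ v) * np.prod(2 * np.cosh(theta))

rng = np.random.default_rng(1)
n, m = 4, 3  # visible and hidden unit counts (illustrative)
a = 0.1 * rng.normal(size=n)
b = 0.1 * rng.normal(size=m)
W = 0.1 * rng.normal(size=(n, m))

# Normalize over all 2^n basis states to obtain Born probabilities
# (feasible only for small n; sampling is used at scale).
configs = [np.array([(k >> i) & 1 for i in range(n)], float)
           for k in range(2 ** n)]
amps = np.array([rbm_psi(v, a, b, W) for v in configs])
probs = amps ** 2 / np.sum(amps ** 2)
print(probs.sum())  # sums to 1 up to floating-point error
```

Training adjusts `a`, `b`, and `W` so that `probs` matches the empirical distribution of measurement outcomes; observables such as Rényi mutual information are then estimated from the learned state.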
ABSTRACT
Machine learning is actively being explored for its potential to design, validate, and even hybridize with near-term quantum devices. A central question is whether neural networks can provide a tractable representation of a given quantum state of interest. When true, stochastic neural networks can be employed for many unsupervised tasks, including generative modeling and state tomography. However, to be applicable for real experiments, such methods must be able to encode quantum mixed states. Here, we parametrize a density matrix based on a restricted Boltzmann machine that is capable of purifying a mixed state through auxiliary degrees of freedom embedded in the latent space of its hidden units. We implement the algorithm numerically and use it to perform tomography on some typical states of entangled photons, achieving fidelities competitive with standard techniques.
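The purification idea above can be made concrete without the RBM machinery: tracing auxiliary (latent) degrees of freedom out of a pure state on an enlarged space always yields a valid density matrix. The sketch below uses a generic random purification, not the paper's RBM parametrization, purely to illustrate why the construction guarantees positivity and unit trace:

```python
import numpy as np

rng = np.random.default_rng(2)
d_sys, d_aux = 4, 3  # system and auxiliary (latent) dimensions

# A purification |Psi> on system (x) auxiliary space. Tracing out the
# auxiliary index yields a density matrix -- the role played by the
# RBM's hidden/auxiliary units in the abstract.
psi = rng.normal(size=(d_sys, d_aux)) + 1j * rng.normal(size=(d_sys, d_aux))
psi /= np.linalg.norm(psi)
rho = psi @ psi.conj().T  # partial trace over the auxiliary index

print(np.trace(rho).real)             # approximately 1
print(np.linalg.eigvalsh(rho).min())  # non-negative spectrum
```

Because `rho` is built as a Gram matrix of the purification, it is positive semidefinite and trace-one by construction; the RBM version additionally makes the parametrization compact and trainable from measurement data.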
ABSTRACT
We present an algorithm for error correction in topological codes that exploits modern machine learning techniques. Our decoder is constructed from a stochastic neural network called a Boltzmann machine, of the type extensively used in deep learning. We provide a general prescription for the training of the network and a decoding strategy that is applicable to a wide variety of stabilizer codes with very little specialization. We demonstrate the neural decoder numerically on the well-known two-dimensional toric code with phase-flip errors.
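The decoder's input is the stabilizer syndrome, so the data-generation step for training can be sketched independently of the network. The code below computes star-operator syndromes for phase-flip errors on a small toric code; the lattice convention (qubits on horizontal and vertical edges, one star per vertex) is a standard choice, and the Boltzmann-machine decoder itself is not reproduced here:

```python
import numpy as np

def star_syndrome(z_err_h, z_err_v):
    """Star-operator syndrome of a phase-flip (Z) error pattern on an
    L x L toric code. z_err_h / z_err_v: binary arrays of Z errors on
    horizontal / vertical edges; each vertex checks its 4 incident edges."""
    L = z_err_h.shape[0]
    syn = np.zeros((L, L), dtype=int)
    for r in range(L):
        for c in range(L):
            syn[r, c] = (z_err_h[r, c] + z_err_h[r, (c - 1) % L]
                         + z_err_v[r, c] + z_err_v[(r - 1) % L, c]) % 2
    return syn

rng = np.random.default_rng(3)
L, p = 4, 0.1  # lattice size and phase-flip probability (illustrative)
z_h = (rng.random((L, L)) < p).astype(int)
z_v = (rng.random((L, L)) < p).astype(int)
syn = star_syndrome(z_h, z_v)

# Each Z error anticommutes with exactly two star operators, so the
# number of flagged syndromes is always even.
print(syn.sum() % 2)  # 0
```

Training pairs of (error pattern, syndrome) generated this way are exactly what a neural decoder learns from: given a syndrome, it must propose a correction in the right homology class.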