ABSTRACT
Performing large calculations with a quantum computer will likely require a fault-tolerant architecture based on quantum error-correcting codes. The challenge is to design practical quantum error-correcting codes that perform well against realistic noise using modest resources. Here we show that a variant of the surface code, the XZZX code, offers remarkable performance for fault-tolerant quantum computation. The error threshold of this code matches what can be achieved with random codes (hashing) for every single-qubit Pauli noise channel; it is the first explicit code shown to have this universal property. We present numerical evidence that the threshold even exceeds this hashing bound for an experimentally relevant range of noise parameters. Focusing on the common situation where qubit dephasing is the dominant noise, we show that this code has a practical, high-performance decoder and surpasses all previously known thresholds in the realistic setting where syndrome measurements are unreliable. We go on to demonstrate the favourable sub-threshold resource scaling that can be obtained by specialising a code to exploit structure in the noise. We show that it is possible to maintain all of these advantages when we perform fault-tolerant quantum computation.
ABSTRACT
The code-capacity threshold for error correction using biased-noise qubits is known to be higher than that for qubits without such structured noise. However, realistic circuit-level noise severely restricts these improvements, because gate operations that do not commute with the dominant error, such as a controlled-NOT (CX) gate, unbias the noise channel. Here, we overcome the challenge of implementing a bias-preserving CX gate using biased-noise stabilized cat qubits in driven nonlinear oscillators. This continuous-variable gate relies on nontrivial phase space topology of the cat states. Furthermore, by following a scheme for concatenated error correction, we show that the availability of bias-preserving CX gates with moderately sized cats improves a rigorous lower bound on the fault-tolerant threshold by a factor of two and decreases the overhead in logical Clifford operations by a factor of five. Our results open a path toward high-threshold, low-overhead, fault-tolerant codes tailored to biased-noise cat qubits.
ABSTRACT
Noise in quantum computing is countered with quantum error correction. Achieving optimal performance will require tailoring codes and decoding algorithms to account for features of realistic noise, such as the common situation where the noise is biased towards dephasing. Here we introduce an efficient high-threshold decoder for a noise-tailored surface code based on minimum-weight perfect matching. The decoder exploits the symmetries of its syndrome under the action of biased noise and generalizes to the fault-tolerant regime where measurements are unreliable. Using this decoder, we obtain fault-tolerant thresholds in excess of 6% for a phenomenological noise model in the limit where dephasing dominates. These gains persist even for modest noise biases: we find a threshold of ~5% in an experimentally relevant regime where dephasing errors occur at a rate 100 times greater than bit-flip errors.
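For readers unfamiliar with the matching step that underlies such decoders, the toy sketch below pairs syndrome defects by minimum-weight perfect matching using networkx. It shows only the generic matching primitive, not the symmetry-exploiting, fault-tolerant decoder described above; the defect coordinates and Manhattan-distance weights are illustrative assumptions.

```python
# Generic matching primitive only; the decoder above additionally exploits
# syndrome symmetries under biased noise. Defect coordinates are hypothetical.
import itertools
import networkx as nx

def mwpm_pairs(defects):
    """Pair defects so the total Manhattan distance is minimal."""
    g = nx.Graph()
    for (i, p), (j, q) in itertools.combinations(enumerate(defects), 2):
        dist = abs(p[0] - q[0]) + abs(p[1] - q[1])
        g.add_edge(i, j, weight=-dist)   # negate: max_weight_matching maximises
    matching = nx.max_weight_matching(g, maxcardinality=True)
    return [(defects[i], defects[j]) for i, j in matching]

# Four defects left by two Z-error chains on a (hypothetical) code patch:
print(mwpm_pairs([(0, 0), (0, 3), (2, 1), (2, 2)]))
```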
ABSTRACT
Quantum computers will require encoding of quantum information to protect them from noise. Fault-tolerant quantum computing architectures illustrate how this might be done but have not yet shown a conclusive practical advantage. Here we demonstrate that a small but useful error detecting code improves the fidelity of the fault-tolerant gates implemented in the code space as compared to the fidelity of physically equivalent gates implemented on physical qubits. By running a randomized benchmarking protocol in the logical code space of the [[4,2,2]] code, we observe an order of magnitude improvement in the infidelity of the gates, with the two-qubit infidelity dropping from 5.8(2)% to 0.60(3)%. Our results are consistent with fault-tolerance theory and conclusively demonstrate the benefit of carrying out computation in a code space that can detect errors. Although the fault-tolerant gates offer an impressive improvement in fidelity, the computation as a whole is not below the fault-tolerance threshold because of noise associated with state preparation and measurement on this device.
ABSTRACT
We show that a simple modification of the surface code can exhibit an enormous gain in the error correction threshold for a noise model in which Pauli Z errors occur more frequently than X or Y errors. Such biased noise, where dephasing dominates, is ubiquitous in many quantum architectures. In the limit of pure dephasing noise we find a threshold of 43.7(1)% using a tensor network decoder proposed by Bravyi, Suchara, and Vargo. The threshold remains surprisingly large in the regime of realistic noise bias ratios, for example 28.2(2)% at a bias of 10. The performance is, in fact, at or near the hashing bound for all values of the bias. The modified surface code still uses only weight-4 stabilizers on a square lattice, but merely requires measuring products of Y instead of Z around the faces, as this doubles the number of useful syndrome bits associated with the dominant Z errors. Our results demonstrate that large efficiency gains can be found by appropriately tailoring codes and decoders to realistic noise models, even under the locality constraints of topological codes.
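The gain described above can be seen in a short commutation check. The sketch below (my illustration, not code from the paper) verifies that a weight-4 Y-type face stabilizer anticommutes with a single-qubit Z error, while the conventional Z-type face stabilizer does not, which is why the tailored code extracts twice as many useful syndrome bits for dephasing.

```python
# Illustration only (not the paper's decoder): Pauli strings commute iff they
# differ on an even number of sites where both act nontrivially.
def commutes(a, b):
    clashes = sum(1 for x, y in zip(a, b) if x != 'I' and y != 'I' and x != y)
    return clashes % 2 == 0

z_error = 'ZIII'                      # a single dephasing (Z) error on a face
print(commutes('ZZZZ', z_error))      # True: the standard Z-face misses it
print(commutes('YYYY', z_error))      # False: the tailored Y-face flags it
print(commutes('XXXX', z_error))      # False: the vertex stabilizer flags it too
```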
ABSTRACT
Achieving error rates that meet or exceed the fault-tolerance threshold is a central goal for quantum computing experiments, and measuring these error rates using randomized benchmarking is now routine. However, direct comparison between measured error rates and thresholds is complicated by the fact that benchmarking estimates average error rates while thresholds reflect worst-case behavior when a gate is used as part of a large computation. These two measures of error can differ by orders of magnitude in the regime of interest. Here we facilitate comparison between the experimentally accessible average error rates and the worst-case quantities that arise in current threshold theorems by deriving relations between the two for a variety of physical noise sources. Our results indicate that it is coherent errors that lead to an enormous mismatch between average and worst case, and we quantify how well these errors must be controlled to ensure fair comparison between average error probabilities and fault-tolerance thresholds.
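A minimal numerical illustration of this mismatch, using standard closed forms for a single-qubit coherent rotation (my example, not data from the paper): the average gate infidelity of exp(-i θ Z/2) scales as θ², while its worst-case (diamond-distance) error scales as θ, so their ratio diverges as the error shrinks.

```python
# My illustrative numbers, using standard closed forms for a qubit:
# average gate fidelity of a unitary V vs identity is (|Tr V|^2 + d)/(d^2 + d),
# and for V = exp(-i theta Z / 2) half the diamond distance to identity is
# sin(theta / 2).
import numpy as np

d = 2
for theta in [1e-1, 1e-2, 1e-3]:
    tr_v = 2 * np.cos(theta / 2)                      # Tr exp(-i theta Z / 2)
    r_avg = 1 - (abs(tr_v) ** 2 + d) / (d ** 2 + d)   # ~ theta^2 / 6
    d_worst = np.sin(theta / 2)                       # ~ theta / 2
    print(f"theta={theta:.0e}  average={r_avg:.2e}  "
          f"worst-case={d_worst:.2e}  ratio={d_worst / r_avg:.0f}")
```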
ABSTRACT
We use a simple real-space renormalization-group approach to investigate the critical behavior of the quantum Ashkin-Teller model, a one-dimensional quantum spin chain possessing a line of criticality along which critical exponents vary continuously. This approach, which is based on exploiting the on-site symmetry of the model, has been shown to be surprisingly accurate for predicting some aspects of the critical behavior of the quantum transverse-field Ising model. Our investigation explores this approach in more generality, in a model in which the critical behavior has a richer structure but which reduces to the simpler Ising case at a special point. We demonstrate that the correlation length critical exponent as predicted from this real-space renormalization-group approach is in broad agreement with the corresponding results from conformal field theory along the line of criticality. Near the Ising special point, the error in the estimated critical exponent from this simple method is comparable to that of numerically intensive simulations based on much more sophisticated methods, although the accuracy decreases away from the decoupled Ising model point.
ABSTRACT
Bender et al. [Phys. Rev. Lett. 80, 5243 (1998)] developed PT-symmetric quantum theory as an extension of quantum theory to non-Hermitian Hamiltonians. We show that when this theory has a local PT symmetry acting on composite systems, it violates the no-signaling principle of relativity. Since the case of global PT symmetry is known to reduce to standard quantum mechanics [A. Mostafazadeh, J. Math. Phys. 43, 205 (2002)], this shows that PT-symmetric theory is either a trivial extension of standard quantum mechanics or likely false as a fundamental theory.
ABSTRACT
We study the computational difficulty of computing the ground-state degeneracy and the density of states for local Hamiltonians. We show that the difficulty of both problems is exactly captured by a class which we call #BQP, the counting version of the quantum complexity class QMA (Quantum Merlin-Arthur). We show that #BQP is no harder than its classical counting counterpart #P, which in turn implies that computing the ground-state degeneracy or the density of states for classical Hamiltonians is just as hard as it is for quantum Hamiltonians.
ABSTRACT
We describe a simple method for certifying that an experimental device prepares a desired quantum state ρ. Our method is applicable to any pure state ρ, and it provides an estimate of the fidelity between ρ and the actual (arbitrary) state in the lab, up to a constant additive error. The method requires measuring only a constant number of Pauli expectation values, selected at random according to an importance-weighting rule. Our method is faster than full tomography by a factor of d, the dimension of the state space, and extends easily and naturally to quantum channels.
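The sketch below is a schematic reconstruction of that importance-weighting rule for a small system, with exact expectation values standing in for experimental estimates; the Bell-state target and depolarized lab state are hypothetical stand-ins.

```python
# Schematic direct fidelity estimation on two qubits; 'sigma' below is a
# hypothetical noisy lab state, and exact traces stand in for measured data.
import itertools
import numpy as np

PAULI = {'I': np.eye(2), 'X': np.array([[0, 1], [1, 0]]),
         'Y': np.array([[0, -1j], [1j, 0]]), 'Z': np.diag([1.0, -1.0])}

def pauli_op(label):
    op = np.array([[1.0 + 0j]])
    for c in label:
        op = np.kron(op, PAULI[c])
    return op

def estimate_fidelity(rho, sigma, n_qubits, n_samples, rng):
    d = 2 ** n_qubits
    labels = [''.join(s) for s in itertools.product('IXYZ', repeat=n_qubits)]
    chi = np.array([np.trace(rho @ pauli_op(l)).real for l in labels])
    probs = chi ** 2 / d                  # importance weights; sum to Tr(rho^2)
    probs = probs / probs.sum()
    picks = rng.choice(len(labels), size=n_samples, p=probs)
    ests = [np.trace(sigma @ pauli_op(labels[k])).real / chi[k] for k in picks]
    return np.mean(ests)

rng = np.random.default_rng(0)
psi = np.array([1, 0, 0, 1]) / np.sqrt(2)         # target: a Bell state
rho = np.outer(psi, psi.conj())
sigma = 0.9 * rho + 0.1 * np.eye(4) / 4           # noisy preparation
print(estimate_fidelity(rho, sigma, 2, 500, rng)) # close to Tr(rho sigma)=0.925
```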
ABSTRACT
We establish methods for quantum state tomography based on compressed sensing. These methods are specialized for quantum states that are fairly pure, and they offer a significant performance improvement on large quantum systems. In particular, they are able to reconstruct an unknown density matrix of dimension d and rank r using O(r d log² d) measurement settings, compared to standard methods that require d² settings. Our methods have several features that make them amenable to experimental implementation: they require only simple Pauli measurements, use fast convex optimization, are stable against noise, and can be applied to states that are only approximately low rank. The acquired data can be used to certify that the state is indeed close to pure, so no a priori assumptions are needed.
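As a toy version of the recovery step (my sketch under simplifying assumptions: noiseless expectation values, a two-qubit rank-1 state, and trace minimization over the PSD cone, which coincides with trace-norm minimization for positive matrices), one can reconstruct a density matrix from a subset of Pauli settings with cvxpy:

```python
# Toy compressed-sensing recovery (my sketch): trace minimization over the
# PSD cone subject to a random subset of non-identity Pauli expectations.
# Requires cvxpy with an SDP-capable solver (e.g. SCS).
import itertools
import numpy as np
import cvxpy as cp

PAULI = {'I': np.eye(2), 'X': np.array([[0, 1], [1, 0]]),
         'Y': np.array([[0, -1j], [1j, 0]]), 'Z': np.diag([1.0, -1.0])}
ops = [np.kron(PAULI[a], PAULI[b])
       for a, b in itertools.product('IXYZ', repeat=2)]

rng = np.random.default_rng(1)
psi = rng.normal(size=4) + 1j * rng.normal(size=4)
psi /= np.linalg.norm(psi)
true_rho = np.outer(psi, psi.conj())              # rank-1 target state

picked = rng.choice(np.arange(1, 16), size=12, replace=False)  # skip identity
X = cp.Variable((4, 4), hermitian=True)
constraints = [X >> 0] + [
    cp.real(cp.trace(ops[k] @ X)) == np.trace(ops[k] @ true_rho).real
    for k in picked]
cp.Problem(cp.Minimize(cp.real(cp.trace(X))), constraints).solve()
print(np.linalg.norm(X.value - true_rho))  # typically small for a rank-1 state
```

Exact recovery is only guaranteed with O(r d log² d) randomized settings; with fewer, the reconstruction is approximate.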
ABSTRACT
Quantum state tomography, the task of deducing quantum states from measured data, is the gold standard for verification and benchmarking of quantum devices. It has been realized in systems with few components, but for larger systems it becomes infeasible because the number of measurements and the amount of computation required to process them grow exponentially in the system size. Here, we present two tomography schemes that scale much more favourably than direct tomography with system size. One of them requires unitary operations on a constant number of subsystems, whereas the other requires only local measurements together with more elaborate post-processing. Both rely only on a linear number of experimental operations and post-processing that is polynomial in the system size. These schemes can be applied to a wide range of quantum states, in particular those that are well approximated by matrix product states. The accuracy of the reconstructed states can be rigorously certified without any a priori assumptions.
ABSTRACT
The difficulty in producing precisely timed and controlled quantum gates is a significant source of error in many physical implementations of quantum computers. Here we introduce a simple universal primitive, adiabatic gate teleportation, which is robust to timing errors and many control errors and maintains a constant energy gap throughout the computation above a degenerate ground state space. This construction allows for geometric robustness based upon the control of two independent qubit interactions. Further, our piecewise adiabatic evolution easily relates to the quantum circuit model, enabling the use of standard methods from fault-tolerance theory for establishing thresholds.
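A minimal numerical check of the basic protocol, as I understand it from the description above (my construction, not the authors' code): interpolate from -(X₂X₃ + Z₂Z₃) to -(X₁X₂ + Z₁Z₂) on three qubits and verify that sufficiently slow evolution teleports the state of qubit 1 to qubit 3 within the degenerate ground space.

```python
# Numerical check of the basic adiabatic-gate-teleportation idea (my sketch).
import numpy as np
from scipy.linalg import expm

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])

def on(op, site):                      # embed a single-qubit op on 3 qubits
    mats = [I2, I2, I2]
    mats[site] = op
    return np.kron(np.kron(mats[0], mats[1]), mats[2])

H_i = -(on(X, 1) @ on(X, 2) + on(Z, 1) @ on(Z, 2))   # couples qubits 2,3
H_f = -(on(X, 0) @ on(X, 1) + on(Z, 0) @ on(Z, 1))   # couples qubits 1,2

psi_in = np.array([np.cos(0.3), np.exp(0.7j) * np.sin(0.3)])  # arbitrary qubit
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)   # ground state of -(XX + ZZ)
state = np.kron(psi_in, bell)

T, steps = 100.0, 2000                       # slow linear interpolation
for n in range(steps):
    s = (n + 0.5) / steps
    state = expm(-1j * ((1 - s) * H_i + s * H_f) * (T / steps)) @ state

rho3 = np.einsum('abc,abd->cd', state.reshape(2, 2, 2),
                 state.conj().reshape(2, 2, 2))      # reduced state of qubit 3
print(np.real(psi_in.conj() @ rho3 @ psi_in))        # fidelity close to 1
```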
ABSTRACT
We generalize the topological entanglement entropy to a family of topological Rényi entropies parametrized by a parameter α, in an attempt to find new invariants for distinguishing topologically ordered phases. We show that, surprisingly, all topological Rényi entropies are the same, independent of α, for all nonchiral topological phases. This independence shows that topologically ordered ground-state wave functions have reduced density matrices with a certain simple structure, and no additional universal information can be extracted from the entanglement spectrum.
ABSTRACT
One-way quantum computing allows any quantum algorithm to be implemented easily using just measurements. The difficult part is creating the universal resource, a cluster state, on which the measurements are made. We propose a scalable method that uses a single, multimode optical parametric oscillator (OPO). The method is very efficient and generates a continuous-variable cluster state, universal for quantum computation, with quantum information encoded in the quadratures of the optical frequency comb of the OPO.
ABSTRACT
A parameter whose coupling to a quantum probe of n constituents includes all two-body interactions between the constituents can be measured with an uncertainty that scales as 1/n^(3/2), even when the constituents are initially unentangled. We devise a protocol that achieves the 1/n^(3/2) scaling without generating any entanglement among the constituents, and we suggest that the protocol might be implemented in a two-component Bose-Einstein condensate.
ABSTRACT
We develop generalized bounds for quantum single-parameter estimation problems for which the coupling to the parameter is described by intrinsic multisystem interactions. For a Hamiltonian with k-system parameter-sensitive terms, the quantum limit scales as 1/N^k, where N is the number of systems. These quantum limits remain valid when the Hamiltonian is augmented by any parameter-independent interaction among the systems and when adaptive measurements via parameter-independent coupling to ancillas are allowed.
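The scaling in both this abstract and the 1/n^(3/2) protocol above (the k = 2 case) follows from a standard argument, sketched here in my own summary: the achievable uncertainty is inversely proportional to the operator seminorm of the parameter's generator, which for all (N choose k) k-body terms grows as N^k.

```latex
% Heuristic scaling argument (my summary, not a verbatim derivation):
% after evolution for time t, the uncertainty of the parameter gamma obeys
\[
  \delta\gamma \;\gtrsim\;
  \frac{1}{t\,\left\|\partial H_\gamma/\partial\gamma\right\|}
  \;\sim\; \frac{1}{t\,\binom{N}{k}\,\|h\|}
  \;\sim\; \frac{1}{t\,N^{k}} ,
\]
% where \|.\| denotes the operator seminorm (largest minus smallest
% eigenvalue) and h is a single k-body coupling term.
```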