Results 1 - 6 of 6

1.
Nature; 626(7997): 58-65, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38056497

ABSTRACT

Suppressing errors is the central challenge for useful quantum computing [1], requiring quantum error correction (QEC) [2-6] for large-scale processing. However, the overhead in the realization of error-corrected 'logical' qubits, in which information is encoded across many physical qubits for redundancy [2-4], poses substantial challenges to large-scale logical quantum computing. Here we report the realization of a programmable quantum processor based on encoded logical qubits operating with up to 280 physical qubits. Using logical-level control and a zoned architecture in reconfigurable neutral-atom arrays [7], our system combines high two-qubit gate fidelities [8] and arbitrary connectivity [7,9] with fully programmable single-qubit rotations and mid-circuit readout [10-15]. Operating this logical processor with various types of encoding, we demonstrate improvement of a two-qubit logic gate by scaling the surface-code [6] distance from d = 3 to d = 7, preparation of colour-code qubits with break-even fidelities [5], fault-tolerant creation of logical Greenberger-Horne-Zeilinger (GHZ) states and feedforward entanglement teleportation, as well as operation of 40 colour-code qubits. Finally, using 3D [[8,3,2]] code blocks [16,17], we realize computationally complex sampling circuits [18] with up to 48 logical qubits entangled with hypercube connectivity [19], using 228 logical two-qubit gates and 48 logical CCZ gates [20]. We find that this logical encoding substantially improves algorithmic performance with error detection, outperforming physical-qubit fidelities at both cross-entropy benchmarking and quantum simulations of fast scrambling [21,22]. These results herald the advent of early error-corrected quantum computation and chart a path towards large-scale logical processors.
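
A note on the benchmark named at the end: the linear cross-entropy fidelity has the simple closed form F = D * <p_ideal(x)> - 1, where D = 2^n and the average runs over measured bitstrings x. The numpy sketch below illustrates that estimator only; the Haar-random "ideal" distribution and all sizes are assumptions standing in for the paper's logical circuits, not its method.

```python
import numpy as np

rng = np.random.default_rng(0)

def linear_xeb(samples, p_ideal):
    """Linear cross-entropy fidelity F = D * <p_ideal(x)>_samples - 1:
    close to 1 for faithful samples from a Porter-Thomas-like p_ideal,
    close to 0 for uniformly random (fully depolarized) samples."""
    D = len(p_ideal)
    return D * p_ideal[samples].mean() - 1.0

n = 10                                   # toy size (assumption)
D = 2 ** n
v = rng.normal(size=D) + 1j * rng.normal(size=D)
p_ideal = np.abs(v) ** 2                 # output distribution of a
p_ideal /= p_ideal.sum()                 # Haar-random state

good = rng.choice(D, size=5000, p=p_ideal)   # sampler that follows p_ideal
bad = rng.integers(D, size=5000)             # uniform-noise sampler
print("XEB, faithful sampler:", linear_xeb(good, p_ideal))  # ~1
print("XEB, uniform sampler :", linear_xeb(bad, p_ideal))   # ~0
```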

2.
Phys Rev Lett; 133(2): 020601, 2024 Jul 12.
Article in English | MEDLINE | ID: mdl-39073933

ABSTRACT

A central challenge in the verification of quantum computers is benchmarking their performance as a whole and demonstrating their computational capabilities. In this Letter, we find a universal model of quantum computation, Bell sampling, that can be used for both of those tasks and thus provides an ideal stepping stone toward fault tolerance. In Bell sampling, we measure two copies of a state prepared by a quantum circuit in the transversal Bell basis. We show that the Bell samples are classically intractable to produce and at the same time constitute what we call a "circuit shadow": from the Bell samples we can efficiently extract information about the quantum circuit preparing the state, as well as diagnose circuit errors. In addition to known properties that can be efficiently extracted from Bell samples, we give several new and efficient protocols: an estimator of state fidelity, an error-mitigated estimator of Pauli expectation values, a test for the depth of a circuit, and an algorithm to estimate a lower bound on the number of T gates in the circuit. With some additional measurements, the latter algorithm can be used to learn a full description of states prepared by circuits with low T count.
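
As a concrete illustration of the measurement primitive: a transversal Bell measurement is a CNOT between corresponding qubits of the two copies followed by a Hadamard on the first, and because two-qubit SWAP decomposes into Bell projectors with a -1 on the singlet, the sample average of prod_i (-1)^{x_i z_i} estimates Tr(rho^2). The numpy sketch below simulates this for a small pure state. It shows only this purity estimator, a simpler relative of the fidelity and T-count estimators in the Letter; the state size and sample count are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def apply_gate(psi, gate, qubits, n):
    """Apply a k-qubit gate to the given qubits of an n-qubit statevector."""
    k = len(qubits)
    t = np.moveaxis(psi.reshape((2,) * n), qubits, range(k))
    t = (gate @ t.reshape(2 ** k, -1)).reshape((2,) * n)
    return np.moveaxis(t, range(k), qubits).reshape(-1)

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0.]])

n = 3                                 # qubits per copy (toy size)
psi = rng.normal(size=2 ** n) + 1j * rng.normal(size=2 ** n)
psi /= np.linalg.norm(psi)
joint = np.kron(psi, psi)             # copy 1: qubits 0..n-1, copy 2: n..2n-1

# Transversal Bell measurement: CNOT(i -> n+i), H on qubit i, then a
# computational-basis readout of all 2n qubits.
for i in range(n):
    joint = apply_gate(joint, CNOT, [i, n + i], 2 * n)
    joint = apply_gate(joint, H, [i], 2 * n)

probs = np.abs(joint) ** 2
probs /= probs.sum()
samples = rng.choice(2 ** (2 * n), size=2000, p=probs)

# SWAP per pair = sum of Bell projectors with -1 on the singlet (x=z=1),
# so the sample mean of prod_i (-1)^{x_i z_i} estimates Tr(rho^2).
est = 0.0
for s in samples:
    bit = lambda q: (s >> (2 * n - 1 - q)) & 1
    sign = 1
    for i in range(n):
        sign *= -1 if bit(i) and bit(n + i) else 1
    est += sign
print("estimated Tr(rho^2):", est / len(samples))  # 1.0 for a pure state
```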

3.
Phys Rev Lett; 131(3): 030601, 2023 Jul 21.
Article in English | MEDLINE | ID: mdl-37540875

ABSTRACT

Entanglement is one of the physical properties responsible for the computational hardness of simulating quantum systems. But while the runtime of specific algorithms, notably tensor network algorithms, explicitly depends on the amount of entanglement in the system, it is unknown whether this connection runs deeper and whether entanglement can also cause inherent, algorithm-independent complexity. In this Letter, we quantitatively connect the entanglement present in certain quantum systems to the computational complexity of simulating those systems. Moreover, we completely characterize the entanglement and complexity as a function of a system parameter. Specifically, we consider the task of simulating single-qubit measurements of k-regular graph states on n qubits. We show that, as the regularity parameter is increased from 1 to n-1, there is a sharp transition from an easy regime with low entanglement to a hard regime with high entanglement at k=3, and a transition back to easy and low entanglement at k=n-3. As a key technical result, we prove a duality for the simulation complexity of regular graph states between low and high regularity.
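
The entanglement half of this statement is easy to probe numerically. A graph state has closed-form amplitudes psi_x = 2^{-n/2} (-1)^{e(x)}, where e(x) counts edges with both endpoints in the support of x, so small instances need no circuit simulation. The sketch below uses circulant graphs as a stand-in for general k-regular graphs (an assumption that also restricts k to even values) and prints the entanglement entropy across a fixed bipartition as k grows; it illustrates the entanglement trend only, not the complexity-theoretic result.

```python
import numpy as np

def circulant_graph_state(n, k):
    """Statevector of the graph state on the circulant k-regular graph
    (k even): vertex i is joined to i+-1, ..., i+-k/2 mod n."""
    assert k % 2 == 0 and k < n
    A = np.zeros((n, n), dtype=int)
    for i in range(n):
        for d in range(1, k // 2 + 1):
            A[i, (i + d) % n] = A[(i + d) % n, i] = 1
    # All bitstrings; qubit 0 is the most significant bit.
    x = (np.arange(2 ** n)[:, None] >> np.arange(n)[::-1]) & 1
    edges_inside = np.einsum('bi,ij,bj->b', x, A, x) // 2
    return (-1.0) ** edges_inside / 2 ** (n / 2)

def half_chain_entropy(psi, n):
    """Von Neumann entanglement entropy (in bits) across the middle cut."""
    s = np.linalg.svd(psi.reshape(2 ** (n // 2), -1), compute_uv=False)
    p = s[s > 1e-12] ** 2
    return float(-(p * np.log2(p)).sum())

n = 12
for k in (2, 4, 6):
    psi = circulant_graph_state(n, k)
    print(f"k = {k}: half-chain entropy = {half_chain_entropy(psi, n):.2f} bits")
```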

4.
Phys Rev Lett; 122(21): 210502, 2019 May 31.
Article in English | MEDLINE | ID: mdl-31283328

ABSTRACT

Results on the hardness of approximate sampling are seen as important stepping stones toward a convincing demonstration of the superior computational power of quantum devices. The most prominent suggestions for such experiments include boson sampling, instantaneous quantum polynomial time (IQP) circuit sampling, and universal random circuit sampling. A key challenge for any such demonstration is to certify the correct implementation. For all these examples, and in fact for all sufficiently flat distributions, we show that any noninteractive certification from classical samples and a description of the target distribution requires exponentially many uses of the device. Our proofs rely on the same property that is a central ingredient for the approximate hardness results, namely, that the sampling distributions, as random variables depending on the random unitaries defining the problem instances, have small second moments.
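
The "sufficiently flat" property is easy to check numerically for a proxy ensemble. For Haar-random states the expected collision probability is E[sum_x p_x^2] = 2/(D + 1), only about a factor of 2 above the uniform value 1/D, which is exactly the kind of small second moment the argument exploits. The sketch below uses Haar-random states as a stand-in for random circuit sampling (an assumption); the sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

def haar_state(dim):
    """A Haar-random pure state via a normalized complex Gaussian vector."""
    v = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    return v / np.linalg.norm(v)

n = 10
D = 2 ** n
trials = 200

# Collision probability sum_x p_x^2 of the output distribution,
# averaged over random states.
coll = np.mean([np.sum(np.abs(haar_state(D)) ** 4) for _ in range(trials)])
print(f"mean collision probability : {coll:.3e}")
print(f"Porter-Thomas value 2/(D+1): {2 / (D + 1):.3e}")
print(f"uniform value 1/D          : {1 / D:.3e}")
```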

5.
Sci Adv; 8(1): eabi7894, 2022 Jan 07.
Article in English | MEDLINE | ID: mdl-34985960

ABSTRACT

Photonics is a promising platform for demonstrating a quantum computational advantage (QCA) by outperforming the most powerful classical supercomputers on a well-defined computational task. Despite this promise, existing proposals and demonstrations face challenges. Experimentally, current implementations of Gaussian boson sampling (GBS) lack programmability or have prohibitive loss rates. Theoretically, there is a comparative lack of rigorous evidence for the classical hardness of GBS. In this work, we make progress in improving both the theoretical evidence and experimental prospects. We provide evidence for the hardness of GBS, comparable to the strongest theoretical proposals for QCA. We also propose a QCA architecture we call high-dimensional GBS, which is programmable and can be implemented with low loss using few optical components. We show that particular algorithms for simulating GBS are outperformed by high-dimensional GBS experiments at modest system sizes. This work thus opens the path to demonstrating QCA with programmable photonic processors.
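
For context, the classical hardness of GBS traces back to the hafnian: the probability of a photon-detection pattern S is proportional to |Haf(A_S)|^2 for a symmetric matrix A determined by the Gaussian state. Below is a minimal hafnian that enumerates perfect matchings; it is exponential by design, which is the point, and production simulators (for example, The Walrus library) use far faster algorithms. The random test matrix is an illustrative assumption.

```python
import numpy as np

def hafnian(A):
    """Hafnian by direct enumeration of perfect matchings: match vertex 0
    with every possible partner j and recurse on the remaining vertices.
    Fine for small symmetric matrices, exponential in general."""
    n = A.shape[0]
    if n == 0:
        return 1.0
    if n % 2:
        return 0.0
    rest = list(range(1, n))
    total = 0.0
    for idx, j in enumerate(rest):
        others = rest[:idx] + rest[idx + 1:]
        total += A[0, j] * hafnian(A[np.ix_(others, others)])
    return total

rng = np.random.default_rng(2)
B = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
A = B + B.T  # GBS submatrices are symmetric
# Pr(pattern S) is proportional to |Haf(A_S)|^2 in GBS.
print("Haf(A) =", hafnian(A))
```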

6.
Sci Adv; 6(33): eabb8341, 2020 Aug.
Article in English | MEDLINE | ID: mdl-32851184

ABSTRACT

Quantum Monte Carlo (QMC) methods are the gold standard for studying equilibrium properties of quantum many-body systems. However, in many interesting situations, QMC methods are faced with a sign problem, causing the severe limitation of an exponential increase in the runtime of the QMC algorithm. In this work, we develop a systematic, generally applicable, and practically feasible methodology for easing the sign problem by efficiently computable basis changes and use it to rigorously assess the sign problem. Our framework introduces measures of non-stoquasticity that, as we demonstrate analytically and numerically, at the same time provide a practically relevant and efficiently computable figure of merit for the severity of the sign problem. Complementing this pragmatic mindset, we prove that easing the sign problem in terms of those measures is generally an NP-complete task for nearest-neighbor Hamiltonians and simple basis choices, by a reduction to the MAXCUT problem.
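
To make the idea of a computable non-stoquasticity measure concrete: a Hamiltonian is stoquastic in a given basis when all its off-diagonal elements are non-positive, so the sum of the positive off-diagonal parts is a natural figure of merit. This particular measure is a simplified stand-in chosen for the sketch; the paper's definitions may differ in detail. The example below is the textbook case of an antiferromagnetic XX chain, whose sign problem is cured by a sublattice spin rotation.

```python
import numpy as np
from functools import reduce

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Y = np.array([[0., -1j], [1j, 0.]])
Z = np.diag([1., -1.])

def embed(ops, sites, n):
    """Tensor product acting with ops[k] on sites[k], identity elsewhere."""
    mats = [I2] * n
    for s, P in zip(sites, ops):
        mats[s] = P
    return reduce(np.kron, mats)

def nonstoquasticity(H):
    """Sum of positive off-diagonal parts: zero iff H is stoquastic
    in this basis (a simplified measure, an assumption of this sketch)."""
    off = H - np.diag(np.diag(H))
    return np.maximum(off.real, 0).sum()

# Antiferromagnetic XX chain: positive off-diagonals -> sign problem.
n = 4
H = sum(embed([X, X], [i, i + 1], n) + embed([Y, Y], [i, i + 1], n)
        for i in range(n - 1))

# Basis change: Z on every other site flips the sign of X and Y there,
# making every off-diagonal element non-positive.
U = reduce(np.kron, [Z if i % 2 else I2 for i in range(n)])
H_rot = U @ H @ U.conj().T

print("non-stoquasticity before:", nonstoquasticity(H))
print("non-stoquasticity after :", nonstoquasticity(H_rot))  # 0.0
```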
