Results 1 - 3 of 3
1.
Nature ; 619(7971): 738-742, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37438533

ABSTRACT

Scalable generation of genuine multipartite entanglement with an increasing number of qubits is important both for fundamental interest and for practical use in quantum-information technologies1,2. On the one hand, multipartite entanglement shows a strong contradiction between the predictions of quantum mechanics and local realism and can be used for the study of the quantum-to-classical transition3,4. On the other hand, realizing large-scale entanglement is a benchmark for the quality and controllability of the quantum system and is essential for realizing universal quantum computing5-8. However, scalable generation of genuine multipartite entanglement on a state-of-the-art quantum device can be challenging, requiring accurate quantum gates and efficient verification protocols. Here we show a scalable approach for preparing and verifying intermediate-scale genuine entanglement on a 66-qubit superconducting quantum processor. We used high-fidelity parallel quantum gates and optimized the fidelities of parallel single- and two-qubit gates to be 99.91% and 99.05%, respectively. With efficient randomized fidelity estimation9, we realized 51-qubit one-dimensional and 30-qubit two-dimensional cluster states and achieved fidelities of 0.637 ± 0.030 and 0.671 ± 0.006, respectively. On the basis of these high-fidelity cluster states, we further show a proof-of-principle realization of a measurement-based variational quantum eigensolver10 for perturbed planar codes. Our work provides a feasible approach for preparing and verifying entanglement with a few hundred qubits, enabling medium-scale quantum computing with superconducting quantum systems.
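A one-dimensional cluster state of the kind realized in this work is defined by preparing every qubit in |+⟩ and applying CZ gates between nearest neighbors. The following is a minimal statevector sketch of that definition for a handful of qubits (an illustration of the state itself, not the hardware protocol or the randomized fidelity estimation used in the paper):

```python
import numpy as np

def cluster_state_1d(n):
    """n-qubit 1D cluster state: |+>^n followed by CZ on each
    neighboring pair (qubit i is bit i of the basis index)."""
    # |+>^n is the uniform superposition over all 2^n bitstrings.
    psi = np.full(2**n, 1 / np.sqrt(2**n))
    # CZ between qubits i and i+1 flips the sign of amplitudes
    # where both bits are 1.
    for i in range(n - 1):
        for b in range(2**n):
            if (b >> i) & 1 and (b >> (i + 1)) & 1:
                psi[b] *= -1
    return psi

state = cluster_state_1d(4)
print(np.isclose(np.linalg.norm(state), 1.0))  # state is normalized
```

Every amplitude has magnitude 2^(-n/2); only the sign pattern, fixed by the CZ entangling layer, distinguishes the cluster state from the product state |+⟩^n.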

2.
Entropy (Basel) ; 26(1)2024 Jan 13.
Article in English | MEDLINE | ID: mdl-38248196

ABSTRACT

Errors are common issues in quantum computing platforms, and among them leakage is one of the most challenging to address. This is because leakage, i.e., the loss of information stored in the computational subspace to undesired subspaces in a larger Hilbert space, is more difficult to detect and correct than errors that preserve the computational subspace. As a result, leakage presents a significant obstacle to the development of fault-tolerant quantum computation. In this paper, we propose an efficient and accurate benchmarking framework, called leakage randomized benchmarking (LRB), for measuring leakage rates on multi-qubit quantum systems. Our approach is less sensitive to state preparation and measurement (SPAM) noise than existing leakage benchmarking protocols, requires fewer assumptions about the gate set itself, and can be used to benchmark multi-qubit leakage, which has not been achieved previously. We also extend the LRB protocol to an interleaved variant, called interleaved LRB (iLRB), which can benchmark the average leakage rate of generic n-site quantum gates under reasonable noise assumptions. We demonstrate the iLRB protocol by benchmarking generic two-qubit gates realized using flux tuning and analyze the behavior of iLRB under the corresponding leakage models. Our numerical experiments show good agreement with the theoretical estimates, indicating the feasibility of both the LRB and iLRB protocols.
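In leakage randomized benchmarking, the population remaining in the computational subspace decays exponentially with sequence length, and the leakage rate is extracted from the fitted decay. The sketch below simulates that standard decay model (per-gate leakage rate L and seepage rate S; the rate values are illustrative, not taken from the paper) and recovers L from the fit:

```python
import numpy as np

# Per gate: probability L of leaking out of the computational subspace,
# probability S of seeping back in.  The subspace population then follows
#   p(m) = p_inf + (p0 - p_inf) * lam**m,
# with lam = 1 - L - S and p_inf = S / (L + S).
L, S = 0.002, 0.02          # illustrative rates, not from the paper
lam = 1 - L - S
p_inf = S / (L + S)

def population(m, p0=1.0):
    """Computational-subspace population after m gates."""
    return p_inf + (p0 - p_inf) * lam**m

# "Measure" populations at several sequence lengths, then fit the decay:
# log(p - p_inf) is linear in m with slope log(lam).
ms = np.array([1, 5, 10, 20, 50, 100])
ps = population(ms)
lam_fit = np.exp(np.polyfit(ms, np.log(ps - p_inf), 1)[0])
# Recover the leakage rate from the fitted decay constant.
L_fit = (1 - lam_fit) * (1 - p_inf)
print(abs(L_fit - L) < 1e-9)
```

Because leakage enters only through the decay constant of the population, this estimate is insensitive to an overall SPAM-induced rescaling of p(m), which is the key robustness property the LRB protocol is built around.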

3.
Sci Bull (Beijing) ; 65(10): 832-841, 2020 May 30.
Article in English | MEDLINE | ID: mdl-36659202

ABSTRACT

Gaussian boson sampling is an alternative model for demonstrating quantum computational supremacy, in which squeezed states are injected into every input mode instead of the single photons used in standard boson sampling. Here, by numerically analyzing the computational costs, we establish a lower bound for achieving quantum computational supremacy for a class of Gaussian boson-sampling problems. Specifically, we propose a more efficient method for calculating the transition probabilities, leading to a significant reduction in the simulation costs. In particular, our numerical results indicate that one can simulate up to 18 photons for Gaussian boson sampling at the output subspace on a normal laptop, 20 photons on a commercial workstation with 256 cores, and about 30 photons on supercomputers. These numbers are significantly smaller than those for standard boson sampling, suggesting that Gaussian boson sampling could be experimentally friendly for demonstrating quantum computational supremacy.
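The transition probabilities in Gaussian boson sampling are given by hafnians of submatrices of the Gaussian covariance data, and computing hafnians is what makes classical simulation expensive. The following is a minimal recursive hafnian, included only to make the object concrete; it runs in exponential time and is not the optimized method proposed in the paper:

```python
import numpy as np

def hafnian(A):
    """Hafnian of a symmetric matrix of even dimension, by recursive
    pairing: pair index 0 with each j > 0 and recurse on the rest.
    haf(A) sums A[i,j] over all perfect matchings of the indices."""
    n = A.shape[0]
    if n == 0:
        return 1.0
    total = 0.0
    for j in range(1, n):
        keep = [k for k in range(1, n) if k != j]
        total += A[0, j] * hafnian(A[np.ix_(keep, keep)])
    return total

# The hafnian of the all-ones off-diagonal matrix counts the perfect
# matchings of the complete graph: K_4 has exactly 3.
A = np.ones((4, 4)) - np.eye(4)
print(hafnian(A))  # 3.0
```

This pairing recursion visits all (n-1)!! perfect matchings, which is why even tens of photons already saturate laptops and workstations, as the cost estimates in the abstract indicate.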
