Faster State Preparation across Quantum Phase Transition Assisted by Reinforcement Learning.
Guo, Shuai-Feng; Chen, Feng; Liu, Qi; Xue, Ming; Chen, Jun-Jie; Cao, Jia-Hao; Mao, Tian-Wei; Tey, Meng Khoon; You, Li.
Affiliation
  • Guo SF; State Key Laboratory of Low Dimensional Quantum Physics, Department of Physics, Tsinghua University, Beijing 100084, China.
  • Chen F; State Key Laboratory of Low Dimensional Quantum Physics, Department of Physics, Tsinghua University, Beijing 100084, China.
  • Liu Q; State Key Laboratory of Low Dimensional Quantum Physics, Department of Physics, Tsinghua University, Beijing 100084, China.
  • Xue M; State Key Laboratory of Low Dimensional Quantum Physics, Department of Physics, Tsinghua University, Beijing 100084, China.
  • Chen JJ; State Key Laboratory of Low Dimensional Quantum Physics, Department of Physics, Tsinghua University, Beijing 100084, China.
  • Cao JH; State Key Laboratory of Low Dimensional Quantum Physics, Department of Physics, Tsinghua University, Beijing 100084, China.
  • Mao TW; State Key Laboratory of Low Dimensional Quantum Physics, Department of Physics, Tsinghua University, Beijing 100084, China.
  • Tey MK; State Key Laboratory of Low Dimensional Quantum Physics, Department of Physics, Tsinghua University, Beijing 100084, China.
  • You L; Frontier Science Center for Quantum Information, Beijing, China.
Phys Rev Lett ; 126(6): 060401, 2021 Feb 12.
Article in En | MEDLINE | ID: mdl-33635691
ABSTRACT
An energy gap develops near the quantum critical point of a quantum phase transition in a finite many-body (MB) system, facilitating ground state transformation by adiabatic parameter change. In real application scenarios, however, the efficacy of such a protocol is compromised by the need to balance finite system lifetime with adiabaticity, as exemplified in a recent experiment that prepares a three-mode balanced Dicke state near deterministically [Y.-Q. Zou et al., Proc. Natl. Acad. Sci. U.S.A. 115, 6381 (2018), doi:10.1073/pnas.1715105115]. Instead of tracking the instantaneous ground state, as is required for most adiabatic crossings, this work reports a faster sweeping policy that takes advantage of excited-level dynamics. It is obtained with deep reinforcement learning (DRL) from a multistep training scheme we develop. In the absence of loss, a fidelity ≥99% between the prepared and target Dicke states is achieved within a small fraction of the adiabatically required time. When loss is included, training is carried out against an operational benchmark, the interferometric sensitivity of the prepared state rather than its fidelity, leading to better sensitivity in about half of the previously reported time. Implemented in a Bose-Einstein condensate of ∼10^{4} ^{87}Rb atoms, the balanced three-mode Dicke state exhibiting an improved number squeezing of 13.02±0.20 dB is observed within 766 ms, highlighting the potential of DRL for quantum dynamics control and quantum state preparation in interacting MB systems.
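
To illustrate the control idea summarized in the abstract, the following is a minimal sketch in Python (NumPy only). A two-level Landau-Zener avoided crossing stands in for the spin-1 condensate's phase transition, and the cross-entropy method stands in for the paper's deep reinforcement-learning agent and multistep training scheme; every Hamiltonian parameter, sweep range, and hyperparameter below is an arbitrary assumption, not a value from the experiment. The sketch only conveys the structure of the task: learn a piecewise-constant sweep of a control parameter that maximizes final-state fidelity within a fixed time much shorter than the adiabatic requirement, instead of following a slow linear ramp.

# Minimal sketch (assumptions): a two-level avoided crossing replaces the
# spin-1 condensate, and cross-entropy optimization replaces the deep-RL agent.
import numpy as np

GAP = 0.2                    # minimum gap at the avoided crossing (arbitrary units)
Q_START, Q_END = -4.0, 4.0   # control-parameter sweep range
N_STEPS = 20                 # number of piecewise-constant sweep segments
T_TOTAL = 20.0               # total sweep time, ~10x shorter than the adiabatic requirement
DT = T_TOTAL / N_STEPS

SX = np.array([[0, 1], [1, 0]], dtype=complex)
SZ = np.array([[1, 0], [0, -1]], dtype=complex)

def hamiltonian(q):
    return 0.5 * (GAP * SX + q * SZ)

def ground_state(q):
    vals, vecs = np.linalg.eigh(hamiltonian(q))
    return vecs[:, 0]

def evolve(psi, q, dt):
    """Exact propagation under the piecewise-constant Hamiltonian H(q)."""
    vals, vecs = np.linalg.eigh(hamiltonian(q))
    return vecs @ (np.exp(-1j * vals * dt) * (vecs.conj().T @ psi))

def episode(increments):
    """Run one sweep defined by per-step increments of q; reward = final fidelity."""
    psi = ground_state(Q_START)
    target = ground_state(Q_END)
    q = Q_START
    for dq in increments:
        q = np.clip(q + dq, Q_START, Q_END)
        psi = evolve(psi, q, DT)
    return abs(np.vdot(target, psi)) ** 2

# Cross-entropy optimization of the sweep policy (stand-in for DRL training).
rng = np.random.default_rng(0)
mean = np.full(N_STEPS, (Q_END - Q_START) / N_STEPS)   # initialize at a linear ramp
std = np.full(N_STEPS, 0.5)
for generation in range(60):
    samples = rng.normal(mean, std, size=(64, N_STEPS))
    rewards = np.array([episode(s) for s in samples])
    elite = samples[np.argsort(rewards)[-8:]]           # keep the best sweeps
    mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-3

print(f"linear-ramp fidelity  : {episode(np.full(N_STEPS, (Q_END - Q_START) / N_STEPS)):.4f}")
print(f"learned-sweep fidelity: {episode(mean):.4f}")

In the reported work the reward is the fidelity of the prepared Dicke state (or, once atom loss is included, its interferometric sensitivity) and the policy is represented by a deep neural network; the toy optimizer above only mirrors the policy-search structure, not the paper's model or parameters.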

Full text: 1 Collection: 01-internacional Database: MEDLINE Language: En Journal: Phys Rev Lett Year: 2021 Document type: Article Affiliation country: China