ABSTRACT
Quantum mechanics can help to solve complex problems in physics and chemistry, provided they can be programmed in a physical device. In adiabatic quantum computing, a system is slowly evolved from the ground state of a simple initial Hamiltonian to a final Hamiltonian that encodes a computational problem. The appeal of this approach lies in the combination of simplicity and generality; in principle, any problem can be encoded. In practice, applications are restricted by limited connectivity, available interactions and noise. A complementary approach is digital quantum computing, which enables the construction of arbitrary interactions and is compatible with error correction, but uses quantum circuit algorithms that are problem-specific. Here we combine the advantages of both approaches by implementing digitized adiabatic quantum computing in a superconducting system. We tomographically probe the system during the digitized evolution and explore the scaling of errors with system size. We then let the full system find the solution to random instances of the one-dimensional Ising problem as well as problem Hamiltonians that involve more complex interactions. This digital quantum simulation of the adiabatic algorithm consists of up to nine qubits and up to 1,000 quantum logic gates. The demonstration of digitized adiabatic quantum computing in the solid state opens a path to synthesizing long-range correlations and solving complex computational problems. When combined with fault-tolerance, our approach becomes a general-purpose algorithm that is scalable.
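The digitization strategy is easy to state concretely. The Python fragment below is a minimal sketch under simplifying assumptions (dense matrices, matrix exponentials in place of compiled gate sequences), not the paper's hardware implementation: it Trotterizes the interpolation H(s) = (1 - s)H_init + s H_problem for a three-qubit 1D Ising instance.

```python
# Minimal sketch of digitized adiabatic evolution (illustrative, not the
# paper's gate decomposition): Trotterized sweep for a 3-qubit Ising chain.
import numpy as np
from scipy.linalg import expm

n = 3
I = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])

def op(single, site):
    """Embed a single-qubit operator at `site` in the n-qubit space."""
    out = np.array([[1.0]])
    for k in range(n):
        out = np.kron(out, single if k == site else I)
    return out

H_init = -sum(op(X, k) for k in range(n))            # transverse-field driver
rng = np.random.default_rng(1)
J = rng.choice([-1.0, 1.0], n - 1)                   # random Ising couplings
H_prob = -sum(J[k] * op(Z, k) @ op(Z, k + 1) for k in range(n - 1))

steps, T = 40, 8.0                                   # Trotter steps, sweep time
dt = T / steps
psi = np.ones(2**n, dtype=complex) / np.sqrt(2**n)   # ground state of H_init
for k in range(steps):
    s = (k + 0.5) / steps                            # anneal parameter, 0 -> 1
    # one digital step: separate driver and problem factors (Trotter split)
    psi = expm(-1j * dt * s * H_prob) @ (expm(-1j * dt * (1 - s) * H_init) @ psi)

# For slow enough sweeps, probability piles up on the Ising ground states.
print((np.abs(psi) ** 2).round(3))
```

On a digital processor, each exponential factor would be compiled into single- and two-qubit gates, which is where a gate count of order 1,000 arises.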
ABSTRACT
Quantum computing becomes viable when a quantum state can be protected from environment-induced error. If quantum bits (qubits) are sufficiently reliable, errors are sparse and quantum error correction (QEC) is capable of identifying and correcting them. Adding more qubits improves the preservation of states by guaranteeing that increasingly larger clusters of errors will not cause logical failure, a key requirement for large-scale systems. Using QEC to extend the qubit lifetime remains one of the outstanding experimental challenges in quantum computing. Here we report the protection of classical states from environmental bit-flip errors and demonstrate the suppression of these errors with increasing system size. We use a linear array of nine qubits, which is a natural step towards the two-dimensional surface code QEC scheme, and track errors as they occur by repeatedly performing projective quantum non-demolition parity measurements. Relative to a single physical qubit, we reduce the failure rate in retrieving an input state by a factor of 2.7 when using five of our nine qubits and by a factor of 8.5 when using all nine qubits after eight cycles. Additionally, we tomographically verify preservation of the non-classical Greenberger-Horne-Zeilinger state. The successful suppression of environment-induced errors will motivate further research into the many challenges associated with building a large-scale superconducting quantum computer.
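The reported scaling with system size can be illustrated with a toy model. The Monte Carlo below is a deliberate idealization (independent bit flips, one round of majority voting) rather than the paper's repeated parity-measurement protocol, but it shows why a distance-n repetition code suppresses bit-flip errors as n grows:

```python
# Idealized sketch: logical failure of a distance-n repetition code under
# independent bit flips with simple majority-vote decoding (not the
# paper's multi-cycle parity-measurement decoder).
import random

def logical_failure_rate(n_qubits, p_flip, trials=200_000):
    failures = 0
    for _ in range(trials):
        flips = sum(random.random() < p_flip for _ in range(n_qubits))
        if flips > n_qubits // 2:       # majority corrupted -> logical error
            failures += 1
    return failures / trials

for n in (1, 5, 9):
    print(n, logical_failure_rate(n, p_flip=0.05))
```

With a physical error probability well below 1/2, the failure rate falls steeply as qubits are added, which is the qualitative behaviour behind the factors of 2.7 and 8.5 quoted above.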
ABSTRACT
A quantum computer can solve hard problems, such as prime factoring, database searching and quantum simulation, at the cost of needing to protect fragile quantum states from error. Quantum error correction provides this protection by distributing a logical state among many physical quantum bits (qubits) by means of quantum entanglement. Superconductivity is a useful phenomenon in this regard, because it allows the construction of large quantum circuits and is compatible with microfabrication. For superconducting qubits, the surface code approach to quantum computing is a natural choice for error correction, because it uses only nearest-neighbour coupling and rapidly cycled entangling gates. The gate fidelity requirements are modest: the per-step fidelity threshold is only about 99 per cent. Here we demonstrate a universal set of logic gates in a superconducting multi-qubit processor, achieving an average single-qubit gate fidelity of 99.92 per cent and a two-qubit gate fidelity of up to 99.4 per cent. This places Josephson quantum computing at the fault-tolerance threshold for surface code error correction. Our quantum processor is a first step towards the surface code, using five qubits arranged in a linear array with nearest-neighbour coupling. As a further demonstration, we construct a five-qubit Greenberger-Horne-Zeilinger state using the complete circuit and full set of gates. The results demonstrate that Josephson quantum computing is a high-fidelity technology, with a clear path to scaling up to large-scale, fault-tolerant quantum circuits.
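For context on why a roughly 99 per cent per-step fidelity matters: below the threshold, the logical error rate of a distance-d surface code is expected to fall exponentially with code distance. A commonly quoted rule-of-thumb estimate (added here for orientation; not a formula from this abstract) is

```latex
\epsilon_L \approx A \left( \frac{p}{p_{\mathrm{th}}} \right)^{\lfloor (d+1)/2 \rfloor}
```

where p is the physical error rate per step, p_th ≈ 1% is the threshold, d is the code distance and A is an order-one prefactor, so operating even slightly below threshold compounds into rapid error suppression as d grows.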
ABSTRACT
Superconducting qubits are an attractive platform for quantum computing since they have demonstrated high-fidelity quantum gates and extensibility to modest system sizes. Nonetheless, an outstanding challenge is stabilizing their energy-relaxation times, which can fluctuate unpredictably in frequency and time. Here, we use qubits as spectral and temporal probes of individual two-level-system defects to provide direct evidence that they are responsible for the largest fluctuations. This research lays the foundation for stabilizing qubit performance through calibration, design, and fabrication.
ABSTRACT
By analyzing the dissipative dynamics of a tunable gap flux qubit, we extract both sides of its two-sided environmental flux noise spectral density over a range of frequencies around 2k_{B}T/h ≈ 1 GHz, allowing for the observation of a classical-quantum crossover. Below the crossover point, the symmetric noise component follows a 1/f power law that matches the magnitude of the 1/f noise near 1 Hz. The antisymmetric component displays a 1/T dependence below 100 mK, providing dynamical evidence for a paramagnetic environment. Extrapolating the two-sided spectrum predicts the linewidth and reorganization energy of incoherent resonant tunneling between flux qubit wells.
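For orientation, the decomposition assumed in this kind of analysis (standard definitions paraphrased here, not equations quoted from the paper) splits the two-sided spectrum into symmetric and antisymmetric parts:

```latex
S_{s}(\omega) = \tfrac{1}{2}\,[S(\omega) + S(-\omega)], \qquad
S_{a}(\omega) = \tfrac{1}{2}\,[S(\omega) - S(-\omega)]
```

In thermal equilibrium, detailed balance gives S_a/S_s = tanh(ħω/2k_BT), so the antisymmetric (quantum) component becomes comparable to the symmetric (classical) one near f ≈ 2k_BT/h, the crossover region probed here.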
ABSTRACT
Leakage errors occur when a quantum system leaves the two-level qubit subspace. Reducing these errors is critically important for quantum error correction to be viable. To quantify leakage errors, we use randomized benchmarking in conjunction with measurement of the leakage population. We characterize single qubit gates in a superconducting qubit, and by refining our use of derivative reduction by adiabatic gate pulse shaping along with detuning of the pulses, we obtain gate errors consistently below 10^{-3} and leakage rates at the 10^{-5} level. With the control optimized, we find that a significant portion of the remaining leakage is due to incoherent heating of the qubit.
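As an illustration of the pulse-shaping technique named above, the sketch below builds a DRAG-style envelope: a Gaussian in-phase quadrature plus a derivative out-of-phase quadrature, with a comment on where pulse detuning enters. All numerical values are placeholders, not the experiment's calibrated parameters.

```python
# Illustrative DRAG envelope (placeholder amplitudes and anharmonicity,
# not the experiment's calibration).
import numpy as np

t = np.linspace(0.0, 20e-9, 201)          # 20 ns gate
sigma, center = 4e-9, 10e-9
omega_x = np.exp(-((t - center) ** 2) / (2 * sigma**2))  # Gaussian I quadrature
lam = 0.5                                 # DRAG weighting (placeholder)
delta = -2 * np.pi * 200e6                # anharmonicity, rad/s (placeholder)
omega_y = -lam * np.gradient(omega_x, t) / delta         # derivative Q quadrature
# Detuning the pulse, as described above, multiplies the complex envelope
# omega_x + 1j*omega_y by a phase ramp exp(1j * 2*np.pi * df * t).
envelope = omega_x + 1j * omega_y
```

The derivative quadrature cancels the leading-order excitation of the second excited state, which is exactly the leakage channel the benchmarking quantifies.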
ABSTRACT
Faster and more accurate state measurement is required for progress in superconducting qubit experiments with greater numbers of qubits and advanced techniques such as feedback. We have designed a multiplexed measurement system with a bandpass filter that allows fast measurement without increasing environmental damping of the qubits. We use this to demonstrate simultaneous measurement of four qubits on a single superconducting integrated circuit, the fastest of which can be measured to 99.8% accuracy in 140 ns. This accuracy and speed is suitable for advanced multiqubit experiments including surface-code error correction.
ABSTRACT
We present a method for optimizing quantum control in experimental systems, using a subset of randomized benchmarking measurements to rapidly infer error. This is demonstrated to improve single- and two-qubit gates, minimize gate bleedthrough, where a gate mechanism can cause errors on subsequent gates, and identify control crosstalk in superconducting qubits. This method is able to correct parameters so that control errors no longer dominate and is suitable for automated and closed-loop optimization of experimental systems.
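Conceptually, the method turns gate tune-up into a numerical optimization whose cost is the error inferred from a small, fixed subset of randomized-benchmarking sequences. The loop below is a hypothetical sketch of that idea; run_rb_subset and its toy error surface stand in for the actual experiment.

```python
# Hedged sketch of closed-loop optimization over control parameters,
# using an RB-inferred error as the cost. `run_rb_subset` is a
# hypothetical stand-in for running a few RB sequences on hardware.
import numpy as np
from scipy.optimize import minimize

def run_rb_subset(params):
    amp, detuning = params
    # Toy error surface; in practice: apply params, run the RB subset,
    # and return the inferred gate error.
    return (amp - 1.0) ** 2 + 0.5 * detuning**2 + 1e-3

result = minimize(run_rb_subset, x0=np.array([0.9, 0.1]), method="Nelder-Mead")
print(result.x)   # optimized (amplitude, detuning)
```

Because only a subset of RB sequences is measured per iteration, each cost evaluation is fast enough for automated, closed-loop use.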
ABSTRACT
We introduce a superconducting qubit architecture that combines high-coherence qubits and tunable qubit-qubit coupling. With the ability to set the coupling to zero, we demonstrate that this architecture is protected from the frequency crowding problems that arise from fixed coupling. More importantly, the coupling can be tuned dynamically with nanosecond resolution, making this architecture a versatile platform with applications ranging from quantum logic gates to quantum simulation. We illustrate the advantages of dynamical coupling by implementing a novel adiabatic controlled-Z gate, with a speed approaching that of single-qubit gates. Integrating coherence and scalable control, the introduced qubit architecture provides a promising path towards large-scale quantum computation and simulation.
ABSTRACT
We demonstrate a planar, tunable superconducting qubit with energy relaxation times up to 44 µs. This is achieved by using a geometry designed to both minimize radiative loss and reduce coupling to materials-related defects. At these levels of coherence, we find a fine structure in the qubit energy lifetime as a function of frequency, indicating the presence of a sparse population of incoherent, weakly coupled two-level defects. We elucidate this defect physics by experimentally varying the geometry and by a model analysis. Our "Xmon" qubit combines facile fabrication, straightforward connectivity, fast control, and long coherence, opening a viable route to constructing a chip-based quantum computer.
ABSTRACT
Superconducting qubits probe environmental defects such as nonequilibrium quasiparticles, an important source of decoherence. We show that "hot" nonequilibrium quasiparticles, with energies above the superconducting gap, affect qubits differently from quasiparticles at the gap, implying qubits can probe the dynamic quasiparticle energy distribution. For hot quasiparticles, we predict a non-negligible increase in the qubit excited state probability Pe. By injecting hot quasiparticles into a qubit, we experimentally measure an increase of Pe in semiquantitative agreement with the model and rule out the typically assumed thermal distribution.
ABSTRACT
We demonstrate a superconducting resonator with variable coupling to a measurement transmission line. The resonator coupling can be adjusted through zero to a photon emission rate 1000 times the intrinsic resonator decay rate. We demonstrate the catch and release of photons in the resonator, as well as control of nonclassical Fock states. We also demonstrate the dynamical control of the release waveform of photons from the resonator, a key functionality that will enable high-fidelity quantum state transfer between distant resonators or qubits.
ABSTRACT
We measure the dependence of qubit phase coherence and flux noise on inductor loop geometry. While wider inductor traces change neither the flux noise power spectrum nor the qubit dephasing time, increased inductance leads to a simultaneous increase in both. Using our new tomographic protocol for measuring low frequency flux noise, we make a direct comparison between the flux noise spectrum and qubit phase decay, finding agreement within 10% of theory.
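For context, a textbook relation often used in such spectrum-to-decay comparisons (added here with the caveat that conventions vary between papers; this is not the authors' tomographic protocol) links a 1/f flux-noise spectrum to the spin-echo dephasing rate:

```latex
S_{\Phi}(f) = \frac{A_{\Phi}}{f}
\;\Longrightarrow\;
\Gamma_{E} \simeq \sqrt{A_{\Phi} \ln 2}\,
\left| \frac{\partial \omega_{01}}{\partial \Phi} \right|,
\qquad \text{decay} \sim e^{-(\Gamma_{E} t)^{2}}
```

Within this picture, the phase decay is set jointly by the noise amplitude A_Φ and the qubit's flux sensitivity, which is the kind of direct spectrum-to-decay comparison reported above.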
ABSTRACT
A new single well injection withdrawal (SWIW) test was trialled at four landfills using the tracers lithium and deuterium, and by injecting clean water and measuring electrical conductivity. The aim of the research was to develop a practical test for measuring lateral contaminant transport to aid in the design of landfill flushing. Borehole dilution tests using dyes were undertaken prior to each SWIW test to determine background flow velocities. SWIW tests were performed at different scales by varying the volume of tracer injected (1 to 5,800 m³) and the test duration (2 to 266 days). Tracers were used individually, simultaneously or sequentially to examine repeatability and scaling. Mobile porosities, estimated from first arrival times in observation wells and from model fitting, ranged from 0.02 to 0.14. The low mobile porosities measured rule out a purely advective-dispersive system and support a conceptual model of a highly preferential dual-porosity flow system with localised heterogeneity. A dual-porosity model was used to interpret the results. The model gave a good fit to the test data in 7 out of 11 tests (R² ≥ 0.98), and the parameters derived are compatible with previous experiments in MSW. Block diffusion times were estimated to range from 12 to 6,630 h, with a scaling relationship apparent between the size of the test (volume of tracer used and/or the duration) and the observed block diffusion time. This scaling relationship means affordable small-scale tests can inform larger-scale flushing operations.
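A back-of-envelope version of the first-arrival estimate (my assumption of the standard radial-front argument, not the paper's dual-porosity model) can be sketched as follows: an injected volume V pushed radially through waste of saturated thickness b fills the mobile porosity out to radius R when V = πR²b·n_m, so first arrival at an observation well a distance R away yields n_m.

```python
# Hypothetical radial-front estimate of mobile porosity (placeholder
# numbers; not data from the tests described above).
import math

V = 100.0   # injected volume, m^3
b = 10.0    # saturated waste thickness, m
R = 6.0     # distance to observation well, m
n_m = V / (math.pi * R**2 * b)
print(f"mobile porosity ~ {n_m:.3f}")   # ~0.09, inside the reported 0.02-0.14
```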
Subject(s)
Refuse Disposal, Waste Disposal Facilities, Diffusion, Theoretical Models, Porosity, Water Movements
ABSTRACT
Quantum computing can become scalable through error correction, but logical error rates only decrease with system size when physical errors are sufficiently uncorrelated. During computation, unused high energy levels of the qubits can become excited, creating leakage states that are long-lived and mobile. Particularly for superconducting transmon qubits, this leakage opens a path to errors that are correlated in space and time. Here, we report a reset protocol that returns a qubit to the ground state from all relevant higher level states. We test its performance with the bit-flip stabilizer code, a simplified version of the surface code for quantum error correction. We investigate the accumulation and dynamics of leakage during error correction. Using this protocol, we find lower rates of logical errors and an improved scaling and stability of error suppression with increasing qubit number. This demonstration provides a key step on the path towards scalable quantum computing.
ABSTRACT
The methane emissions from a landfill in south-east UK were successfully quantified during a six-day measurement campaign using the tracer dispersion method. The fair weather conditions made it necessary to perform measurements in the late afternoon and in the evening, when the lower solar flux resulted in a more stable troposphere with a lower inversion layer. This caused slower mixing of the gases, but allowed plume measurements up to 6700 m downwind from the landfill. The average methane emission on individual measurement days varied between 217 ± 14 and 410 ± 18 kg h⁻¹, with higher emission rates measured on the first three days (333 ± 27, 371 ± 42 and 410 ± 18 kg h⁻¹) than on the last three days (217 ± 14, 249 ± 20 and 263 ± 22 kg h⁻¹). It was not possible to completely isolate the extent to which these variations were a consequence of measuring artefacts, such as wind/measurement direction and measurement distance, or of an actual change in the fugitive emission. Such emission changes are known to occur with changes in atmospheric pressure. The higher emissions during the first three days of the campaign were measured during a period with an overall decrease in atmospheric pressure (from approximately 1014 mbar on day 1 to 987 mbar on day 6). The measurements during the last three days, which showed lower emissions, were carried out during a period with an initial pressure increase followed by slowly decreasing pressure. The average daily methane recovery flow varied between 633 and 679 kg h⁻¹ at STP (1 atm, 0 °C). The methane emitted to the atmosphere accounted for approximately 31% of the total methane generated, assuming that the methane generated is the sum of the methane recovered and the methane emitted to the atmosphere, thus not including potential methane oxidation in the landfill cover soil.
Subject(s)
Air Pollutants, Refuse Disposal, Environmental Monitoring, Methane, United Kingdom, Waste Disposal Facilities
ABSTRACT
The measurement of methane emissions from landfills is important to understanding landfills' contribution to greenhouse gas emissions. The Tracer Dispersion Method (TDM) is becoming widely accepted as a technique that allows landfill emissions to be quantified accurately, provided that measurements are taken where the plumes of a released tracer gas and landfill gas are well mixed. However, the distance at which full mixing of the gases occurs is generally unknown prior to any experimental campaign. To overcome this problem, the present paper demonstrates that, for any specific TDM application, a simple Gaussian dispersion model (AERMOD) can be run beforehand to help determine the distance from the source at which full mixing conditions occur, and the likely associated measurement errors. An AERMOD model was created to simulate a series of TDM trials carried out at a UK landfill, and was benchmarked against the experimental data obtained. The model was used to investigate the impact of different factors (e.g. tracer cylinder placements, wind directions, atmospheric stability parameters) on TDM results, to identify appropriate experimental set-ups for different conditions. The contribution of incomplete vertical mixing of tracer and landfill gas to TDM measurement error was explored using the model. It was observed that full mixing conditions at ground level do not imply full mixing over the entire plume height. However, when full mixing conditions were satisfied at ground level, the error introduced by variations in mixing higher up was always less than 10%.
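The screening idea can be made concrete with the textbook ground-level Gaussian plume formula that underlies such models (an idealization for illustration, not AERMOD itself; all numbers are placeholders):

```python
# Ground-level Gaussian plume (textbook form, not AERMOD): concentration
# downwind of a point source, with total ground reflection.
import numpy as np

def ground_conc(Q, u, y, sigma_y, sigma_z, H=0.0):
    """Q: emission rate (g/s); u: wind speed (m/s); y: crosswind offset (m);
    H: effective release height (m). sigma_y and sigma_z (m) grow with
    downwind distance and stability class, which is where AERMOD does the
    real work."""
    return (Q / (np.pi * u * sigma_y * sigma_z)
            * np.exp(-y**2 / (2 * sigma_y**2))
            * np.exp(-H**2 / (2 * sigma_z**2)))

# Mixing check in the spirit of the study: the crosswind profile flattens
# as the plume spreads, so the centreline-to-offset ratio approaching 1
# indicates where two sources offset by ~50 m would appear well mixed.
for sy in (20.0, 200.0):
    ratio = ground_conc(1.0, 3.0, 0.0, sy, sy / 2) / ground_conc(1.0, 3.0, 50.0, sy, sy / 2)
    print(f"sigma_y = {sy:5.0f} m  centre/offset ratio = {ratio:.2f}")
```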
Subject(s)
Air Pollutants, Refuse Disposal, Environmental Monitoring, Gases, Methane, Waste Disposal Facilities
ABSTRACT
A controlled release test was carried out to assess the accuracy of the tracer gas dispersion method, which is used to measure whole-site landfill methane (CH4) emissions as well as fugitive emissions from other area sources. Two teams performed measurements using analytical instruments installed in two vehicles to measure downwind concentrations of target (CH4) and tracer gases at distances of 1.2-3.5 km from the release locations. The controlled target gas release rates were either 5.3 or 10.9 kg CH4 h⁻¹, and target and tracer gases were released at distances of between 12 m and 140 m from each other. Five measurement campaigns were performed, in which the plume was traversed between 2 and 31 times. The measured target gas emissions agreed well with the controlled releases, with rate differences no greater than 1.1 kg CH4 h⁻¹ for Team A and 1.0 kg CH4 h⁻¹ for Team B when quantifying a controlled release of 10.9 kg CH4 h⁻¹. This corresponds to a maximum error of ±10%. A larger error, of up to 18%, was seen in the campaign with the lower target gas release rate (5.3 kg CH4 h⁻¹). Using a cross-plume integration method to calculate tracer gas to target gas ratios provided the most accurate results (lowest error), whereas larger errors (up to 49%) were observed when using other calculation methods. By establishing an error budget and comparing it with the measured error from the release test, it could be concluded that, when best practice is followed, the overall error of a tracer gas dispersion measurement is very likely to be less than 20%.
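The cross-plume integration named above has a simple core calculation (the standard form of the tracer-ratio method; the plume shapes and numbers below are synthetic placeholders, not the campaign's data): the target gas emission is the tracer release rate scaled by the ratio of the two integrated plume concentrations and by the molar-mass ratio.

```python
# Cross-plume integration sketch for the tracer gas dispersion method
# (synthetic plumes; acetylene assumed as the tracer for the molar mass).
import numpy as np

q_tracer = 2.0                      # tracer release rate, kg/h (placeholder)
M_target, M_tracer = 16.04, 26.04   # molar masses of CH4 and C2H2, g/mol

x = np.linspace(0.0, 500.0, 201)                       # transect position, m
c_target = 1.0 * np.exp(-((x - 250.0) / 60.0) ** 2)    # CH4 above background
c_tracer = 0.4 * np.exp(-((x - 240.0) / 60.0) ** 2)    # tracer concentration

# With uniform transect spacing, dx cancels in the ratio of integrals,
# so plain sums implement the cross-plume integration.
ratio = c_target.sum() / c_tracer.sum()
E_target = q_tracer * ratio * (M_target / M_tracer)
print(f"estimated CH4 emission ~ {E_target:.1f} kg/h")
```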
Subject(s)
Air Pollutants, Refuse Disposal, Environmental Monitoring, Gases, Methane, Waste Disposal Facilities
ABSTRACT
A key step toward demonstrating a quantum system that can address difficult problems in physics and chemistry will be performing a computation beyond the capabilities of any classical computer, thus achieving so-called quantum supremacy. In this study, we used nine superconducting qubits to demonstrate a promising path toward quantum supremacy. By individually tuning the qubit parameters, we were able to generate thousands of distinct Hamiltonian evolutions and probe the output probabilities. The measured probabilities obey a universal distribution, consistent with uniformly sampling the full Hilbert space. As the number of qubits increases, the system continues to explore the exponentially growing number of states. Extending these results to a system of 50 qubits has the potential to address scientific questions that are beyond the capabilities of any classical computer.
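The "universal distribution" claim can be illustrated numerically. The sketch below is a toy check of the Porter-Thomas statistics expected when the full Hilbert space is sampled uniformly (my illustration, not the experiment's analysis pipeline): for a Haar-random state of dimension N, the rescaled probabilities N·p behave like an exponential random variable with unit mean.

```python
# Toy Porter-Thomas check: measurement probabilities of a Haar-random
# 9-qubit state follow P(p) ~ N * exp(-N * p) for large N.
import numpy as np

n_qubits = 9
N = 2**n_qubits
rng = np.random.default_rng(0)

psi = rng.normal(size=N) + 1j * rng.normal(size=N)   # complex Gaussian vector
psi /= np.linalg.norm(psi)                           # Haar-random state
p = np.abs(psi) ** 2

print(np.mean(N * p), np.var(N * p))   # exponential law: mean 1, variance ~1
```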
ABSTRACT
DNA methylation occurs at the adenines in the somatic macronucleus of Tetrahymena thermophila. We report on a methylation site within a DNA segment showing facultative persistence in the macronucleus. When the site is present, methylation occurs on both strands, although only 50% of the DNA molecules are methylated.