ABSTRACT
We present two quantum algorithms based on evolution randomization, a simple variant of adiabatic quantum computing, to prepare a quantum state |x⟩ that is proportional to the solution of the system of linear equations Ax⃗=b⃗. The time complexities of our algorithms are O(κ^{2}log(κ)/ε) and O(κlog(κ)/ε), where κ is the condition number of A and ε is the precision. Both algorithms are constructed using families of Hamiltonians that are linear combinations of products of A, the projector onto the initial state |b⟩, and single-qubit Pauli operators. The algorithms are conceptually simple and easy to implement. They are not obtained from equivalences between the gate model and adiabatic quantum computing. They do not use phase estimation or variable-time amplitude amplification, and do not require large ancillary systems. We discuss a gate-based implementation via Hamiltonian simulation and prove that our second algorithm is almost optimal in terms of κ. Like previous methods, our techniques yield an exponential quantum speed-up under some assumptions. Our results emphasize the role of Hamiltonian-based models of quantum computing for the discovery of important algorithms.
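For intuition, here is a minimal classical-simulation sketch of the adiabatic idea underlying such solvers; it is not the paper's exact Hamiltonian family or its randomization schedule, and it assumes a small Hermitian, positive-definite A. With Q_b = I − |b⟩⟨b|, the state |x⟩ ∝ A^{-1}|b⟩ spans the kernel of the positive-semidefinite operator A Q_b A, so slowly deforming H(s) = A(s) Q_b A(s), with A(s) = (1−s)I + sA, from Q_b to A Q_b A carries the initial state |b⟩ toward |x⟩.

```python
# Minimal sketch (illustrative, not the paper's construction): classically simulate
# an adiabatic sweep that drives |b> toward |x> ~ A^{-1}|b> for positive-definite A.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

# Small Hermitian positive-definite A and normalized |b> (hypothetical test data).
n = 4
M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
A = M @ M.conj().T / n + np.eye(n)        # Hermitian, positive definite
b = rng.normal(size=n) + 1j * rng.normal(size=n)
b /= np.linalg.norm(b)

Qb = np.eye(n) - np.outer(b, b.conj())    # projector orthogonal to |b>

def H(s):
    As = (1 - s) * np.eye(n) + s * A
    return As @ Qb @ As                   # kernel of H(1) is spanned by A^{-1}|b>

# Discretized adiabatic evolution: psi <- exp(-i H(s) dt) psi along a linear schedule.
T, steps = 200.0, 2000
dt = T / steps
psi = b.copy()                            # zero-energy ground state of H(0) = Qb
for k in range(steps):
    s = (k + 0.5) / steps
    psi = expm(-1j * H(s) * dt) @ psi

x = np.linalg.solve(A, b)
x /= np.linalg.norm(x)
fidelity = abs(np.vdot(x, psi)) ** 2
print("fidelity with the normalized solution:", round(fidelity, 4))
```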
ABSTRACT
We show how the classical action, an adiabatic invariant, can be preserved under nonadiabatic conditions. Specifically, for a time-dependent Hamiltonian H=p^{2}/2m+U(q,t) in one degree of freedom, and for an arbitrary choice of action I_{0}, we construct a so-called fast-forward potential energy function V_{FF}(q,t) that, when added to H, guides all trajectories with initial action I_{0} to end with the same value of action. We use this result to construct a local dynamical invariant J(q,p,t) whose value remains constant along these trajectories. We illustrate our results with numerical simulations. Finally, we sketch how our classical results may be used to design approximate quantum shortcuts to adiabaticity.
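For background on the invariant being preserved, the sketch below (with illustrative parameters, not the paper's fast-forward construction) numerically demonstrates the adiabatic invariance of the action for a harmonic oscillator H = p^{2}/2m + (1/2)mω(t)^{2}q^{2}, for which the action is exactly I = E/ω: a slow frequency ramp preserves I, while a fast ramp does not.

```python
# Minimal illustration (not the V_FF construction): the classical action I = E/omega
# is an adiabatic invariant for H = p^2/2m + (1/2) m omega(t)^2 q^2.
import numpy as np

m = 1.0
omega_i, omega_f = 1.0, 2.0

def omega(t, T):
    return omega_i + (omega_f - omega_i) * t / T   # linear frequency ramp

def final_action(T, steps=200_000):
    """Integrate one trajectory with velocity Verlet over a ramp of duration T."""
    dt = T / steps
    q, p = 1.0, 0.0                                 # initial action I0 = E/omega_i
    for k in range(steps):
        w1 = omega(k * dt, T)
        p -= 0.5 * dt * m * w1**2 * q
        q += dt * p / m
        w2 = omega((k + 1) * dt, T)
        p -= 0.5 * dt * m * w2**2 * q
    E = p**2 / (2 * m) + 0.5 * m * omega_f**2 * q**2
    return E / omega_f                              # final action

I0 = (0.5 * m * omega_i**2 * 1.0**2) / omega_i
print("initial action        I =", round(I0, 4))
print("slow ramp (T=500)     I =", round(final_action(500.0), 4))  # approximately conserved
print("fast ramp (T=0.5)     I =", round(final_action(0.5), 4))    # clearly not conserved
```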
ABSTRACT
We introduce a class of maximum-entropy states that naturally includes within it all of the major continuous-time stochastic processes that have been applied to animal movement, including Brownian motion, Ornstein-Uhlenbeck motion, integrated Ornstein-Uhlenbeck motion, a recently discovered hybrid of the previous models, and a new model that describes central-place foraging. We are also able to predict a further hierarchy of new models that will emerge as data quality improves to better resolve the underlying continuity of animal movement. Finally, we show that Langevin equations must obey a fluctuation-dissipation theorem to generate processes that fall within this class of maximum-entropy distributions when the constraints are purely kinematic.
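As a concrete instance of one process in this class, the sketch below (with hypothetical parameter values) simulates a two-dimensional Ornstein-Uhlenbeck position model by Euler-Maruyama integration; its stationary variance σ^{2}/(2θ) ties the noise amplitude to the relaxation rate, the kind of fluctuation-dissipation balance referred to above.

```python
# Minimal sketch (hypothetical parameters): Euler-Maruyama simulation of a 2D
# Ornstein-Uhlenbeck position process, one of the movement models named above.
import numpy as np

rng = np.random.default_rng(1)

theta = 0.5      # relaxation rate toward the home-range center (1/time)
sigma = 1.0      # noise amplitude
dt = 0.01
steps = 200_000

x = np.zeros(2)                       # start at the home-range center
path = np.empty((steps, 2))
for k in range(steps):
    x += -theta * x * dt + sigma * np.sqrt(dt) * rng.normal(size=2)
    path[k] = x

# Compare the empirical spread (after burn-in) with the stationary prediction.
print("empirical variance per axis:         ", path[steps // 2:].var(axis=0))
print("stationary prediction sigma^2/(2 theta):", sigma**2 / (2 * theta))
```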