Results 1 - 6 of 6
1.
Sci Rep ; 13(1): 11410, 2023 Jul 14.
Article in English | MEDLINE | ID: mdl-37452044

ABSTRACT

Non-periodic solutions are an essential property of chaotic dynamical systems. Simulations with deterministic finite-precision numbers, however, always yield orbits that are eventually periodic. With 64-bit double-precision floating-point numbers such periodic orbits are typically negligible due to very long periods. The emerging trend to accelerate simulations with low-precision numbers, such as 16-bit half-precision floats, raises questions on the fidelity of such simulations of chaotic systems. Here, we revisit the 1-variable logistic map and the generalised Bernoulli map with various number formats and precisions: floats, posits and logarithmic fixed-point. Simulations are improved with higher precision but stochastic rounding prevents periodic orbits even at low precision. For larger systems the performance gain from low-precision simulations is often reinvested in higher resolution or complexity, increasing the number of variables. In the Lorenz 1996 system, the period lengths of orbits increase exponentially with the number of variables. Moreover, invariant measures are better approximated with an increased number of variables than with increased precision. Extrapolating to large simulations of natural systems, such as million-variable climate models, periodic orbit lengths are far beyond reach of present-day computers. Such orbits are therefore not expected to be problematic compared to high-precision simulations but the deviation of both from the continuum solution remains unclear.


Subject(s)
Marriage, Nonlinear Dynamics
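The eventual periodicity described in this abstract is easy to reproduce: a finite-precision state space has only finitely many values, so any deterministic orbit must eventually revisit one and cycle. A minimal sketch (not the paper's code; the start value 0.3, parameter r = 3.8 and iteration caps are arbitrary illustrative choices):

```python
import numpy as np

def orbit_period(x0, r, dtype, max_iter=200_000):
    """Iterate the logistic map x -> r*x*(1-x) in the given precision and
    return the cycle length once a previously seen state recurs."""
    x, r, one = dtype(x0), dtype(r), dtype(1)
    seen = {}
    for i in range(max_iter):
        if x in seen:
            return i - seen[x]           # cycle length
        seen[x] = i
        x = dtype(r * x * (one - x))     # cast keeps the state in `dtype`
    return None                          # no repeat within max_iter

# Float16 has only ~15,000 representable values in (0, 1), so the orbit
# must repeat quickly; in Float64 no repeat is typically found.
p16 = orbit_period(0.3, 3.8, np.float16)
p64 = orbit_period(0.3, 3.8, np.float64, max_iter=100_000)
```

Stochastic rounding, as the abstract notes, breaks this determinism: the map is no longer a fixed function of the state, so revisiting a value does not force a cycle.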
2.
J Adv Model Earth Syst ; 14(2): e2021MS002684, 2022 Feb.
Article in English | MEDLINE | ID: mdl-35866041

ABSTRACT

Most Earth-system simulations run on conventional central processing units in 64-bit double-precision floating-point numbers (Float64), although the need for high-precision calculations in the presence of large uncertainties has been questioned. Fugaku, currently the world's fastest supercomputer, is based on A64FX microprocessors, which also support the 16-bit low-precision format Float16. We investigate the Float16 performance on A64FX with ShallowWaters.jl, the first fluid circulation model that runs entirely with 16-bit arithmetic. The model implements techniques that address precision and dynamic range issues in 16 bits. The precision-critical time integration is augmented to include compensated summation to minimize rounding errors. Such a compensated time integration is as precise but faster than mixed precision with 16 and 32-bit floats. As subnormals are inefficiently supported on A64FX, the very limited range available in Float16 is 6 × 10⁻⁵ to 65,504. We develop the analysis-number format Sherlogs.jl to log the arithmetic results during the simulation. The equations in ShallowWaters.jl are then systematically rescaled to fit into Float16, using 97% of the available representable numbers. Consequently, we benchmark speedups of up to 3.8x on A64FX with Float16. Adding a compensated time integration, speedups reach up to 3.6x. Although ShallowWaters.jl is simplified compared to large Earth-system models, it shares essential algorithms and therefore shows that 16-bit calculations are indeed a competitive way to accelerate Earth-system simulations on available hardware.
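The compensated time integration mentioned above is, in essence, Kahan summation applied to the time step: a carry variable recovers the low-order bits lost when a small increment is added to a much larger state. A minimal sketch, not ShallowWaters.jl's implementation; the toy ODE dx/dt = 1 (exact solution x = t) is chosen only to make the rounding drift visible:

```python
import numpy as np

def euler_kahan(f, x0, dt, nsteps):
    """Forward-Euler stepping in Float16 with Kahan compensated summation:
    the compensation c captures the low-order bits that are lost when the
    small increment dt*f(x) is added to the much larger state x."""
    x, c, dt = np.float16(x0), np.float16(0), np.float16(dt)
    for _ in range(nsteps):
        y = np.float16(dt * f(x) - c)    # increment, corrected by c
        t = np.float16(x + y)            # low bits of y are lost here...
        c = np.float16((t - x) - y)      # ...and recovered here
        x = t
    return x

# Toy problem dx/dt = 1 over 1000 steps of dt = 0.01, so x(10) = 10.
x_comp = euler_kahan(lambda x: np.float16(1), 0.0, 1e-2, 1000)

# Naive Float16 accumulation of the same increments, for comparison.
x_naive = np.float16(0)
for _ in range(1000):
    x_naive = np.float16(x_naive + np.float16(1e-2))
```

Near x = 10 the Float16 spacing is about 0.008, comparable to the increment itself, which is why uncompensated accumulation drifts while the compensated sum stays within a few units in the last place of the true value.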

3.
J Adv Model Earth Syst ; 14(10): e2022MS003120, 2022 Oct.
Article in English | MEDLINE | ID: mdl-36590321

ABSTRACT

Despite continuous improvements, precipitation forecasts are still not as accurate and reliable as those of other meteorological variables. A major contributing factor to this is that several key processes affecting precipitation distribution and intensity occur below the resolved scale of global weather models. Generative adversarial networks (GANs) have been demonstrated by the computer vision community to be successful at super-resolution problems, that is, learning to add fine-scale structure to coarse images. Leinonen et al. (2020, https://doi.org/10.1109/TGRS.2020.3032790) previously applied a GAN to produce ensembles of reconstructed high-resolution atmospheric fields, given coarsened input data. In this paper, we demonstrate that this approach can be extended to the more challenging problem of increasing the accuracy and resolution of comparatively low-resolution input from a weather forecasting model, using high-resolution radar measurements as a "ground truth." The neural network must learn to add resolution and structure whilst accounting for non-negligible forecast error. We show that GANs and VAE-GANs can match the statistical properties of state-of-the-art pointwise post-processing methods whilst creating high-resolution, spatially coherent precipitation maps. Our model compares favorably to the best existing downscaling methods in both pixel-wise and pooled CRPS scores, power spectrum information and rank histograms (used to assess calibration). We test our models and show that they perform well in a range of scenarios, including heavy rainfall.
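The pixel-wise CRPS used for evaluation can be computed from the standard ensemble identity CRPS = E|X − y| − ½ E|X − X′|, where X and X′ are independent draws from the forecast ensemble and y is the observation. A minimal sketch of the pixel-wise version (not the authors' evaluation code; pooled CRPS additionally aggregates neighbourhoods before scoring):

```python
import numpy as np

def crps_ensemble(ens, obs):
    """Pixel-wise CRPS of an ensemble forecast against observations via
    CRPS = E|X - y| - 0.5 E|X - X'|.  `ens` has shape (members, ...),
    `obs` broadcasts over the trailing (spatial) dimensions."""
    ens = np.asarray(ens, dtype=float)
    term1 = np.mean(np.abs(ens - obs), axis=0)
    term2 = 0.5 * np.mean(np.abs(ens[:, None] - ens[None, :]), axis=(0, 1))
    return term1 - term2
```

For a perfect deterministic forecast the score is 0, and for a one-member ensemble it reduces to the mean absolute error, which is why CRPS is a natural generalisation of pointwise scores to ensembles.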

4.
Nat Comput Sci ; 1(11): 713-724, 2021 Nov.
Article in English | MEDLINE | ID: mdl-38217145

ABSTRACT

Hundreds of petabytes are produced annually at weather and climate forecast centers worldwide. Compression is essential to reduce storage and to facilitate data sharing. Current techniques do not distinguish the real from the false information in data, leaving the level of meaningful precision unassessed. Here we define the bitwise real information content from information theory for the Copernicus Atmospheric Monitoring Service (CAMS). Most variables contain fewer than 7 bits of real information per value and are highly compressible due to spatio-temporal correlation. Rounding bits without real information to zero facilitates lossless compression algorithms and encodes the uncertainty within the data itself. All CAMS data are 17× compressed relative to 64-bit floats, while preserving 99% of real information. Combined with four-dimensional compression, factors beyond 60× are achieved. A data compression Turing test is proposed to optimize compressibility while minimizing information loss for the end use of weather and climate forecast data.
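Rounding bits without real information to zero can be sketched as a bit-level operation on Float32 values. The sketch below uses a simple add-half-then-mask trick (round-to-nearest, ties up), a simplification of the round-to-nearest-even used in practice, and `keepbits` stands in for the per-variable real information content:

```python
import numpy as np

def round_keepbits(a, keepbits):
    """Round Float32 values to `keepbits` mantissa bits, setting the
    discarded trailing bits to zero.  Long runs of zero bits are what
    make the rounded data highly compressible with lossless codecs."""
    a = np.ascontiguousarray(a, dtype=np.float32)
    drop = 23 - keepbits                   # Float32 has 23 mantissa bits
    if drop <= 0:
        return a                           # nothing to discard
    bits = a.view(np.uint32)               # reinterpret bits as integers
    half = np.uint32(1 << (drop - 1))
    mask = np.uint32((0xFFFFFFFF << drop) & 0xFFFFFFFF)
    return ((bits + half) & mask).view(np.float32)

rounded = round_keepbits([1.0, np.pi, 0.1], keepbits=7)
```

With 7 kept bits the relative error stays below 2⁻⁸ while 16 trailing mantissa bits per value become zero, which is the structure a lossless compressor then exploits.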

5.
Q J R Meteorol Soc ; 144(715): 1947-1964, 2018 Jul.
Article in English | MEDLINE | ID: mdl-31031424

ABSTRACT

Accurate forecasts of the ocean state and the estimation of forecast uncertainties are crucial when it comes to providing skilful seasonal predictions. In this study we analyse the predictive skill and reliability of the ocean component in a seasonal forecasting system. Furthermore, we assess the effects of accounting for model and observational uncertainties. Ensemble forecasts are carried out with an updated version of the ECMWF seasonal forecasting model System 4, with a forecast length of ten months, initialized every May between 1981 and 2010. We find that, for essential quantities such as sea surface temperature and upper ocean 300 m heat content, the ocean forecasts are generally underdispersive and skilful beyond the first month mainly in the Tropics and parts of the North Atlantic. The reference reanalysis used for the forecast evaluation considerably affects diagnostics of forecast skill and reliability, throughout the entire ten-month forecasts but mostly during the first three months. Accounting for parametrization uncertainty by implementing stochastic parametrization perturbations has a positive impact on both reliability (from month 3 onwards) and forecast skill (from month 8 onwards). Skill improvements extend also to atmospheric variables such as 2 m temperature, mostly in the extratropical Pacific but also over the midlatitudes of the Americas. Hence, while model uncertainty impacts the skill of seasonal forecasts, observational uncertainty impacts our assessment of that skill. Future ocean model development should therefore aim not only to reduce model errors but also to simultaneously assess and estimate uncertainties.
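Underdispersion of the kind reported here is commonly diagnosed by comparing the RMSE of the ensemble mean with the mean ensemble spread: for a reliable ensemble the two match, while spread below RMSE indicates underdispersion. A minimal sketch with synthetic data (not the study's verification code; the (m+1)/m inflation is the standard finite-ensemble correction):

```python
import numpy as np

def spread_skill(ens, obs):
    """RMSE of the ensemble mean vs. mean ensemble spread over many cases.
    spread < rmse indicates an underdispersive (overconfident) ensemble."""
    ens = np.asarray(ens, dtype=float)         # shape (members, cases)
    m = ens.shape[0]
    rmse = np.sqrt(np.mean((ens.mean(axis=0) - obs) ** 2))
    spread = np.sqrt((m + 1) / m * np.mean(ens.var(axis=0, ddof=1)))
    return rmse, spread

# Synthetic, perfectly reliable case: members and truth share one climate,
# so RMSE and spread should agree up to sampling noise.
rng = np.random.default_rng(0)
ens = rng.standard_normal((20, 5000))
obs = rng.standard_normal(5000)
rmse, spread = spread_skill(ens, obs)
```

Rerunning the same diagnosis with obs drawn from a wider distribution than the members reproduces the spread < RMSE signature described in the abstract.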

6.
Front Comput Neurosci ; 9: 124, 2015.
Article in English | MEDLINE | ID: mdl-26528173

ABSTRACT

How is the brain configured for creativity? What is the computational substrate for 'eureka' moments of insight? Here we argue that creative thinking arises ultimately from a synergy between low-energy stochastic and energy-intensive deterministic processing, and is a by-product of a nervous system whose signal-processing capability per unit of available energy has become highly energy optimised. We suggest that the stochastic component has its origin in thermal (ultimately quantum decoherent) noise affecting the activity of neurons. Without this component, deterministic computational models of the brain are incomplete.
