Results 1 - 4 of 4
1.
Sci Rep ; 14(1): 3004, 2024 Feb 06.
Article in English | MEDLINE | ID: mdl-38321050

ABSTRACT

The three-dimensional turbulent flow around a Flettner rotor, i.e. an engine-driven rotating cylinder in an atmospheric boundary layer, is studied via direct numerical simulations (DNS) for three different rotation speeds ([Formula: see text]). This technology offers a sustainable alternative, mainly for marine propulsion, which makes understanding the characteristics of such flows critically important. In this study, we evaluate the aerodynamic loads produced by a rotor of height h, focusing on how the lift and drag forces vary along the vertical axis of the cylinder. We observe that vortex shedding is suppressed at the highest [Formula: see text] values investigated; at intermediate [Formula: see text], however, vortices continue to be shed in the upper section of the cylinder ([Formula: see text]). As the cylinder begins to rotate, a large-scale motion becomes apparent on the high-pressure side, close to the bottom wall. We provide both a qualitative and a quantitative description of this motion and outline its impact on the wake deflection. This finding is significant because it influences the rotor wake up to approximately one hundred diameters downstream. In practical applications, this phenomenon could affect the performance of vessels sailing in the wake and alter the cylinder drag, and hence the fuel consumption. This fundamental study, which investigates a limited yet significant (for DNS) Reynolds number and explores various spinning ratios, provides valuable insight into the complex flow around a Flettner rotor. The simulations were performed with a modern GPU-based spectral element method, applying the power of modern supercomputers to fundamental engineering problems.
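In Flettner-rotor studies the rotation speed is typically reported as a non-dimensional spinning ratio, and sectional loads as force coefficients; the "[Formula: see text]" placeholders above stand for quantities of this kind lost in extraction. A minimal illustrative sketch, not the authors' code: the alpha = omega * D / (2 * U_inf) convention and all names and values below are assumptions.

```python
# Illustrative sketch (NOT the paper's code): conventional non-dimensional
# quantities for a rotating cylinder in cross-flow. The spinning-ratio
# convention and all names/values here are assumptions.

def spinning_ratio(omega: float, diameter: float, u_inf: float) -> float:
    """Cylinder surface speed over the free-stream speed:
    alpha = omega * D / (2 * U_inf)."""
    return omega * diameter / (2.0 * u_inf)

def force_coefficient(force_per_span: float, rho: float, u_inf: float,
                      diameter: float) -> float:
    """Sectional force coefficient C = F' / (0.5 * rho * U_inf**2 * D),
    applicable to either lift or drag per unit span."""
    return force_per_span / (0.5 * rho * u_inf ** 2 * diameter)

# Example: a cylinder of unit diameter spinning at omega = 4 in a unit
# free stream has a spinning ratio of 2.
alpha = spinning_ratio(omega=4.0, diameter=1.0, u_inf=1.0)
```

Evaluating the sectional lift and drag coefficients at several heights along the rotor axis would yield the vertical load distribution the abstract describes.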

2.
Sci Rep ; 13(1): 22344, 2023 Dec 15.
Article in English | MEDLINE | ID: mdl-38102467

ABSTRACT

Information theory (IT) provides tools to estimate causality between events in various scientific domains. Here, we explore the potential of IT-based causality estimation in turbulent (i.e. chaotic) dynamical systems and investigate the impact of various hyperparameters on the outcomes. The influence of the Markovian order, i.e. the time lag, on the computation of the transfer entropy (TE) has been mostly overlooked in the literature. We show that this history effect markedly affects the TE estimation, especially for turbulent signals. In a turbulent channel flow, we compare the TE with standard measures such as auto- and cross-correlation, showing that the TE has a dominant direction, namely from the walls towards the core of the flow. In addition, we find that, in generic low-order vector auto-regressive (VAR) models, the causality time scale is determined by the order of the VAR rather than by the integral time scale. Finally, we propose a novel application of TE as a sensitivity measure for controlling computational errors in numerical simulations with adaptive mesh refinement. The introduced indicator is fully data-driven, requires no solution of adjoint equations, and improves convergence to the function of interest. In summary, we demonstrate the potential of TE for turbulence, where other measures may provide only partial information.
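For concreteness, transfer entropy between two discretized signals with a Markovian order (history length) of one can be estimated from empirical histograms. This is a minimal sketch of the textbook definition, not the paper's estimator; the discretization of the signals and the choice of history length are exactly the hyperparameters the abstract says matter.

```python
# Minimal sketch (NOT the paper's implementation) of transfer entropy
# from y to x for discrete symbol sequences, with a history of one step:
#   TE(y -> x) = sum p(x1, x0, y0) * log2[ p(x1 | x0, y0) / p(x1 | x0) ]
# where x1 = x[t+1], x0 = x[t], y0 = y[t].
from collections import Counter
from math import log2

def transfer_entropy(x, y):
    """Plug-in TE estimate (in bits) from y to x, Markov order 1."""
    n = len(x) - 1
    triples = Counter(zip(x[1:], x[:-1], y[:-1]))   # (x1, x0, y0)
    pairs_xy = Counter(zip(x[:-1], y[:-1]))         # (x0, y0)
    pairs_xx = Counter(zip(x[1:], x[:-1]))          # (x1, x0)
    singles = Counter(x[:-1])                       # x0
    te = 0.0
    for (x1, x0, y0), c in triples.items():
        p_joint = c / n
        p_x1_given_x0y0 = c / pairs_xy[(x0, y0)]
        p_x1_given_x0 = pairs_xx[(x1, x0)] / singles[x0]
        te += p_joint * log2(p_x1_given_x0y0 / p_x1_given_x0)
    return te
```

When x simply copies y with a one-step delay, knowing y[t] fully determines x[t+1], and the estimate is positive; for a constant signal it is exactly zero. Extending the histories from one sample to k samples is what the abstract's "Markovian orders" hyperparameter controls.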

3.
Eur Phys J C Part Fields ; 83(5): 391, 2023.
Article in English | MEDLINE | ID: mdl-37180830

ABSTRACT

A vector portal between the Standard Model and the dark sector is a predictive and compelling framework for thermal dark matter. Through co-annihilations, models of inelastic dark matter (iDM) and inelastic Dirac dark matter (i2DM) can reproduce the observed relic density in the MeV to GeV mass range without violating cosmological limits. In these scenarios, the vector mediator behaves like a semi-visible particle, evading traditional bounds on visible or invisible resonances, and uncovering new parameter space to explain the muon (g-2) anomaly. By means of a more inclusive signal definition at the NA64 experiment, we place new constraints on iDM and i2DM using a missing energy technique. With a recast-based analysis, we contextualize the NA64 exclusion limits in parameter space and estimate the reach of the newly collected and expected future NA64 data. Our results motivate the development of an optimized search program for semi-visible particles, in which fixed-target experiments like NA64 provide a powerful probe in the sub-GeV mass range.

4.
J Supercomput ; 78(3): 3605-3620, 2022.
Article in English | MEDLINE | ID: mdl-35210696

ABSTRACT

In situ visualization on high-performance computing systems allows us to analyze simulation results that would otherwise be impossible to process, given the size of the simulation data sets and the execution time of offline post-processing. We develop an in situ adaptor for ParaView Catalyst and Nek5000, a massively parallel Fortran and C code for computational fluid dynamics. We perform a strong-scalability test up to 2048 cores on KTH's Beskow Cray XC40 supercomputer and assess the impact of in situ visualization on Nek5000's performance. In our case study, a high-fidelity simulation of turbulent flow, we observe that in situ operations significantly limit the strong scalability of the code, reducing the relative parallel efficiency to only ≈ 21 % on 2048 cores (the relative efficiency of Nek5000 without in situ operations is ≈ 99 %). Through profiling with Arm MAP, we identify a bottleneck in the image-composition step (which uses the Radix-kr algorithm), where the majority of the time is spent on MPI communication. We also identify an imbalance in in situ processing time between rank 0 and all other ranks. In our case, better scaling and load balancing in the parallel image composition would considerably improve the performance of Nek5000 with in situ capabilities. More generally, the results of this study highlight the technical challenges posed by integrating high-performance simulation codes with data-analysis libraries and by their practical use in complex cases, even when efficient algorithms already exist for a given application scenario.
