Results 1 - 17 of 17

1.
Open Res Eur ; 4: 35, 2024.
Article in English | MEDLINE | ID: mdl-38974408

ABSTRACT

This article introduces a suite of mini-applications (mini-apps) designed to optimise computational kernels in ab initio electronic structure codes. The suite is developed from flagship applications participating in the NOMAD Center of Excellence, such as the ELPA eigensolver library and the GW implementations of the exciting, Abinit, and FHI-aims codes. The mini-apps were identified by targeting functions that significantly contribute to the total execution time in the parent applications. This strategic selection allows for concentrated optimisation efforts. The suite is designed for easy deployment on various High-Performance Computing (HPC) systems, supported by an integrated CMake build system for straightforward compilation and execution. The aim is to harness the capabilities of emerging (post)exascale systems, which necessitate concurrent hardware and software development - a concept known as co-design. The mini-app suite serves as a tool for profiling and benchmarking, providing insights that can guide both software optimisation and hardware design. Ultimately, these developments will enable more accurate and efficient simulations of novel materials, leveraging the full potential of exascale computing in material science research.
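As an illustration of the hotspot-driven selection described above, the sketch below profiles a toy driver and ranks functions by cumulative time, the kind of measurement used to decide which kernels are worth extracting as mini-apps. The dense eigensolve is a hypothetical stand-in, not code from ELPA or the GW implementations of exciting, Abinit, or FHI-aims.

```python
# Hypothetical sketch: identify hot kernels worth extracting as mini-apps.
# The "parent application" here is a stand-in dense eigensolve, not ELPA/GW code.
import cProfile
import io
import pstats

import numpy as np

def dense_eigensolve(n=500):
    """Stand-in computational kernel: symmetric eigenproblem."""
    a = np.random.rand(n, n)
    h = (a + a.T) / 2.0            # symmetrise to mimic a Hamiltonian
    return np.linalg.eigh(h)       # dominant cost in many ab initio codes

def parent_application():
    """Toy driver standing in for a flagship electronic-structure run."""
    for _ in range(3):
        dense_eigensolve()

profiler = cProfile.Profile()
profiler.enable()
parent_application()
profiler.disable()

stream = io.StringIO()
stats = pstats.Stats(profiler, stream=stream).sort_stats("cumulative")
stats.print_stats(10)              # top-10 functions by cumulative time
print(stream.getvalue())           # entries near the top are mini-app candidates
```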

2.
Article in English | MEDLINE | ID: mdl-35935573

ABSTRACT

Exascale computing has been a dream for ages and is close to becoming a reality that will impact how molecular simulations are performed, as well as the quantity and quality of the information derived from them. We review how the biomolecular simulations field is anticipating these new architectures, with emphasis on recent work from groups in the BioExcel Center of Excellence for High Performance Computing. We exemplify the power of these simulation strategies with the work done by the HPC simulation community to fight the COVID-19 pandemic. This article is categorized under: Data Science > Computer Algorithms and Programming; Data Science > Databases and Expert Systems; Molecular and Statistical Mechanics > Molecular Dynamics and Monte-Carlo Methods.

4.
Front Big Data ; 4: 657218, 2021.
Article in English | MEDLINE | ID: mdl-34901840

ABSTRACT

The execution of complex distributed applications on exascale systems faces many challenges, as it involves empirical evaluation of countless code variations and application runtime parameters over a heterogeneous set of resources. To mitigate these challenges, the research field of autotuning has gained momentum. Autotuning automates the identification of the most desirable application implementation in terms of code variations and runtime parameters. However, the complexity and size of exascale systems make the autotuning process very difficult, especially considering the number of parameter variations that have to be explored. Therefore, we introduce a novel approach for autotuning exascale applications based on a genetic multi-objective optimization algorithm integrated within the ASPIDE exascale computing framework. The approach considers a multi-dimensional search space with support for pluggable objective functions, including execution time and energy requirements. Furthermore, the autotuner employs a machine-learning-based event detection approach to detect events and anomalies during application execution, such as hardware failures or communication bottlenecks.
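A minimal sketch of the genetic multi-objective idea follows, under the assumption of two pluggable objectives (execution time and energy) and a small discrete parameter space; the parameter names and cost functions are invented for illustration, and this is not the ASPIDE autotuner itself.

```python
# Hypothetical sketch of genetic multi-objective autotuning (not the ASPIDE implementation).
# Each individual is a runtime configuration; objectives are (simulated) execution time
# and energy, and selection keeps the Pareto-nondominated front.
import random

PARAM_SPACE = {
    "threads":   [1, 2, 4, 8, 16, 32],
    "tile_size": [16, 32, 64, 128],
    "prefetch":  [0, 1],
}

def evaluate(cfg):
    """Pluggable objective functions; in practice these would run the application
    and read timers / energy counters. Here they are synthetic stand-ins."""
    time = 100.0 / cfg["threads"] + 0.05 * cfg["tile_size"] + random.uniform(0, 1)
    energy = 2.0 * cfg["threads"] + 0.02 * cfg["tile_size"] - 1.5 * cfg["prefetch"]
    return time, energy

def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(population, scores):
    front = []
    for i, s in enumerate(scores):
        if not any(dominates(scores[j], s) for j in range(len(scores)) if j != i):
            front.append(population[i])
    return front

def random_cfg():
    return {k: random.choice(v) for k, v in PARAM_SPACE.items()}

def mutate(cfg):
    child = dict(cfg)
    key = random.choice(list(PARAM_SPACE))
    child[key] = random.choice(PARAM_SPACE[key])
    return child

population = [random_cfg() for _ in range(12)]
for generation in range(20):
    scores = [evaluate(c) for c in population]
    elite = pareto_front(population, scores)
    # Refill the population by mutating Pareto-optimal configurations.
    population = elite + [mutate(random.choice(elite)) for _ in range(12 - len(elite))]

for cfg in population:
    print(cfg, "-> time %.2f, energy %.2f" % evaluate(cfg))
```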

5.
J Comput Chem ; 42(15): 1073-1087, 2021 Jun 05.
Article in English | MEDLINE | ID: mdl-33780021

ABSTRACT

In the era of exascale supercomputers, large-scale and long-time molecular dynamics (MD) calculations are expected to make breakthroughs in various fields of science and technology. Here, we propose a new algorithm to improve the parallelization performance of message passing interface (MPI) communication in the MPI-parallelized fast multipole method (FMM) combined with MD calculations under three-dimensional periodic boundary conditions. Our approach enables a drastic reduction in the amount of communication data, including the atomic coordinates and multipole coefficients, both of which are required to calculate the electrostatic interaction using the FMM. In the communication of multipole coefficients, the reduction in communication data achieved by the new algorithm relative to the conventional one increases as both the number of FMM levels and the number of MPI processes increase, and it can exceed 50% as the number of MPI processes becomes large for very large systems. The proposed algorithm, named the minimum-transferred data (MTD) method, should enable large-scale and long-time MD calculations to be performed efficiently under massive MPI parallelization on exascale supercomputers.
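The following back-of-envelope calculation is not the MTD algorithm, only an illustration of the quantities involved: it shows why restricting the exchange of multipole coefficients to the cells a rank actually needs pays off increasingly as the process count grows, using the standard facts that an order-p expansion carries (p+1)^2 coefficients, level l of an octree holds 8^l cells, and each cell's interaction list contains at most 189 source cells in 3D.

```python
# Illustrative arithmetic only (not the MTD method): communication volume for
# multipole coefficients under a naive "every rank gets every cell" exchange
# versus sending only the cells each rank needs for its interaction lists.
P_ORDER = 8                             # multipole expansion order (assumed)
COEFFS_PER_CELL = (P_ORDER + 1) ** 2    # coefficients of an order-p expansion
BYTES_PER_COEFF = 16                    # one complex double

def naive_volume(level, ranks):
    """Every rank receives the multipole coefficients of every cell at this level."""
    cells = 8 ** level
    return ranks * cells * COEFFS_PER_CELL * BYTES_PER_COEFF

def needed_volume(level, ranks):
    """Each rank receives only remote cells appearing in its interaction lists
    (at most 189 source cells per locally owned cell in a 3D octree)."""
    cells = 8 ** level
    local = max(1, cells // ranks)
    needed = min(cells - local, 189 * local)
    return ranks * needed * COEFFS_PER_CELL * BYTES_PER_COEFF

for level in range(4, 8):
    for ranks in (64, 4096):
        n, m = naive_volume(level, ranks), needed_volume(level, ranks)
        print(f"level {level}, {ranks:5d} ranks: reduction {(1 - m / n) * 100:5.1f}%")
```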

6.
Front Big Data ; 4: 756041, 2021.
Article in English | MEDLINE | ID: mdl-35198971

ABSTRACT

Data-intensive applications are becoming commonplace in all science disciplines. They comprise a rich set of sub-domains, such as data engineering, deep learning, and machine learning, and are built around efficient data abstractions and operators that suit the applications of different domains. Often, the lack of a clear definition of data structures and operators in the field has led to implementations that do not work well together. The HPTMT architecture that we proposed recently identifies a set of data structures, operators, and an execution model for creating rich data applications that link all aspects of data engineering and data science together efficiently. This paper elaborates and illustrates this architecture using an end-to-end application with deep learning and data engineering parts working together. Our analysis shows that the proposed system architecture is better suited to high-performance computing environments than current big data processing systems. Furthermore, our proposed system emphasizes the importance of efficient, compact data structures, such as the Apache Arrow tabular data representation, defined for high performance. Thus, the proposed system integration scales a sequential computation to a distributed computation while retaining optimum performance, along with a highly usable application programming interface.
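A small sketch of the columnar-table idea mentioned above, using Apache Arrow through pyarrow (assumed to be installed): it shows compact tabular storage with columnar operators and a hand-off toward a dataframe, not the HPTMT framework's own API.

```python
# Minimal columnar-table illustration with Apache Arrow (pyarrow assumed installed).
import pyarrow as pa
import pyarrow.compute as pc

# A small table held in Arrow's contiguous columnar buffers.
table = pa.table({
    "sample_id": [1, 2, 3, 4],
    "feature":   [0.7, 1.3, 0.2, 2.1],
    "label":     [0, 1, 0, 1],
})

# Columnar operators work on whole columns without row-by-row Python loops.
positives = table.filter(pc.equal(table["label"], 1))
mean_feature = pc.mean(table["feature"]).as_py()

print(positives.to_pydict())
print("mean feature:", mean_feature)

# Handing the same buffers to a dataframe or a deep-learning pipeline is what links
# the data-engineering and ML stages; to_pandas() requires pandas to be installed.
df = table.to_pandas()
```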

7.
J Comput Sci ; 46: 101093, 2020 Oct.
Article in English | MEDLINE | ID: mdl-33312270

ABSTRACT

Many believe that the future of innovation lies in simulation. However, as computers become ever more powerful, so does the hyperbole used to discuss their potential in modelling across a vast range of domains, from subatomic physics to chemistry, climate science, epidemiology, economics and cosmology. As we are about to enter the era of quantum and exascale computing, machine learning and artificial intelligence have entered the field in a significant way. In this article we give a brief history of simulation, discuss how machine learning can be more powerful if underpinned by deeper mechanistic understanding, outline the potential of exascale and quantum computing, highlight the limits of digital computing - classical and quantum - and distinguish rhetoric from reality in assessing the future of modelling and simulation, in which we believe analogue computing will play an increasingly important role.

8.
Philos Trans A Math Phys Eng Sci ; 378(2166): 20190066, 2020 Mar 06.
Article in English | MEDLINE | ID: mdl-31955676

ABSTRACT

A number of features of today's high-performance computers make it challenging to exploit these machines fully for computational science. These include increasing core counts but stagnant clock frequencies; the high cost of data movement; use of accelerators (GPUs, FPGAs, coprocessors), making architectures increasingly heterogeneous; and multiple precisions of floating-point arithmetic, including half-precision. Moreover, as well as maximizing speed and accuracy, minimizing energy consumption is an important criterion. New generations of algorithms are needed to tackle these challenges. We discuss some approaches that we can take to develop numerical algorithms for high-performance computational science, with a view to exploiting the next generation of supercomputers. This article is part of a discussion meeting issue 'Numerical algorithms for high-performance computational science'.
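One widely used response to the multiple-precision trend is mixed-precision iterative refinement: solve cheaply in single precision, then recover double-precision accuracy from residuals computed in double precision. The sketch below is a generic textbook version of that scheme, not code from the article.

```python
# Generic mixed-precision iterative refinement sketch (textbook scheme, hedged).
import numpy as np

def mixed_precision_solve(A, b, iters=5):
    A32 = A.astype(np.float32)            # low-precision copy used for the solves
    x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)
    for _ in range(iters):
        r = b - A @ x                      # residual computed in double precision
        d = np.linalg.solve(A32, r.astype(np.float32)).astype(np.float64)
        x += d                             # correction step
    return x

rng = np.random.default_rng(0)
n = 200
A = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned test matrix
b = rng.standard_normal(n)

x_ref = np.linalg.solve(A, b)
x_mp = mixed_precision_solve(A, b)
print("relative error:", np.linalg.norm(x_mp - x_ref) / np.linalg.norm(x_ref))
```

In a real solver the low-precision factorization would be reused across refinement steps rather than re-solving from scratch; the repeated `solve` call here keeps the sketch short.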

9.
Philos Trans A Math Phys Eng Sci ; 378(2166): 20190055, 2020 Mar 06.
Article in English | MEDLINE | ID: mdl-31955677

ABSTRACT

A traditional goal of algorithmic optimality, squeezing out flops, has been superseded by evolution in architecture. Flops no longer serve as a reasonable proxy for all aspects of complexity. Instead, algorithms must now squeeze memory, data transfers, and synchronizations, while extra flops on locally cached data represent only small costs in time and energy. Hierarchically low-rank matrices realize a rarely achieved combination of optimal storage complexity and high computational intensity for a wide class of formally dense linear operators that arise in applications for which exascale computers are being constructed. They may be regarded as algebraic generalizations of the fast multipole method. Methods based on these hierarchical data structures and their simpler cousins, tile low-rank matrices, are well proportioned for early exascale computer architectures, which are provisioned for high processing power relative to memory capacity and memory bandwidth. They are ushering in a renaissance of computational linear algebra. A challenge is that emerging hardware architectures possess hierarchies of their own that do not generally align with those of the algorithm. We describe modules of a software toolkit, hierarchical computations on manycore architectures, that illustrate these features and are intended as building blocks of applications, such as matrix-free higher-order methods in optimization and large-scale spatial statistics. Some modules of this open-source project have been adopted in the software libraries of major vendors. This article is part of a discussion meeting issue 'Numerical algorithms for high-performance computational science'.
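The tile low-rank idea can be illustrated in a few lines: partition a dense kernel matrix into tiles and keep off-diagonal tiles as truncated SVD factors whenever that is cheaper than dense storage. This is a simplified sketch, not a module of the toolkit described in the article.

```python
# Simplified tile low-rank (TLR) compression sketch using truncated SVD.
import numpy as np

def compress_tile(tile, tol=1e-6):
    """Return low-rank factors (U, V) with tile ~ U @ V, or None if dense is cheaper."""
    U, s, Vt = np.linalg.svd(tile, full_matrices=False)
    rank = int(np.sum(s > tol * s[0]))
    if rank * (tile.shape[0] + tile.shape[1]) >= tile.size:
        return None                        # dense storage already cheaper
    return U[:, :rank] * s[:rank], Vt[:rank, :]

def tlr_compress(A, tile=64, tol=1e-6):
    """Keep diagonal tiles dense; store off-diagonal tiles as factors when worthwhile."""
    n = A.shape[0]
    storage = {}
    for i in range(0, n, tile):
        for j in range(0, n, tile):
            block = A[i:i + tile, j:j + tile]
            factors = None if i == j else compress_tile(block, tol)
            storage[(i, j)] = factors if factors is not None else block.copy()
    return storage

# Smooth kernel matrices (e.g. from spatial statistics) have numerically low-rank
# off-diagonal tiles; this one comes from a 1D exponential covariance.
x = np.linspace(0, 1, 512)
A = np.exp(-np.abs(x[:, None] - x[None, :]) / 0.3)

blocks = tlr_compress(A)
tlr_bytes = 0
for b in blocks.values():
    if isinstance(b, tuple):
        tlr_bytes += b[0].nbytes + b[1].nbytes
    else:
        tlr_bytes += b.nbytes
print(f"dense: {A.nbytes} bytes, TLR: {tlr_bytes} bytes ({tlr_bytes / A.nbytes:.1%} of dense)")
```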

10.
Philos Trans A Math Phys Eng Sci ; 378(2166): 20190056, 2020 Mar 06.
Article in English | MEDLINE | ID: mdl-31955678

ABSTRACT

As noted in Wikipedia, skin in the game refers to having 'incurred risk by being involved in achieving a goal', where 'skin is a synecdoche for the person involved, and game is the metaphor for actions on the field of play under discussion'. For exascale applications under development in the US Department of Energy Exascale Computing Project, nothing could be more apt, with the skin being exascale applications and the game being delivering comprehensive science-based computational applications that effectively exploit exascale high-performance computing technologies to provide breakthrough modelling and simulation and data science solutions. These solutions will yield high-confidence insights and answers to the most critical problems and challenges for the USA in scientific discovery, national security, energy assurance, economic competitiveness and advanced healthcare. This article is part of a discussion meeting issue 'Numerical algorithms for high-performance computational science'.

11.
Philos Trans A Math Phys Eng Sci ; 378(2166): 20190058, 2020 Mar 06.
Article in English | MEDLINE | ID: mdl-31955679

ABSTRACT

The case is made for a much closer synergy between climate science, numerical analysis and computer science. This article is part of a discussion meeting issue 'Numerical algorithms for high-performance computational science'.

12.
Concurr Comput ; 32(2)2020 Jan 25.
Article in English | MEDLINE | ID: mdl-33897303

ABSTRACT

Resiliency is and will be a critical factor in determining scientific productivity on current and upcoming exascale supercomputers, and beyond. Applications oblivious to and incapable of handling transient soft and hard errors could waste supercomputing resources or, worse, yield misleading scientific insights. We introduce a novel application-driven silent error detection and recovery strategy based on application health monitoring. Our methodology uses application output that follows known patterns as an indicator of an application's health, and the knowledge that violations of these patterns may indicate faults. Information from system monitors that report hardware and software health status is used to corroborate faults. Collectively, this information is used by a fault coordinator agent to take preventive and corrective measures by applying computational steering to an application between checkpoints. This cooperative fault management system uses the Fault Tolerance Backplane as a communication channel. The benefits of this framework are demonstrated with two real application case studies, molecular dynamics and quantum chemistry simulations, on scalable clusters with simulated memory and I/O corruptions. The developed approach is general and can easily be applied to other applications.
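A hypothetical sketch of the health-monitoring loop described above: a monitored quantity that should follow a known pattern is checked every step, violations are treated as suspected silent errors, and the run is steered back to the last checkpoint. The injected corruption and thresholds are invented for illustration; this is not the Fault Tolerance Backplane implementation.

```python
# Toy health-monitoring and rollback loop (illustration only).
import random

CHECKPOINT_INTERVAL = 50
TOLERANCE = 0.05                 # allowed relative step-to-step change

def step(state):
    """One simulation step; occasionally injects a silent corruption."""
    state = state * (1.0 + random.uniform(-0.001, 0.001))
    if random.random() < 0.01:   # simulated memory corruption
        state *= random.uniform(2.0, 5.0)
    return state

def healthy(previous, current):
    """Health indicator: the monitored quantity must not jump abruptly."""
    return abs(current - previous) <= TOLERANCE * abs(previous)

energy = 100.0
checkpoint = energy
for i in range(1, 501):
    new_energy = step(energy)
    if not healthy(energy, new_energy):
        print(f"step {i}: pattern violation detected, restoring checkpoint")
        energy = checkpoint              # corrective steering: roll back
        continue
    energy = new_energy
    if i % CHECKPOINT_INTERVAL == 0:
        checkpoint = energy              # record a known-good state
print("final energy:", energy)
```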

13.
Philos Trans A Math Phys Eng Sci ; 377(2142): 20180148, 2019 Apr 08.
Article in English | MEDLINE | ID: mdl-30967032

ABSTRACT

We discuss scientific features and computational performance of kilometre-scale global weather and climate simulations, considering the Icosahedral Non-hydrostatic (ICON) model and the Integrated Forecast System (IFS). Scalability measurements and a performance modelling approach are used to derive performance estimates for these models on upcoming exascale supercomputers. This is complemented by preliminary analyses of the model data that illustrate the importance of high-resolution models for gaining accuracy in convective processes and for a better understanding of physics-dynamics interactions and of poorly resolved or parametrized processes, such as gravity waves, convection and the boundary layer. This article is part of the theme issue 'Multiscale modelling, simulation and computing: from the desktop to the exascale'.
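As a hedged illustration of the performance-modelling step, the sketch below fits a simple scaling model T(p) = a + b/p + c*log2(p) to measured runtimes at moderate node counts and extrapolates to larger machines; the timings are synthetic and are not ICON or IFS measurements.

```python
# Toy performance model: least-squares fit of a simple scaling law and extrapolation.
import numpy as np

nodes = np.array([128, 256, 512, 1024, 2048], dtype=float)
runtime = np.array([520.0, 270.0, 150.0, 90.0, 60.0])   # seconds per forecast day (made up)

# Fit T(p) = a + b/p + c*log2(p) by linear least squares.
design = np.column_stack([np.ones_like(nodes), 1.0 / nodes, np.log2(nodes)])
coeffs, *_ = np.linalg.lstsq(design, runtime, rcond=None)
a, b, c = coeffs

def predict(p):
    return a + b / p + c * np.log2(p)

for p in (8192, 65536, 524288):
    print(f"{p:7d} nodes -> predicted {predict(p):7.1f} s per forecast day")
```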

14.
Philos Trans A Math Phys Eng Sci ; 377(2142): 20180144, 2019 Apr 08.
Article in English | MEDLINE | ID: mdl-30967040

ABSTRACT

In this position paper, we discuss two relevant topics: (i) generic multiscale computing on emerging exascale high-performance computing environments, and (ii) the scaling of such applications towards the exascale. We introduce the different phases of developing a multiscale model and simulating it on the available computing infrastructure, argue that generic approaches can be relied upon both at the conceptual modelling level and when actually executing the multiscale simulation, and suggest that generic frameworks and software tools to facilitate multiscale computing should be developed further. Next, we focus on simulating multiscale models on high-end computing resources in the face of emerging exascale performance levels. We argue that although applications could scale to exascale performance by relying on weak scaling, and maybe even on strong scaling, there are also clear arguments that such scaling may no longer apply for many applications on these emerging exascale machines, and that we need to resort to what we would call multi-scaling. This article is part of the theme issue 'Multiscale modelling, simulation and computing: from the desktop to the exascale'.

16.
Front Neuroinform ; 12: 2, 2018.
Article in English | MEDLINE | ID: mdl-29503613

ABSTRACT

State-of-the-art software tools for neuronal network simulations scale to the largest computing systems available today and enable investigations of large-scale networks of up to 10 % of the human cortex at a resolution of individual neurons and synapses. Due to an upper limit on the number of incoming connections of a single neuron, network connectivity becomes extremely sparse at this scale. To manage computational costs, simulation software ultimately targeting the brain scale needs to fully exploit this sparsity. Here we present a two-tier connection infrastructure and a framework for directed communication among compute nodes accounting for the sparsity of brain-scale networks. We demonstrate the feasibility of this approach by implementing the technology in the NEST simulation code and we investigate its performance in different scaling scenarios of typical network simulations. Our results show that the new data structures and communication scheme prepare the simulation kernel for post-petascale high-performance computing facilities without sacrificing performance in smaller systems.
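A toy sketch of directed, sparsity-aware communication (not NEST's actual two-tier data structures): each source neuron records only the compute nodes that host at least one of its targets, so a spike is delivered to those nodes instead of being broadcast to all ranks. The network size and out-degree are deliberately tiny.

```python
# Toy directed spike exchange exploiting connection sparsity (illustration only).
import random
from collections import defaultdict

NUM_NODES = 64
NEURONS_PER_NODE = 500
OUT_DEGREE = 10          # deliberately small toy out-degree

random.seed(1)
total = NUM_NODES * NEURONS_PER_NODE

# First tier: per source neuron, the set of nodes hosting any of its targets.
target_nodes = defaultdict(set)
# Second tier: on each of those nodes, the local target neurons of that source.
local_targets = defaultdict(list)

for src in range(total):
    for _ in range(OUT_DEGREE):
        tgt = random.randrange(total)
        node = tgt // NEURONS_PER_NODE
        target_nodes[src].add(node)
        local_targets[(src, node)].append(tgt)

# A spiking neuron's data now goes only to the nodes in target_nodes[src],
# instead of being broadcast to all NUM_NODES ranks.
example_src = 0
print(f"neuron {example_src}: directed sends to {len(target_nodes[example_src])} "
      f"of {NUM_NODES} nodes")
avg = sum(len(v) for v in target_nodes.values()) / total
print(f"average nodes addressed per neuron: {avg:.2f} (broadcast would be {NUM_NODES})")
```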

17.
J Comput Chem ; 38(16): 1419-1430, 2017 06 15.
Article in English | MEDLINE | ID: mdl-28093787

ABSTRACT

The transition toward exascale computing will be accompanied by a performance dichotomy: computational peak performance will rapidly increase, while I/O performance will either grow slowly or stagnate completely. Essentially, the rate at which data are generated will grow much faster than the rate at which data can be read from and written to disk. MD simulations will soon face the I/O problem of efficiently writing to and reading from disk on the next generation of supercomputers. This article targets MD simulations at the exascale and proposes a novel technique for in situ data analysis and indexing of MD trajectories. Our technique maps individual trajectories' substructures (i.e., α-helices, β-strands) to metadata, frame by frame. The metadata capture the conformational properties of the substructures. The ensemble of metadata can be used for automatic, strategic analysis within a trajectory or across trajectories, without manually identifying those portions of trajectories in which critical changes take place. We demonstrate our technique's effectiveness by applying it to 26.3k helices and 31.2k strands from 9917 PDB proteins and by providing three empirical case studies. © 2017 Wiley Periodicals, Inc.
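A hedged sketch of the in situ indexing idea: as frames are produced, each tracked substructure is reduced to a few metadata values (here, end-to-end distance and radius of gyration of a fake helix trace), and the growing index is scanned for abrupt changes worth detailed analysis. The coordinates, injected transition, and thresholds are synthetic; a real pipeline would read the MD engine's frames directly.

```python
# In situ metadata indexing sketch for MD trajectory substructures (synthetic data).
import numpy as np

rng = np.random.default_rng(0)

def substructure_metadata(coords):
    """Per-frame metadata for one substructure: end-to-end distance and radius of gyration."""
    end_to_end = np.linalg.norm(coords[-1] - coords[0])
    centroid = coords.mean(axis=0)
    rgyr = np.sqrt(((coords - centroid) ** 2).sum(axis=1).mean())
    return end_to_end, rgyr

n_atoms, n_frames = 12, 200
helix = rng.normal(size=(n_atoms, 3)).cumsum(axis=0)    # fake starting helix trace

index = []                                              # the growing metadata index
for frame in range(n_frames):
    helix = helix + rng.normal(scale=0.05, size=helix.shape)   # fake dynamics
    if frame == 120:
        helix[n_atoms // 2:] += 3.0                     # inject an abrupt conformational change
    index.append(substructure_metadata(helix))

index = np.asarray(index)                               # shape (n_frames, 2)
jumps = np.abs(np.diff(index[:, 0]))                    # frame-to-frame change in end-to-end distance
flagged = np.where(jumps > 5 * jumps.mean())[0] + 1
print("frames flagged for detailed analysis:", flagged)
```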


Subjects
Data Science/methods, Molecular Dynamics Simulation, Proteins/chemistry, Theoretical Models, Protein Secondary Structure