Results 1 - 14 of 14
1.
J Chem Inf Model ; 62(7): 1691-1711, 2022 04 11.
Article in English | MEDLINE | ID: mdl-35353508

ABSTRACT

We assess the costs and efficiency of state-of-the-art high-performance cloud computing and compare the results to traditional on-premises compute clusters. Our use case is atomistic simulations carried out with the GROMACS molecular dynamics (MD) toolkit, with a particular focus on alchemical protein-ligand binding free energy calculations. We set up a compute cluster in the Amazon Web Services (AWS) cloud that incorporates a variety of instances with Intel, AMD, and ARM CPUs, some with GPU acceleration. Using representative biomolecular simulation systems, we benchmark how GROMACS performs on individual instances and across multiple instances, and thereby assess which instances deliver the highest performance and which are the most cost-efficient for our use case. We find that, in terms of total costs, including hardware, personnel, room, energy, and cooling, producing MD trajectories in the cloud can be about as cost-efficient as an on-premises cluster, provided that optimal cloud instances are chosen. Further, we find that high-throughput ligand screening can be accelerated dramatically by using global cloud resources. For a ligand-screening study consisting of 19 872 independent simulations, or ∼200 µs of combined simulation trajectory, we made use of the diverse hardware available in the cloud at the time of the study. The computations scaled up to reach peak performance using more than 4 000 instances, 140 000 cores, and 3 000 GPUs simultaneously. Our simulation ensemble finished in about 2 days in the cloud, whereas weeks would be required to complete the task on a typical on-premises cluster consisting of several hundred nodes.
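The cost comparison described above boils down to a simple relation between sustained simulation rate and an all-inclusive hourly node price. A minimal sketch, with purely hypothetical throughput and price figures (none of them taken from the study):

```python
# Hypothetical numbers for illustration only; not from the study.
def cost_per_microsecond(ns_per_day: float, cost_per_hour: float) -> float:
    """Cost (in currency units) to produce 1 microsecond of MD trajectory."""
    ns_per_hour = ns_per_day / 24.0
    hours_per_us = 1000.0 / ns_per_hour  # 1 us = 1000 ns
    return hours_per_us * cost_per_hour

# Compare a cloud GPU instance against an on-premises node whose hourly
# rate folds in hardware amortization, energy, cooling, and personnel.
cloud = cost_per_microsecond(ns_per_day=120.0, cost_per_hour=1.50)
onprem = cost_per_microsecond(ns_per_day=100.0, cost_per_hour=1.20)
print(f"cloud:   {cloud:8.2f} per microsecond")
print(f"on-prem: {onprem:8.2f} per microsecond")
```

With these invented figures the two options end up within a few percent of each other, which is the kind of rough parity the study reports for well-chosen instances.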


Subject(s)
Computers, Computing Methodologies, Cloud Computing, Drug Design, Ligands, Molecular Dynamics Simulation
2.
Biophys J ; 116(1): 4-11, 2019 01 08.
Article in English | MEDLINE | ID: mdl-30558883

ABSTRACT

We introduce a computational toolset, named GROmaρs, to obtain and compare time-averaged density maps from molecular dynamics simulations. GROmaρs efficiently computes density maps by fast multi-Gaussian spreading of atomic densities onto a three-dimensional grid. It complements existing map-based tools by enabling spatial inspection of average atomic localization during the simulations. Most importantly, it allows the comparison between computed and reference maps (e.g., experimental ones) through the calculation of difference maps as well as local and time-resolved global correlation. These comparison operations proved useful to quantitatively contrast perturbed and control simulation data sets and to examine how closely biomolecular systems resemble both synthetic and experimental density maps. This was especially advantageous for multimolecule systems in which standard comparisons such as RMSDs are difficult to compute. In addition, GROmaρs incorporates absolute and relative spatial free-energy estimates to provide an energetic picture of atomistic localization. It is an open-source GROMACS-based toolset and thus allows static or dynamic selection of atoms, or even coarse-grained beads, for the density calculation. Furthermore, masking of regions was implemented to speed up calculations and to facilitate comparison with experimental maps. Beyond map comparison, GROmaρs provides a straightforward method to detect solvent cavities and the average charge distribution in biomolecular systems. We employed all these functionalities to inspect the localization of lipid and water molecules in aquaporin systems, to characterize the binding of cholesterol to the G-protein-coupled chemokine receptor type 4, and to identify permeation pathways through the dermcidin antimicrobial channel. Based on these examples, we anticipate broad applicability of GROmaρs for the analysis of molecular dynamics simulations and their comparison with experimentally determined densities.
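The core density-computation step, spreading atomic weights onto a three-dimensional grid as Gaussians, can be sketched as follows. This is a naive illustration of the idea, not the optimized multi-Gaussian algorithm that GROmaρs implements:

```python
import numpy as np

def spread_density(coords, weights, shape, spacing, sigma):
    """Spread point weights (e.g. atomic numbers) onto a 3D grid as
    isotropic Gaussians. A minimal sketch of the density-map idea;
    the real tool uses a fast multi-Gaussian expansion instead."""
    grid = np.zeros(shape)
    axes = [np.arange(n) * spacing for n in shape]
    X, Y, Z = np.meshgrid(*axes, indexing="ij")
    for (x, y, z), w in zip(coords, weights):
        r2 = (X - x) ** 2 + (Y - y) ** 2 + (Z - z) ** 2
        grid += w * np.exp(-r2 / (2.0 * sigma ** 2))
    return grid
```

A difference map between a perturbed and a control simulation is then simply the voxel-wise subtraction of two such grids computed on identical axes.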


Subject(s)
Molecular Dynamics Simulation, Software, Animals, Aquaporins/chemistry, Bacterial Outer Membrane Proteins/chemistry, Humans, Protein Conformation, CXCR4 Receptors/chemistry
3.
J Comput Chem ; 40(27): 2418-2431, 2019 10 15.
Article in English | MEDLINE | ID: mdl-31260119

ABSTRACT

We identify hardware that is optimal for producing molecular dynamics (MD) trajectories on Linux compute clusters with the GROMACS 2018 simulation package. To this end, we benchmark GROMACS performance on a diverse set of compute nodes and relate it to the costs of the nodes, which may include lifetime costs for energy and cooling. In agreement with our earlier investigation using GROMACS 4.6 on hardware of 2014, the performance-to-price ratio of consumer GPU nodes is considerably higher than that of CPU nodes. However, with GROMACS 2018, the optimal CPU-to-GPU processing power balance has shifted even further toward the GPU. Hence, nodes optimized for GROMACS 2018 and later versions enable a significantly higher performance-to-price ratio than nodes optimized for older GROMACS versions. Moreover, the shift toward GPU processing makes it possible to cheaply upgrade old nodes with recent GPUs, yielding essentially the same performance as comparable brand-new hardware. © 2019 Wiley Periodicals, Inc.

4.
Biochim Biophys Acta ; 1858(7 Pt B): 1741-52, 2016 Jul.
Article in English | MEDLINE | ID: mdl-26874204

ABSTRACT

Ion channels are of universal importance for all cell types and play key roles in cellular physiology and pathology. Increased insight into their functional mechanisms is crucial to enable drug design on this important class of membrane proteins, and to enhance our understanding of some of the fundamental features of cells. This review presents the concepts behind the recently developed simulation protocol Computational Electrophysiology (CompEL), which facilitates the atomistic simulation of ion channels in action. In addition, the review provides guidelines for its application in conjunction with the molecular dynamics software package GROMACS. We first lay out the rationale for designing CompEL as a method that models the driving force for ion permeation through channels the way it is established in cells, i.e., by electrochemical ion gradients across the membrane. This is followed by an outline of its implementation and a description of key settings and parameters helpful to users wishing to set up and conduct such simulations. In recent years, key mechanistic and biophysical insights have been obtained by employing the CompEL protocol to address a wide range of questions on ion channels and permeation. We summarize these recent findings on membrane proteins, which span a spectrum from highly ion-selective, narrow channels to wide diffusion pores. Finally we discuss the future potential of CompEL in light of its limitations and strengths. This article is part of a Special Issue entitled: Membrane Proteins edited by J.C. Gumbart and Sergei Noskov.
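As an illustration of the kind of input such a simulation requires, the fragment below sketches hypothetical CompEL-related settings in a GROMACS .mdp file. The group names are placeholders, and the option names and defaults should be verified against the manual of the GROMACS version in use:

```
; Computational Electrophysiology: position-exchange ion/water swapping
; (illustrative fragment; group names are placeholders)
swapcoords      = Z          ; swap along the membrane normal
swap-frequency  = 100        ; check compartment ion counts every 100 steps
split-group0    = channel0   ; index groups defining the two compartments
split-group1    = channel1   ; of the double-membrane setup
solvent-group   = SOL
iontypes        = 2
iontype0-name   = NA
iontype1-name   = CL
```

The imbalance in ion counts between the two compartments, maintained by these swaps, is what establishes the electrochemical gradient that drives permeation.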


Asunto(s)
Activación del Canal Iónico , Canales Iónicos/química , Canales Iónicos/ultraestructura , Membrana Dobles de Lípidos/química , Potenciales de la Membrana , Modelos Químicos , Algoritmos , Sitios de Unión , Transporte Biológico Activo , Biología Computacional/métodos , Simulación por Computador , Electrofisiología/métodos , Proteínas de la Membrana , Simulación de Dinámica Molecular , Unión Proteica , Conformación Proteica , Programas Informáticos
5.
J Comput Chem ; 36(26): 1990-2008, 2015 Oct 05.
Article in English | MEDLINE | ID: mdl-26238484

ABSTRACT

The molecular dynamics simulation package GROMACS runs efficiently on a wide variety of hardware, from commodity workstations to high-performance computing clusters. Hardware features are well exploited with a combination of single-instruction multiple-data parallelism, multithreading, and message passing interface (MPI)-based single-program multiple-data/multiple-program multiple-data parallelism, while graphics processing units (GPUs) can be used as accelerators to compute interactions off-loaded from the CPU. Here, we evaluate which hardware produces trajectories with GROMACS 4.6 or 5.0 in the most economical way. We have assembled and benchmarked compute nodes with various CPU/GPU combinations to identify optimal compositions in terms of raw trajectory production rate, performance-to-price ratio, energy efficiency, and several other criteria. Although hardware prices are naturally subject to trends and fluctuations, general tendencies are clearly visible. Adding any type of GPU significantly boosts a node's simulation performance. For inexpensive consumer-class GPUs, this improvement is equally reflected in the performance-to-price ratio. Although memory defects in consumer-class GPUs could pass unnoticed, as these cards do not support error-checking-and-correction (ECC) memory, unreliable GPUs can be sorted out with memory checking tools. Apart from the obvious determinants of cost-efficiency, such as hardware expenses and raw performance, the energy consumption of a node is a major cost factor. Over a typical hardware lifetime of a few years until replacement, the costs for electrical power and cooling can exceed the costs of the hardware itself. Taking that into account, nodes with a well-balanced ratio of CPU and consumer-class GPU resources produce the maximum amount of GROMACS trajectory over their lifetime.
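The claim that power and cooling can exceed the hardware price over a node's lifetime is easy to check with back-of-the-envelope arithmetic. A sketch with purely hypothetical figures (wattage, electricity price, and cooling overhead are illustrative assumptions, not values from the paper):

```python
def lifetime_energy_cost(power_watts, years, price_per_kwh,
                         cooling_overhead=0.4):
    """Electricity plus cooling cost of running a node continuously,
    with cooling modeled as a fixed fractional overhead on power."""
    hours = years * 365 * 24
    kwh = power_watts / 1000.0 * hours
    return kwh * price_per_kwh * (1.0 + cooling_overhead)

# Hypothetical 600 W GPU node over 5 years at 0.25 per kWh:
print(round(lifetime_energy_cost(600, 5, 0.25)))
```

With these invented numbers the energy bill lands near 9 200 currency units, comparable to or above the purchase price of a typical GPU node, which is why the paper folds energy into the cost-efficiency metric.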


Asunto(s)
Simulación por Computador , Simulación de Dinámica Molecular , Programas Informáticos , Benchmarking
6.
Biophys J ; 101(4): 809-17, 2011 Aug 17.
Article in English | MEDLINE | ID: mdl-21843471

ABSTRACT

Presently, most simulations of ion channel function rely upon nonatomistic Brownian dynamics calculations, indirect interpretation of energy maps, or application of external electric fields. We present a computational method to directly simulate ion flux through membrane channels based on biologically realistic electrochemical gradients. In close analogy to single-channel electrophysiology, physiologically and experimentally relevant timescales are achieved. We apply our method to the bacterial channel PorB from pathogenic Neisseria meningitidis, which, during Neisserial infection, inserts into the mitochondrial membrane of target cells and elicits apoptosis by dissipating the membrane potential. We show that our method accurately predicts ion conductance and selectivity and elucidates ion conduction mechanisms in great detail. Handles for overcoming channel-related antibiotic resistance are identified.


Asunto(s)
Fenómenos Electrofisiológicos , Canales Iónicos/metabolismo , Simulación de Dinámica Molecular , Neisseria meningitidis/metabolismo , Porinas/metabolismo , Farmacorresistencia Microbiana/genética , Conductividad Eléctrica , Activación del Canal Iónico , Iones , Membrana Dobles de Lípidos/metabolismo , Mutación/genética , Permeabilidad
7.
Chembiochem ; 12(7): 1049-55, 2011 May 02.
Article in English | MEDLINE | ID: mdl-21433241

ABSTRACT

Neurotransmitter release at the synapse requires fusion of synaptic vesicles with the presynaptic plasma membrane. SNAREs are the core constituents of the protein machinery responsible for this membrane fusion, but the actual fusion mechanism remains unclear. Here, we have simulated neuronal SNARE-mediated membrane fusion in molecular detail. In our simulations, membrane fusion progresses through an inverted micelle fusion intermediate before reaching the hemifused state. We show that at least one SNARE complex is required for fusion, as has also been confirmed in a recent in vitro single-molecule fluorescence study. Further, the transmembrane regions of the SNAREs were found to play a vital role in the initiation of fusion by distorting the lipid packing of the outer membrane leaflets, and the C termini of the transmembrane regions are associated with the formation of the fusion pores. The inherent mechanical stress in the linker region of the SNARE complex was found to drive both the subsequent formation and the expansion of fusion pores. Our simulations also revealed that homodimerization of the transmembrane regions leads to the formation of unstable fusion intermediates that are under high curvature stress. We show that multiple SNARE complexes mediate membrane fusion in a cooperative and synchronized process. Finally, we show that after fusion, the zipping of the SNAREs extends into the membrane region, in agreement with the recently resolved X-ray structure of the fully assembled state.


Asunto(s)
Fusión de Membrana , Proteínas SNARE/química , Proteínas SNARE/metabolismo , Membrana Celular/química , Membrana Celular/metabolismo , Simulación de Dinámica Molecular
8.
J Chem Theory Comput ; 16(11): 6938-6949, 2020 Nov 10.
Article in English | MEDLINE | ID: mdl-33084336

ABSTRACT

An important and computationally demanding part of molecular dynamics simulations is the calculation of long-range electrostatic interactions. Today, the prevalent method to compute these interactions is particle mesh Ewald (PME). The PME implementation in the GROMACS molecular dynamics package is extremely fast on individual GPU nodes. However, for large-scale multinode parallel simulations, PME becomes the main scaling bottleneck, as it requires all-to-all communication between the nodes; as a consequence, the number of exchanged messages scales quadratically with the number of nodes involved in that communication step. To enable efficient and scalable biomolecular simulations on future exascale supercomputers, a method with better scaling behavior is clearly required. The fast multipole method (FMM) is such a method. As a first step on the path to exascale, we have implemented a performance-optimized, highly efficient GPU FMM and integrated it into GROMACS as an alternative to PME. For a fair performance comparison between FMM and PME, we first assessed the accuracies of the methods for various sets of input parameters. With parameters yielding similar accuracies for both methods, we determined the performance of GROMACS with FMM and compared it to PME for exemplary benchmark systems. We found that FMM with a multipole order of 8 yields electrostatic forces that are as accurate as PME with standard parameters. Further, for typical mixed-precision simulation settings, FMM does not lead to an increased energy drift with multipole orders of 8 or larger. Whereas an ≈50 000 atom simulation system with our FMM reaches only about a third of the performance obtained with PME, for systems with large dimensions and inhomogeneous particle distributions, e.g., aerosol systems with water droplets floating in a vacuum, FMM substantially outperforms PME already on a single node.
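The quadratic growth of the all-to-all message count is straightforward to quantify. The sketch below contrasts it with a crude, purely illustrative model of a hierarchical method that communicates mostly with a bounded neighborhood; the neighbor count and tree term are assumptions for illustration, not figures from the FMM implementation described here:

```python
import math

def all_to_all_messages(n_nodes: int) -> int:
    """Each node sends one message to every other node: O(N^2) total."""
    return n_nodes * (n_nodes - 1)

def hierarchical_messages(n_nodes: int, neighbors: int = 26) -> int:
    """Toy model of a tree-based method: bounded neighbor exchange
    plus a logarithmic tree pass (illustrative assumption only)."""
    return n_nodes * neighbors + n_nodes * max(1, int(math.log2(n_nodes)))
```

At 1 024 nodes the toy hierarchical model needs tens of thousands of messages where all-to-all needs over a million, which is the scaling gap motivating the switch away from PME at extreme node counts.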

9.
Elife ; 8, 2019 03 04.
Article in English | MEDLINE | ID: mdl-30829573

ABSTRACT

We present a correlation-driven molecular dynamics (CDMD) method for automated refinement of atomistic models into cryo-electron microscopy (cryo-EM) maps at resolutions ranging from near-atomic to subnanometer. It utilizes a chemically accurate force field and thermodynamic sampling to improve the real-space correlation between the modeled structure and the cryo-EM map. Our framework employs a gradual increase in resolution and map-model agreement as well as simulated annealing, and allows fully automated refinement without manual intervention or any additional rotamer- and backbone-specific restraints. Using multiple challenging systems covering a wide range of map resolutions, system sizes, starting model geometries and distances from the target state, we assess the quality of generated models in terms of both model accuracy and potential of overfitting. To provide an objective comparison, we apply several well-established methods across all examples and demonstrate that CDMD performs best in most cases.
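The quantity CDMD optimizes, the real-space correlation between a simulated and a reference map, can be written down in a few lines. A minimal sketch for two maps sampled on the same grid (the actual implementation differs in detail):

```python
import numpy as np

def real_space_correlation(map_a, map_b):
    """Global real-space correlation coefficient between two density
    maps on the same grid: the mean-centered, normalized inner product.
    A sketch of the target function; the refinement method itself adds
    resolution ramping, annealing, and force-field terms on top."""
    a = map_a - map_a.mean()
    b = map_b - map_b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))
```

Driving the simulation with the gradient of such a correlation term biases the model toward the experimental map while the force field keeps the stereochemistry sound.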


Asunto(s)
Automatización , Biología Computacional/métodos , Microscopía por Crioelectrón/métodos , Simulación de Dinámica Molecular
10.
J Phys Chem B ; 116(29): 8350-4, 2012 Jul 26.
Article in English | MEDLINE | ID: mdl-22263868

ABSTRACT

A molecular dynamics algorithm in principal component space is presented. It is demonstrated that sampling can be improved without changing the ensemble by assigning masses to the principal components proportional to the inverse square root of the eigenvalues. The setup of the simulation requires no prior knowledge of the system; a short initial MD simulation to extract the eigenvectors and eigenvalues suffices. Independent measures indicated 6- to 7-fold faster sampling compared to a regular molecular dynamics simulation.
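The mass assignment described above is simple to express. A sketch, assuming the short initial trajectory is given as an (n_frames, n_coords) array and choosing the constant of proportionality as 1:

```python
import numpy as np

def pca_masses(trajectory):
    """From an (n_frames, n_coords) trajectory, compute principal
    components of the coordinate covariance and masses proportional
    to 1/sqrt(eigenvalue), as in the described PCA-space MD scheme
    (proportionality constant chosen arbitrarily here)."""
    centered = trajectory - trajectory.mean(axis=0)
    cov = np.cov(centered, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)   # ascending eigenvalues
    eigvals = np.clip(eigvals, 1e-12, None)  # guard against zero modes
    masses = 1.0 / np.sqrt(eigvals)
    return eigvals, eigvecs, masses
```

Soft (large-eigenvalue) modes thus receive small masses and move fast, while stiff modes receive large masses, equalizing the characteristic time scales across modes.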


Asunto(s)
Algoritmos , Simulación de Dinámica Molecular , Simulación de Dinámica Molecular/economía , Peso Molecular , Análisis de Componente Principal , Conformación Proteica , Proteínas/química , Termodinámica , Factores de Tiempo
11.
J Chem Theory Comput ; 7(5): 1381-1393, 2011 May 10.
Article in English | MEDLINE | ID: mdl-21566696

ABSTRACT

We describe a versatile method to enforce the rotation of subsets of atoms, e.g., a protein subunit, in molecular dynamics (MD) simulations. In particular, we introduce a "flexible axis" technique that allows realistic flexible adaptations of both the rotated subunit and the local rotation axis during the simulation. A variety of useful rotation potentials were implemented for the GROMACS 4.5 MD package. Application to the molecular motor F(1)-ATP synthase demonstrates the advantages of the flexible axis approach over the established fixed-axis rotation technique.

12.
J Chem Theory Comput ; 4(3): 435-47, 2008 Mar.
Article in English | MEDLINE | ID: mdl-26620784

ABSTRACT

Molecular simulation is an extremely useful, but computationally very expensive, tool for studies of chemical and biomolecular systems. Here, we present a new implementation of our molecular simulation toolkit GROMACS which achieves extremely high performance on single processors, through algorithmic optimizations and hand-coded routines, while simultaneously scaling very well on parallel machines. The code encompasses a minimal-communication domain decomposition algorithm, full dynamic load balancing, a state-of-the-art parallel constraint solver, and efficient virtual-site algorithms that allow the removal of hydrogen-atom degrees of freedom to enable integration time steps of up to 5 fs for atomistic simulations, also in parallel runs. To improve the scaling properties of the common particle mesh Ewald electrostatics algorithm, we have in addition used a multiple-program, multiple-data approach, with separate node domains responsible for direct- and reciprocal-space interactions. Not only does this combination of algorithms enable extremely long simulations of large systems, but it also provides high simulation performance on quite modest numbers of standard cluster nodes.

13.
J Comput Chem ; 28(12): 2075-84, 2007 Sep.
Article in English | MEDLINE | ID: mdl-17405124

ABSTRACT

We investigate the parallel scaling of the GROMACS molecular dynamics code on Ethernet Beowulf clusters and the prerequisites for decent scaling even on such clusters, which offer only limited bandwidth and high latency. GROMACS 3.3 scales well on supercomputers like the IBM p690 (Regatta) and on Linux clusters with a special interconnect like Myrinet or InfiniBand. Because of the high single-node performance of GROMACS, however, on the widely used Ethernet-switched clusters the scaling typically breaks down as soon as more than two computer nodes are involved, limiting the absolute speedup that can be gained to about 3 relative to a single-CPU run. With the LAM MPI implementation, the main scaling bottleneck is identified here as the all-to-all communication that is required every time step. During such an all-to-all communication step, a huge number of messages floods the network, and as a result many TCP packets are lost. We show that Ethernet flow control prevents network congestion and leads to substantial scaling improvements. For 16 CPUs, e.g., a speedup of 11 has been achieved. For more nodes, however, this mechanism also fails. Having optimized an all-to-all routine that sends the data in an ordered fashion, we show that it is possible to completely prevent packet loss for any number of multi-CPU nodes. Thus, the GROMACS scaling improves dramatically, even for switches that lack flow control. In addition, for the common HP ProCurve 2848 switch we find that, for optimum all-to-all performance, the way the nodes are connected to the switch's ports is essential. This is also demonstrated for the example of the Car-Parrinello MD code.
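An ordered all-to-all can be organized in n-1 phases so that each node receives from exactly one sender per phase, avoiding the simultaneous packet bursts that overwhelm a switch's buffers. A minimal sketch of such a schedule (illustrative only, not the actual routine used in the paper):

```python
def ordered_all_to_all_schedule(n: int):
    """Return n-1 communication phases for n nodes. In phase s, node i
    sends to (i + s) % n and receives from (i - s) % n, so no receiver
    ever gets more than one message per phase and every ordered pair
    of distinct nodes is covered exactly once."""
    return [[(i, (i + s) % n) for i in range(n)] for s in range(1, n)]
```

Because each phase is a perfect matching of senders to receivers, the per-port load stays constant instead of spiking, which is what prevents the TCP packet loss described above.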

14.
Science ; 317(5841): 1072-6, 2007 Aug 24.
Article in English | MEDLINE | ID: mdl-17717182

ABSTRACT

Most plasmalemmal proteins organize in submicrometer-sized clusters whose architecture and dynamics are still enigmatic. With syntaxin 1 as an example, we applied a combination of far-field optical nanoscopy, biochemistry, fluorescence recovery after photobleaching (FRAP) analysis, and simulations to show that clustering can be explained by self-organization based on simple physical principles. On average, the syntaxin clusters exhibit a diameter of 50 to 60 nanometers and contain 75 densely crowded syntaxins that dynamically exchange with freely diffusing molecules. Self-association depends on weak homophilic protein-protein interactions. Simulations suggest that clustering immobilizes and conformationally constrains the molecules. Moreover, a balance between self-association and crowding-induced steric repulsions is sufficient to explain both the size and dynamics of syntaxin clusters and likely of many oligomerizing membrane proteins that form supramolecular structures.


Subject(s)
Cell Membrane/metabolism, Syntaxin 1/chemistry, Syntaxin 1/metabolism, Amino Acid Sequences, Animals, Cell Membrane/chemistry, Chemical Phenomena, Physical Chemistry, Computer Simulation, Diffusion, Fluorescence Recovery After Photobleaching, Green Fluorescent Proteins, Immunoblotting, Confocal Microscopy, Fluorescence Microscopy, Biological Models, Nanotechnology, PC12 Cells, Protein Tertiary Structure, Rats, Recombinant Fusion Proteins/chemistry, Recombinant Fusion Proteins/metabolism