ABSTRACT
The MC1R protein is a receptor found in melanocytes that plays a key role in melanin synthesis. Mutations in this receptor can affect hair color, skin tone, and tanning ability, and increase the risk of skin cancer. MC1R is activated by the alpha-melanocyte-stimulating hormone (α-MSH). Previous studies have shown that mutations affect the interaction between MC1R and α-MSH; however, the mechanism behind this effect is poorly understood. Our study aims to shed light on this mechanism using molecular dynamics (MD) simulations of the Asp84Glu and Asp294His variants. We simulated both the wild-type (WT) protein and the mutants, with and without the ligand. Our results reveal that the mutations induce distinct conformations during state transitions, hindering the switch between active and inactive states and decreasing cellular levels of cAMP. Interestingly, Asp294His showed increased ligand affinity but decreased protein activity, highlighting that tighter binding does not always lead to stronger activation. Our study provides insights into the molecular mechanisms underlying the impact of MC1R mutations on protein activity.
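As a minimal illustration of the kind of trajectory comparison described above, the sketch below computes the Cα RMSD of a mutant trajectory against the wild-type starting structure with MDAnalysis. The file names and the choice of RMSD as the conformational readout are assumptions for illustration, not details taken from the study.

```python
import MDAnalysis as mda
from MDAnalysis.analysis import rms

# Sketch: backbone RMSD of a mutant trajectory against the wild-type starting
# structure, one simple way to flag altered conformational behaviour during
# state transitions. File names are hypothetical placeholders.
mut = mda.Universe("mc1r_d294h.gro", "mc1r_d294h.xtc")
ref = mda.Universe("mc1r_wt.gro")

analysis = rms.RMSD(mut, ref, select="protein and name CA")
analysis.run()

# columns of the result array: frame, time (ps), RMSD (Angstrom)
print(analysis.results.rmsd[:5])
```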
Subjects
Cyclic AMP; Mutation; Receptor, Melanocortin, Type 1; alpha-MSH; Humans; alpha-MSH/chemistry; alpha-MSH/metabolism; alpha-MSH/genetics; Binding Sites; Cyclic AMP/metabolism; Cyclic AMP/chemistry; Ligands; Molecular Dynamics Simulation; Protein Binding; Protein Conformation; Receptor, Melanocortin, Type 1/genetics; Receptor, Melanocortin, Type 1/chemistry; Receptor, Melanocortin, Type 1/metabolism
ABSTRACT
In the present paper, as part of an interdisciplinary research project (Priority Programme SPP2045), we propose a possible way to design an open-access archive for particle-discrete tomographic datasets: the PARROT database (https://parrot.tu-freiberg.de). This archive is the result of a pilot study in the field of particle technology, and three use cases are presented for illustrative purposes. Rather than providing a detailed instruction manual, we focus on the methodological design of such an archive. The presented use cases stem from our working group and are intended to demonstrate the advantage of an archive with concise and consistent data for potential and ongoing studies. The data and metadata merely serve as examples and need to be adapted for disciplines not considered here. Since all datasets within the PARROT database and its source code are freely accessible, this study represents a starting point for similar projects.
ABSTRACT
Multiple-relaxation-time (MRT) lattice Boltzmann methods (LBM) based on orthogonal moments exhibit lattice-Mach-number-dependent instabilities in diffusive scaling. The present work provides an explicit formulation of stability sets for orthogonal-moment MRT LBM. The stability sets are defined via the spectral radius of the linearized amplification matrices of the MRT collision operator with variable relaxation frequencies. Numerical investigations are carried out for the three-dimensional Taylor-Green vortex benchmark at Reynolds number 1600. Extensive brute-force computations over specific relaxation frequency ranges for the full test case are compared with the von Neumann stability set prediction. Based on this comparison, we show numerically that a scan over the full wave space, including scaled mean-flow variations, is required to draw conclusions on the overall stability of LBM in turbulent flow simulations. Furthermore, the von Neumann results show that a grid dependence can hardly be captured within the notion of linear stability for LBM. Lastly, via brute-force stability investigations based on empirical data from a total of 22,696 simulations, the existence of a deterministic influence of the grid resolution is deduced. This article is part of the theme issue 'Progress in mesoscale methods for fluid dynamics simulation'.
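For illustration, the sketch below performs a von Neumann-type analysis in a much simpler setting than the orthogonal-moment MRT operator of this work: a D1Q3 BGK advection-diffusion lattice Boltzmann scheme with a linear equilibrium, for which the amplification matrix and its spectral radius over the wave space can be written down directly. The velocity set, weights, and parameter values are generic textbook choices, not values from the study.

```python
import numpy as np

# Minimal von Neumann analysis for a D1Q3 BGK advection-diffusion LBM
# (a simplified analogue of the MRT analysis above; omega and u_adv are
# generic parameters, not taken from the study).
c = np.array([0.0, 1.0, -1.0])        # discrete velocities
w = np.array([2/3, 1/6, 1/6])         # lattice weights
cs2 = 1.0 / 3.0                       # squared lattice speed of sound

def max_spectral_radius(omega, u_adv, n_k=256):
    # Jacobian of the linear equilibrium f_j^eq = w_j*(1 + c_j*u/cs2)*rho
    # with respect to the populations (rho = sum_k f_k).
    J = np.outer(w * (1.0 + c * u_adv / cs2), np.ones(3))
    C = (1.0 - omega) * np.eye(3) + omega * J          # linearized collision
    rho_max = 0.0
    for k in np.linspace(0.0, 2.0 * np.pi, n_k, endpoint=False):
        G = np.diag(np.exp(-1j * k * c)) @ C           # streaming * collision
        rho_max = max(rho_max, np.abs(np.linalg.eigvals(G)).max())
    return rho_max

# a spectral radius <= 1 (up to round-off) over all wave numbers indicates
# linear stability of the uniform state
print(max_spectral_radius(omega=1.8, u_adv=0.1))
```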
ABSTRACT
The connection between relaxation systems and discrete velocity models is essential for progress on stability and convergence results for lattice Boltzmann methods. In the present study we propose a formal perturbation ansatz, starting from a scalar one-dimensional target equation, which yields a relaxation system specifically constructed to be equivalent to a discrete velocity Boltzmann model as commonly found in lattice Boltzmann methods. Further, the investigation of stability structures for the discrete velocity Boltzmann equation allows for algebraic characterizations of the equilibrium and the collision operator. The methods introduced and summarized here are tailored to scalar, linear advection-diffusion equations and can serve as a foundation for the constructive design of discrete velocity Boltzmann models and lattice Boltzmann methods approximating different types of partial differential equations. This article is part of the theme issue 'Fluid dynamics, soft matter and complex systems: recent results and new methods'.
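As a point of reference for the objects discussed above, a generic discrete velocity BGK relaxation system for a scalar one-dimensional advection-diffusion target equation can be written as follows; the notation is purely illustrative, and the specific construction and perturbation ansatz of the paper differ.

```latex
% Generic discrete velocity BGK relaxation system for the scalar target
% equation u_t + a u_x = d u_xx (illustrative notation only).
\begin{aligned}
&\partial_t f_i + v_i\,\partial_x f_i
   = -\frac{1}{\tau}\left(f_i - f_i^{\mathrm{eq}}(u)\right),
   \qquad i = 1,\dots,q,\\
&u = \sum_{i=1}^{q} f_i,\qquad
 \sum_{i=1}^{q} f_i^{\mathrm{eq}}(u) = u,\qquad
 \sum_{i=1}^{q} v_i\, f_i^{\mathrm{eq}}(u) = a\,u .
\end{aligned}
```

A perturbation (Chapman-Enskog-type) expansion of such a system recovers the advection-diffusion target equation, with the diffusion coefficient determined by the relaxation time τ and the second velocity moment of the equilibrium.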
ABSTRACT
PURPOSE: To introduce a scheme based on a recent technique in computational hemodynamics, the lattice Boltzmann method (LBM), to noninvasively measure pressure gradients in patients with coarctation of the aorta (CoA), and to provide evidence on the accuracy of the proposed scheme by comparing the computed pressure drop values against those obtained with the reference standard method of catheterization. MATERIALS AND METHODS: Pre- and posttreatment LBM-based pressure gradients for 12 patients with CoA were simulated for the time point of peak systole using the open-source library OpenLB. Four-dimensional (4D) flow-sensitive phase-contrast MRI at 1.5 Tesla was used to acquire flow and to set up the simulation. The vascular geometry was reconstructed using 3D whole-heart MRI. Patients underwent pre- and postinterventional pressure catheterization as a reference standard. RESULTS: There is a significant linear correlation between the pretreatment catheter pressure drops and those computed from the LBM simulation, r = .85, P < .001. The bias was -0.58 ± 4.1 mmHg and was not significant (P = 0.64), with a 95% confidence interval (CI) of -3.22 to 2.06 mmHg. For the posttreatment results, the bias was larger, at -2.54 ± 3.53 mmHg, with a 95% CI of -0.17 to -4.91 mmHg. CONCLUSION: The results indicate reasonable agreement between the simulation results and the catheter measurements. LBM-based computational hemodynamics can be considered an alternative to more traditional computational fluid dynamics schemes for noninvasive pressure calculations and can assist in diagnosis and therapy planning. LEVEL OF EVIDENCE: 3. J. Magn. Reson. Imaging 2017;45:139-146.
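To make the reported agreement metrics concrete, the sketch below computes a Pearson correlation and a Bland-Altman-style bias with a t-based 95% confidence interval for paired catheter and LBM pressure drops. The numbers are placeholders rather than patient data, and the exact CI convention used in the study may differ.

```python
import numpy as np
from scipy import stats

# Hypothetical paired peak-systolic pressure drops (mmHg): catheter reference
# vs. LBM simulation. Values are placeholders, not data from the study.
catheter = np.array([22.0, 35.0, 18.0, 40.0, 12.0, 28.0])
lbm      = np.array([20.5, 36.2, 17.0, 38.5, 13.1, 27.0])

r, p = stats.pearsonr(catheter, lbm)          # linear correlation

diff = lbm - catheter                          # Bland-Altman-style agreement
n, bias, sd = len(diff), diff.mean(), diff.std(ddof=1)
tcrit = stats.t.ppf(0.975, n - 1)              # t-based 95% CI of the mean bias
ci = (bias - tcrit * sd / np.sqrt(n), bias + tcrit * sd / np.sqrt(n))

print(f"r = {r:.2f}, p = {p:.3g}")
print(f"bias = {bias:.2f} +/- {sd:.2f} mmHg, 95% CI [{ci[0]:.2f}, {ci[1]:.2f}]")
```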
Subjects
Aorta/physiopathology; Aortic Coarctation/diagnostic imaging; Aortic Coarctation/physiopathology; Blood Flow Velocity; Image Interpretation, Computer-Assisted/methods; Magnetic Resonance Angiography/methods; Models, Cardiovascular; Adolescent; Adult; Algorithms; Aorta/diagnostic imaging; Blood Pressure; Computer Simulation; Female; Heart Rate; Humans; Hydrodynamics; Magnetic Resonance Imaging, Cine/methods; Male; Middle Aged; Reproducibility of Results; Sensitivity and Specificity; Young Adult
ABSTRACT
The numerical simulation of inhaled aerosols is starting to play a crucial role in medical research, enabling the study of local deposition within the respiratory tract, a feat often unattainable experimentally. Research on children is particularly challenging due to the limited availability of in vivo data and the inherent morphological intricacies. CFD solvers based on Finite Volume Methods (FVM) have been widely employed to solve the flow field in such studies. Recently, Lattice Boltzmann Methods (LBM), a mesoscopic approach, have gained prominence, especially for their scalability on high-performance computers. This study compares the effectiveness of LBM and FVM in simulating particulate flows within a child's respiratory tract, supporting research on particle deposition and medication delivery with LBM. Considering a 5-year-old child's airway model at a steady inspiratory flow, the results are compared with in vitro experiments. Notably, both LBM and FVM show favourable agreement with the experimental data for the mean velocity field and the turbulence intensity. For particle deposition, both numerical methods yield comparable results, aligning well with the in vitro experiments across a particle size range of 0.1-20 µm. Discrepancies are identified in the upper airways and trachea, where both methods predict a lower deposition fraction than the experiment. Nonetheless, both LBM and FVM offer invaluable insights into particle behaviour across sizes that are not easily achievable experimentally. In terms of practical implications, the findings of this study hold significance for respiratory medicine and drug delivery systems, including the assessment of potential health impacts, targeted drug delivery strategies, and the optimisation of respiratory therapies.
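A minimal sketch of the deposition-fraction bookkeeping described above is given below: Lagrangian particles are binned by diameter and the per-bin deposition fraction (deposited over injected) is reported. The arrays, bin edges, and values are illustrative placeholders, not output from the study.

```python
import numpy as np

# Sketch: deposition fraction per particle-size bin from Lagrangian tracking
# output. Arrays and bin edges are illustrative placeholders.
diameters = np.array([0.5, 0.5, 2.0, 2.0, 10.0, 10.0, 10.0])   # um, injected
deposited = np.array([False, True, False, True, True, True, False])

bins = np.array([0.1, 1.0, 5.0, 20.0])                          # um, size bins
idx = np.digitize(diameters, bins)

for b in range(1, len(bins)):
    in_bin = idx == b
    if in_bin.any():
        df = deposited[in_bin].mean()                            # deposited / injected
        print(f"{bins[b-1]:.1f}-{bins[b]:.1f} um: deposition fraction = {df:.2f}")
```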
Subjects
Hydrodynamics; Trachea; Humans; Child, Preschool; Computer Simulation; Trachea/anatomy & histology; Aerosols; Particle Size
ABSTRACT
The pervasive presence of plastic in the environment has reached a concerning scale, with plastic being identified in many ecosystems. Bioremediation is the cheapest and most eco-friendly alternative to remove this polymer from affected areas. Recent work described a novel cold-active esterase, extracted from the bacterium Kaistella jeonii, that can promiscuously degrade PET. Compared to the well-known PETase from Ideonella sakaiensis, this novel esterase presents low sequence identity yet a remarkably similar fold. However, enzymatic assays demonstrated a lower catalytic efficiency. In this work, we employed a rigorous computational approach to investigate the binding mechanism between the esterase and PET. Understanding the underlying binding mechanism can shed light on how enzymes evolve to degrade these artificial molecules and can help develop rational engineering approaches to improve PETase-like enzymes. Our results indicate that this esterase lacks a disulfide bridge that would keep the catalytic residues in closer proximity, possibly influencing its catalytic efficiency. Moreover, we describe the structural response to the interaction between enzyme and PET, indicating both local and global effects. Our results deepen the knowledge of the mechanism of biological PET degradation and serve as a basis for the engineering of novel PETases.
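As an example of one structural readout mentioned above, the sketch below tracks the distance between two putative catalytic residues along a trajectory of the enzyme-PET complex using MDAnalysis. The file names, residue numbers, and atom names are hypothetical placeholders rather than values from this work.

```python
import MDAnalysis as mda
import numpy as np

# Sketch: separation of two putative catalytic residues along an MD trajectory
# of the enzyme-PET complex. File names, residue numbers, and atom names are
# hypothetical placeholders.
u = mda.Universe("esterase_pet.pdb", "esterase_pet.xtc")
ser = u.select_atoms("resid 130 and name OG")      # catalytic serine (assumed)
his = u.select_atoms("resid 208 and name NE2")     # catalytic histidine (assumed)

dists = []
for ts in u.trajectory:
    dists.append(np.linalg.norm(ser.positions[0] - his.positions[0]))

print(f"mean Ser-His distance: {np.mean(dists):.2f} A")
```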
ABSTRACT
Mutations that affect the proteins responsible for the nucleotide excision repair (NER) pathway can lead to diseases such as xeroderma pigmentosum, trichothiodystrophy, Cockayne syndrome, and cerebro-oculo-facio-skeletal syndrome. Hence, understanding their molecular behavior is needed to elucidate the phenotypes of these diseases and how the NER pathway is organized and coordinated. Molecular dynamics techniques enable the study of different protein conformations, are adaptable to almost any research question, and shed light on the dynamics of biomolecules. However, despite their importance, molecular dynamics studies focused on DNA repair pathways are only now becoming more widespread. Currently, there are no review articles compiling the advancements made in molecular dynamics approaches applied to NER and discussing: (i) how this technique is currently employed in the field of DNA repair, focusing on NER proteins; (ii) which technical setups are being employed, and their strengths and limitations; (iii) which insights or information they provide for understanding the NER pathway or NER-associated proteins; (iv) which open questions would be suited for this technique to answer; and (v) where we can go from here. These questions become even more crucial considering the numerous 3D structures published for the NER pathway's proteins in recent years. In this work, we tackle each of these questions, revising and critically discussing the results published in the context of the NER pathway.
Subjects
Cockayne Syndrome; Xeroderma Pigmentosum; Humans; Molecular Dynamics Simulation; DNA Repair; Xeroderma Pigmentosum/genetics; Proteins; Cockayne Syndrome/genetics; Cockayne Syndrome/metabolism
ABSTRACT
Colorectal cancer (CRC) is one of the most common types of cancer, and many studies have associated its development with changes in the gut microbiota. Recent developments in sequencing technologies and subsequent meta-analyses of the gut metagenome have provided a better understanding of the species related to CRC tumorigenesis. Still, the role of high-importance taxonomic singletons (i.e., species highly associated with a given condition but observed in only a minority of datasets) and of species interactions and co-occurrence across cohorts needs further exploration. It has been shown that the gut metagenome presents high functional redundancy, meaning that species interactions could mitigate the absence of any given species. In a CRC framework, this implies that species co-occurrence could play a role in tumorigenesis even if CRC-associated species show low abundance. We propose to evaluate the prevalence of microbial species in tumors by first analyzing each dataset individually and then intersecting the results for differentially abundant species between CRC and healthy samples. We then identify metabolic pathways from these species based on KEGG orthologs, highlighting metabolic pathways associated with CRC. Our results indicate seven species with high prevalence across all projects and high association with CRC, including members of the genera Bacteroides, Enterocloster, and Prevotella. Finally, we show that CRC is also characterized by the co-occurrence of species that do not present significant differential abundance but have been described in the literature as potential CRC biomarkers. These results indicate that between-species interactions could also play a role in CRC tumorigenesis.
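The per-cohort-then-intersect strategy described above can be sketched as follows; the cohort names and species sets are made-up placeholders, and the differential-abundance testing step itself is omitted.

```python
# Sketch: per-cohort significant species followed by intersection and
# prevalence counting. Cohort names and species sets are placeholders.
per_cohort_hits = {
    "cohort_A": {"Bacteroides fragilis", "Prevotella intermedia", "Species X"},
    "cohort_B": {"Bacteroides fragilis", "Prevotella intermedia"},
    "cohort_C": {"Bacteroides fragilis", "Enterocloster bolteae",
                 "Prevotella intermedia"},
}

consensus = set.intersection(*per_cohort_hits.values())      # found in every cohort
all_species = set.union(*per_cohort_hits.values())
prevalence = {sp: sum(sp in hits for hits in per_cohort_hits.values())
              for sp in all_species}

print("consensus species:", consensus)
print("prevalence:", sorted(prevalence.items(), key=lambda kv: -kv[1]))
```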
Subjects
Colorectal Neoplasms; Gastrointestinal Microbiome; Humans; Metagenome; Cell Transformation, Neoplastic; Carcinogenesis
ABSTRACT
Fluid dynamics simulations with the lattice Boltzmann method (LBM) are very memory intensive. Alongside the reduction in memory footprint, significant performance benefits can be achieved by using FP32 (single) precision instead of FP64 (double) precision, especially on GPUs. Here we evaluate the possibility of using even FP16 and posit16 (half) precision for storing the fluid populations, while still carrying out arithmetic operations in FP32. For this, we first show that the number range commonly occurring in the LBM is much smaller than the FP16 number range. Based on this observation, we develop customized 16-bit formats, based on a modified IEEE-754 and on a modified posit standard, that are specifically tailored to the needs of the LBM. We then carry out an in-depth characterization of LBM accuracy for six test systems of increasing complexity: Poiseuille flow, Taylor-Green vortices, Kármán vortex streets, lid-driven cavity, a microcapsule in shear flow (utilizing the immersed-boundary method), and, finally, the impact of a raindrop (based on a volume-of-fluid approach). We find that the difference in accuracy between FP64 and FP32 is negligible in almost all cases, and that for a large number of cases even 16-bit precision is sufficient. Finally, we provide a detailed performance analysis of all precision levels on a large number of hardware microarchitectures and show that significant speedup is achieved with mixed FP32/16-bit precision.
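One simple way to see why a narrower, LBM-specific number range helps at 16 bits is sketched below: near-equilibrium D2Q9 populations are stored either directly in FP16 or as deviations from the lattice weights, and the round-trip error is compared. This rescaling idea is only an illustration; the customized formats developed in the paper are more elaborate.

```python
import numpy as np

# Sketch: FP16 round-trip error for raw D2Q9 populations vs. populations
# stored as deviations from the lattice weights (a simple range-compression
# illustration, not the paper's custom formats).
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)
rng = np.random.default_rng(0)
# near-equilibrium populations: small perturbations around rho = 1, u ~ 0
f = w * (1.0 + 1e-3 * rng.standard_normal((10000, 9)))

err_raw = np.abs(f.astype(np.float16).astype(np.float64) - f)
shifted = (f - w).astype(np.float16).astype(np.float64) + w
err_shift = np.abs(shifted - f)

print("max relative error, raw FP16    :", (err_raw / f).max())
print("max relative error, shifted FP16:", (err_shift / f).max())
```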
ABSTRACT
Reactive particulate systems are of prime importance in a variety of practical applications in process engineering. As an example, this study considers the extraction of phosphorus from wastewater by calcium silicate hydrate particles in the P-RoC process. For such systems, modeling has great potential to help optimize process conditions, e.g., particle-size distributions or volume flows. The goal of this study is to present a new generic modeling framework that captures the relevant aspects of reactive particle-fluid flows using a combined lattice Boltzmann method and discrete element method. The developed model is an Euler-Lagrange scheme consisting of three components: a fluid phase, a dissolved reactive substance, and suspended particles. The fluid flow and reactive mass transport are described in a continuum framework using volume-averaged Navier-Stokes and volume-averaged advection-diffusion-reaction equations, respectively, and are solved using lattice Boltzmann methods. The volume-averaging procedure ensures the correct coupling between fluid flow, reactive mass transport, and particle motion. The developed model is validated through a series of well-defined benchmarks. These include a comparison with experimental data for the settling of a single particle in a cavity filled with water, and a comparison of the multi-scale reactive transport against a fully resolved numerical simulation. The benchmarks also demonstrate that the proposed model is grid convergent, which has not previously been established for such coupled models. Finally, we demonstrate the applicability of our model by simulating a suspension of multiple particles in a fluid with a dissolved reactive substance. This coupled model is compared with a one-way coupled simulation in which the influence of the particles on the fluid flow and on the transport of the reactive solution is not considered, which elucidates the need for the two-way coupled model.
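For orientation, a generic volume-averaged advection-diffusion-reaction equation of the kind referred to above can be written as follows, with local fluid volume fraction ε_f, fluid velocity u, diffusivity D, and reaction source R(c); the notation is illustrative and not copied from the paper.

```latex
% Generic volume-averaged advection-diffusion-reaction equation for the
% dissolved species concentration c (illustrative notation only):
% eps_f = local fluid volume fraction, u = fluid velocity, D = diffusivity,
% R(c) = reaction source term.
\partial_t\!\left(\varepsilon_f\, c\right)
  + \nabla\cdot\left(\varepsilon_f\, \mathbf{u}\, c\right)
  = \nabla\cdot\!\left(\varepsilon_f\, D\, \nabla c\right)
  + \varepsilon_f\, R(c)
```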
ABSTRACT
The industrial particle sensor market lacks simple, easy-to-use, low-cost yet robust, safe, and fast-response solutions. Towards the development of such a sensor for in-line use in microchannels under continuous flow conditions, this work introduces static light scattering (SLS) determination of particle diameter using a laser with an emission power of less than 5 µW together with sensitive detectors with detection times of 1 ms. The measurements for the feasibility studies are made in an angular range between 20° and 160° in 2° increments. We focus on the diameter range between 300 and 1000 nm, for applications in the production of paints, colors, pigments, and crystallites. Due to the fast response time, reaction characteristics in microchannel designs for precipitation and crystallization processes can be studied. A novel method for particle diameter characterization is developed using the positions of the maxima and minima and the slope distribution of the angular scattering profile. The classification algorithm is specifically designed to be independent of the dispersed-phase concentration and of concentration fluctuations such as product flares or signal instability. The measurement signals are post-processed, and the particle diameters are validated against Mie light scattering simulations. The design of a low-cost instrument for industrial use is proposed.
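A minimal sketch of the extrema-and-slope feature extraction described above is shown below, using scipy's peak finding on an angular intensity profile sampled from 20° to 160° in 2° steps. The synthetic profile is a placeholder, not a Mie calculation or measured data.

```python
import numpy as np
from scipy.signal import find_peaks

# Sketch: locate maxima/minima and the slope distribution in an angular
# scattering profile I(theta). The synthetic profile is a placeholder.
theta = np.arange(20, 161, 2)                       # degrees
intensity = 1.0 + 0.3 * np.cos(np.radians(theta) * 6) * np.exp(-theta / 200)

maxima, _ = find_peaks(intensity)
minima, _ = find_peaks(-intensity)
slope = np.gradient(intensity, theta)               # slope-distribution feature

print("maxima at", theta[maxima], "deg")
print("minima at", theta[minima], "deg")
```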
ABSTRACT
In many structural bioinformatics problems, there is a broad range of unanswered questions about protein dynamics and amino acid properties. Proteins are not strictly static objects; rather, they populate ensembles of conformations. One way to understand these particularities is to analyze the information available in experimental databases. The Ramachandran plot, despite being more than half a century old, remains an extremely useful tool in the study of protein conformation. Building on its assumptions, we inspected a large data set (11,130 protein structures, amounting to 5,255,768 residues) and discriminated the conformational preferences of each residue type with regard to its secondary structure participation. These data were studied for the phi (φ), psi (ψ), and side-chain chi (χ) angles and are presented in non-Ramachandranian plots. In the largest analysis of protein conformation performed so far, we propose an original plot to depict conformational preferences in relation to different secondary structure elements. While confirming previous observations, our results strongly support a unique character for each residue type and reinforce the observation that side chains make a major contribution to secondary structure and, by consequence, to protein conformation. This information can be further used in the development of more robust methods and computational strategies for structural bioinformatics problems.
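As a small example of how such dihedral statistics can be gathered from experimental structures, the sketch below extracts per-residue phi/psi angles from a single PDB file with Biopython; the file name is a placeholder, and chi angles would require additional atom selections.

```python
from Bio.PDB import PDBParser, PPBuilder
import math

# Sketch: backbone phi/psi dihedrals per residue from one PDB file
# (the file name is a hypothetical placeholder).
structure = PDBParser(QUIET=True).get_structure("prot", "example.pdb")
ppb = PPBuilder()

for pp in ppb.build_peptides(structure):
    for residue, (phi, psi) in zip(pp, pp.get_phi_psi_list()):
        if phi is not None and psi is not None:
            print(residue.get_resname(),
                  round(math.degrees(phi), 1),
                  round(math.degrees(psi), 1))
```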
Subjects
Amino Acids/chemistry; Databases, Protein; Protein Conformation; Proteins/chemistry; Computational Biology; Models, Molecular; Molecular Dynamics Simulation
ABSTRACT
Computational models of cardiac electrophysiology have provided insights into arrhythmogenesis and paved the way toward tailored therapies in recent years. To fully leverage in silico models in future research, however, these models need to be adapted to reflect pathologies, genetic alterations, or pharmacological effects. A common approach is to leave the structure of established models unaltered and estimate the values of a set of parameters. Today's high-throughput patch-clamp data acquisition methods require robust, unsupervised algorithms that estimate parameters both accurately and reliably. In this work, two classes of optimization approaches are evaluated: gradient-based trust-region-reflective and derivative-free particle swarm algorithms. Using synthetic input data and different ion current formulations from the Courtemanche et al. electrophysiological model of human atrial myocytes, we show that neither of the two schemes alone succeeds in meeting all requirements. Sequential combination of the two algorithms improved the performance to some extent, but not satisfactorily. Thus, we propose a novel hybrid approach coupling the two algorithms in each iteration. This hybrid approach yielded very accurate estimates with minimal dependency on the initial guess for synthetic input data for which a ground-truth parameter set exists. When applied to measured data, the hybrid approach yielded the best fit, again with minimal variation. Using the proposed algorithm, a single run is sufficient to estimate the parameters. The degree of superiority over the other investigated algorithms in terms of accuracy and robustness depended on the type of current. In contrast to the non-hybrid approaches, the proposed method proved to be optimal for data of arbitrary signal-to-noise ratio. The hybrid algorithm proposed in this work provides an important tool to integrate experimental data into computational models both accurately and robustly, allowing assessment of the often non-intuitive consequences of ion-channel-level changes on higher levels of integration.
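The coupling idea, not the published algorithm itself, can be sketched as follows: in each iteration a particle swarm exploration step is followed by a trust-region-reflective refinement (scipy's least_squares with method='trf') of the current best candidate. The toy model, bounds, and hyperparameters are assumptions for illustration only.

```python
import numpy as np
from scipy.optimize import least_squares

# Sketch of a hybrid PSO + trust-region-reflective fitting loop. model(),
# bounds, and data are placeholders, not the study's ion-current formulations.
rng = np.random.default_rng(1)

def model(params, t):
    a, tau = params
    return a * np.exp(-t / tau)                      # toy "current" model

t_data = np.linspace(0, 50, 200)
y_data = model([1.0, 12.0], t_data) + 0.01 * rng.standard_normal(t_data.size)

def residuals(p):
    return model(p, t_data) - y_data

lb, ub = np.array([0.1, 1.0]), np.array([5.0, 50.0])
n_particles, n_iter = 20, 10
x = rng.uniform(lb, ub, size=(n_particles, 2))       # particle positions
v = np.zeros_like(x)
pbest = x.copy()
pbest_cost = np.array([np.sum(residuals(p) ** 2) for p in x])
gbest = pbest[pbest_cost.argmin()].copy()

for _ in range(n_iter):
    # particle swarm exploration step
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = np.clip(x + v, lb, ub)
    cost = np.array([np.sum(residuals(p) ** 2) for p in x])
    improved = cost < pbest_cost
    pbest[improved], pbest_cost[improved] = x[improved], cost[improved]
    # gradient-based refinement of the current best candidate (TRF)
    res = least_squares(residuals, pbest[pbest_cost.argmin()],
                        bounds=(lb, ub), method="trf")
    if np.sum(res.fun ** 2) < pbest_cost.min():
        gbest = res.x
    else:
        gbest = pbest[pbest_cost.argmin()].copy()

print("estimated parameters:", gbest)
```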