Results 1 - 20 of 54
1.
Toxicol Sci ; 2024 Apr 19.
Article in English | MEDLINE | ID: mdl-38637946

ABSTRACT

Physiologically based kinetic (PBK) models are widely used in pharmacology and toxicology for predicting the internal disposition of substances following exposure, whether intentional or not. Due to their complexity, a large number of model parameters need to be estimated, either through in silico tools, in vitro experiments or by fitting the model to in vivo data. In the latter case, fitting complex structural models to in vivo data can result in overparameterisation and produce unrealistic parameter estimates. To address these issues, we propose a novel parameter grouping approach, which reduces the parametric space by co-estimating groups of parameters across compartments. Grouping of parameters is performed using genetic algorithms and is fully automated, based on a novel goodness-of-fit metric. To illustrate the practical application of the proposed methodology, two case studies were conducted: the first demonstrates the development of a new PBK model, while the second focuses on model refinement. In the first case study, a PBK model was developed to elucidate the biodistribution of titanium dioxide (TiO2) nanoparticles in rats following intravenous injection. A variety of parameter estimation schemes were employed, and comparative analysis based on goodness-of-fit metrics demonstrated that the proposed methodology yields models that outperform standard estimation approaches while using fewer parameters. In the second case study, an existing PBK model for perfluorooctanoic acid (PFOA) in rats was extended to incorporate additional tissues, providing a more comprehensive portrayal of PFOA biodistribution. Both models were validated against independent in vivo studies to ensure their reliability.
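A minimal sketch of the grouping idea (not the authors' implementation): each compartment's parameter is assigned to a group by an integer label, grouped parameters are co-estimated by least squares against observed concentrations, and a simple genetic algorithm searches over group assignments with a fit-plus-parsimony score. All data, dimensions and function names here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setting: 8 compartments, each with one partition-coefficient-like
# parameter; "observations" are synthetic concentrations to be fitted.
n_comp, n_obs = 8, 40
design = rng.uniform(0.5, 2.0, size=(n_obs, n_comp))      # exposure/flow terms (made up)
true_params = np.array([1.0, 1.0, 3.0, 3.0, 3.0, 0.5, 0.5, 1.0])
observed = design @ true_params + rng.normal(0, 0.1, n_obs)

def fit_grouped(groups):
    """Co-estimate one value per group (least squares) and return the SSE."""
    g = np.asarray(groups)
    # Collapse columns belonging to the same group into one regressor.
    collapsed = np.stack([design[:, g == k].sum(axis=1) for k in np.unique(g)], axis=1)
    coef, *_ = np.linalg.lstsq(collapsed, observed, rcond=None)
    resid = observed - collapsed @ coef
    return float(resid @ resid)

def fitness(groups):
    # Penalise the number of groups so fewer estimated parameters are preferred.
    return fit_grouped(groups) + 0.05 * len(set(groups))

def evolve(pop_size=40, n_gen=60, max_groups=4):
    pop = [rng.integers(0, max_groups, n_comp) for _ in range(pop_size)]
    for _ in range(n_gen):
        pop.sort(key=fitness)
        survivors = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = rng.choice(len(survivors), 2, replace=False)
            cut = rng.integers(1, n_comp)                   # one-point crossover
            child = np.concatenate([survivors[a][:cut], survivors[b][cut:]])
            if rng.random() < 0.3:                          # mutation
                child[rng.integers(n_comp)] = rng.integers(max_groups)
            children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)

best = evolve()
print("best grouping:", best, "fitness:", round(fitness(best), 3))
```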

2.
Mol Inform ; 42(8-9): e2300019, 2023 08.
Article in English | MEDLINE | ID: mdl-37258455

ABSTRACT

In this study we present deimos, a computational methodology for optimal grouping, applied to the read-across prediction of toxicity-related properties of engineered nanomaterials (ENMs). The method is based on the formulation and solution of a mixed-integer linear programming (MILP) problem that automatically and simultaneously performs feature selection, defines the grouping boundaries according to the response variable and develops a linear regression model in each group. For each group/region, a characteristic centroid is defined in order to allocate untested ENMs to the groups. The deimos MILP problem is integrated into a broader optimization workflow that selects the best-performing methodology among standard multiple linear regression (MLR), least absolute shrinkage and selection operator (LASSO) models and the proposed deimos multiple-region model. The performance of the suggested methodology is demonstrated through application to benchmark ENM datasets and comparison with other predictive modelling approaches. The proposed method is not limited to ENM toxicity prediction and can also be applied to property prediction for other chemical entities.
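A simplified sketch of the allocate-then-predict step described above (not the deimos MILP itself): groups are assumed to be already defined, each group has its own fitted linear model, and an untested ENM is assigned to the group with the nearest centroid before prediction. Data, group labels and feature dimensions are illustrative.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)

# Illustrative descriptors for already-grouped ENMs (e.g. size, zeta potential, ...).
X = rng.normal(size=(60, 3))
groups = (X[:, 0] > 0).astype(int)          # two regions, defined here for illustration only
y = np.where(groups == 0, 2.0 * X[:, 1], -1.5 * X[:, 2]) + rng.normal(0, 0.1, 60)

# One linear model and one centroid per group/region.
models = {g: LinearRegression().fit(X[groups == g], y[groups == g]) for g in (0, 1)}
centroids = {g: X[groups == g].mean(axis=0) for g in (0, 1)}

def predict_untested(x_new):
    """Allocate an untested ENM to the nearest-centroid group, then predict."""
    g = min(centroids, key=lambda k: np.linalg.norm(x_new - centroids[k]))
    return g, float(models[g].predict(x_new.reshape(1, -1))[0])

group_id, y_hat = predict_untested(rng.normal(size=3))
print(f"assigned to group {group_id}, predicted property {y_hat:.2f}")
```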


Subjects
Nanostructures, Nanostructures/chemistry, Linear Models, Benchmarking
3.
Article in English | MEDLINE | ID: mdl-36778642

ABSTRACT

Responding to the pandemic caused by SARS-CoV-2, the scientific community intensified efforts to provide drugs effective against the virus. To strengthen these efforts, the "COVID Moonshot" project has been accepting public suggestions for computationally triaged, synthesized, and tested molecules. The project aimed to identify low-molecular-weight molecules with activity against the virus, suitable for oral treatment. The ability of a drug to cross the intestinal cell membranes and enter circulation decisively influences its bioavailability; hence the need to optimize permeability in the early stages of drug discovery. In the present work, as a contribution to these ongoing scientific efforts, we employed artificial neural network algorithms to develop QSAR tools for modelling the PAMPA effective permeability (passive diffusion) of orally administered drugs. We identified a set of 61 features most relevant to explaining drug cell permeability and used them to develop a stacked regression ensemble model, subsequently used to predict the permeability of molecules included in datasets made available through the COVID Moonshot project. Our model was shown to be robust and may provide a promising framework for predicting the permeability of molecules not yet synthesized, thus guiding the process of drug design. Supplementary Information: The online version contains supplementary material available at 10.1007/s13721-023-00410-9.
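A minimal sketch of a stacked regression ensemble of the kind described above, using scikit-learn; the base learners, meta-learner and synthetic descriptor data are illustrative choices, not the authors' exact configuration.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPRegressor

# Stand-in for a table of 61 molecular descriptors vs. PAMPA log-permeability.
X, y = make_regression(n_samples=300, n_features=61, noise=0.3, random_state=0)

stack = StackingRegressor(
    estimators=[
        ("mlp", MLPRegressor(hidden_layer_sizes=(64,), max_iter=2000, random_state=0)),
        ("rf", RandomForestRegressor(n_estimators=200, random_state=0)),
    ],
    final_estimator=Ridge(),   # meta-learner combining the base-model predictions
    cv=5,
)

scores = cross_val_score(stack, X, y, cv=5, scoring="r2")
print("cross-validated R2: %.2f ± %.2f" % (scores.mean(), scores.std()))
```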

4.
Nat Nanotechnol ; 17(9): 924-932, 2022 09.
Article in English | MEDLINE | ID: mdl-35982314

ABSTRACT

Engineered nanomaterials (ENMs) enable new and enhanced products and devices in which matter can be controlled at a near-atomic scale (in the range of 1 to 100 nm). However, the unique nanoscale properties that make ENMs attractive may result in as yet poorly known risks to human health and the environment. Thus, new ENMs should be designed in line with the idea of safe-and-sustainable-by-design (SSbD). The biological activity of ENMs is closely related to their physicochemical characteristics; changes in these characteristics may therefore cause changes in the ENMs' activity. In this sense, a set of physicochemical characteristics (for example, chemical composition, crystal structure, size, shape, surface structure) creates a unique 'representation' of a given ENM. The usability of these characteristics, or nanomaterial descriptors (nanodescriptors), in nanoinformatics methods such as quantitative structure-activity/property relationship (QSAR/QSPR) models provides exciting opportunities to optimize ENMs at the design stage by improving their functionality and minimizing unforeseen health and environmental hazards. A computational screening of possible versions of novel ENMs would return optimal nanostructures and manage ('design out') hazardous features at the earliest possible manufacturing step. Safe adoption of ENMs on a vast scale will depend on the successful integration of the entire bulk of nanodescriptors extracted experimentally with data from theoretical and computational models. This Review discusses directions for developing appropriate nanomaterial representations and related nanodescriptors to enhance the reliability of computational modelling used in designing safer and more sustainable ENMs.


Subjects
Nanostructures, Computer Simulation, Humans, Nanostructures/chemistry, Quantitative Structure-Activity Relationship, Reproducibility of Results
5.
Beilstein J Nanotechnol ; 12: 1297-1325, 2021.
Article in English | MEDLINE | ID: mdl-34934606

ABSTRACT

Manufacturers of nanomaterial-enabled products need models of endpoints relevant to human safety to support the "safe by design" paradigm and avoid late-stage attrition. Increasingly, embryonic zebrafish (Danio rerio) are recognised as a key human-safety-relevant in vivo test system. Hence, machine learning models were developed for identifying metal oxide nanomaterials causing lethality to embryonic zebrafish up to 24 hours post-fertilisation, or excess lethality in the period of 24-120 hours post-fertilisation, at concentrations of 250 ppm or less. Models were developed using data from the Nanomaterial Biological-Interactions Knowledgebase for a dataset of 44 diverse, coated and uncoated metal or, in one case, metalloid oxide nanomaterials. Different modelling approaches were evaluated using nested cross-validation on this dataset. Models were initially developed for both lethality endpoints using multiple descriptors representing the composition of the core, shell and surface functional groups, as well as particle characteristics. Interestingly, however, the 24 hours post-fertilisation data were found to be harder to predict, which could reflect different exposure routes. Hence, subsequent analysis focused on the prediction of excess lethality at 120 hours post-fertilisation. Two data augmentation approaches, applied for the first time in nano-QSAR research, were explored, yet both failed to boost predictive performance. Interestingly, comparable results to those originally obtained using multiple descriptors could be obtained with a model based upon a single, simple descriptor: the Pauling electronegativity of the metal atom. Since it is widely recognised that a variety of intrinsic and extrinsic nanomaterial characteristics contribute to their toxicological effects, this is a surprising finding, which may partly reflect the need to investigate more sophisticated descriptors in future studies. Future studies are also required to examine how robust these modelling results are on truly external data, which were not used to select the single-descriptor model. This will require further laboratory work to generate data comparable to those studied herein.
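A minimal illustration of a single-descriptor classifier of the kind the abstract describes, fitting a logistic regression on the metal atom's Pauling electronegativity alone. The electronegativity values are standard tabulated ones, but the lethality labels below are invented placeholders, not the study's data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Pauling electronegativity of the metal/metalloid atom (standard tabulated values).
electronegativity = {"Zn": 1.65, "Ti": 1.54, "Cu": 1.90, "Fe": 1.83,
                     "Al": 1.61, "Ce": 1.12, "Si": 1.90, "Zr": 1.33}

# Placeholder excess-lethality labels (1 = lethal, 0 = not lethal); NOT the study's data.
labels = {"Zn": 1, "Cu": 1, "Ti": 0, "Fe": 0, "Al": 0, "Ce": 0, "Si": 0, "Zr": 0}

X = np.array([[electronegativity[m]] for m in labels])   # single-descriptor feature matrix
y = np.array(list(labels.values()))

clf = LogisticRegression().fit(X, y)
print("P(lethal) for a hypothetical oxide with electronegativity 1.8:",
      round(clf.predict_proba([[1.8]])[0, 1], 2))
```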

6.
Sensors (Basel) ; 21(21), 2021 Oct 20.
Article in English | MEDLINE | ID: mdl-34770266

ABSTRACT

The field of automatic collision avoidance for surface vessels has been an active area of research in recent years, aiming at decision support for officers on conventional vessels or at the creation of autonomous vessel controllers. In this paper, the multi-ship control problem is addressed using a model predictive controller (MPC) that makes use of obstacle-ship trajectory prediction models built on the RBF framework and trained on real AIS data sourced from an open-source database. The use of such sophisticated trajectory prediction models enables the controller to correctly infer the existence of a collision risk and apply evasive control actions in a timely manner, thus accounting for the slow dynamics of large vessels, such as container ships, and enhancing the cooperation between controlled vessels. The proposed method is evaluated on a real-life case from the Miami port area, and the trajectories it generates are assessed in terms of safety, economy, and COLREG compliance by comparison with an identical MPC controller utilizing straight-line predictions for the obstacle vessel.
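A rough sketch of an RBF-based trajectory predictor of the sort referenced above: Gaussian radial basis functions of time (plus a linear tail) are fitted by least squares to recent positions and extrapolated a short horizon ahead for the MPC. The track below is synthetic and the basis/width choices are illustrative, not the paper's model.

```python
import numpy as np

def design(t, centers, width):
    """Gaussian RBFs of time, augmented with a constant and a linear term."""
    rbf = np.exp(-((t[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))
    return np.hstack([rbf, np.ones((len(t), 1)), t[:, None]])

# Synthetic AIS-like track: positions (lon/lat proxies) sampled every minute.
t_obs = np.arange(0.0, 30.0)                     # past 30 minutes
track = np.column_stack([0.02 * t_obs + 0.001 * t_obs ** 2,
                         0.01 * t_obs + 0.05 * np.sin(0.2 * t_obs)])

centers = np.linspace(t_obs.min(), t_obs.max(), 8)   # illustrative RBF centres
width = 5.0
Phi = design(t_obs, centers, width)
weights, *_ = np.linalg.lstsq(Phi, track, rcond=None)   # one weight vector per coordinate

# Predict the obstacle ship's position over the next 10 minutes for the MPC horizon.
t_future = np.arange(30.0, 40.0)
prediction = design(t_future, centers, width) @ weights
print("predicted positions:\n", np.round(prediction, 3))
```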

7.
J Chem Inf Model ; 61(6): 2766-2779, 2021 06 28.
Article in English | MEDLINE | ID: mdl-34029462

ABSTRACT

In this study, a computational workflow is presented for grouping engineered nanomaterials (ENMs) and for predicting their toxicity-related endpoints. A mixed-integer linear programming (MILP) problem is formulated, which automatically filters out the noisy variables, defines the grouping boundaries, and develops group-specific predictive models. The method is extended to the multidimensional space by considering the ENM characterization categories (e.g., biological, physicochemical, biokinetics, image, etc.) as different dimensions. The performance of the proposed method is illustrated through application to benchmark data sets and comparison with alternative predictive modelling approaches. The models trained on the above data sets were made publicly available through a user-friendly web service.


Subjects
Nanostructures, Nanostructures/toxicity
8.
F1000Res ; 10, 2021.
Article in English | MEDLINE | ID: mdl-37842337

ABSTRACT

Toxicology has been an active research field for many decades, with academic, industrial and government involvement. Modern omics and computational approaches are changing the field, from merely disease-specific observational models into target-specific predictive models. Traditionally, toxicology has strong links with other fields such as biology, chemistry, pharmacology and medicine. With the rise of synthetic and new engineered materials, alongside ongoing prioritisation needs in chemical risk assessment for existing chemicals, early predictive evaluations are becoming of utmost importance for both scientific and regulatory purposes. ELIXIR is an intergovernmental organisation that brings together life science resources from across Europe. To coordinate the linkage of various life science efforts around modern predictive toxicology, the establishment of a new ELIXIR Community is seen as instrumental. In the past few years, joint efforts, building on incidental overlap, have been piloted in the context of ELIXIR. For example, the EU-ToxRisk, diXa, HeCaToS, transQST and nanotoxicology communities have worked with the ELIXIR TeSS, Bioschemas and Compute Platforms and activities. In 2018, a core group of interested parties wrote a proposal outlining a sketch of what this new ELIXIR Toxicology Community would look like. A recent workshop (held September 30th to October 1st, 2020) extended this into an ELIXIR Toxicology roadmap and a shortlist of limited-investment, high-gain collaborations to give body to this new community. This whitepaper outlines the results of these efforts and defines our vision of the ELIXIR Toxicology Community and how it complements other ELIXIR activities.


Subjects
Biological Science Disciplines, Europe, Risk Assessment
9.
Nanoscale Adv ; 3(11): 3167-3176, 2021 Jun 01.
Article in English | MEDLINE | ID: mdl-36133654

ABSTRACT

Multi-walled carbon nanotubes (MWCNTs) are made of multiple single-walled carbon nanotubes (SWCNTs) nested inside one another, forming concentric cylinders. These nanomaterials are widely used in industrial and biomedical applications due to their unique physicochemical characteristics. However, previous studies have shown that exposure to MWCNTs may lead to toxicity, and that some of the physicochemical properties of MWCNTs can influence their toxicological profiles. In silico modelling can be applied as a faster and less costly alternative to experimental (in vivo and in vitro) testing for the hazard characterization of MWCNTs. This study aims at developing a fully validated predictive nanoinformatics model based on statistical and machine learning approaches for the accurate prediction of the genotoxicity of different types of MWCNTs. Towards this goal, a number of different computational workflows were designed, combining unsupervised techniques (Principal Component Analysis, PCA), supervised classification techniques (Support Vector Machine, "SVM", Random Forest, "RF", Logistic Regression, "LR" and Naïve Bayes, "NB") and Bayesian optimization. The Recursive Feature Elimination (RFE) method was applied for selecting the most important variables. An RF model using only three features was selected as the most efficient for predicting the genotoxicity of MWCNTs, exhibiting 80% accuracy on external validation and high classification probabilities. The most informative features selected by the model were "Length", "Zeta average" and "Purity".
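A condensed sketch of the kind of pipeline described above: recursive feature elimination selects three descriptors and a random forest classifies genotoxicity, with accuracy estimated on a held-out split. The descriptor matrix and labels are synthetic; "Length", "Zeta average" and "Purity" appear only as illustrative column names.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Synthetic MWCNT descriptor table; column names mirror the abstract for illustration only.
cols = ["Length", "Zeta average", "Purity", "Diameter", "Surface area", "Fe content"]
X = pd.DataFrame(rng.normal(size=(120, len(cols))), columns=cols)
y = (0.8 * X["Length"] - 0.6 * X["Zeta average"] + 0.4 * X["Purity"]
     + rng.normal(0, 0.5, 120) > 0).astype(int)            # mock genotoxicity labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0, stratify=y)

selector = RFE(RandomForestClassifier(n_estimators=300, random_state=0), n_features_to_select=3)
selector.fit(X_tr, y_tr)
selected = list(X.columns[selector.support_])

clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr[selected], y_tr)
acc = accuracy_score(y_te, clf.predict(X_te[selected]))
print("selected descriptors:", selected, "| hold-out accuracy: %.2f" % acc)
```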

10.
Nanomaterials (Basel) ; 10(12), 2020 Dec 11.
Article in English | MEDLINE | ID: mdl-33322568

ABSTRACT

Chemoinformatics has developed efficient ways of representing chemical structures for small molecules as simple, machine-readable text strings: the simplified molecular-input line-entry system (SMILES) and the IUPAC International Chemical Identifier (InChI). In particular, InChIs have been extended to encode formalized representations of mixtures and reactions, and work is ongoing to represent polymers and other macromolecules in this way. The next frontier is encoding the multi-component structures of nanomaterials (NMs) in a machine-readable format to enable linking of datasets for nanoinformatics and regulatory applications. A workshop organized by the H2020 research infrastructure NanoCommons and the nanoinformatics project NanoSolveIT analyzed issues involved in developing an InChI for NMs (NInChI). The layers needed to capture NM structures include, but are not limited to: core composition (possibly multi-layered); surface topography; surface coatings or functionalization; doping with other chemicals; and representation of impurities. NM distributions (size, shape, composition, surface properties, etc.), the types of chemical linkages connecting surface functionalization and coating molecules to the core, and the various crystallographic forms exhibited by NMs also need to be considered. Six case studies were conducted to elucidate the requirements for unambiguous description of NMs. The suggested NInChI layers are intended to stimulate further analysis that will lead to the first version of a "nano" extension to the InChI standard.
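A speculative sketch of how the proposed layers might be organised programmatically; the field names and the example TiO2 particle below are illustrative only and do not follow any agreed NInChI syntax, which the workshop left for future standardisation.

```python
from dataclasses import dataclass, field

@dataclass
class NanoRepresentation:
    """Illustrative container for the candidate NInChI layers discussed in the abstract."""
    core_composition: list[str]                  # possibly multi-layered core, inner to outer
    surface_coating: list[str] = field(default_factory=list)
    functionalization: list[str] = field(default_factory=list)
    doping: list[str] = field(default_factory=list)
    impurities: dict[str, float] = field(default_factory=dict)   # species -> mass fraction
    size_distribution_nm: tuple[float, float] = (0.0, 0.0)       # (mean, std)
    shape: str = "unspecified"
    crystal_form: str = "unspecified"

# Hypothetical coated TiO2 nanoparticle, for illustration only.
example = NanoRepresentation(
    core_composition=["TiO2"],
    surface_coating=["SiO2"],
    functionalization=["PEG"],
    impurities={"Fe": 0.001},
    size_distribution_nm=(21.0, 5.0),
    shape="spherical",
    crystal_form="anatase",
)
print(example)
```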

11.
Materials (Basel) ; 13(20), 2020 Oct 13.
Article in English | MEDLINE | ID: mdl-33066064

ABSTRACT

The convergence of nanotechnology and biotechnology has led to substantial advancements in nano-biomaterials (NBMs) used in medical devices (MD) and advanced therapy medicinal products (ATMP). However, there are concerns that applications of NBMs for medical diagnostics, therapeutics and regenerative medicine could also pose health and/or environmental risks, since the current understanding of their safety is incomplete. A scientific strategy is therefore needed to assess all risks emerging along the life cycles of these products. To address this need, an overarching risk management framework (RMF) for NBMs used in MD and ATMP is presented in this paper, the result of a collaborative effort by a team of experts within the EU Project BIORIMA, with relevant input from external stakeholders. The framework, in line with current regulatory requirements, is designed according to state-of-the-art approaches to the risk assessment and management of both nanomaterials and biomaterials. The collection/generation of data for NBM safety assessment is based on innovative integrated approaches to testing and assessment (IATA). The framework can support stakeholders (e.g., manufacturers, regulators, consultants) in systematically assessing not only patient safety but also occupational (including healthcare workers) and environmental risks along the life cycle of MD and ATMP. The outputs of the framework enable the user to identify suitable safe(r)-by-design alternatives and/or risk management measures and to compare the risks of NBMs with their (clinical) benefits, based on efficacy, quality and cost criteria, in order to inform robust risk management decision-making.

12.
Small ; 16(36): e2001080, 2020 09.
Article in English | MEDLINE | ID: mdl-32548897

ABSTRACT

This study presents the results of applying deep learning methodologies within the ecotoxicology field, with the objective of training predictive models that can support hazard assessment and, eventually, the design of safer engineered nanomaterials (ENMs). A workflow applying two different deep learning architectures to microscopic images of Daphnia magna is proposed that can automatically detect possible malformations, such as effects on tail length and overall size, as well as uncommon lipid concentrations and lipid deposit shapes, which are due to direct or parental exposure to ENMs. Next, classification models assign specific objects (heart, abdomen/claw) to classes depending on lipid densities and compare the results with controls. The models are statistically validated in terms of their prediction accuracy on external D. magna images and illustrate that deep learning technologies can be useful in the nanoinformatics field, because they can automate time-consuming manual procedures, accelerate the investigation of adverse effects of ENMs, and facilitate the process of designing safer nanostructures. It may even be possible in the future to predict impacts on subsequent generations from images of parental exposure, reducing the time and cost involved in long-term reproductive toxicity assays over multiple generations.
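An indicative sketch of a small convolutional classifier of the sort such a workflow might use, here with Keras and random stand-in images; the architecture, image size and class count are illustrative assumptions, not the study's networks.

```python
import numpy as np
import tensorflow as tf

# Stand-in data: 64x64 grayscale crops of D. magna regions, 3 classes (e.g. lipid-density levels).
rng = np.random.default_rng(0)
images = rng.random((100, 64, 64, 1)).astype("float32")
labels = rng.integers(0, 3, 100)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 64, 1)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(3, activation="softmax"),   # class probabilities
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(images, labels, epochs=2, batch_size=16, verbose=0)   # placeholder training run
print(model.predict(images[:1], verbose=0))
```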


Subjects
Daphnia, Deep Learning, Ecotoxicology, Nanostructures, Animals, Computer Simulation, Daphnia/drug effects, Ecotoxicology/methods, Nanostructures/toxicity, Water Pollutants, Chemical/toxicity
13.
Nanomaterials (Basel) ; 10(5), 2020 May 08.
Article in English | MEDLINE | ID: mdl-32397130

ABSTRACT

Preprocessing of transcriptomics data plays a pivotal role in the development of toxicogenomics-driven tools for chemical toxicity assessment. The generation and exploitation of large volumes of molecular profiles, following an appropriate experimental design, allows the employment of toxicogenomics (TGx) approaches for a thorough characterisation of the mechanism of action (MOA) of different compounds. To date, a plethora of data preprocessing methodologies have been suggested. However, in most cases, building the optimal analytical workflow is not straightforward. A careful selection of the right tools must be carried out, since it will affect the downstream analyses and modelling approaches. Transcriptomics data preprocessing spans multiple steps such as quality check, filtering, normalization, and batch effect detection and correction. Currently, there is a lack of standard guidelines for data preprocessing in the TGx field. Defining the optimal tools and procedures to be employed in transcriptomics data preprocessing will lead to the generation of homogeneous and unbiased data, allowing the development of more reliable, robust and accurate predictive models. In this review, we outline methods for the preprocessing of three main transcriptomic technologies: microarray, bulk RNA-Sequencing (RNA-Seq), and single-cell RNA-Sequencing (scRNA-Seq). Moreover, we discuss the most common methods for identifying differentially expressed genes and for performing functional enrichment analysis. This review is the second part of a three-article series on Transcriptomics in Toxicogenomics.

14.
Nanomaterials (Basel) ; 10(4)2020 Apr 08.
Article in English | MEDLINE | ID: mdl-32276469

ABSTRACT

Transcriptomics data are relevant to address a number of challenges in Toxicogenomics (TGx). After careful planning of exposure conditions and data preprocessing, the TGx data can be used in predictive toxicology, where more advanced modelling techniques are applied. The large volume of molecular profiles produced by omics-based technologies allows the development and application of artificial intelligence (AI) methods in TGx. Indeed, the publicly available omics datasets are constantly increasing, together with a plethora of different methods that are made available to facilitate their analysis, interpretation and the generation of accurate and stable predictive models. In this review, we present the state of the art of data modelling applied to transcriptomics data in TGx. We show how benchmark dose (BMD) analysis can be applied to TGx data. We review read-across and adverse outcome pathway (AOP) modelling methodologies. We discuss how network-based approaches can be successfully employed to clarify the mechanism of action (MOA) or specific biomarkers of exposure. We also describe the main AI methodologies applied to TGx data to create predictive classification and regression models, and we address current challenges. Finally, we present a short description of deep learning (DL) and data integration methodologies applied in these contexts. Modelling of TGx data represents a valuable tool for more accurate chemical safety assessment. This review is the third part of a three-article series on Transcriptomics in Toxicogenomics.

15.
Nanomaterials (Basel) ; 10(4), 2020 Apr 15.
Article in English | MEDLINE | ID: mdl-32326418

ABSTRACT

The starting point of successful hazard assessment is the generation of unbiased and trustworthy data. Conventional toxicity testing deals with extensive observations of phenotypic endpoints in vivo and complementing in vitro models. The increasing development of novel materials and chemical compounds dictates the need for a better understanding of the molecular changes occurring in exposed biological systems. Transcriptomics enables the exploration of organisms' responses to environmental, chemical, and physical agents by observing the molecular alterations in more detail. Toxicogenomics (TGx) integrates classical toxicology with omics assays, thus allowing the characterization of the mechanism of action (MOA) of chemical compounds, novel small molecules, and engineered nanomaterials (ENMs). Lack of standardization in data generation and analysis currently hampers the full exploitation of toxicogenomics-based evidence in risk assessment. To fill this gap, TGx methods need to take into account appropriate experimental design and possible pitfalls in the transcriptomic analyses, as well as data generation and sharing that adhere to the FAIR (Findable, Accessible, Interoperable, and Reusable) principles. In this review, we summarize the recent advancements in the design and analysis of DNA microarray, RNA sequencing (RNA-Seq), and single-cell RNA-Seq (scRNA-Seq) data. We provide guidelines on exposure time, dose and complex endpoint selection, sample quality considerations and sample randomization. Furthermore, we summarize publicly available data resources and highlight applications of TGx data to understand and predict chemical toxicity potential. Additionally, we discuss the efforts to implement TGx into regulatory decision making to promote alternative methods for risk assessment and to support the 3R (reduction, refinement, and replacement) concept. This review is the first part of a three-article series on Transcriptomics in Toxicogenomics. These initial considerations on experimental design, technologies, publicly available data and regulatory aspects are the starting point for the rigorous and reliable data preprocessing and modeling described in the second and third parts of the review series.

16.
Comput Struct Biotechnol J ; 18: 583-602, 2020.
Article in English | MEDLINE | ID: mdl-32226594

ABSTRACT

Nanotechnology has enabled the discovery of a multitude of novel materials exhibiting unique physicochemical (PChem) properties compared to their bulk analogues. These properties have led to a rapidly increasing range of commercial applications; this, however, may come at a cost if an association with long-term health and environmental risks is discovered or even just perceived. Many nanomaterials (NMs) have not yet had their potential adverse biological effects fully assessed, due to the costs and time constraints associated with experimental assessment, which frequently involves animals. Here, the available NM libraries are analyzed for their suitability for integration with novel nanoinformatics approaches and for the development of NM-specific Integrated Approaches to Testing and Assessment (IATA) for human and environmental risk assessment, all within the NanoSolveIT cloud platform. These established and well-characterized NM libraries (e.g. NanoMILE, NanoSolutions, NANoREG, NanoFASE, caLIBRAte, NanoTEST and the Nanomaterial Registry (>2000 NMs)) contain physicochemical characterization data as well as data for several relevant biological endpoints, assessed in part using harmonized Organisation for Economic Co-operation and Development (OECD) methods and test guidelines. Integration of such extensive NM information sources with the latest nanoinformatics methods will allow NanoSolveIT to model the relationships between NM structure (morphology), properties and their adverse effects, and to predict the effects of other NMs for which fewer data are available. The project specifically addresses the needs of regulatory agencies and industry to effectively and rapidly evaluate the exposure, hazard and risk from nanomaterials and nano-enabled products, enabling the implementation of computational 'safe-by-design' approaches to facilitate NM commercialization.

17.
RSC Adv ; 10(9): 5385-5391, 2020 Jan 29.
Article in English | MEDLINE | ID: mdl-35498319

ABSTRACT

The use of in silico approaches for the prediction of biomedical properties of nano-biomaterials (NBMs) can play a significant role in guiding and reducing wet-lab experiments. Computational methods, such as data mining and machine learning techniques, can increase the efficiency and reduce the time and cost required for hazard and risk assessment and for designing new, safer NBMs. A major obstacle in developing accurate and well-validated in silico models such as Nano Quantitative Structure-Activity Relationships (Nano-QSARs) is that, although the volume of data published in the literature is increasing, the data are fragmented across many different publications and are not sufficiently curated for modelling purposes. Moreover, NBMs exhibit high complexity and heterogeneity in their structures, making data collection and curation and QSAR model development more challenging compared to traditional small molecules. The aim of this study was to construct and fully validate a Nano-QSAR model for the prediction of toxicological properties of superparamagnetic iron oxide nanoparticles (SPIONs), focusing on their application as Magnetic Resonance Imaging (MRI) contrast agents for non-invasive stem cell labelling and tracking. To achieve this goal, we first performed an extensive search of the literature to collect and curate relevant data, and we developed a dataset containing both physicochemical and toxicological properties of SPIONs. The data were then analysed using automated machine learning (Auto-ML) approaches to optimise the development and validation of nanotoxicity classification QSAR models of SPIONs. Further analysis of relative attribute importances revealed that physicochemical properties such as the size and the magnetic core are the dominant attributes correlated with the toxicity of SPIONs. Our results suggest that, as more systematic information from NBM experimental tests becomes available, computational tools could play an important role in supporting the safety-by-design (SbD) concept in regenerative medicine and disease therapeutics.
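Auto-ML frameworks automate model and hyperparameter selection; the cross-validated grid search below is a much-simplified stand-in for that idea, applied to synthetic SPION descriptors. The feature set, candidate models and labels are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)

# Synthetic SPION descriptors (e.g. core size, hydrodynamic size, zeta potential, coating flag).
X = rng.normal(size=(150, 4))
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(0, 0.5, 150) > 0).astype(int)   # mock toxicity classes

pipe = Pipeline([("scale", StandardScaler()), ("clf", LogisticRegression(max_iter=1000))])
search_space = [
    {"clf": [LogisticRegression(max_iter=1000)], "clf__C": [0.1, 1.0, 10.0]},
    {"clf": [RandomForestClassifier(random_state=0)], "clf__n_estimators": [100, 300]},
]
search = GridSearchCV(pipe, search_space, cv=5, scoring="balanced_accuracy")
search.fit(X, y)
print("best model:", search.best_params_, "| CV balanced accuracy: %.2f" % search.best_score_)
```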

18.
Curr Top Med Chem ; 20(4): 305-317, 2020.
Article in English | MEDLINE | ID: mdl-31878856

ABSTRACT

AIMS: Cheminformatics models are able to predict different outputs (activity, property, chemical reactivity) for single molecules or complex molecular systems (catalyzed organic synthesis, metabolic reactions, nanoparticles, etc.). OBJECTIVE: Cheminformatics prediction of complex catalytic enantioselective reactions is a major goal in organic synthesis research and the chemical industry. Markov Chain Molecular Descriptors (MCDs) have been widely used to solve cheminformatics problems. There are different types of Markov chain descriptors, such as Markov-Shannon entropies (Shk), Markov means (Mk) and Markov moments (πk); however, other possible MCDs have not been used before. In addition, MCDs are very often calculated with specific software that is not always available to general users, and there is no publicly available R library for their calculation. This limits the availability of MCD-based cheminformatics procedures. METHODS: We studied the enantiomeric excess ee(%)[Rcat] for 324 α-amidoalkylation reactions. These reactions have a complex mechanism depending on various factors. The model includes MCDs of the substrate, solvent, chiral catalyst and product, along with values of reaction time, temperature, catalyst loading, etc. We tested several machine learning regression algorithms. The Random Forest regression model has R2 > 0.90 in both training and test. Secondly, the biological activity of 5644 compounds against colorectal cancer was studied. RESULTS: We developed a model able to predict the outcomes of preclinical assays with specificity and sensitivity of 70-82% in both training and validation series. CONCLUSION: The work shows the potential of the new tool for computational studies in organic and medicinal chemistry.
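A rough sketch of one Markov chain descriptor family mentioned above (Markov-Shannon entropies): a molecular graph's adjacency matrix is row-normalised into a transition matrix, and the Shannon entropy of the k-step transition probabilities is computed. The toy graph and normalisation choices are illustrative, not a reproduction of the authors' software.

```python
import numpy as np

def markov_shannon_entropies(adjacency, k_max=3):
    """Shannon entropy of the k-step Markov transition probabilities, k = 1..k_max."""
    A = np.asarray(adjacency, dtype=float)
    P = A / A.sum(axis=1, keepdims=True)          # row-stochastic transition matrix
    entropies = []
    Pk = np.eye(len(A))
    for _ in range(k_max):
        Pk = Pk @ P                               # k-step transition probabilities
        p = Pk.flatten()
        p = p[p > 0] / p.sum()                    # pool into a single distribution
        entropies.append(float(-(p * np.log2(p)).sum()))
    return entropies

# Toy molecular graph (adjacency matrix of a 4-atom chain), for illustration only.
chain = [[0, 1, 0, 0],
         [1, 0, 1, 0],
         [0, 1, 0, 1],
         [0, 0, 1, 0]]
print([round(h, 3) for h in markov_shannon_entropies(chain)])
```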


Subjects
Cheminformatics, Chemistry, Pharmaceutical, Markov Chains, Algorithms, Humans, Machine Learning
19.
J Pharmacokinet Pharmacodyn ; 46(2): 173-192, 2019 04.
Article in English | MEDLINE | ID: mdl-30949914

ABSTRACT

The aim of this study is to benchmark two Bayesian software tools, namely Stan and GNU MCSim, that use different Markov chain Monte Carlo (MCMC) methods for the estimation of physiologically based pharmacokinetic (PBPK) model parameters. The software tools were applied and compared on the problem of updating the parameters of a diazepam PBPK model using human time-concentration data. Both tools produced very good fits at the individual and population levels, despite the fact that GNU MCSim is not able to consider multivariate distributions. Stan outperformed GNU MCSim in sampling efficiency, due to its almost uncorrelated sampling. However, GNU MCSim exhibited much faster convergence and performed better in terms of effective samples produced per unit of time.
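The comparison metric mentioned above, effective samples per unit of time, can be illustrated with a small sketch that estimates effective sample size (ESS) from chain autocorrelation and divides by wall-clock time. The autocorrelated chains and timings below are synthetic placeholders, not the Stan or GNU MCSim results.

```python
import numpy as np

def effective_sample_size(chain):
    """Crude ESS estimate: N / (1 + 2 * sum of positive-lag autocorrelations)."""
    x = np.asarray(chain) - np.mean(chain)
    n = len(x)
    acf = np.correlate(x, x, mode="full")[n - 1:] / (np.arange(n, 0, -1) * x.var())
    tau = 1.0
    for rho in acf[1:]:
        if rho < 0.05:            # truncate once autocorrelation becomes negligible
            break
        tau += 2.0 * rho
    return n / tau

rng = np.random.default_rng(3)

def ar1_chain(phi, n=5000):
    """Synthetic AR(1) draws standing in for MCMC output with autocorrelation phi."""
    x = np.zeros(n)
    for i in range(1, n):
        x[i] = phi * x[i - 1] + rng.normal()
    return x

# Placeholder samplers: low autocorrelation but slow vs. higher autocorrelation but fast.
chains = {"sampler A": (ar1_chain(0.1), 120.0),   # (draws, runtime in seconds)
          "sampler B": (ar1_chain(0.6), 30.0)}
for name, (draws, seconds) in chains.items():
    ess = effective_sample_size(draws)
    print(f"{name}: ESS ~ {ess:.0f}, ESS per second ~ {ess / seconds:.1f}")
```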


Subjects
Diazepam/pharmacokinetics, Adult, Bayes Theorem, Computer Simulation, Female, Humans, Male, Markov Chains, Models, Biological, Monte Carlo Method, Software
20.
Nanoscale Adv ; 1(2): 706-718, 2019 Feb 12.
Article in English | MEDLINE | ID: mdl-36132268

ABSTRACT

Multi-walled carbon nanotubes are currently used in numerous industrial applications and products; therefore, fast and accurate evaluation of their biological and toxicological effects is of utmost importance. Computational methods and techniques previously applied in the area of cheminformatics for the prediction of adverse effects of chemicals can also be applied in the case of nanomaterials (NMs), in an effort to reduce expensive and time-consuming experimental procedures. In this context, a validated and predictive nanoinformatics model has been developed for the accurate prediction of the biological and toxicological profile of decorated multi-walled carbon nanotubes. The nanoinformatics workflow was fully validated according to the OECD principles before it was released online via the Enalos Cloud platform. The web service is a ready-to-use, user-friendly application whose purpose is to facilitate decision making as part of a safe-by-design framework for novel carbon nanotubes.
