Results 1 - 6 of 6
1.
Environ Pollut; 352: 124109, 2024 May 06.
Article in English | MEDLINE | ID: mdl-38718961

ABSTRACT

Exposure assessment is a crucial component of environmental health research, providing essential information on the potential risks associated with various chemicals. A systematic scoping review was conducted to obtain an overview of available human exposure assessment methods and computational tools that can support, and ultimately improve, risk assessment. The review was performed in Sysrev, a web platform that introduces machine learning techniques into the review process, aiming for increased accuracy and efficiency. Included publications were restricted to those published after the year 2000 in which exposure methods were properly described. Exposure assessment methods were found to be used for a broad range of environmental chemicals, including pesticides, metals, persistent chemicals, volatile organic compounds, and other chemical classes. Our results show that since 2000, for all types of exposure routes, the use of probabilistic analysis and computational methods to calculate human exposure has increased. Sixty-three mathematical models and toolboxes were identified, developed in Europe, North America, and globally. However, only twelve occur frequently, and their usefulness was associated with the exposure route, the chemical classes covered, and the input parameters used to estimate exposure. The combined associations can serve as a basis and guide for selecting the most appropriate method and tool for human exposure assessment of environmental chemicals, both within the Ontology-driven and artificial intelligence-based repeated dose toxicity testing of chemicals for next generation risk assessment (ONTOX) project and elsewhere. Finally, our analysis of the input parameters used in each mathematical model and toolbox can contribute to the harmonization of exposure models and tools, improving the prospects for comparison between studies and for consistency in the regulatory process in the future.
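
The probabilistic exposure methods tallied in this review generally propagate input distributions through a standard intake equation rather than multiplying worst-case point values. As a minimal sketch of that idea, assuming invented lognormal/normal inputs for a hypothetical drinking-water scenario (none of these values come from the review), one can Monte Carlo sample the average daily dose ADD = C * IR * EF * ED / (BW * AT):

```python
# Minimal probabilistic exposure sketch (illustrative values only):
# Monte Carlo sampling of the average daily dose
#   ADD = C * IR * EF * ED / (BW * AT)
import numpy as np

rng = np.random.default_rng(42)
n = 100_000  # number of Monte Carlo samples

# Assumed input distributions for a hypothetical drinking-water scenario
conc = rng.lognormal(mean=np.log(0.01), sigma=0.5, size=n)      # concentration C, mg/L
intake = rng.normal(loc=2.0, scale=0.4, size=n).clip(min=0.5)   # intake rate IR, L/day
bw = rng.normal(loc=70.0, scale=12.0, size=n).clip(min=30.0)    # body weight BW, kg
ef, ed, at = 350, 30, 30 * 365  # exposure frequency (d/yr), duration (yr), averaging time (d)

add = conc * intake * ef * ed / (bw * at)  # mg/kg bw/day, as a full distribution

print(f"median ADD:          {np.median(add):.2e} mg/kg/day")
print(f"95th percentile ADD: {np.percentile(add, 95):.2e} mg/kg/day")
```

The output is a distribution of doses, so percentiles can be reported instead of a single conservative point estimate.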

2.
ALTEX; 39(1): 3-29, 2022.
Article in English | MEDLINE | ID: mdl-35034131

ABSTRACT

Safety sciences must cope with the uncertainty of models and results as well as with information gaps. Acknowledging this uncertainty necessitates embracing probabilities and accepting the remaining risk. Every toxicological tool delivers only probable results. Traditionally, this is taken into account by using uncertainty/assessment factors and worst-case/precautionary approaches and thresholds. Probabilistic methods and Bayesian approaches seek to characterize these uncertainties and promise to support better risk assessment and, thereby, improve risk management decisions. Actual assessments of uncertainty can be more realistic than worst-case scenarios and may allow less conservative safety margins. Most importantly, as soon as we agree on uncertainty, this defines room for improvement and allows a transition from traditional to new approach methods as an engineering exercise. The objective nature of these mathematical tools makes it possible to assign each methodology its fair place in evidence integration, whether in the context of risk assessment, systematic reviews, or the definition of an integrated testing strategy (ITS) / defined approach (DA) / integrated approach to testing and assessment (IATA). This article gives an overview of methods for probabilistic risk assessment and their application to exposure assessment, physiologically based kinetic modelling, probability of hazard assessment (based on quantitative and read-across-based structure-activity relationships and on mechanistic alerts from in vitro studies), individual susceptibility assessment, and evidence integration. Additional aspects are opportunities for uncertainty analysis of adverse outcome pathways and their relation to thresholds of toxicological concern. In conclusion, probabilistic risk assessment will be key for constructing a new toxicology paradigm - probably!
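
One quantity such probabilistic assessments can deliver is the probability that exposure exceeds an uncertain toxicological threshold, instead of a binary pass/fail against a conservative safety margin. A minimal sketch, assuming both quantities are lognormal with made-up parameters:

```python
# Probability that exposure exceeds an uncertain hazard threshold,
# assuming both are lognormal (all parameters invented for illustration).
from math import log, sqrt
from statistics import NormalDist

mu_exp, sd_exp = log(0.005), 0.8  # log-scale exposure, mg/kg/day
mu_haz, sd_haz = log(1.0), 1.0    # log-scale hazard threshold, mg/kg/day

# log E - log H is normal for independent lognormals, so
# P(E > H) = P(log E - log H > 0) is a single CDF evaluation.
diff = NormalDist(mu_exp - mu_haz, sqrt(sd_exp**2 + sd_haz**2))
p_exceed = 1.0 - diff.cdf(0.0)

print(f"P(exposure exceeds threshold) = {p_exceed:.3%}")
```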


Subjects
Toxicology, Bayes Theorem, Risk Assessment, Uncertainty
3.
Toxicol Res (Camb); 7(5): 732-744, 2018 Sep 01.
Article in English | MEDLINE | ID: mdl-30310652

ABSTRACT

The creation of large toxicological databases and advances in machine-learning techniques have empowered computational approaches in toxicology. Work with these large databases of regulatory data has allowed reproducibility assessment of animal models, highlighting weaknesses in traditional in vivo methods. This should lower the bar for the introduction of new approaches, and it establishes a benchmark achievable by any alternative method validated against these animal models. Quantitative structure-activity relationship (QSAR) models for skin sensitization, eye irritation, and other human health hazards built on these big databases, however, have also made apparent some of the challenges facing computational modeling, including challenges of validation, model interpretation, and model selection. A first implementation of machine-learning-based predictions, termed REACHacross, achieved unprecedented sensitivities of >80% with specificities >70% in predicting the six most common acute and topical hazards, covering about two thirds of the chemical universe. While this awaits formal validation, it demonstrates the new quality introduced by big data and modern data-mining technologies. The rapid increase in the diversity and number of computational models, as well as in the data they are based on, creates both challenges and opportunities for the use of computational methods.
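
The similarity-based prediction idea behind such tools can be sketched as a nearest-neighbour vote over chemical fingerprints. The following toy example is a generic read-across scheme with invented data and function names, not the REACHacross implementation:

```python
# Minimal read-across sketch: predict a binary hazard label for a query
# chemical from its most similar neighbours (Tanimoto similarity over
# binary fingerprints). Generic illustration only, not REACHacross.

def tanimoto(a: set[int], b: set[int]) -> float:
    """Tanimoto similarity between two fingerprints given as sets of on-bits."""
    union = len(a | b)
    return len(a & b) / union if union else 0.0

def read_across(query: set[int], train: list[tuple[set[int], int]], k: int = 5) -> float:
    """Similarity-weighted fraction of positive labels among the k nearest neighbours."""
    ranked = sorted(train, key=lambda item: tanimoto(query, item[0]), reverse=True)[:k]
    weights = [tanimoto(query, fp) for fp, _ in ranked]
    if sum(weights) == 0:
        return 0.5  # no structural overlap: maximally uncertain
    return sum(w * y for w, (_, y) in zip(weights, ranked)) / sum(weights)

# Toy (assumed) data: fingerprints as on-bit sets, label 1 = hazardous
train = [({1, 4, 9}, 1), ({1, 4, 7}, 1), ({2, 5, 8}, 0), ({3, 5, 8}, 0)]
print(f"P(hazard) ~ {read_across({1, 4, 8}, train, k=3):.2f}")
```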

4.
ALTEX; 34(4): 459-478, 2017.
Article in English | MEDLINE | ID: mdl-29101769

ABSTRACT

Computational prediction of toxicity has reached new heights as a result of decades of growth in the magnitude and diversity of biological data. Public packages for statistics and machine learning make model creation faster. New theory in machine learning and cheminformatics enables the integration of chemical structure, toxicogenomics, and simulated and physical data in the prediction of chemical health hazards and other toxicological information. Our earlier publications characterized a toxicological dataset of unprecedented scale resulting from the European REACH legislation (Registration, Evaluation, Authorisation and Restriction of Chemicals) and explored potential use cases for regulatory data together with some models for exploiting these data. This article analyzes the options for the identification and categorization of chemicals, moves on to the derivation of descriptive features for chemicals, discusses the different kinds of targets modeled in computational toxicology, and ends with a high-level perspective on the algorithms used to create computational toxicology models.
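
As a concrete illustration of the descriptor-derivation step discussed here, the sketch below computes Morgan fingerprints from SMILES strings and fits a simple classifier. It assumes RDKit and scikit-learn are available; the chemicals, labels, and settings are placeholders, not anything from the article:

```python
# Descriptor derivation and model fitting, sketched end to end:
# Morgan fingerprints from SMILES, then a simple classifier.
import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem
from sklearn.linear_model import LogisticRegression

def featurize(smiles: str, n_bits: int = 1024) -> np.ndarray:
    """Radius-2 Morgan (circular) fingerprint as a dense 0/1 vector."""
    mol = Chem.MolFromSmiles(smiles)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=n_bits)
    return np.array(list(fp), dtype=np.uint8)

smiles = ["CCO", "c1ccccc1", "CC(=O)O", "ClCCl"]  # toy chemicals
labels = [0, 1, 0, 1]                             # toy hazard labels

X = np.vstack([featurize(s) for s in smiles])
model = LogisticRegression(max_iter=1000).fit(X, labels)
print(model.predict_proba(featurize("CCCl").reshape(1, -1)))
```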


Subjects
Computer Simulation, Chemical Models, Proportional Hazards Models, Humans, Machine Learning, Quantitative Structure-Activity Relationship, Risk Assessment, Toxicology
5.
J Appl Toxicol; 35(11): 1361-1371, 2015 Nov.
Article in English | MEDLINE | ID: mdl-26046447

ABSTRACT

Supervised learning methods promise to improve integrated testing strategies (ITS) but must be adapted to handle high dimensionality and dose-response data. ITS approaches are currently fueled by the increasing mechanistic understanding of adverse outcome pathways (AOP) and the development of tests reflecting these mechanisms. Simple approaches to combining skin sensitization data sets, such as weight of evidence, fail owing to information redundancy and high dimensionality. The problem is further amplified when potency information (dose/response) of hazards must be estimated. Skin sensitization currently serves as the poster child for AOP and ITS development, as legislative pressures combined with a very good mechanistic understanding of contact dermatitis have led to test development and relatively large, high-quality data sets. We curated such a data set and applied a recursive variable selection algorithm to evaluate the information available through in silico, in chemico, and in vitro assays. Chemical similarity alone could not cluster chemicals by potency, and in vitro models consistently ranked high in recursive feature elimination, which allows the number of tests included in an ITS to be reduced. Next, we analyzed the data with a hidden Markov model that exploits an intrinsic inter-relationship among the local lymph node assay classes, i.e., the monotonic relationship between class and dose. The dose-informed random forest/hidden Markov model was superior to the dose-naive random forest model on all data sets. Although the improvement in balanced accuracy may seem small, this obscures the actual improvement in misclassifications, as the dose-informed hidden Markov model strongly reduced false negatives (i.e., extreme sensitizers classified as non-sensitizers) on all data sets.
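
The recursive variable selection step can be sketched with off-the-shelf tooling; the example below runs cross-validated recursive feature elimination with a random forest over toy assay readouts. The feature names are merely suggestive of common skin sensitization assays, and the data are random placeholders, not the curated data set from the paper:

```python
# Recursive feature elimination over assay readouts, in the spirit of
# the variable selection described above. Toy data and invented columns.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFECV

rng = np.random.default_rng(0)
feature_names = ["logP", "DPRA_%depletion", "KeratinoSens_EC1.5",
                 "h-CLAT_CV75", "in_silico_alert", "peptide_rate"]
X = rng.normal(size=(120, len(feature_names)))  # toy assay readouts
y = (X[:, 1] + 0.5 * X[:, 2] + rng.normal(size=120) > 0).astype(int)  # toy labels

selector = RFECV(RandomForestClassifier(n_estimators=200, random_state=0),
                 step=1, cv=5, scoring="balanced_accuracy")
selector.fit(X, y)

kept = [name for name, keep in zip(feature_names, selector.support_) if keep]
print("features retained for the ITS:", kept)
```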


Subjects
Animal Testing Alternatives/methods, Machine Learning, Quantitative Structure-Activity Relationship, Skin Tests/methods, Toxicity Tests/methods, Algorithms, Factual Databases, Drug Dose-Response Relationship, Humans, Brominated Hydrocarbons/toxicity, Local Lymph Node Assay, Markov Chains, Risk Assessment, Skin/drug effects, Skin/metabolism
6.
ALTEX; 32(1): 25-40, 2015.
Article in English | MEDLINE | ID: mdl-25413849

ABSTRACT

Integrated testing strategies (ITS), as opposed to single definitive tests or fixed batteries of tests, are expected to combine different information sources efficiently and in a quantifiable fashion to satisfy an information need, in this case for regulatory safety assessments. With increasing awareness of the limitations of each individual tool and the development of highly targeted tests and predictions, the need to combine pieces of evidence grows. The discussions at this workshop, which brought together experts from different related areas, illustrate the current state of the art of ITS as well as promising developments and identifiable challenges. The case of skin sensitization was taken as an example of how possible ITS can be constructed, optimized, and validated. This will require embracing and developing new concepts such as adverse outcome pathways (AOP), advanced statistical learning algorithms and machine learning, mechanistic validation, and "Good ITS Practices".
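
One way to make the quantifiable combination of information sources concrete is a naive Bayes update over individual test results, each characterized by its sensitivity and specificity. The sketch below uses assumed test characteristics for illustration only; it is not a scheme endorsed by the workshop:

```python
# Combine independent test results into a posterior probability of hazard
# via likelihood ratios (naive Bayes). All numbers are assumed placeholders.

def update(prior: float, sensitivity: float, specificity: float, positive: bool) -> float:
    """One Bayesian update of P(hazard) given a positive or negative test result."""
    lr = sensitivity / (1 - specificity) if positive else (1 - sensitivity) / specificity
    odds = prior / (1 - prior) * lr
    return odds / (1 + odds)

p = 0.20  # assumed prevalence of sensitizers in the chemical domain
evidence = [
    ("DPRA",         0.80, 0.75, True),   # (test, sensitivity, specificity, result)
    ("KeratinoSens", 0.77, 0.79, True),
    ("h-CLAT",       0.85, 0.70, False),
]
for name, se, sp, pos in evidence:
    p = update(p, se, sp, pos)
    print(f"after {name:12s} ({'+' if pos else '-'}): P(sensitizer) = {p:.2f}")
```

The independence assumption is exactly the kind of simplification that information redundancy between tests can break, which is why more advanced evidence-integration schemes are discussed above.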


Subjects
Animal Testing Alternatives, Toxicity Tests/methods, Animals, Europe, Humans, Risk Assessment