Results 1 - 8 of 8
1.
Anal Chim Acta ; 1317: 342869, 2024 Aug 15.
Article in English | MEDLINE | ID: mdl-39029998

ABSTRACT

BACKGROUND: The chemical space comprises a vast number of possible structures, an unknown portion of which makes up the human and environmental exposome. Such samples are frequently analyzed by non-targeted analysis using liquid chromatography (LC) coupled to high-resolution mass spectrometry, often employing a reversed-phase (RP) column. However, prior to analysis the contents of these samples are unknown and may comprise thousands of known and unknown chemical constituents. Moreover, it is unknown which part of the chemical space is sufficiently retained and eluted using RPLC. RESULTS: We present a generic framework that uses a data-driven approach to predict whether molecules fall 'inside', 'maybe' inside, or 'outside' of the RPLC subspace. First, three retention index random forest (RF) regression models were constructed, which showed that molecular fingerprints are able to predict RPLC retention behavior. Second, these models were used to set up the dataset for building an RPLC RF classification model. The classification model correctly predicted whether a chemical belonged to the RPLC subspace with an accuracy of 92% on the test set. Finally, applying this model to the 91,737 small molecules (i.e., ≤1,000 Da) in NORMAN SusDat showed that 19.1% fall 'outside' the RPLC subspace. SIGNIFICANCE AND NOVELTY: The RPLC chemical space model provides a major step towards mapping the chemical space and can assess whether chemicals can potentially be measured with an RPLC method (i.e., not every RPLC method) or whether a different selectivity should be considered. Moreover, knowing which chemicals fall outside the RPLC subspace can help reduce the candidate list for library searching and avoid screening for chemicals that will not be present in RPLC data.
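The three-class 'inside'/'maybe'/'outside' labelling described in this abstract can be sketched in a minimal form. This is an illustrative sketch, not the authors' code: the threshold and margin values, and the idea of labelling directly from a predicted retention index, are hypothetical simplifications of the regression-then-classification pipeline the paper describes.

```python
# Hypothetical sketch: mapping a predicted retention index onto a
# three-class RPLC-subspace label. All numeric thresholds are invented.

def rplc_subspace_label(predicted_ri, lower=100.0, upper=1000.0, margin=50.0):
    """Label a chemical by whether its predicted retention index suggests
    usable RPLC retention ('inside'), clearly not ('outside'),
    or a boundary case ('maybe')."""
    if predicted_ri < lower - margin or predicted_ri > upper + margin:
        return "outside"   # unlikely to be sufficiently retained/eluted
    if lower <= predicted_ri <= upper:
        return "inside"    # expected to show usable RPLC retention
    return "maybe"         # near a boundary: uncertain

labels = [rplc_subspace_label(ri) for ri in (40.0, 75.0, 500.0, 1100.0)]
```

In the paper the final decision is made by a random forest classifier trained on molecular fingerprints; the threshold rule above only illustrates how regression outputs can seed such class labels.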

2.
J Hazard Mater ; 469: 133955, 2024 May 05.
Article in English | MEDLINE | ID: mdl-38457976

ABSTRACT

The complexity of the dynamic markets for new psychoactive substances (NPS) forces researchers to develop and apply innovative analytical strategies to detect and identify them in influent urban wastewater. In this work, a comprehensive suspect screening workflow following liquid chromatography - high-resolution mass spectrometry analysis was established utilising the open-source InSpectra data processing platform and the HighResNPS library. In total, 278 influent urban wastewater samples from 47 sites in 16 countries were collected to investigate the presence of NPS and other drugs of abuse. A total of 50 compounds were detected in samples from at least one site. Most compounds found were prescription drugs such as gabapentin (detection frequency 79%), codeine (40%) and pregabalin (15%). However, cocaine was the most frequently found illicit drug (83%), detected in all countries where samples were collected apart from the Republic of Korea and China. Eight NPS were also identified with this protocol: 3-methylmethcathinone (11%), eutylone (6%), etizolam (2%), 3-chloromethcathinone (4%), mitragynine (6%), phenibut (2%), 25I-NBOH (2%) and trimethoxyamphetamine (2%). The latter three have not previously been reported in municipal wastewater samples. The workflow employed allowed the prioritisation of features to be investigated further, reducing processing time and increasing confidence in their identification.


Subjects
Illicit Drugs; Water Pollutants, Chemical; Wastewater; Workflow; Psychotropic Drugs; China; Water Pollutants, Chemical/analysis
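The core of a suspect screening step like the one in the abstract above is accurate-mass matching of detected features against a suspect list. The sketch below is illustrative only (it is not the InSpectra workflow), and the m/z values and suspect name are hypothetical examples.

```python
# Hypothetical sketch of accurate-mass suspect screening.
# All m/z values and the suspect name are invented examples.

def ppm_error(measured_mz, theoretical_mz):
    """Mass error of a measurement in parts per million."""
    return (measured_mz - theoretical_mz) / theoretical_mz * 1e6

def screen_suspects(feature_mzs, suspects, tol_ppm=5.0):
    """Match detected feature m/z values against (name, theoretical m/z)
    suspect entries within a ppm tolerance."""
    hits = []
    for name, mz in suspects:
        for f in feature_mzs:
            if abs(ppm_error(f, mz)) <= tol_ppm:
                hits.append((name, f))
    return hits

hits = screen_suspects([304.1550, 512.3001], [("suspect_A", 304.1543)])
```

In practice a hit like this is only a starting point: libraries such as HighResNPS add fragment information that raises the identification confidence beyond an accurate-mass match.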
3.
Anal Chem ; 95(33): 12247-12255, 2023 Aug 22.
Article in English | MEDLINE | ID: mdl-37549176

ABSTRACT

Clean high-resolution mass spectra (HRMS) are essential for successful structural elucidation of an unknown feature in nontarget analysis (NTA) workflows. This is a crucial step, particularly for spectra generated during data-independent acquisition or direct-infusion experiments. The most commonly available tools take advantage only of the time domain for spectral cleanup. Here, we present an algorithm that combines time-domain and mass-domain information to perform spectral deconvolution. The algorithm employs a probability-based cumulative neutral loss (CNL) model for fragment deconvolution. The optimized model, with a mass tolerance of 0.005 Da and a scoreCNL threshold of 0.00, achieved a true positive rate (TPr) of 95.0%, a false discovery rate (FDr) of 20.6%, and a reduction rate of 35.4%. Additionally, the CNL model was extensively tested on real samples containing predominantly pesticides at different concentration levels and with matrix effects. Overall, the model obtained a TPr above 88.8%, with FD rates between 33% and 79% and reduction rates between 9% and 45%. Finally, the CNL model was compared with the retention-time-difference method and peak shape correlation analysis, showing that a combination of correlation analysis and the CNL model was the most effective for fragment deconvolution, obtaining a TPr of 84.7%, an FDr of 54.4%, and a reduction rate of 51.0%.
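The mass-domain side of this idea (scoring a candidate fragment by whether its neutral loss from the precursor matches a common loss within a mass tolerance) can be sketched simply. This is not the published CNL model: the loss table, its probabilities, and the scoring rule are hypothetical stand-ins; only the 0.005 Da tolerance and the greater-than-zero score threshold echo values quoted in the abstract.

```python
# Hypothetical sketch of neutral-loss-based fragment scoring.
# The probabilities are invented; the loss masses are standard
# monoisotopic values for H2O, NH3, and CO2.

COMMON_NEUTRAL_LOSSES = {
    18.0106: 0.95,  # H2O
    17.0265: 0.80,  # NH3
    43.9898: 0.60,  # CO2
}

def score_cnl(precursor_mz, fragment_mz, losses=COMMON_NEUTRAL_LOSSES, tol=0.005):
    """Score a fragment by the best-matching common neutral loss
    from the precursor, within a mass tolerance (Da)."""
    loss = precursor_mz - fragment_mz
    best = 0.0
    for ref_loss, probability in losses.items():
        if abs(loss - ref_loss) <= tol:
            best = max(best, probability)
    return best

def keep_fragment(precursor_mz, fragment_mz, threshold=0.0):
    """Retain a fragment only if its score exceeds the threshold."""
    return score_cnl(precursor_mz, fragment_mz) > threshold
```

The published model is cumulative and probability-based over many losses; the lookup above only illustrates the matching-within-tolerance mechanic.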

4.
J Hazard Mater ; 455: 131486, 2023 Aug 05.
Article in English | MEDLINE | ID: mdl-37172382

ABSTRACT

Non-target analysis (NTA) employing high-resolution mass spectrometry (HRMS) coupled with liquid chromatography is increasingly being used to identify chemicals of biological relevance. HRMS datasets are large and complex, making the identification of potentially relevant chemicals extremely challenging. Because they are recorded in vendor-specific formats, interpreting them often relies on vendor-specific software that may not accommodate advancements in data processing. Here we present InSpectra, a vendor-independent automated platform for the systematic detection of newly identified emerging chemical threats. InSpectra is web-based, open-source/open-access, and modular, providing highly flexible and extensible NTA and suspect screening workflows. As a cloud-based platform, InSpectra exploits parallel computing and big-data archiving capabilities, with a focus on sharing and community curation of HRMS data. InSpectra offers a reproducible and transparent approach for the identification, tracking and prioritisation of emerging chemical threats.

5.
J Cheminform ; 15(1): 28, 2023 Feb 24.
Article in English | MEDLINE | ID: mdl-36829215

ABSTRACT

Non-target analysis combined with liquid chromatography high-resolution mass spectrometry is considered one of the most comprehensive strategies for the detection and identification of known and unknown chemicals in complex samples. However, many compounds remain unidentified due to data complexity and the limited number of structures in chemical databases. In this work, we have developed and validated a novel machine learning algorithm to predict retention index (RI) values for structurally (un)known chemicals based on their measured fragmentation pattern. The developed model, for the first time, enabled the prediction of RI values without the need for the exact structure of the chemicals, with an R² of 0.91 and 0.77 and a root mean squared error (RMSE) of 47 and 67 RI units for the NORMAN and amide test sets, respectively. This fragment-based model showed accuracy in RI prediction comparable to conventional descriptor-based models that rely on the known chemical structure, which obtained an R² of 0.85 with an RMSE of 67.
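The two evaluation metrics quoted in this abstract, R² and RMSE, can be computed from predicted and observed retention index values as follows. This is a generic sketch of the metrics themselves, not the authors' model or data.

```python
# Standard regression metrics as used to evaluate retention index
# prediction models: RMSE (in RI units) and coefficient of determination R².
import math

def rmse(y_true, y_pred):
    """Root mean squared error between observed and predicted values."""
    n = len(y_true)
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n)

def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot
```

An RMSE of 47 RI units with R² = 0.91, as reported for the NORMAN test set, means the fragment-based model explains most of the variance while leaving a typical prediction error of a few tens of RI units.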

6.
Anal Chem ; 94(46): 16060-16068, 2022 11 22.
Article in English | MEDLINE | ID: mdl-36318471

ABSTRACT

The majority of liquid chromatography (LC) methods are still developed in a conventional manner, that is, by analysts who rely on their knowledge and experience to make method-development decisions. In this work, a novel, open-source algorithm was developed for automated and interpretive method development of LC(-mass spectrometry) separations ("AutoLC"). A closed-loop workflow was constructed that interacted directly with the LC system and ran unsupervised in an automated fashion. To achieve this, several challenges related to peak tracking, retention modeling, the automated design of candidate gradient profiles, and the simulation of chromatograms were investigated. The algorithm was tested using two newly designed method development strategies. The first utilized retention modeling, whereas the second used a Bayesian-optimization machine learning approach. In both cases, the algorithm arrived within 4-10 iterations (i.e., sets of method parameters) at an optimum of the objective function, which included resolution and analysis time as measures of performance. Retention modeling was found to be more efficient but dependent on peak tracking, whereas Bayesian optimization was more flexible but limited in scalability. We have deliberately designed the algorithm to be modular to facilitate compatibility with previous and future work (e.g., previously published data handling algorithms).


Subjects
Algorithms; Chemometrics; Bayes Theorem; Chromatography, Liquid/methods; Mass Spectrometry/methods
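The closed-loop optimization described in the abstract above scores each candidate method on resolution and analysis time, then picks the next candidate accordingly. A minimal sketch of one such iteration is given below; it is not the AutoLC code, and the objective weights, targets, and the simulate stub are all hypothetical.

```python
# Hypothetical sketch of a method-development objective and one
# evaluate-and-select iteration. Weights and targets are invented.

def objective(min_resolution, analysis_time, target_rs=1.5, max_time=30.0):
    """Score a candidate method: reward baseline resolution (Rs >= target),
    penalise long run times. Higher is better."""
    res_term = min(min_resolution / target_rs, 1.0)
    time_term = max(0.0, 1.0 - analysis_time / max_time)
    return 0.7 * res_term + 0.3 * time_term

def pick_best(candidates, simulate):
    """One loop iteration: simulate each candidate gradient profile
    and keep the highest-scoring one."""
    return max(candidates, key=lambda c: objective(*simulate(c)))

# Stand-in for a chromatogram simulation returning (min resolution, run time).
def fake_simulate(candidate):
    return candidate["rs"], candidate["time"]

best = pick_best([{"rs": 1.0, "time": 10.0}, {"rs": 1.6, "time": 20.0}],
                 fake_simulate)
```

In the paper the candidate proposal step is handled either by retention modeling or by Bayesian optimization rather than exhaustive evaluation; the sketch only shows the scoring side of the loop.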
7.
Molecules ; 27(19)2022 Sep 29.
Article in English | MEDLINE | ID: mdl-36234961

ABSTRACT

High-resolution mass spectrometry is a promising technique in non-target screening (NTS) for monitoring contaminants of emerging concern in complex samples. Current chemical identification strategies in NTS experiments typically depend on spectral libraries, chemical databases, and in silico fragmentation tools. However, small-molecule identification remains challenging due to the lack of orthogonal sources of information (e.g., unique fragments). Collision cross section (CCS) values measured by ion mobility spectrometry (IMS) offer an additional identification dimension to increase the confidence level. Thanks to advances in analytical instrumentation, increasing application of IMS hyphenated with high-resolution mass spectrometry (HRMS) in NTS has been reported in recent decades, and several CCS prediction tools have been developed. However, few CCS prediction methods have been based on a broad range of chemical classes and cross-platform CCS measurements. We successfully developed two prediction models using a random forest machine learning algorithm. One approach was based on chemicals' super classes; the other model predicted CCS directly from molecular fingerprints. A total of 13,324 CCS values from six different laboratories and PubChem, acquired using a variety of ion-mobility separation techniques, were used for training and testing the models. The test accuracy for all the prediction models was over 0.85, and the median relative residual was around 2.2%. The models can be applied to different IMS platforms to eliminate false positives in small-molecule identification.


Subjects
Ion Mobility Spectrometry; Small Molecule Libraries; Algorithms; Machine Learning; Mass Spectrometry; Small Molecule Libraries/chemistry
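The false-positive elimination step described in the abstract above amounts to discarding candidate structures whose predicted CCS deviates too far from the measured value. A minimal sketch, with a hypothetical deviation cutoff and invented example values:

```python
# Hypothetical sketch of CCS-based candidate filtering.
# The 3% cutoff and all CCS values are invented examples.

def relative_residual(ccs_pred, ccs_measured):
    """Relative CCS deviation in percent."""
    return abs(ccs_pred - ccs_measured) / ccs_measured * 100.0

def filter_candidates(candidates, ccs_measured, max_dev_pct=3.0):
    """Keep only candidate structures (name, predicted CCS) whose
    prediction falls within max_dev_pct of the measured CCS."""
    return [name for name, ccs_pred in candidates
            if relative_residual(ccs_pred, ccs_measured) <= max_dev_pct]

kept = filter_candidates([("candidate_A", 151.0), ("candidate_B", 165.0)], 150.0)
```

With a median relative residual around 2.2%, as reported, a cutoff of a few percent retains most true candidates while removing structures whose predicted CCS is clearly inconsistent with the measurement.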
8.
Sci Data ; 8(1): 223, 2021 08 24.
Article in English | MEDLINE | ID: mdl-34429429

ABSTRACT

Non-target analysis (NTA) employing high-resolution mass spectrometry is a commonly applied approach for the detection of novel chemicals of emerging concern in complex environmental samples. NTA typically results in large and information-rich datasets that require computer-aided (ideally automated) strategies for their processing and interpretation. Such strategies do, however, raise the challenge of reproducibility between and within different processing workflows. An effective strategy to mitigate such problems is the implementation of inter-laboratory studies (ILS) with the aim of evaluating different workflows and agreeing on harmonized/standardized quality control procedures. Here we present the data generated during such an ILS. This study was organized through the NORMAN network and included 21 participants from 11 countries. A set of samples based on passive sampling of drinking water pre- and post-treatment was shipped to all participating laboratories for analysis using one pre-defined method and one locally (i.e., in-house) developed method. The data generated represent a valuable resource (i.e., benchmark) for future development of algorithms and workflows for NTA experiments.


Subjects
Benchmarking; Drinking Water/analysis; Mass Spectrometry; Algorithms; Laboratories; Workflow
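One simple way to compare workflows on an inter-laboratory benchmark like the one above is the overlap of the feature lists each laboratory reports. The Jaccard index below is a generic similarity measure, offered here as an illustrative sketch; the abstract does not state which comparison metrics the study used.

```python
# Generic sketch: Jaccard similarity between two reported feature sets,
# e.g. feature identifiers from two laboratories' NTA workflows.

def jaccard(features_a, features_b):
    """Intersection-over-union of two feature collections.
    Returns 1.0 for two empty collections (identical by convention)."""
    a, b = set(features_a), set(features_b)
    union = a | b
    return len(a & b) / len(union) if union else 1.0

overlap = jaccard(["feat_1", "feat_2", "feat_3"], ["feat_2", "feat_3", "feat_4"])
```

A low overlap between laboratories running nominally identical methods is exactly the kind of reproducibility gap an ILS dataset makes visible.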