Results 1 - 20 of 52
1.
Anal Chem ; 96(22): 9294-9301, 2024 06 04.
Article in English | MEDLINE | ID: mdl-38758734

ABSTRACT

Despite the high gain in peak capacity, online comprehensive two-dimensional liquid chromatography coupled with high-resolution mass spectrometry (LC × LC-HRMS) has not yet been widely applied to the analysis of complex protein digests. One reason is the method's reduced sensitivity, which can be linked to the high flow rates of the second separation dimension (2D). This results in higher dilution factors and the need for flow splitters to couple to ESI-MS. This study reports proof-of-principle results of the development of an RPLC × RPLC-HRMS method using parallel gradients (2D flow rate of 0.7 mL min-1) and its comparison to shifted gradient methods (2D flow rate of 1.4 mL min-1) for the analysis of complex digests using HRMS (QExactive-Plus MS). Shifted and parallel gradients resulted in high surface coverage (SC) and effective peak capacity (SC of 0.6226 and 0.7439 and effective peak capacity of 779 and 757 in 60 min). When applied to a cell line digest sample, parallel gradients allowed higher sensitivity (e.g., average MS intensity increased by a factor of 3), allowing for a higher number of identifications (e.g., about 2600 vs 3900 peptides). In addition, reducing the modulation time to 10 s significantly increased the number of MS/MS events that could be performed. When compared to a 1D-RPLC method, parallel RPLC × RPLC-HRMS methods offered a higher separation performance (FWHM from 0.12 to 0.018 min) with limited sensitivity losses, resulting in an increase of analyte identifications (e.g., about 6000 vs 7000 peptides and 1500 vs 1990 proteins).


Subjects
Mass Spectrometry, Proteins, Liquid Chromatography/methods, Proteins/analysis, Proteins/metabolism, Humans, Mass Spectrometry/methods
2.
Anal Bioanal Chem ; 2024 Jul 12.
Article in English | MEDLINE | ID: mdl-38995405

ABSTRACT

Feature detection plays a crucial role in non-target screening (NTS), requiring careful selection of algorithm parameters to minimize false positive (FP) features. In this study, a stochastic approach was employed to optimize the parameter settings of feature detection algorithms used in processing high-resolution mass spectrometry data. This approach was demonstrated using four open-source algorithms (OpenMS, SAFD, XCMS, and KPIC2) within the patRoon software platform for processing extracts from drinking water samples spiked with 46 per- and polyfluoroalkyl substances (PFAS). The designed method is based on a stochastic strategy involving random sampling from the variable space and the use of Pearson correlation to assess the impact of each parameter on the number of detected suspect analytes. Using our approach, the optimized parameters improved algorithm performance by increasing suspect hits in the case of SAFD and XCMS, and by reducing the total number of detected features (i.e., minimizing FP) for OpenMS. These improvements were further validated on three different drinking water samples as a test dataset. The optimized parameters resulted in a lower false discovery rate (FDR%) compared to the default parameters, effectively increasing the detection of true positive features. This work also highlights the necessity of optimizing algorithm parameters prior to starting an NTS study to reduce the complexity of such datasets.
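The stochastic strategy described in this abstract can be sketched in a few lines of Python: draw random samples from the parameter space, score each sample, and use Pearson correlation to rank each parameter's influence. The scoring function below is a hypothetical stand-in for an actual feature-detection run (the real objective would reprocess raw data with, e.g., patRoon); all parameter names and numbers are invented.

```python
import random

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical stand-in for one feature-detection run: returns how many of
# the 46 spiked PFAS suspects are recovered at a given noise threshold and
# minimum peak width (both invented parameters).
def suspect_hits(noise_threshold, min_peak_width):
    return 46 - 30 * noise_threshold - 2 * abs(min_peak_width - 0.1)

random.seed(1)
params = [(random.uniform(0, 1), random.uniform(0, 0.5)) for _ in range(200)]
hits = [suspect_hits(t, w) for t, w in params]

# Correlate each parameter with the hit count to rank its influence.
r_threshold = pearson([t for t, _ in params], hits)
r_width = pearson([w for _, w in params], hits)
```

In this toy landscape the noise threshold dominates the suspect-hit count (strong negative correlation), so it would be prioritized for tuning, while the width parameter shows little linear influence.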

3.
Anal Chem ; 95(33): 12247-12255, 2023 Aug 22.
Article in English | MEDLINE | ID: mdl-37549176

ABSTRACT

Clean high-resolution mass spectra (HRMS) are essential to a successful structural elucidation of an unknown feature during nontarget analysis (NTA) workflows. This is a crucial step, particularly for the spectra generated during data-independent acquisition or during direct infusion experiments. The most commonly available tools only take advantage of the time domain for spectral cleanup. Here, we present an algorithm that combines the time domain and mass domain information to perform spectral deconvolution. The algorithm employs a probability-based cumulative neutral loss (CNL) model for fragment deconvolution. The optimized model, with a mass tolerance of 0.005 Da and a scoreCNL threshold of 0.00, was able to achieve a true positive rate (TPr) of 95.0%, a false discovery rate (FDr) of 20.6%, and a reduction rate of 35.4%. Additionally, the CNL model was extensively tested on real samples containing predominantly pesticides at different concentration levels and with matrix effects. Overall, the model was able to obtain a TPr above 88.8% with FD rates between 33 and 79% and reduction rates between 9 and 45%. Finally, the CNL model was compared with the retention time difference method and peak shape correlation analysis, showing that a combination of correlation analysis and the CNL model was the most effective for fragment deconvolution, obtaining a TPr of 84.7%, an FDr of 54.4%, and a reduction rate of 51.0%.
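A minimal sketch of neutral-loss-based fragment filtering, assuming the 0.005 Da tolerance given in the abstract. The loss list here is a tiny illustrative subset and the m/z values are invented; the actual CNL model is probability-based and cumulative, which this sketch does not reproduce.

```python
MASS_TOL = 0.005  # Da, the mass tolerance reported in the abstract

# A tiny illustrative set of common neutral losses (monoisotopic, Da).
COMMON_LOSSES = {"H2O": 18.0106, "NH3": 17.0265, "CO": 27.9949, "CO2": 43.9898}

def plausible_fragments(precursor_mz, fragment_mzs):
    """Keep fragments whose neutral loss from the precursor matches a known loss."""
    kept = []
    for frag in fragment_mzs:
        loss = precursor_mz - frag
        if any(abs(loss - ref) <= MASS_TOL for ref in COMMON_LOSSES.values()):
            kept.append(frag)
    return kept

# Hypothetical precursor and candidate fragments: losses of H2O and CO2 pass,
# an unexplained 100.12 Da loss is rejected.
kept = plausible_fragments(250.1200, [232.1094, 206.1302, 150.0000])
```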

4.
Anal Chem ; 95(50): 18361-18369, 2023 12 19.
Article in English | MEDLINE | ID: mdl-38061068

ABSTRACT

The use of peak-picking algorithms is an essential step in all nontarget analysis (NTA) workflows. However, algorithm choice may influence reliability and reproducibility of results. Using a real-world data set, the aim of this study was to investigate how different peak-picking algorithms influence NTA results when exploring temporal and/or spatial trends. For this, drinking water catchment monitoring data, using passive samplers collected twice per year across Southeast Queensland, Australia (n = 18 sites) between 2014 and 2019, was investigated. Data were acquired using liquid chromatography coupled to high-resolution mass spectrometry. Peak picking was performed using five different programs/algorithms (SCIEX OS, MSDial, self-adjusting-feature-detection, two algorithms within MarkerView), keeping parameters identical whenever possible. The resulting feature lists revealed low overlap: 7.2% of features were picked by >3 algorithms, while 74% of features were only picked by a single algorithm. Trend evaluation of the data, using principal component analysis, showed significant variability between the approaches, with only one temporal and no spatial trend being identified by all algorithms. Manual evaluation of features of interest (p-value <0.01, log fold change >2) for one sampling site revealed high rates of incorrectly picked peaks (>70%) for three algorithms. Lower rates (<30%) were observed for the other algorithms, but with the caveat of not successfully picking all internal standards used as quality control. The choice is therefore currently between comprehensive and strict peak picking, either resulting in increased noise or missed peaks, respectively. Reproducibility of NTA results remains challenging when applied for regulatory frameworks.
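The overlap statistic reported above (7.2% of features picked by more than three algorithms, 74% by only one) can be illustrated as follows. The exact-key matching is a simplification of real tolerance-based feature alignment, and all feature lists are invented.

```python
# Exact-key matching is a simplification: a real comparison must match
# features across algorithms within m/z and retention-time tolerances.
def feature_key(mz, rt, mz_decimals=3, rt_decimals=1):
    return (round(mz, mz_decimals), round(rt, rt_decimals))

def overlap_counts(feature_lists):
    counts = {}
    for features in feature_lists:
        for key in {feature_key(mz, rt) for mz, rt in features}:
            counts[key] = counts.get(key, 0) + 1
    return counts

# Invented (m/z, RT in min) feature lists from three peak pickers.
picker_a = [(301.1410, 5.52), (455.2900, 8.01)]
picker_b = [(301.1412, 5.54), (120.0800, 2.33)]
picker_c = [(301.1409, 5.49), (455.2904, 8.03), (600.0000, 9.99)]
counts = overlap_counts([picker_a, picker_b, picker_c])
picked_by_one = sum(1 for c in counts.values() if c == 1)
```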


Subjects
Algorithms, Data Analysis, Reproducibility of Results, Mass Spectrometry/methods, Liquid Chromatography/methods
5.
Environ Sci Technol ; 57(38): 14101-14112, 2023 09 26.
Article in English | MEDLINE | ID: mdl-37704971

ABSTRACT

Non-targeted analysis (NTA) has emerged as a valuable approach for the comprehensive monitoring of chemicals of emerging concern (CECs) in the exposome. The NTA approach can theoretically identify compounds with diverse physicochemical properties and sources. Even though they are generic and have a wide scope, non-targeted analysis methods have been shown to have limitations in terms of their coverage of the chemical space, as the number of identified chemicals in each sample is very low (e.g., ≤5%). Investigating the chemical space that is covered by each NTA assay is crucial for understanding the limitations and challenges associated with the workflow, from the experimental methods to the data acquisition and data processing techniques. In this review, we examined recent NTA studies published between 2017 and 2023 that employed liquid chromatography-high-resolution mass spectrometry. The parameters used in each study were documented, and the reported chemicals at confidence levels 1 and 2 were retrieved. The chosen experimental setups and the quality of the reporting were critically evaluated and discussed. Our findings reveal that only around 2% of the estimated chemical space was covered by the NTA studies investigated for this review. Little to no trend was found between the experimental setup and the observed coverage due to the generic and wide scope of the NTA studies. The limited coverage of the chemical space by the reviewed NTA studies highlights the necessity for a more comprehensive approach in the experimental and data processing setups in order to enable the exploration of a broader range of chemical space, with the ultimate goal of protecting human and environmental health. Recommendations for further exploring a wider range of the chemical space are given.


Subjects
Bioassay, Environmental Health, Humans, Liquid Chromatography, Mass Spectrometry
6.
Environ Sci Technol ; 57(4): 1712-1720, 2023 01 31.
Article in English | MEDLINE | ID: mdl-36637365

ABSTRACT

A wastewater-based epidemiology (WBE) method is presented to estimate analgesic consumption and assess the burden of treated pain in Australian communities. Wastewater influent samples from 60 communities, representing ∼52% of Australia's population, were analyzed to quantify the concentration of analgesics used to treat pain and converted to estimates of the amount of drug consumed per day per 1000 inhabitants using pharmacokinetics and WBE data. Consumption was standardized to the defined daily dose (DDD) per day per 1000 people. The population burden of pain treatment was classified as mild to moderate pain (for non-opioid analgesics) and strong to severe pain (for opioid analgesics). The mean per capita weighted total DDD of non-opioid analgesics was 0.029 DDD/day/person, and that of opioid-based analgesics was 0.037 DDD/day/person across Australia. A greater burden of pain (mild to moderate or strong to severe pain index) was observed at regional and remote sites. The correlation analysis of pain indices with different socioeconomic descriptors revealed that pain affects populations from high to low socioeconomic groups. Australians spent an estimated US $3.5 (AU $5) per day on analgesics. Our findings suggest that WBE could be an effective surveillance tool for estimating the consumption of analgesics at a population scale and assessing the total treated pain burden in communities.


Subjects
Non-Narcotic Analgesics, Wastewater, Humans, Australia/epidemiology, Non-Narcotic Analgesics/therapeutic use, Analgesics/therapeutic use, Opioid Analgesics, Pain/drug therapy, Pain/epidemiology
7.
Environ Sci Technol ; 57(36): 13635-13645, 2023 09 12.
Article in English | MEDLINE | ID: mdl-37648245

ABSTRACT

The leaching of per- and polyfluoroalkyl substances (PFASs) from Australian firefighting training grounds has resulted in extensive contamination of groundwater and nearby farmlands. Humans, farm animals, and wildlife in these areas may have been exposed to complex mixtures of PFASs from aqueous film-forming foams (AFFFs). This study aimed to identify PFAS classes in pooled whole blood (n = 4) and serum (n = 4) from cattle exposed to AFFF-impacted groundwater and potentially discover new PFASs in blood. Thirty PFASs were identified at various levels of confidence (levels 1a-5a), including three novel compounds: (i) perfluorohexanesulfonamido 2-hydroxypropanoic acid (FHxSA-HOPrA), (ii) methyl((perfluorohexyl)sulfonyl)sulfuramidous acid, and (iii) methyl((perfluorooctyl)sulfonyl)sulfuramidous acid, belonging to two different classes. The biotransformation intermediate perfluorohexanesulfonamido propanoic acid (FHxSA-PrA), hitherto unreported in biological samples, was detected in both whole blood and serum. Furthermore, perfluoroalkyl sulfonamides, including perfluoropropane sulfonamide (FPrSA), perfluorobutane sulfonamide (FBSA), and perfluorohexane sulfonamide (FHxSA), were predominantly detected in whole blood, suggesting that these accumulate in the cell fraction of blood. The suspect screening revealed several fluoroalkyl chain-substituted PFASs. The results suggest that targeting only the major PFASs in the plasma or serum of AFFF-exposed mammals likely underestimates the toxicological risks associated with exposure. Future studies of AFFF-exposed populations should include whole-blood analysis with high-resolution mass spectrometry to understand the true extent of PFAS exposure.


Subjects
Fluorocarbons, Groundwater, Humans, Animals, Cattle, Australia, Wild Animals, Plasma, Mammals
8.
Anal Chem ; 94(12): 5029-5040, 2022 03 29.
Article in English | MEDLINE | ID: mdl-35297608

ABSTRACT

The differentiation of positional isomers is a well-established analytical challenge for forensic laboratories. As more novel psychoactive substances (NPSs) are introduced to the illicit drug market, robust yet efficient methods of isomer identification are needed. Although current literature suggests that Direct Analysis in Real Time-Time-of-Flight mass spectrometry (DART-ToF) with in-source collision induced dissociation (is-CID) can be used to differentiate positional isomers, it is currently unclear whether this capability extends to positional isomers whose only structural difference is the precise location of a single substitution on an aromatic ring. The aim of this work was to determine whether chemometric analysis of DART-ToF data could offer forensic laboratories an alternative rapid and robust method of differentiating NPS positional ring isomers. To test the feasibility of this technique, three positional isomer sets (fluoroamphetamine, fluoromethamphetamine, and methylmethcathinone) were analyzed. Using a linear rail for consistent sample introduction, the three isomers of each type were analyzed 96 times over an eight-week timespan. The classification methods investigated included a univariate approach, the Welch t test at each included ion; a multivariate approach, linear discriminant analysis; and a machine learning approach, the Random Forest classifier. For each method, multiple validation techniques were used, including restricting the classifier to data that was only generated on one day. Of these classification methods, the Random Forest algorithm was ultimately the most accurate and robust, consistently achieving out-of-bag error rates below 5%. At an inconclusive rate of approximately 5%, a success rate of 100% was obtained for isomer identification when applied to a randomly selected test set. The model was further tested with data acquired as part of a different batch. The highest classification success rate was 93.9%, and error rates under 5% were consistently achieved.
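Of the three classification approaches compared above, the univariate Welch t test is the simplest to illustrate. A minimal sketch, assuming invented relative intensities of one in-source CID fragment ion measured across replicates of two ring isomers:

```python
import math

# Welch's t statistic for two samples with unequal variances, as applied
# per ion in the univariate approach.
def welch_t(a, b):
    mean_a, mean_b = sum(a) / len(a), sum(b) / len(b)
    var_a = sum((x - mean_a) ** 2 for x in a) / (len(a) - 1)
    var_b = sum((x - mean_b) ** 2 for x in b) / (len(b) - 1)
    return (mean_a - mean_b) / math.sqrt(var_a / len(a) + var_b / len(b))

# Invented replicate intensities for two positional ring isomers.
isomer_2 = [0.42, 0.45, 0.40, 0.44]
isomer_3 = [0.30, 0.28, 0.33, 0.31]
t_stat = welch_t(isomer_2, isomer_3)
```

A large |t| at a given ion indicates that ion discriminates the two isomers; in practice the statistic would be converted to a p-value with the Welch-Satterthwaite degrees of freedom.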


Subjects
Machine Learning, Isomerism, Mass Spectrometry/methods
9.
Anal Chem ; 94(14): 5599-5607, 2022 04 12.
Article in English | MEDLINE | ID: mdl-35343683

ABSTRACT

A fast algorithm for automated feature mining of synthetic (industrial) homopolymers or perfectly alternating copolymers was developed. The algorithm processes comprehensive two-dimensional liquid chromatography-mass spectrometry (LC × LC-MS) data in four distinct parts. Initially, the data is reduced by selecting regions of interest within the data. Then, all regions of interest are clustered on the time and mass-to-charge domain to obtain isotopic distributions. Afterward, single-value clusters and background signals are removed from the data structure. In the second part of the algorithm, the isotopic distributions are employed to define the charge state of the polymeric units, and the charge-state-reduced masses of the units are calculated. In the third part, the mass of the repeating unit (i.e., the monomer) is automatically selected by comparing all mass differences within the data structure. Using the mass of the repeating unit, mass remainder analysis can be performed on the data. This results in groups sharing the same end-group compositions. Lastly, combining information from the clustering step in the first part and the mass remainder analysis results in the creation of compositional series, which are mapped on the chromatogram. Series with similar chromatographic behavior are separated in the mass-remainder domain, whereas series with an overlapping mass remainder are separated in the chromatographic domain. These series were extracted within a calculation time of 3 min, after which false positives could be assessed within a reasonable time. The algorithm was verified with LC × LC-MS data of an industrial hexahydrophthalic anhydride-derivatized propylene glycol-terephthalic acid copolyester. Afterward, a chemical structure proposal was made for each compositional series found within the data.
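Mass-remainder analysis, the core of the third part of the algorithm, can be sketched as follows: oligomers sharing the same end groups collapse onto the same remainder when the neutral mass is taken modulo the repeat-unit mass. The repeat unit (propylene oxide, C3H6O) and H-/OH- end groups below are illustrative choices, not the copolyester studied in this paper.

```python
REPEAT_UNIT = 58.0419  # Da, monoisotopic mass of an illustrative repeat unit

def mass_remainder(neutral_mass, repeat=REPEAT_UNIT):
    # Rounding absorbs floating-point error in the modulo operation.
    return round(neutral_mass % repeat, 4)

# Three oligomers of one homologous series (n = 3, 5, 9 repeat units,
# H-/OH- end groups adding 18.0106 Da): all map to the same remainder.
series = [18.0106 + n * REPEAT_UNIT for n in (3, 5, 9)]
remainders = {mass_remainder(m) for m in series}
```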


Subjects
Algorithms, Polymers, Liquid Chromatography/methods, Cluster Analysis, Mass Spectrometry/methods, Polymers/chemistry
10.
Anal Chem ; 94(46): 16060-16068, 2022 11 22.
Article in English | MEDLINE | ID: mdl-36318471

ABSTRACT

The majority of liquid chromatography (LC) methods are still developed in a conventional manner, that is, by analysts who rely on their knowledge and experience to make method development decisions. In this work, a novel, open-source algorithm was developed for automated and interpretive method development of LC(-mass spectrometry) separations ("AutoLC"). A closed-loop workflow was constructed that interacted directly with the LC system and ran unsupervised in an automated fashion. To achieve this, several challenges related to peak tracking, retention modeling, the automated design of candidate gradient profiles, and the simulation of chromatograms were investigated. The algorithm was tested using two newly designed method development strategies. The first utilized retention modeling, whereas the second used a Bayesian-optimization machine learning approach. In both cases, the algorithm could arrive within 4-10 iterations (i.e., sets of method parameters) at an optimum of the objective function, which included resolution and analysis time as measures of performance. Retention modeling was found to be more efficient while depending on peak tracking, whereas Bayesian optimization was more flexible but limited in scalability. We have deliberately designed the algorithm to be modular to facilitate compatibility with previous and future work (e.g., previously published data handling algorithms).
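As an illustration of the retention-modeling strategy, the sketch below fits the classic linear solvent-strength (LSS) model, ln k = ln k0 - S * phi, from two scouting measurements and predicts retention at a new modifier fraction. The choice of model and all numbers are assumptions for illustration, not taken from the AutoLC implementation.

```python
import math

# Linear solvent-strength (LSS) model: ln k = ln k0 - S * phi, where phi is
# the organic-modifier fraction and k the retention factor. Two scouting
# measurements suffice to solve for k0 and S.
def fit_lss(phi1, k1, phi2, k2):
    S = (math.log(k1) - math.log(k2)) / (phi2 - phi1)
    k0 = math.exp(math.log(k1) + S * phi1)
    return k0, S

def predict_k(k0, S, phi):
    return k0 * math.exp(-S * phi)

# Invented scouting data: k = 8 at 30% modifier, k = 1 at 60% modifier.
k0, S = fit_lss(0.3, 8.0, 0.6, 1.0)
```

With the fitted k0 and S, candidate gradient profiles can be simulated and scored (e.g., on resolution and analysis time) before any run is performed, which is what makes a closed-loop optimization feasible.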


Subjects
Algorithms, Chemometrics, Bayes Theorem, Liquid Chromatography/methods, Mass Spectrometry/methods
11.
Environ Sci Technol ; 2022 Dec 08.
Article in English | MEDLINE | ID: mdl-36480454

ABSTRACT

The European and U.S. chemical agencies have listed approximately 800k chemicals about which knowledge of potential risks to human health and the environment is lacking. Filling these data gaps experimentally is impossible, so in silico approaches and prediction are essential. Many existing models are however limited by assumptions (e.g., linearity and continuity) and small training sets. In this study, we present a supervised direct classification model that connects molecular descriptors to toxicity. Categories can be driven by either data (using k-means clustering) or defined by regulation. This was tested via 907 experimentally defined 96 h LC50 values for acute fish toxicity. Our classification model explained ≈90% of the variance in our data for the training set and ≈80% for the test set. This strategy gave a 5-fold decrease in the frequency of incorrect categorization compared to a quantitative structure-activity relationship (QSAR) regression model. Our model was subsequently employed to predict the toxicity categories of ≈32k chemicals. A comparison between the model-based applicability domain (AD) and the training set AD was performed, suggesting that the training set-based AD is a more adequate way to avoid extrapolation when using such models. The better performance of our direct classification model compared to that of QSAR methods makes this approach a viable tool for assessing the hazards and risks of chemicals.
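As an example of regulation-defined categories, the sketch below bins 96 h fish LC50 values using the GHS acute aquatic toxicity cutoffs; this is a plausible regulatory scheme for such data, and the paper's exact category boundaries may differ.

```python
# GHS acute aquatic toxicity bins for LC50 (mg/L): Category 1 is the most
# toxic. A data-driven alternative would derive bins via k-means clustering
# on log10(LC50), as the abstract mentions.
def ghs_category(lc50_mg_per_l):
    if lc50_mg_per_l <= 1:
        return "Category 1"
    if lc50_mg_per_l <= 10:
        return "Category 2"
    if lc50_mg_per_l <= 100:
        return "Category 3"
    return "Not classified"
```

A direct classifier then predicts these labels from molecular descriptors, avoiding the intermediate regression step of a QSAR model.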

12.
Environ Sci Technol ; 56(3): 1627-1638, 2022 02 01.
Article in English | MEDLINE | ID: mdl-35060377

ABSTRACT

Wastewater-based epidemiology is a potential complementary technique for monitoring the use of performance- and image-enhancing drugs (PIEDs), such as anabolic steroids and selective androgen receptor modulators (SARMs), within the general population. Assessing in-sewer transformation and degradation is critical for understanding uncertainties associated with wastewater analysis. An electrospray ionization liquid chromatography mass spectrometry method for the quantification of 59 anabolic agents in wastewater influent was developed. Limits of detection and limits of quantification ranged from 0.004 to 1.56 µg/L and 0.01 to 4.75 µg/L, respectively. Method performance was acceptable for linearity (R2 > 0.995, few exceptions), accuracy (68-119%), and precision (1-21%RSD), and applicability was successfully demonstrated. To assess the stability of the selected biomarkers in wastewater, we used laboratory-scale sewer reactors to subject the anabolic agents to simulated realistic sewer environments for 12 h. Anabolic agents, including parent compounds and metabolites, were spiked into freshly collected wastewater that was then fed into three sewer reactor types: control sewer (no biofilm), gravity sewer (aerobic conditions), and rising main sewer (anaerobic conditions). Our results revealed that while most glucuronide conjugates were completely transformed following 12 h in the sewer reactors, 50% of the investigated biomarkers had half-lives longer than 4 h (mean residence time) under gravity sewer conditions. Most anabolic agents were likely subject to biofilm sorption and desorption. These novel results lay the groundwork for any future wastewater-based epidemiology research involving anabolic steroids and SARMs.
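The half-life figures above follow from first-order transformation kinetics, which can be sketched as below; the rate constant used is hypothetical, not a value from the study.

```python
import math

# First-order in-sewer transformation: C(t) = C0 * exp(-k * t),
# so the half-life is ln(2) / k.
def half_life(k_per_hour):
    return math.log(2) / k_per_hour

def fraction_remaining(k_per_hour, hours):
    return math.exp(-k_per_hour * hours)

k = 0.10  # hypothetical rate constant (1/h) for one biomarker in a gravity sewer
```

At k = 0.10 1/h the half-life is about 6.9 h (longer than the 4 h mean residence time discussed above), yet only ~30% of the biomarker would survive a 12 h transit, which is why in-sewer stability must be assessed before back-calculating consumption.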


Subjects
Anabolic Agents, Chemical Water Pollutants, Biomarkers, Humans, Androgen Receptors, Sewage, Testosterone Congeners, Wastewater/chemistry, Chemical Water Pollutants/analysis
13.
Proc Natl Acad Sci U S A ; 116(43): 21864-21873, 2019 10 22.
Article in English | MEDLINE | ID: mdl-31591193

ABSTRACT

Wastewater is a potential treasure trove of chemicals that reflects population behavior and health status. Wastewater-based epidemiology has been employed to determine population-scale consumption of chemicals, particularly illicit drugs, across different communities and over time. However, the sociodemographic or socioeconomic correlates of chemical consumption and exposure are unclear. This study explores the relationships between catchment-specific sociodemographic parameters and biomarkers in the wastewater generated by the respective catchments. Domestic wastewater influent samples taken during the 2016 Australian census week were analyzed for a range of diet, drug, pharmaceutical, and lifestyle biomarkers. We present both linear and rank-order (i.e., Pearson and Spearman) correlations between loads of 42 biomarkers and census-derived metrics: the Index of Relative Socioeconomic Advantage and Disadvantage (IRSAD), median age, and 40 Socio-Economic Indexes for Areas (SEIFA) descriptors. Biomarkers of caffeine, citrus, and dietary fiber consumption had strong positive correlations with IRSAD, while tramadol, atenolol, and pregabalin had strong negative correlations with IRSAD. As expected, atenolol and hydrochlorothiazide correlated positively with median age. We also found specific SEIFA descriptors, such as occupation and educational attainment, correlating with each biomarker. Our study demonstrates that wastewater-based epidemiology can be used to study sociodemographic influences and disparities in chemical consumption.
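The two correlation measures used in the study can be sketched in plain Python: Spearman is simply Pearson applied to ranks (tie handling is omitted from this sketch). The catchment data below are invented for illustration, chosen to mirror the sign of the reported caffeine and tramadol trends.

```python
def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def ranks(v):
    # Tie handling (average ranks) is omitted in this sketch.
    order = sorted(range(len(v)), key=lambda i: v[i])
    r = [0.0] * len(v)
    for rank, i in enumerate(order, start=1):
        r[i] = float(rank)
    return r

def spearman(x, y):
    return pearson(ranks(x), ranks(y))

# Invented catchment data: IRSAD scores and per-capita biomarker loads.
irsad = [900, 950, 1000, 1050, 1100]
caffeine = [1.1, 1.4, 1.9, 2.5, 3.8]  # increases with advantage
tramadol = [5.0, 4.1, 3.3, 2.2, 1.0]  # decreases with advantage
```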


Subjects
Wastewater-Based Epidemiological Monitoring, Wastewater/analysis, Wastewater/chemistry, Australia, Food Analysis, Humans, Pharmaceutical Preparations/analysis, Socioeconomic Factors
14.
Molecules ; 27(19)2022 Sep 29.
Article in English | MEDLINE | ID: mdl-36234961

ABSTRACT

High-resolution mass spectrometry is a promising technique in non-target screening (NTS) to monitor contaminants of emerging concern in complex samples. Current chemical identification strategies in NTS experiments typically depend on spectral libraries, chemical databases, and in silico fragmentation tools. However, small molecule identification remains challenging due to the lack of orthogonal sources of information (e.g., unique fragments). Collision cross section (CCS) values measured by ion mobility spectrometry (IMS) offer an additional identification dimension to increase the confidence level. Thanks to advances in analytical instrumentation, increasing application of IMS hyphenated with high-resolution mass spectrometry (HRMS) in NTS has been reported in recent years, and several CCS prediction tools have been developed. However, few prediction methods have been based on a broad range of chemical classes and cross-platform CCS measurements. We successfully developed two prediction models using a random forest machine learning algorithm. One approach was based on chemicals' super classes; the other model predicted CCS directly from molecular fingerprints. A total of 13,324 CCS values from six different laboratories and PubChem, measured with a variety of ion-mobility separation techniques, were used for training and testing the models. The test accuracy for all the prediction models was over 0.85, and the median relative residual was around 2.2%. The models can be applied to different IMS platforms to eliminate false positives in small molecule identification.


Subjects
Ion Mobility Spectrometry, Small Molecule Libraries, Algorithms, Machine Learning, Mass Spectrometry, Small Molecule Libraries/chemistry
15.
Anal Chem ; 93(49): 16562-16570, 2021 12 14.
Article in English | MEDLINE | ID: mdl-34843646

ABSTRACT

Centroiding is one of the major approaches used for size reduction of the data generated by high-resolution mass spectrometry. During centroiding, performed either during acquisition or as a pre-processing step, each mass profile is represented by a single value (i.e., the centroid). While effective in reducing the data size, centroiding also reduces the information density present in the mass peak profile. Moreover, each step of the centroiding process and its consequences for the final results may not be completely clear. Here, we present Cent2Prof, a package containing two algorithms that enable the conversion of centroided data to mass peak profile data and vice versa. The centroiding algorithm uses the resolution-based mass peak width parameter as the first guess and self-adjusts to fit the data. In addition to the m/z values, the centroiding algorithm also generates the measured mass peak widths at half-height, which can be used during feature detection and identification. The mass peak profile prediction algorithm employs a random-forest model for the prediction of mass peak widths, which is subsequently used for mass profile reconstruction. The centroiding results were compared to the outputs of the MZmine-implemented centroiding algorithm. Our algorithm resulted in false-detection rates ≤5%, while the MZmine algorithm resulted in a 30% false-positive rate and a 3% false-negative rate. The error in profile prediction was ≤56% independent of the mass, ionization mode, and intensity, which was 6 times more accurate than the resolution-based estimated values.
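In its simplest form, centroiding reduces each profile-mode mass peak to the intensity-weighted mean of its m/z values. A minimal sketch with a hypothetical symmetric profile peak; Cent2Prof's self-adjusting peak-width logic is not reproduced here.

```python
# Reduce a profile-mode mass peak to a single centroid: the
# intensity-weighted mean of its m/z values.
def centroid(mzs, intensities):
    total = sum(intensities)
    return sum(mz * i for mz, i in zip(mzs, intensities)) / total

# A symmetric, hypothetical profile peak sampled at five points.
profile_mz = [300.1996, 300.1998, 300.2000, 300.2002, 300.2004]
profile_int = [10, 60, 100, 60, 10]
c = centroid(profile_mz, profile_int)
```

Because the example peak is symmetric, the centroid falls on the apex m/z; for a skewed peak the centroid shifts toward the heavier tail, which is one reason profile-to-centroid conversion loses information.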


Subjects
Machine Learning
16.
Environ Sci Technol ; 54(5): 2707-2714, 2020 03 03.
Article in English | MEDLINE | ID: mdl-32019310

ABSTRACT

Naphthenic acids (NAs) constitute one of the toxic components of the produced water (PW) from offshore oil platforms discharged into the marine environment. We employed liquid chromatography (LC) coupled to high-resolution mass spectrometry with electrospray ionization (ESI) in negative mode for the comprehensive chemical characterization and quantification of NAs in PW samples from six different Norwegian offshore oil platforms. In total, we detected 55 unique NA isomer groups, out of the 181 screened homologous groups, across all tested samples. The frequency of detected NAs in the samples varied between 14 and 44 isomer groups. Principal component analysis (PCA) indicated a clear distinction of the PW from the tested platforms based on the distribution of NAs in these samples. The averaged total concentration of NAs varied between 6 and 56 mg L-1, among the tested platforms, whereas the concentrations of the individual NA isomer groups ranged between 0.2 and 44 mg L-1. Based on both the distribution and the concentration of NAs in the samples, the C8H14O2 isomer group appeared to be a reasonable indicator of the presence and the total concentration of NAs in the samples with a Pearson correlation coefficient of 0.89.


Subjects
Chemical Water Pollutants, Water, Carboxylic Acids, North Sea, Oil and Gas Fields
17.
Environ Sci Technol ; 54(15): 9408-9417, 2020 08 04.
Article in English | MEDLINE | ID: mdl-32644808

ABSTRACT

Microplastic contamination of the marine environment is widespread, but the extent to which the marine food web is contaminated is not yet known. The aims of this study were to go beyond visual identification techniques and develop and apply a simple seafood sample cleanup, extraction, and quantitative analysis method using pyrolysis gas chromatography mass spectrometry to improve the detection of plastic contamination. This method allows the identification and quantification of polystyrene, polyethylene, polyvinyl chloride, polypropylene, and poly(methyl methacrylate) in the edible portion of five different seafood organisms: oysters, prawns, squid, crabs, and sardines. Polyvinyl chloride was detected in all samples and polyethylene at the highest total concentration of between 0.04 and 2.4 mg g-1 of tissue. Sardines contained the highest total plastic mass concentration (0.3 mg g-1 tissue) and squid the lowest (0.04 mg g-1 tissue). Our findings show that the total concentration of plastics is highly variable among species and that microplastic concentration differs between organisms of the same species. The sources of microplastic exposure, such as packaging and handling with consequent transference and adherence to the tissues, are discussed. This method is a major development in the standardization of plastic quantification techniques used in seafood.


Subjects
Plastics, Chemical Water Pollutants, Australia, Environmental Monitoring, Gas Chromatography-Mass Spectrometry, Pyrolysis, Seafood/analysis, Chemical Water Pollutants/analysis
18.
Anal Chem ; 91(16): 10800-10807, 2019 08 20.
Article in English | MEDLINE | ID: mdl-31356049

ABSTRACT

Nontargeted feature detection in data from high-resolution mass spectrometry is a challenging task, due to the complex and noisy nature of the data sets. Numerous feature detection and preprocessing strategies have been developed in an attempt to tackle this challenge, but recent evidence has indicated limitations in the methods currently used for LC-HRMS data. To overcome these limitations, we propose a self-adjusting feature detection (SAFD) algorithm for the processing of profile data from LC-HRMS. SAFD fits a three-dimensional Gaussian into the profile data of a feature, without data preprocessing (i.e., centroiding and/or binning). We tested SAFD on 55 LC-HRMS chromatograms, of which 44 were composite wastewater influent samples. Additionally, 51 of the 55 samples were spiked with 19 labeled internal standards (ISs). We further validated SAFD by comparing its results with those produced via XCMS implemented through MZmine. In terms of the ISs and the unknown features, SAFD produced lower rates of false detection (i.e., ≤5% and ≤10%, respectively) than XCMS (≤11% and ≤28%, respectively). We also observed higher reproducibility in the feature areas generated by the SAFD algorithm versus XCMS.

19.
Environ Sci Technol ; 52(8): 4694-4701, 2018 04 17.
Article in English | MEDLINE | ID: mdl-29561135

ABSTRACT

Nontarget analysis is considered one of the most comprehensive tools for the identification of unknown compounds in complex samples analyzed via liquid chromatography coupled to high-resolution mass spectrometry (LC-HRMS). Because of the complexity of the data generated via LC-HRMS, the data-dependent acquisition mode, which produces MS2 spectra for a limited number of precursor ions, has been one of the most common approaches used in nontarget screening. The data-independent acquisition mode, by contrast, produces highly complex spectra that require proper deconvolution and library search algorithms. We have developed a deconvolution algorithm and a universal library search algorithm (ULSA) for the analysis of complex spectra generated via data-independent acquisition. These algorithms were validated and tested using both semisynthetic and real environmental data. For validation with semisynthetic data, a total of 6000 randomly selected spectra from MassBank were introduced across the total ion chromatograms of 15 sludge extracts at three levels of background complexity. The deconvolution algorithm successfully extracted more than 60% of the added ions in the analytical signal for 95% of the processed spectra (i.e., 3 complexity levels multiplied by 6000 spectra). The ULSA ranked the correct spectra among the top three candidates in more than 95% of cases. We further tested the algorithms with 5 wastewater effluent extracts for 59 artificial unknown analytes (i.e., their presence or absence was confirmed via target analysis). The algorithms produced no false identifications while correctly identifying ∼70% of the total queries. The implications, capabilities, and limitations of both algorithms are discussed further.
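The abstract does not specify ULSA's scoring function. A common baseline for spectral library search, shown here purely for illustration, is a cosine (dot-product) similarity between a deconvoluted spectrum and a library spectrum, with peaks paired inside an m/z tolerance:

```python
import math

def cosine_score(query, library, tol=0.01):
    """query, library: lists of (mz, intensity) peaks (centroided).
    Greedily pairs peaks within `tol` m/z units, then returns the
    cosine of the paired intensity vectors (1.0 = identical spectra,
    0.0 = no shared peaks)."""
    pairs = []
    used = set()
    for mz_q, i_q in query:
        match = 0.0
        for j, (mz_l, i_l) in enumerate(library):
            if j not in used and abs(mz_q - mz_l) <= tol:
                used.add(j)
                match = i_l
                break
        pairs.append((i_q, match))
    # Library peaks with no counterpart in the query count against the score.
    pairs += [(0.0, i_l) for j, (_, i_l) in enumerate(library) if j not in used]
    dot = sum(a * b for a, b in pairs)
    na = math.sqrt(sum(a * a for a, _ in pairs))
    nb = math.sqrt(sum(b * b for _, b in pairs))
    return dot / (na * nb) if na and nb else 0.0
```

Ranking library candidates by such a score, after deconvolution has assigned fragments to precursors, is the generic shape of the search step described above.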


Subjects
Algorithms , Tandem Mass Spectrometry , Liquid Chromatography , Wastewater
20.
Environ Sci Technol ; 52(9): 5135-5144, 2018 05 01.
Article in English | MEDLINE | ID: mdl-29651850

ABSTRACT

A key challenge in the environmental and exposure sciences is to establish experimental evidence of the role of chemical exposure in human and environmental systems. High-resolution, accurate-mass tandem mass spectrometry (HRMS) is increasingly being used for the analysis of environmental samples. One lauded benefit of HRMS is the possibility of retrospectively processing data for (previously omitted) compounds, which has led to the archiving of HRMS data. Archived HRMS data afford the possibility of exploiting historical data to rapidly and effectively establish the temporal and spatial occurrence of newly identified contaminants through retrospective suspect screening. We propose establishing a global emerging-contaminant early-warning network to rapidly assess the spatial and temporal distribution of contaminants of emerging concern in environmental samples by performing retrospective analysis of HRMS data. The effectiveness of such a network is demonstrated through a pilot study in which eight reference laboratories with available archived HRMS data retrospectively screened data acquired from aqueous environmental samples collected in 14 countries on 3 continents. The widespread spatial occurrence of several surfactants (e.g., polyethylene glycols (PEGs) and C12AEO-PEGs), transformation products of selected drugs (e.g., gabapentin-lactam, metoprolol-acid, carbamazepine-10-hydroxy, omeprazole-4-hydroxy-sulfide, and 2-benzothiazole-sulfonic-acid), and industrial chemicals (3-nitrobenzenesulfonate and bisphenol-S) was revealed. Obtaining identifications of increased reliability through retrospective suspect screening is challenging, and recommendations are provided for dealing with issues such as broad chromatographic peaks, data acquisition, and sensitivity.
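At its simplest, retrospective suspect screening of archived data reduces to matching the exact masses of a suspect list against stored feature lists within a ppm mass tolerance. A hedged sketch of that core step (the suspect name and all masses below are placeholders, not values from the study):

```python
def screen_suspects(features, suspects, ppm=5.0):
    """features: list of (mz, rt_min, intensity) from archived HRMS data.
    suspects: dict mapping suspect name -> expected ion m/z.
    Returns every feature whose m/z lies within `ppm` parts per million
    of a suspect mass, as (name, mz, rt_min, intensity) tuples."""
    hits = []
    for name, mz_s in suspects.items():
        tol = mz_s * ppm / 1e6  # absolute tolerance in m/z units
        for mz_f, rt, inten in features:
            if abs(mz_f - mz_s) <= tol:
                hits.append((name, mz_f, rt, inten))
    return hits

# Placeholder archived feature list and suspect entry
archived = [(300.1235, 12.3, 1.0e5), (450.2001, 5.1, 2.0e4)]
hits = screen_suspects(archived, {"suspect_A": 300.1234})
```

In practice each mass hit would still need confirmation against retention time, isotope pattern, and MS2 evidence, which is where the reliability issues discussed above arise.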


Subjects
Tandem Mass Spectrometry , Humans , Pilot Projects , Reproducibility of Results , Retrospective Studies