1.
Anal Chim Acta ; 1319: 342956, 2024 Aug 29.
Article in English | MEDLINE | ID: mdl-39122272

ABSTRACT

BACKGROUND: Atmospheric mercury (Hg) concentrations are quantified primarily through preconcentration on gold (Au) cartridges by amalgamation and subsequent thermal desorption into an atomic fluorescence spectrometry detector. This procedure has been used for decades and is implemented in the industry-standard atmospheric Hg analyzer, the Tekran 2537. There is ongoing debate as to whether gaseous elemental mercury (Hg0) or total gaseous mercury (TGM, Hg0 + HgII) is measured using Au cartridges. The raw Hg signal processing algorithms for the Tekran 2537 analyzer have also been questioned. The objective of this work was to develop a better understanding of which forms of Hg are collected on gold cartridges through the use of permeation tube-based calibrators that release known amounts of Hg0 and HgII. Potential differences between the Hg signal processing algorithms of different Tekran analyzer models (i.e., 2537B versus 2537X), and between Hg0 calibration methods, were also investigated. RESULTS: Experiments were performed using Hg0 and HgII permeation calibrators. Validation tests showed that the HgII calibrator produced a reproducible and stable HgII permeation rate (2.2 ± 0.2 pg min-1). Results of HgII sampling and analysis using Au amalgamation showed that the gold cartridges measured up to 75 % of HgII, with values at the beginning of the HgII measurement being much lower (as low as 10 %) due to HgII adsorption on analyzer surfaces and the Tekran particulate filter. Furthermore, thermal desorption of Hg from Au reduced only 80 % of HgII to Hg0, resulting in additional HgII that was not measured by the analyzer. By adding a thermolyzer upstream of the analyzer, 97 % of HgII was measured as Hg0. Additionally, Hg0 measurements using the Tekran 2537B and 2537X models were compared using a newly developed signal processing algorithm, different peak integration methods, and two Hg0 sources.
Results showed the 2537X model was not affected by the integration type, while the 2537B model was. Bell jar calibration based on the Dumarey equation resulted in 6 % ± 7 % (mean ± SD) underestimation of measured Hg0 concentrations compared to the calibration with a permeation calibrator. SIGNIFICANCE: Gold cartridges measured an atmospheric Hg fraction somewhere between Hg0 and TGM due to HgII adsorption and inefficient reduction of HgII to Hg0 during thermal desorption from Au. Since HgII in ambient air can be 25 % of total Hg, distinguishing between Hg0 and TGM is important. The use of a thermolyzer or a cation exchange membrane upstream of gold cartridges is recommended to enable TGM or Hg0 measurements, respectively. Observations showed that traceable multipoint calibrations of atmospheric Hg measurements are needed for Hg quantification, and that different Hg0 calibration methods can produce significantly different results for measured atmospheric Hg concentrations.
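The permeation-calibrator arithmetic above reduces to dividing the known mass emission rate by the carrier gas flow. A minimal sketch (the function name and the 1 L min-1 flow value are hypothetical; the 2.2 pg min-1 rate is the one reported above):

```python
def permeation_concentration(rate_pg_per_min: float, flow_l_per_min: float) -> float:
    """Gas-phase concentration (pg L^-1) delivered by a permeation source
    emitting `rate_pg_per_min` into a carrier flow of `flow_l_per_min`."""
    return rate_pg_per_min / flow_l_per_min

# HgII source from the study (2.2 pg min^-1) into a hypothetical 1 L min^-1 flow
conc = permeation_concentration(2.2, 1.0)
```

Since 1 pg L-1 equals 1 ng m-3, the result can be read directly in the units commonly used for ambient Hg.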

2.
Metabolomics ; 19(7): 65, 2023 Jul 07.
Article in English | MEDLINE | ID: mdl-37418094

ABSTRACT

INTRODUCTION: Absolute quantification of individual metabolites in complex biological samples is crucial in targeted metabolomic profiling. OBJECTIVES: An inter-laboratory test was performed to evaluate the impact of the NMR software, the peak-area determination method (integration vs. deconvolution), and the operator on quantification trueness and precision. METHODS: A synthetic urine containing 32 compounds was prepared. One site prepared the urine and calibration samples and performed the NMR acquisition. NMR spectra were acquired with two pulse sequences including water suppression, as used in routine analyses. The pre-processed spectra were sent to the other sites, where each operator quantified the metabolites using internal referencing or external calibration and their favourite in-house, open-access or commercial NMR tool. RESULTS: For 1D NMR measurements with solvent presaturation during the recovery delay (zgpr), 20 metabolites were successfully quantified by all processing strategies. Some metabolites could not be quantified by some methods. With internal referencing to TSP, only half of the metabolites were quantified with a trueness below 5%. With peak integration and external calibration, about 90% of the metabolites were quantified with a trueness below 5%. The NMRProcFlow integration module allowed the quantification of several additional metabolites. Deconvolution tools improved the number of quantified metabolites and the quantification trueness for some metabolites. Trueness and precision were not significantly different between zgpr- and NOESYpr-based spectra for about 70% of the variables. CONCLUSION: External calibration performed better than TSP internal referencing. Inter-laboratory tests are useful for rationalizing the choice of quantification tools for NMR-based metabolomic profiling, and they confirm the value of spectral deconvolution tools.


Subject(s)
Body Fluids , Metabolomics , Female , Male , Humans , Metabolomics/methods , Workflow , Magnetic Resonance Spectroscopy/methods , Magnetic Resonance Imaging , Body Fluids/chemistry
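External calibration as described above amounts to fitting a response line through calibration samples of known concentration and inverting it for the unknown. A minimal numpy sketch (all concentrations and peak areas are hypothetical):

```python
import numpy as np

def external_calibration(cal_conc, cal_area, sample_area):
    """Fit a straight line (area = a*conc + b) to an external calibration
    series and invert it to quantify the sample."""
    a, b = np.polyfit(cal_conc, cal_area, 1)
    return float((sample_area - b) / a)

# hypothetical calibration series (mM vs. integrated peak area)
conc = external_calibration([0.5, 1.0, 2.0, 4.0], [51, 99, 202, 399], 150)
```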
3.
Biotechnol Bioeng ; 120(7): 1822-1843, 2023 07.
Article in English | MEDLINE | ID: mdl-37086414

ABSTRACT

Chromatographic data processing has garnered attention due to multiple Food and Drug Administration 483 citations and warning letters, highlighting the need for a robust technological solution. The healthcare industry has the potential to benefit greatly from the adoption of digital technologies, but the process of implementing these technologies can be slow and complex. This article presents a "Digital by Design" managerial approach, adapted from pharmaceutical quality-by-design principles, for designing and implementing an artificial intelligence (AI)-based solution for the chromatography peak-integration process in the healthcare industry. We report the use of a convolutional neural network model to predict analytical variability when integrating chromatography peaks, and we propose a potential GxP framework for using AI in the healthcare industry that includes elements on data management, model management, and human-in-the-loop processes. The analytical variability prediction component has great potential to enable the Industry 4.0 objectives of real-time release testing, automated quality control, and continuous manufacturing.


Subject(s)
Artificial Intelligence , Deep Learning , United States , Humans , Neural Networks, Computer , Quality Control , Chromatography
4.
Toxins (Basel) ; 15(2)2023 02 15.
Article in English | MEDLINE | ID: mdl-36828475

ABSTRACT

Snakebite is considered a neglected tropical disease, and one of the most intricate ones. The variability found in snake venom is what makes it so complex to study, and these variations are present in both the large and the small molecules found in snake venom. This study focused on the variability of the venom's small molecules (i.e., mass range of 100-1000 Da) between two main families of venomous snakes, Elapidae and Viperidae, and produced a model able to classify unknown samples by means of specific features that can be extracted from their LC-MS data and output as a comprehensive list. The developed model also allowed further insight into the composition of snake venom by clustering similarly composed venoms and highlighting the most relevant metabolites of each group. The model was built with support vector machines and used 20 features, which were merged into 10 principal components. All samples from the first and second validation data subsets were correctly classified. Biological hypotheses relevant to the variation in the identified metabolites are also given.


Subject(s)
Snake Bites , Viperidae , Animals , Humans , Snake Venoms , Elapidae/metabolism , Viperidae/metabolism , Mass Spectrometry , Elapid Venoms/metabolism
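The classification workflow above (20 features merged into 10 principal components) can be sketched as follows. For a dependency-free illustration, the support vector machine used in the paper is swapped for a simple nearest-centroid classifier, and all data are synthetic:

```python
import numpy as np

def pca_project(X, k):
    """Project the rows of X onto the first k principal components."""
    Xc = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ vt[:k].T

rng = np.random.default_rng(0)
# hypothetical feature table: 40 venoms x 20 LC-MS features, two families
# offset from each other so the classes are separable
X = np.vstack([rng.normal(0, 1, (20, 20)), rng.normal(3, 1, (20, 20))])
y = np.array([0] * 20 + [1] * 20)        # 0 = Elapidae, 1 = Viperidae

Z = pca_project(X, 10)                   # 20 features -> 10 components
centroids = np.array([Z[y == c].mean(axis=0) for c in (0, 1)])
pred = np.argmin(((Z[:, None, :] - centroids) ** 2).sum(-1), axis=1)
accuracy = (pred == y).mean()
```

The nearest-centroid step stands in for the SVM only to keep the sketch short; the dimensionality-reduction step is the same.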
5.
Metabolomics ; 16(11): 117, 2020 10 21.
Article in English | MEDLINE | ID: mdl-33085002

ABSTRACT

INTRODUCTION: Despite the availability of several pre-processing software tools, poor peak integration remains a prevalent problem in untargeted metabolomics data generated using liquid chromatography-high-resolution mass spectrometry (LC-MS). As a result, the output of these pre-processing tools may retain incorrectly calculated metabolite abundances that can perpetuate in downstream analyses. OBJECTIVES: To address this problem, we propose a computational methodology that combines machine learning and peak quality metrics to filter out low-quality peaks. METHODS: Specifically, we comprehensively and systematically compared the performance of 24 different classifiers, generated by combining eight classification algorithms and three sets of peak quality metrics, on the task of distinguishing reliably integrated peaks from poorly integrated ones. These classifiers were compared to a relative standard deviation (RSD) cut-off in pooled quality-control (QC) samples, which aims to remove peaks with high analytical variability. RESULTS: The best performing classifier was found to be a combination of the AdaBoost algorithm and a set of 11 peak quality metrics previously explored in untargeted metabolomics and proteomics studies. As a complementary approach, applying our framework to peaks retained after filtering at 30% RSD across pooled QC samples further identified poorly integrated peaks that were not removed by filtering alone. An R implementation of these classifiers and the overall computational approach is available as the MetaClean package at https://CRAN.R-project.org/package=MetaClean . CONCLUSION: Our work represents an important step forward in developing an automated tool for filtering out unreliable peak integrations in untargeted LC-MS metabolomics data.


Subject(s)
Machine Learning , Metabolomics/methods , Chromatography, Liquid , Mass Spectrometry , Software
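The pooled-QC filter that the classifiers above are compared against can be sketched in a few lines (the peak-area table is hypothetical; the 30% cut-off is the one used in the study):

```python
import numpy as np

def rsd_filter(qc_areas, cutoff=0.30):
    """Keep peaks whose relative standard deviation across pooled-QC
    injections is at or below `cutoff`.

    qc_areas: (n_peaks, n_qc_samples) array of integrated peak areas.
    Returns a boolean mask over peaks."""
    qc = np.asarray(qc_areas, dtype=float)
    rsd = qc.std(axis=1, ddof=1) / qc.mean(axis=1)
    return rsd <= cutoff

# hypothetical QC table: first peak stable, second highly variable
keep = rsd_filter([[100, 102, 98, 101], [100, 10, 250, 40]])
```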
6.
J Sep Sci ; 42(8): 1644-1657, 2019 Apr.
Article in English | MEDLINE | ID: mdl-30771233

ABSTRACT

Modern chromatographic data acquisition software often behaves as a black box in which researchers have little control over the raw data processing. One of the significant interests of separation scientists is to extract physico-chemical information from chromatographic experiments and peak parameters. In addition, column developers need total peak shape analysis to characterize the flow profile in chromatographic beds. Statistical moments offer a robust approach for providing detailed information on a peak's area, center of gravity, variance, resolution, and skew without assuming any peak model or shape. Despite their utility and theoretical significance, statistical moments are rarely used because they often provide underestimated or overestimated results when the integration method and integration limits are chosen inappropriately. The Gaussian model is used almost universally in chromatography software to assess efficiency, resolution, and peak position. Herein we present a user-friendly and accessible approach for calculating the zeroth, first, second, and third moments through more accurate numerical integration techniques (the trapezoidal and Simpson's rules), which provide more accurate estimates of peak parameters than rectangular integration. An Excel template is also provided which can calculate the four moments in three steps, with or without baseline correction.
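The moment calculations described above can be sketched as follows, using the trapezoidal rule (the peak and its parameters are hypothetical; Simpson's rule would simply replace the integrator):

```python
import numpy as np

def trapz(y, t):
    """Trapezoidal-rule integral of a sampled signal y(t)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(t)))

def peak_moments(t, y):
    """Zeroth to third statistical moments of a chromatographic peak,
    computed by numerical integration with no assumed peak shape.
    Returns (area, retention time, variance, skew)."""
    t, y = np.asarray(t, float), np.asarray(y, float)
    m0 = trapz(y, t)                          # zeroth moment: peak area
    m1 = trapz(t * y, t) / m0                 # first moment: centre of gravity
    m2 = trapz((t - m1) ** 2 * y, t) / m0     # second central moment: variance
    m3 = trapz((t - m1) ** 3 * y, t) / m0     # third central moment
    return m0, m1, m2, m3 / m2 ** 1.5         # skew = m3 / m2^(3/2)

# hypothetical Gaussian peak: area 5, retention time 10 min, sigma 0.5 min
t = np.linspace(7.0, 13.0, 2001)
y = 5.0 / (0.5 * np.sqrt(2.0 * np.pi)) * np.exp(-(t - 10.0) ** 2 / (2 * 0.5 ** 2))
area, t_r, variance, skew = peak_moments(t, y)
```

For a symmetric Gaussian the recovered skew is zero; tailing peaks give a positive skew, which is exactly the diagnostic the moments provide without fitting a model.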

7.
J Chromatogr A ; 1529: 81-92, 2017 Dec 22.
Article in English | MEDLINE | ID: mdl-29126588

ABSTRACT

Chromatography provides important detail on the composition of environmental samples and their chemical processing. However, the complexity of these samples and their tendency to contain many structurally and chemically similar compounds frequently result in convoluted or poorly resolved data. Data reduction from raw chromatograms of complex environmental data into integrated peak areas consequently often requires substantial operator interaction. This difficulty has created an analysis bottleneck that increases analysis time, decreases data quality, and will worsen as advances in field-based instrumentation multiply the quantity and informational density of the data produced. In this work, we develop and validate an automated approach to fitting chromatographic data within a target retention time window with a combination of multiple idealized peaks (Gaussian peaks either with or without an exponential decay component). We compare this single-ion peak fitting approach to drawn-baseline integration methods for more than 70,000 peaks collected by field-based chromatographs, spanning a wide range of volatilities and functionalities. The accuracy of peak fitting under real-world conditions is found to be within 10%. The quantitative parameters describing the fit (e.g., coefficients and fit residuals) are found to provide valuable information that increases the efficiency of quality control and constrains the integration of peaks significantly convoluted with their neighbors. Implementation of the peak fitting method is shown to yield accurate integration of peaks otherwise too poorly resolved to separate into individual compounds, and improved quantitative metrics to determine the fidelity of the data reduction process, while substantially decreasing the time operators spend on data reduction.


Subject(s)
Chromatography , Statistics as Topic/methods , Reproducibility of Results , Statistics as Topic/standards
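Fitting an idealized peak as described above can be illustrated for the simplest case, a pure Gaussian: taking the logarithm turns it into a parabola, which a linear least-squares fit recovers exactly. This is only a sketch for the noise-free, well-resolved case (the exponentially modified Gaussian used for tailing peaks requires a nonlinear solver instead); the test peak is hypothetical:

```python
import numpy as np

def fit_gaussian(t, y):
    """Recover (amplitude, centre, sigma) of a single Gaussian peak by
    fitting a parabola to log(y):
    log y = log A - (t - mu)^2 / (2 sigma^2)."""
    mask = y > y.max() * 0.05                 # use the peak core only
    c2, c1, c0 = np.polyfit(t[mask], np.log(y[mask]), 2)
    sigma = np.sqrt(-1.0 / (2.0 * c2))
    centre = c1 * sigma ** 2
    amp = np.exp(c0 + centre ** 2 / (2.0 * sigma ** 2))
    return amp, centre, sigma

# hypothetical single-ion peak: amplitude 3, centre 4.2 min, sigma 0.8 min
t = np.linspace(0.0, 10.0, 501)
amp, centre, sigma = fit_gaussian(t, 3.0 * np.exp(-(t - 4.2) ** 2 / (2 * 0.8 ** 2)))
```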
8.
J Biomol NMR ; 69(2): 93-99, 2017 Oct.
Article in English | MEDLINE | ID: mdl-29043470

ABSTRACT

NMR spectroscopy is uniquely suited for atomic-resolution studies of biomolecules such as proteins, nucleic acids and metabolites, since detailed information on structure and dynamics is encoded in the positions and line shapes of peaks in NMR spectra. Unfortunately, accurate determination of these parameters is often complicated and time consuming, in part due to the need for different software at the various analysis steps and for validating the results. Here, we present an integrated, cross-platform and open-source software package that is significantly more versatile than the typical line shape fitting application. The software is a completely redesigned version of PINT ( https://pint-nmr.github.io/PINT/ ). It features a graphical user interface and includes functionality for peak picking, editing of peak lists and line shape fitting. In addition, the obtained peak intensities can be used directly to extract, for instance, relaxation rates, heteronuclear NOE values and exchange parameters. In contrast to most available software, the entire process from spectral visualization to preparation of publication-ready figures is done solely within PINT, often within minutes, thereby increasing productivity for users of all experience levels. Also unique to the software are the outstanding tools for evaluating the quality of the fitting results and the extensive, but easy-to-use, customization of the fitting protocol and graphical output. In this communication, we describe the features of the new version of PINT and benchmark its performance.


Subject(s)
Data Interpretation, Statistical , Magnetic Resonance Spectroscopy , Software , Magnetic Resonance Spectroscopy/methods , Reproducibility of Results , User-Computer Interface , Web Browser
9.
Sci Total Environ ; 574: 1588-1598, 2017 Jan 01.
Article in English | MEDLINE | ID: mdl-27613668

ABSTRACT

The present study demonstrates that the ratio of the fluorescence integrals of peak C to peak T (IC:IT) can be used as an indicator tracing the compositional dynamics of chromophoric dissolved organic matter (CDOM). CDOM absorption and fluorescence spectroscopy and the stable isotope δ13C were determined on a seasonal basis in seventeen Chinese inland waters, as well as in a series of mixing and photodegradation experiments in the lab. A strong positive linear correlation was recorded between IC:IT and the ratio of terrestrial humic-like C1 to tryptophan-like C4 (C1:C4) derived by parallel factor analysis. The r2 for the linear fit between IC:IT and C1:C4 (r2=0.80) was notably higher than between C1:C4 and the other indices tested, including the ratio of CDOM absorption at 250 nm to 365 nm, i.e. a(250):a(365) (r2=0.09), the spectral slope (S275-295) (r2=0.26), the spectral slope ratio (SR) (r2=0.31), the humification index (HIX) (r2=0.47), the recent autochthonous biological contribution index (BIX) (r2=0.27), and a fluorescence index (FI370) (r2=0.07). IC:IT exhibited larger variability than the remaining six indices and a closer correlation with δ13C than that observed for a(250):a(365), S275-295, SR, FI370, and BIX during the field campaigns. Confirming our field observations, significant correlations were recorded between IC:IT and the remaining six indices, and IC:IT again demonstrated notably larger variability than the other six indices during our wastewater addition experiment. As with HIX, eutrophic water addition and photobleaching substantially decreased IC:IT but had no pronounced effect on a(250):a(365), S275-295, SR, FI370, and BIX, further suggesting that IC:IT is the most efficient indicator of CDOM compositional dynamics.
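The two quantities at the heart of the analysis above, the IC:IT ratio and the r2 of a linear fit, can be sketched as follows (the EEM matrix and the peak-region indices are hypothetical):

```python
import numpy as np

def ratio_ic_it(eem, region_c, region_t):
    """IC:IT -- integrated fluorescence intensity in the humic-like peak-C
    region divided by that in the tryptophan-like peak-T region of an
    excitation-emission matrix (EEM). Regions are (row, column) slice
    pairs; the index layout here is hypothetical."""
    return float(eem[region_c].sum() / eem[region_t].sum())

def r_squared(x, y):
    """Coefficient of determination of the straight-line fit y ~ x."""
    return float(np.corrcoef(x, y)[0, 1] ** 2)

# toy EEM with uniform intensity: ratio is just the ratio of region sizes
eem = np.ones((10, 10))
ic_it = ratio_ic_it(eem, (slice(0, 2), slice(0, 2)), (slice(5, 9), slice(5, 7)))
```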

10.
Article in Chinese | WPRIM (Western Pacific) | ID: wpr-852985

ABSTRACT

Objective: To compare the differences between Lonicera Japonica Flos and Lonicera Flos by establishing HPLC fingerprints and calculating their similarity. Methods: The column was a Phenomenex Luna C18(2) (5 μm, 100 Å, 250 mm × 4.6 mm) held at 40 °C. The mobile phase was acetonitrile-0.5% phosphoric acid, the flow rate was 1 mL/min, and the detection wavelength was 350 nm. Results: An HPLC fingerprint of Lonicera Japonica Flos was established and the similarity was evaluated by screening large-peak integration. The similarity of all 12 batches of Lonicera Japonica Flos was above 0.95, while that of the four batches of Lonicera Flos was less than 0.80. Conclusion: HPLC fingerprint profiles at 350 nm can effectively reflect the differences between Lonicera Japonica Flos and Lonicera Flos; similarity evaluation by screening large-peak integration reveals subtle differences in chemical composition.
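Fingerprint similarity scores of this kind are typically congruence (cosine) coefficients between vectors of common-peak areas; a minimal sketch, assuming that metric (the fingerprint vectors are hypothetical):

```python
import numpy as np

def fingerprint_similarity(a, b):
    """Cosine similarity between two chromatographic fingerprints
    (vectors of common-peak areas): 1.0 for identical relative
    compositions, lower for diverging ones."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# hypothetical fingerprints: same composition at double the concentration
s = fingerprint_similarity([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])
```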

11.
J Chromatogr A ; 1368: 107-15, 2014 Nov 14.
Article in English | MEDLINE | ID: mdl-25441346

ABSTRACT

Comprehensive two-dimensional liquid chromatography (LC × LC) is a powerful technique for the separation of complex mixtures. Most studies using LC × LC have focused on qualitative efforts, such as increasing peak capacity. The present study examined the use of LC × LC-UV/vis for the separation and quantitation of polycyclic aromatic hydrocarbons (PAHs). More specifically, it evaluated the impact of different peak integration approaches on the quantitative performance of the LC × LC method. For well-resolved three-dimensional peaks, parameters such as baseline definition, peak base shape, and peak width determination did not have a significant impact on accuracy and precision. For less-resolved peaks, a dropped baseline and the summation of all slices in the peak improved the accuracy and precision of the integration methods. The computational approaches to three-dimensional peak integration are provided, including the fully descriptive, select-slice, and summed-heights integration methods, each with its own strengths and weaknesses. Overall, the integration methods presented quantify each of the PAHs within acceptable precision and accuracy ranges and have performance comparable to that of one-dimensional liquid chromatography.


Subject(s)
Chromatography, High Pressure Liquid/methods , Polycyclic Aromatic Hydrocarbons/isolation & purification , Ions/chemistry , Polycyclic Aromatic Hydrocarbons/chemistry
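The "summation of all slices" integration mentioned above can be sketched for a peak sampled on a regular two-dimensional grid: integrate each second-dimension slice, then sum the slice areas over the first dimension (the data and sampling intervals below are hypothetical):

```python
import numpy as np

def summed_slices_volume(peak, d1, d2):
    """Volume of a 2-D chromatographic peak as the sum of all its
    second-dimension slices: the trapezoidal area of each slice (sample
    spacing d2) weighted by the first-dimension sampling interval d1.

    peak: (n_slices, n_points_per_slice) intensity array."""
    slice_areas = np.sum(0.5 * (peak[:, 1:] + peak[:, :-1]) * d2, axis=1)
    return float(np.sum(slice_areas) * d1)

# flat hypothetical peak: 3 slices of 5 points, d1 = 1.0 min, d2 = 0.5 s
volume = summed_slices_volume(np.ones((3, 5)), 1.0, 0.5)
```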
12.
J Chromatogr A ; 1364: 140-50, 2014 Oct 17.
Article in English | MEDLINE | ID: mdl-25234499

ABSTRACT

Different automatic peak integration methods have been reviewed and compared for their ability to accurately determine the variance of the very narrow, fast-eluting peaks encountered when measuring the instrument band broadening of today's low-dispersion liquid chromatography instruments. Using fully maximized injection concentrations to work at the highest possible signal-to-noise ratios (SNRs), the best results were obtained with the so-called variance profile analysis method. This is an extension (supplemented with a user-independent read-out algorithm) of a recently proposed method which calculates the peak variance for every possible value of the peak end time, providing a curve containing all the possible variance values that theoretically levels off to the best possible estimate of the true variance. Despite the use of maximal injection concentrations (leading to SNRs over 10,000), the peak variance errors were of the order of 10-20%, depending mostly on the peak tail characteristics. The accuracy could, however, be significantly increased (to an error level below 0.5-2%) by averaging over 10-15 subsequent measurements, or by first adding the peak profiles of 10-15 subsequent runs and then analyzing the summed peak. There also appears to be an optimal intermediate detector frequency, with higher frequencies suffering from a poorer signal-to-noise ratio and lower frequencies from a limited number of data points. When the SNR drops below 1000, an accurate determination of the true variance of extra-column peaks of modern instruments no longer seems possible.


Subject(s)
Chromatography, Liquid/instrumentation , Analysis of Variance , Signal-To-Noise Ratio
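The variance profile analysis described above recomputes the peak variance for every candidate peak end time and reads off the plateau; a discrete-sum sketch (uniform sampling assumed, the test peak is hypothetical):

```python
import numpy as np

def variance_profile(t, y):
    """Second central moment of the peak recomputed for every possible
    peak end time -- the 'variance profile' whose plateau estimates the
    true peak variance. Assumes uniform sampling of t."""
    profile = []
    for end in range(2, len(t) + 1):
        tt, yy = t[:end], y[:end]
        m0 = np.sum(yy)
        m1 = np.sum(tt * yy) / m0
        profile.append(np.sum((tt - m1) ** 2 * yy) / m0)
    return np.array(profile)

# hypothetical Gaussian peak with unit variance, centred at t = 10
t = np.arange(0.0, 20.0, 0.02)
profile = variance_profile(t, np.exp(-(t - 10.0) ** 2 / 2.0))
```

Truncating the integration too early underestimates the variance; only once the end time clears the tail does the profile flatten at the true value, which is what the plateau read-out exploits.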