Results 1 - 20 of 37
1.
Nat Commun ; 13(1): 124, 2022 01 10.
Article in English | MEDLINE | ID: mdl-35013261

ABSTRACT

Pancreatic cancer has the worst prognosis among all cancers. Cancer screening based on body fluids may improve the survival prognosis of patients, who are often diagnosed too late, at an incurable stage. Several studies report the dysregulation of lipid metabolism in tumor cells, suggesting that changes in the blood lipidome may accompany tumor growth. Here we show that comprehensive mass spectrometric determination of a wide range of serum lipids reveals statistically significant differences between pancreatic cancer patients and healthy controls, as visualized by multivariate data analysis. Three phases of biomarker discovery research (discovery, qualification, and verification) are applied to 830 samples in total, revealing the dysregulation of several very long chain sphingomyelins, ceramides, and (lyso)phosphatidylcholines. The sensitivity and specificity for diagnosing pancreatic cancer are over 90%, which outperforms CA 19-9, especially at an early stage, and is comparable to established diagnostic imaging methods. Furthermore, selected lipid species show potential as prognostic biomarkers.
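
To make the multivariate evaluation step concrete, the sketch below (not the authors' pipeline; cohort sizes, lipid features, and the logistic-regression classifier are all assumed) shows how sensitivity and specificity of a serum lipid panel could be estimated with cross-validation:

```python
# Minimal sketch (not the authors' pipeline): evaluating a multivariate lipid
# panel for sensitivity and specificity with cross-validation.
# The feature matrix X (lipid levels) and labels y are simulated here.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n_cases, n_controls, n_lipids = 60, 60, 20           # hypothetical cohort sizes
X_cases = rng.normal(loc=0.8, scale=1.0, size=(n_cases, n_lipids))
X_controls = rng.normal(loc=0.0, scale=1.0, size=(n_controls, n_lipids))
X = np.vstack([X_cases, X_controls])
y = np.array([1] * n_cases + [0] * n_controls)        # 1 = cancer, 0 = control

# Cross-validated predictions avoid optimistic (resubstitution) estimates.
y_pred = cross_val_predict(LogisticRegression(max_iter=1000), X, y, cv=5)

tp = np.sum((y_pred == 1) & (y == 1))
fn = np.sum((y_pred == 0) & (y == 1))
tn = np.sum((y_pred == 0) & (y == 0))
fp = np.sum((y_pred == 1) & (y == 0))
print(f"sensitivity = {tp / (tp + fn):.2f}, specificity = {tn / (tn + fp):.2f}")
```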


Subjects
Biomarkers, Tumor/blood; Ceramides/blood; Lipid Metabolism/genetics; Lysophosphatidylcholines/blood; Pancreatic Neoplasms/diagnosis; Sphingomyelins/blood; Biomarkers, Tumor/genetics; CA-19-9 Antigen/blood; Case-Control Studies; Female; Humans; Lipidomics/methods; Male; Multivariate Analysis; Pancreatic Neoplasms/blood; Pancreatic Neoplasms/mortality; Pancreatic Neoplasms/pathology; Proportional Hazards Models; Sensitivity and Specificity; Spectrometry, Mass, Matrix-Assisted Laser Desorption-Ionization; Pancreatic Neoplasms
2.
Anal Chem ; 89(14): 7675-7683, 2017 07 18.
Article in English | MEDLINE | ID: mdl-28643516

ABSTRACT

In this work, a new strategy for the chemometric analysis of two-dimensional liquid chromatography-high-resolution mass spectrometry (LC × LC-HRMS) data is proposed. The approach consists of a preliminary compression step along the mass spectrometry (MS) spectral dimension, based on the selection of regions of interest (ROI), followed by further data compression along the chromatographic dimension using wavelet transforms. In a second step, the multivariate curve resolution-alternating least squares (MCR-ALS) method is applied to the compressed data sets obtained in the simultaneous analysis of multiple LC × LC-HRMS chromatographic runs from multiple samples. The feasibility of the proposed approach is demonstrated by its application to a large experimental data set from an untargeted LC × LC-HRMS study of the effects of different environmental conditions (watering and harvesting time) on the metabolism of multiple rice samples. An untargeted chromatographic setup coupling two different liquid chromatography (LC) columns [hydrophilic interaction liquid chromatography (HILIC) and reversed-phase liquid chromatography (RPLC)] to an HRMS detector was developed and applied to analyze the metabolites extracted from rice samples under the different experimental conditions. In the metabolomics study used as an example in this work, a total of 154 metabolites from 15 different families were properly resolved after application of MCR-ALS, and 139 of these metabolites could be identified from their HRMS spectra. Statistical analysis of their concentration changes showed that both the watering and harvest-time factors had significant effects on rice metabolism. A biochemical interpretation of the effects of these factors on the concentration changes of the detected metabolites in the investigated rice samples is also attempted.
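
The wavelet-compression step along the chromatographic dimension can be illustrated with PyWavelets on a simulated chromatogram; this is a minimal sketch only, and the ROI selection and MCR-ALS stages of the published strategy are not reproduced:

```python
# Illustrative lossy compression of a chromatographic trace with a discrete
# wavelet transform (PyWavelets). The chromatogram below is simulated.
import numpy as np
import pywt

t = np.linspace(0, 60, 6000)                        # retention time axis (a.u.)
rng = np.random.default_rng(1)
signal = (np.exp(-0.5 * ((t - 20) / 0.3) ** 2)      # two Gaussian peaks + noise
          + 0.6 * np.exp(-0.5 * ((t - 35) / 0.5) ** 2)
          + rng.normal(0, 0.01, t.size))

coeffs = pywt.wavedec(signal, "db4", level=6)       # multilevel decomposition
# Keep only the approximation and coarsest detail coefficients (lossy compression).
compressed = [coeffs[0], coeffs[1]] + [np.zeros_like(c) for c in coeffs[2:]]
reconstructed = pywt.waverec(compressed, "db4")[: t.size]

kept = coeffs[0].size + coeffs[1].size
print(f"kept {kept} of {signal.size} coefficients "
      f"({100 * kept / signal.size:.1f}%), "
      f"max reconstruction error = {np.abs(signal - reconstructed).max():.3f}")
```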


Subjects
Flavonoids/analysis; Glycosides/analysis; Oryza/chemistry; Plant Growth Regulators/analysis; Chromatography, Liquid; Flavonoids/metabolism; Glycosides/metabolism; Mass Spectrometry; Multivariate Analysis; Oryza/metabolism; Plant Growth Regulators/metabolism
3.
Electrophoresis ; 38(13-14): 1713-1723, 2017 07.
Article in English | MEDLINE | ID: mdl-28370326

ABSTRACT

In this work, we present a novel probabilistic peak detection algorithm based on a Bayesian framework for forensic DNA analysis. The proposed method aims at exhaustive use of the raw electropherogram data from a laser-induced fluorescence multi-capillary electrophoresis system. Because the raw data are informative down to the single data point, conventional threshold-based approaches discard relevant forensic information early in the data analysis pipeline. Our method assigns each data point a posterior probability reflecting its relevance with respect to the peak detection criteria. Low-intensity peaks generated by a truly present allele can thus retain evidential value, rather than being discarded outright and treated as a potential allele drop-out. This way of working uses the information available within each individual data point and thus avoids making early (binary) decisions in the data analysis that can lead to error propagation. The proposed method was tested and compared with the application of a set threshold, as is current practice in forensic STR DNA profiling. The new method was found to yield a significant improvement in the number of alleles identified, regardless of peak height and deviation from Gaussian shape.
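
A minimal sketch of the per-point posterior idea, assuming simple Gaussian intensity models for "noise" and "peak" (the published algorithm uses a more elaborate model of the electropherogram):

```python
# Conceptual sketch: posterior probability that a single data point belongs to
# a peak rather than baseline noise, via Bayes' rule with assumed Gaussian
# likelihoods. All parameter values are invented for illustration.
import numpy as np

def peak_posterior(y, noise_sigma=10.0, peak_mean=80.0, peak_sigma=40.0,
                   prior_peak=0.05):
    """Posterior P(peak | intensity y) under two assumed intensity models."""
    lik_noise = np.exp(-0.5 * (y / noise_sigma) ** 2) / (noise_sigma * np.sqrt(2 * np.pi))
    lik_peak = np.exp(-0.5 * ((y - peak_mean) / peak_sigma) ** 2) / (peak_sigma * np.sqrt(2 * np.pi))
    num = prior_peak * lik_peak
    return num / (num + (1 - prior_peak) * lik_noise)

for intensity in (5.0, 25.0, 60.0, 150.0):   # hypothetical signal intensities
    print(f"intensity {intensity:6.1f} -> P(peak) = {peak_posterior(intensity):.3f}")
```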


Subjects
DNA Fingerprinting/methods; Electrophoresis, Capillary/methods; Forensic Genetics/methods; Microsatellite Repeats/genetics; Algorithms; Bayes Theorem; Humans; Models, Statistical
4.
Talanta ; 160: 624-635, 2016 Nov 01.
Article in English | MEDLINE | ID: mdl-27591659

ABSTRACT

Comprehensive two-dimensional liquid chromatography coupled to mass spectrometry (LC×LC-MS) is a very powerful analytical tool for the high-throughput resolution of highly complex natural samples. However, even with this approach some analytes of interest may remain unresolved. For instance, structural isomers of triacylglycerols (TAGs) in oil samples are extremely difficult to separate chromatographically because of their very similar structures and chemical properties, and traditional approaches based on current vendor chromatographic software cannot distinguish these isomers on the basis of their mass spectral features. In this work, a chemometric approach is proposed to solve this problem. First, the structure of experimental LC×LC-MS data is discussed, and results achieved by different methods are compared with respect to fulfilment of the trilinear model. Then, the step-by-step resolution and identification of strongly coeluting TAG structural isomers in corn oil samples are described. In conclusion, the separation power of two-dimensional chromatography can be significantly improved when it is combined with the multivariate curve resolution method.
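
The core of a multivariate curve resolution step can be sketched as an alternating-least-squares loop with non-negativity constraints on simulated bilinear data; the actual study applied MCR to LC×LC-MS measurements with dedicated software, so the code below is only illustrative:

```python
# Bare-bones MCR-ALS iteration (non-negativity enforced by clipping) applied to
# a simulated bilinear data matrix of two co-eluting components.
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0, 1, 200)
C_true = np.column_stack([np.exp(-0.5 * ((t - 0.45) / 0.05) ** 2),
                          np.exp(-0.5 * ((t - 0.55) / 0.05) ** 2)])  # co-eluting profiles
S_true = rng.random((2, 60))                       # two synthetic "mass spectra"
D = C_true @ S_true + rng.normal(0, 1e-3, (200, 60))

C = np.clip(C_true + rng.normal(0, 0.05, C_true.shape), 0, None)  # rough initial guess
for _ in range(100):
    # Alternate: solve for spectra given concentrations, then vice versa.
    S = np.clip(np.linalg.lstsq(C, D, rcond=None)[0], 0, None)
    C = np.clip(np.linalg.lstsq(S.T, D.T, rcond=None)[0].T, 0, None)

lack_of_fit = np.linalg.norm(D - C @ S) / np.linalg.norm(D)
print(f"relative residual after ALS: {lack_of_fit:.4f}")
```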


Subjects
Corn Oil/chemistry; Triglycerides/analysis; Chromatography, Liquid; Isomerism; Mass Spectrometry; Triglycerides/chemistry
5.
Anal Chem ; 88(19): 9843-9849, 2016 10 04.
Article in English | MEDLINE | ID: mdl-27584087

ABSTRACT

A novel method for compound identification in liquid chromatography-high-resolution mass spectrometry (LC-HRMS) is proposed. The method, based on Bayesian statistics, accommodates all uncertainties involved, from the instrumentation up to the data analysis, in a single model yielding the probability that the compound of interest is present or absent in the sample. This approach differs from classical methods in two ways. First, it is probabilistic rather than deterministic; hence, it computes the probability that the compound is (or is not) present in a sample. Second, it addresses the hypothesis "the compound is present", as opposed to the question "the compound feature is present". This second difference implies a shift in the way data analysis is tackled, since the probability of interfering compounds (i.e., isomers and isobaric compounds) is also taken into account.

6.
Forensic Sci Int ; 267: 183-195, 2016 Oct.
Article in English | MEDLINE | ID: mdl-27614238

ABSTRACT

The application of GC×GC-FID and GC×GC-MS for the chemical analysis and profiling of neat white spirit is explored, and the benefit of the enhanced peak capacity offered by comprehensive two-dimensional gas chromatography is demonstrated. An extensive sampling exercise was conducted throughout The Netherlands, and the production and logistics of white spirits, in terms of bottling and distribution, were studied. An exploratory approach based on target-peak tables and principal component analysis was employed to study brand-to-brand differences and production variations over time. Despite the complex chemical composition of white spirit samples, this study shows that the chemical variation during production is actually quite limited; hence, care has to be taken with chemical comparisons for forensic purposes. Although some clustering was observed at brand level, the large-scale production process leads to a very consistent composition across stores and brands. However, because of the broad specifications of this commodity product, substantial chemical variation was found over time. This temporal discrimination could be of forensic value when considering white-spirit supplies in individual households.
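
The exploratory step (principal component analysis of a target-peak table) can be sketched as follows; the peak table, the two brands, and their separation are simulated, not the study's data:

```python
# Sketch of an exploratory PCA of a target-peak table
# (rows = white-spirit samples, columns = target peak areas). Simulated data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
brand_a = rng.normal(1.0, 0.05, size=(10, 25))      # 10 samples, 25 target peaks
brand_b = rng.normal(1.1, 0.05, size=(10, 25))
peak_table = np.vstack([brand_a, brand_b])

scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(peak_table))
print("PC1 range, brand A:", scores[:10, 0].min().round(2), "to", scores[:10, 0].max().round(2))
print("PC1 range, brand B:", scores[10:, 0].min().round(2), "to", scores[10:, 0].max().round(2))
```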

7.
Anal Chim Acta ; 940: 46-55, 2016 Oct 12.
Article in English | MEDLINE | ID: mdl-27662758

ABSTRACT

A novel peak-tracking method based on Bayesian statistics is proposed. The method assigns (i.e., tracks) peaks between two GC×GC-FID data sets of the same sample acquired under different conditions. In contrast to traditional (deterministic) peak-tracking algorithms, in which the assignment problem is solved with a unique solution, the proposed algorithm is probabilistic. In other words, we quantify the uncertainty of matching two peaks without excluding other possible candidates, ranking the possible peak assignments by their posterior probability. This represents a significant advantage over existing deterministic methods. Two algorithms are presented: the blind peak tracking algorithm (BPTA) and the peak table matching algorithm (PTMA). The PTMA method correctly assigned 78% of a selection of peaks in a GC×GC-FID chromatogram of a diesel sample and proved to be extremely fast.

8.
Anal Chem ; 88(15): 7705-14, 2016 08 02.
Article in English | MEDLINE | ID: mdl-27391247

ABSTRACT

In this work, we introduce an automated, efficient, and elegant model to combine all pieces of evidence (e.g., expected retention times, peak shapes, isotope distributions, fragment-to-parent ratio) obtained from liquid chromatography-tandem mass spectrometry (LC-MS/MS) data for screening purposes. Combining all these pieces of evidence requires a careful assessment of the uncertainties in the analytical system as well as of all possible outcomes. To date, the majority of existing algorithms are highly dependent on user input parameters, and the screening process is tackled as a deterministic problem. Here we present a Bayesian framework for combining all these pieces of evidence. Contrary to conventional algorithms, the information is treated probabilistically, and a final probability for the presence/absence of a compound feature is computed. Additionally, all necessary parameters except the chromatographic band broadening are learned from the data in a training and learning phase of the algorithm, avoiding the introduction of a large number of user-defined parameters. The proposed method was validated with a large data set and showed improved sensitivity and specificity in comparison with a threshold-based commercial software package.
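
The combination of independent pieces of evidence can be sketched with Bayes' rule in log space; the likelihood ratios and prior below are invented numbers, whereas the published model learns its parameters from training data rather than taking them as fixed:

```python
# Hedged sketch: if each piece of evidence (retention time, isotope pattern,
# fragment ratio) yields a likelihood ratio for "compound present" versus
# "absent", independent evidence combines by multiplication (addition in log
# space). All numbers are invented for illustration.
import math

def posterior_present(likelihood_ratios, prior_present=0.01):
    """Combine per-evidence likelihood ratios into a posterior probability."""
    log_lr = sum(math.log(lr) for lr in likelihood_ratios)
    prior_odds = prior_present / (1 - prior_present)
    post_odds = prior_odds * math.exp(log_lr)
    return post_odds / (1 + post_odds)

evidence = {"retention_time": 8.0, "isotope_pattern": 15.0, "fragment_ratio": 4.0}
print(f"P(present | evidence) = {posterior_present(evidence.values()):.3f}")
```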

9.
J Chromatogr A ; 1450: 29-37, 2016 Jun 10.
Article in English | MEDLINE | ID: mdl-27178151

ABSTRACT

The challenge of fully optimizing LC×LC separations is formidable. Yet it is essential to address this challenge if sophisticated LC×LC instruments are to be used to their full potential in an efficient manner. Currently, lengthy method development is a major obstacle to the proliferation of the technique, especially in industry. A program was developed for the rigorous optimization of LC×LC separations using gradient elution in both dimensions. The program establishes two linear retention models (one for each dimension) based on just two LC×LC experiments and predicts LC×LC chromatograms using a simple Van Deemter model to generalize band broadening. Various objectives (analysis time, resolution, orthogonality) can be implemented in a Pareto-optimization framework to establish the optimal conditions. The program was successfully applied to the separation of a complex mixture of 54 aged, authentic synthetic dyestuffs, separated by ion-exchange chromatography and ion-pair chromatography. The main limitation encountered was the retention-time stability in the first (ion-exchange) dimension. Using the PIOTR program, LC×LC method development can be greatly accelerated, typically from a few months to a few days.
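
A minimal sketch of the Pareto-optimization idea, assuming two objectives (analysis time to be minimized, predicted peak capacity to be maximized) and randomly generated candidate conditions; the retention modelling of the PIOTR program itself is not reproduced:

```python
# Toy Pareto-front filter over candidate method conditions. Candidates are kept
# if no other candidate is at least as good in both objectives and strictly
# better in one. All numbers are simulated.
import numpy as np

rng = np.random.default_rng(4)
time_min = rng.uniform(20, 120, 200)                    # candidate analysis times (min)
peak_capacity = 3000 * (1 - np.exp(-time_min / 60)) + rng.normal(0, 100, 200)

def pareto_front(times, capacities):
    """Boolean mask of candidates not strictly dominated by any other candidate."""
    n = times.size
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        better_or_equal = (times <= times[i]) & (capacities >= capacities[i])
        strictly_better = (times < times[i]) | (capacities > capacities[i])
        better_or_equal[i] = False
        keep[i] = not np.any(better_or_equal & strictly_better)
    return keep

front = pareto_front(time_min, peak_capacity)
print(f"{front.sum()} Pareto-optimal candidates out of {front.size}")
```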


Subjects
Chromatography, Ion Exchange/methods; Chromatography, Ion Exchange/standards; Software; Coloring Agents/isolation & purification; Time Factors
10.
Anal Chem ; 88(4): 2096-104, 2016 Feb 16.
Article in English | MEDLINE | ID: mdl-26814559

ABSTRACT

A new method for the comparison of GC×GC-MS data is proposed. The method aims at spotting the differences between two GC×GC-MS injections, in order to highlight compositional differences between two samples or to flag compounds present in only one of them. It is based on Jensen-Shannon divergence (JS) analysis combined with Bayesian hypothesis testing. To make the method robust against misalignment in both time dimensions, a moving-window approach is proposed. Using a Bayesian framework, we provide a probabilistic visual map (i.e., a log-likelihood-ratio map) of the significant differences between two data sets, thereby avoiding a deterministic (i.e., "yes" or "no") decision. We show this approach to be a versatile tool in GC×GC-MS data analysis, especially when the differences are embedded inside a complex matrix, and we tested it by spotting contamination in diesel samples.
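
A toy version of the comparison idea: Jensen-Shannon divergence between two simulated signals computed in a moving window, so that small retention-time misalignments do not dominate; the Bayesian hypothesis-testing layer of the published method is omitted:

```python
# Moving-window Jensen-Shannon divergence between two simulated signals; the
# window containing the extra (contaminant) peak stands out.
import numpy as np

def js_divergence(p, q, eps=1e-12):
    p = p / (p.sum() + eps)
    q = q / (q.sum() + eps)
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log((a + eps) / (b + eps)))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

rng = np.random.default_rng(5)
x = np.linspace(0, 100, 2000)
sample_a = np.exp(-0.5 * ((x - 40) / 0.8) ** 2) + rng.normal(0, 0.01, x.size)
sample_b = sample_a + 0.5 * np.exp(-0.5 * ((x - 70) / 0.8) ** 2)   # extra compound

window, step = 100, 50
starts = list(range(0, x.size - window, step))
divergences = [js_divergence(np.abs(sample_a[s:s + window]),
                             np.abs(sample_b[s:s + window])) for s in starts]
best = starts[int(np.argmax(divergences))]
print(f"largest divergence around x = {x[best + window // 2]:.1f}")
```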

11.
J Chromatogr A ; 1431: 122-130, 2016 Jan 29.
Article in English | MEDLINE | ID: mdl-26774434

ABSTRACT

Accurate analysis of chromatographic data often requires the removal of baseline drift. A frequently employed strategy determines asymmetric weights in order to fit a baseline model by regression. Unfortunately, chromatograms characterized by very high peak saturation pose a significant challenge to such algorithms, and a low signal-to-noise ratio (i.e., s/n < 40) also adversely affects accurate baseline correction by asymmetrically weighted regression. We present a baseline estimation method that leverages a probabilistic peak detection algorithm. A posterior probability of being affected by a peak is computed for each point in the chromatogram, leading to a set of weights that allow non-iterative calculation of a baseline estimate. For extremely saturated chromatograms, this peak-weighted (PW) method demonstrates a notable improvement over the other methods examined. However, for chromatograms characterized by low noise and well-resolved peaks, the asymmetric least squares (ALS) and the more sophisticated mixture model (MM) approaches achieve superior results in significantly less time. We evaluate the performance of these three baseline correction methods over a range of chromatographic conditions to demonstrate the cases in which each method is most appropriate.
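
For reference, the asymmetric least squares (ALS) baseline mentioned above is commonly implemented as an Eilers-style penalized smoother with asymmetric re-weighting; a compact sketch (with illustrative, untuned lam and p parameters) is:

```python
# Asymmetric least squares baseline estimation (Eilers-style): a smoothness
# penalty plus weights that favour points below the current baseline estimate.
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

def als_baseline(y, lam=1e6, p=0.01, n_iter=10):
    L = y.size
    D = sparse.diags([1, -2, 1], [0, -1, -2], shape=(L, L - 2))  # 2nd differences
    w = np.ones(L)
    for _ in range(n_iter):
        W = sparse.spdiags(w, 0, L, L)
        z = spsolve(W + lam * D @ D.T, w * y)
        w = p * (y > z) + (1 - p) * (y < z)   # asymmetric re-weighting
    return z

x = np.linspace(0, 50, 1000)
signal = np.exp(-0.5 * ((x - 25) / 0.5) ** 2) + 0.02 * x      # one peak on a linear drift
baseline = als_baseline(signal)
print(f"estimated drift at end of run: {baseline[-1]:.2f} (true value 1.00)")
```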


Subjects
Algorithms; Chromatography/methods; Models, Theoretical; Least-Squares Analysis; Signal-To-Noise Ratio
12.
Anal Chem ; 88(4): 2421-30, 2016 Feb 16.
Article in English | MEDLINE | ID: mdl-26768508

ABSTRACT

As part of the forensic toxicological investigation of cases involving the unexpected death of an individual, targeted or untargeted xenobiotic screening of post-mortem samples is normally conducted, typically using liquid chromatography (LC) coupled to high-resolution mass spectrometry (MS). For data analysis, almost all commonly applied algorithms are threshold-based (frequentist): they examine the value of a certain measurement (e.g., peak height) to decide whether a certain xenobiotic of interest (XOI) is present or absent, yielding a binary output. Frequentist methods pose a problem when several sources of information [e.g., the shape of the chromatographic peak, the isotopic distribution, the estimated mass-to-charge ratio (m/z), the adduct, etc.] need to be combined, as they require arbitrary decisions at sub-steps of the data analysis. We hereby introduce a novel Bayesian probabilistic algorithm for toxicological screening that tackles the problem with a different strategy: it does not aim to reach a final conclusion regarding the presence of the XOI, but estimates its probability. The algorithm effectively and efficiently combines all available pieces of evidence from the chromatogram and calculates the posterior probability of the presence/absence of XOI features. The model can thus accommodate additional information by updating the probability as extra evidence is acquired, and the final probabilistic result assists the end user in making a decision with respect to the presence or absence of the xenobiotic. The Bayesian method was validated and found to perform better (in terms of false positives and false negatives) than the vendor-supplied software package.


Subjects
Algorithms; Bayes Theorem; Forensic Toxicology; Xenobiotics/analysis; Chromatography, High Pressure Liquid; Humans; Mass Spectrometry; Software
13.
Lab Chip ; 15(23): 4415-22, 2015 Dec 07.
Article in English | MEDLINE | ID: mdl-26495444

ABSTRACT

To successfully tackle truly complex separation problems, such as those arising in proteomics research, ultra-efficient and fast separation technology is required. In spatial three-dimensional chromatography, components are separated in the space domain, with each peak characterized by its coordinates in a three-dimensional separation body. Spatial three-dimensional LC (3D-LC) can offer unprecedented resolving power when orthogonal retention mechanisms are applied, since the total peak capacity is the product of the three individual peak capacities. Because the second- and third-dimension separations are developed in parallel, the analysis time is greatly reduced compared with a coupled-column multi-dimensional LC approach. This communication discusses the design aspects of a microfluidic chip for spatial 3D-LC. The use of physical barriers to confine the flow between the individual developments and flow control by means of second- and third-dimension flow distributors are discussed. Furthermore, the in situ synthesis of monolithic stationary phases is demonstrated. Finally, the potential performance of a spatial 3D-LC system is compared with that obtained with state-of-the-art 1D-LC and (coupled-column) 2D-LC approaches via Pareto optimization. The proposed microfluidic device for spatial 3D-LC, featuring 16 second-dimension channels and 256 third-dimension channels, can potentially yield a peak capacity of 8000 in a total analysis time of 10 minutes.


Subjects
Chromatography, Liquid/methods; Lab-On-A-Chip Devices; Proteomics/instrumentation; Time Factors
14.
Anal Chem ; 87(14): 7345-55, 2015 Jul 21.
Article in English | MEDLINE | ID: mdl-26095981

ABSTRACT

We introduce a novel Bayesian probabilistic peak detection algorithm for liquid chromatography-mass spectrometry (LC-MS). The final probabilistic result allows the user to decide which points in a chromatogram are affected by a chromatographic peak and which are affected only by noise. The use of probabilities contrasts with the traditional approach, in which a binary answer is given based on a threshold. With the Bayesian peak detection presented here, the probability values can instead be propagated into subsequent preprocessing steps, increasing (or decreasing) the weight of chromatographic regions in the final results. The present work uses the statistical theory of component overlap of Davis and Giddings (Davis, J. M.; Giddings, J. C. Anal. Chem. 1983, 55, 418-424) as the prior probability in the Bayesian formulation. The algorithm was tested on LC-MS Orbitrap data and successfully distinguished chemical noise from actual peaks without any data preprocessing.

15.
Forensic Sci Int ; 252: 177-86, 2015 Jul.
Article in English | MEDLINE | ID: mdl-26005858

ABSTRACT

Forensic chemical analysis of fire debris addresses the question of whether ignitable liquid residue is present in a sample and, if so, what type. Evidence evaluation regarding this question is complicated by interference from pyrolysis products of the substrate materials present in a fire. A method is developed to derive a set of class-conditional features for the evaluation of such complex samples. The use of a forensic reference collection allows characterization of the variation in complex mixtures of substrate materials and ignitable liquids even when the dominant feature is not specific to an ignitable liquid. Making use of a novel method for data imputation under complex mixing conditions, a distribution is modeled for the variation between pairs of samples containing similar ignitable liquid residues. Examining the covariance of variables within the different classes allows different weights to be placed on features more important in discerning the presence of a particular ignitable liquid residue. Performance of the method is evaluated using a database of total ion spectrum (TIS) measurements of ignitable liquid and fire debris samples. These measurements include 119 nominal masses measured by GC-MS and averaged across a chromatographic profile. Ignitable liquids are labeled using the American Society for Testing and Materials (ASTM) E1618 standard class definitions. Statistical analysis is performed in the class-conditional feature space wherein new forensic traces are represented based on their likeness to known samples contained in a forensic reference collection. The demonstrated method uses forensic reference data as the basis of probabilistic statements concerning the likelihood of the obtained analytical results given the presence of ignitable liquid residue of each of the ASTM classes (including a substrate only class). When prior probabilities of these classes can be assumed, these likelihoods can be connected to class probabilities. In order to compare the performance of this method to previous work, a uniform prior was assumed, resulting in an 81% accuracy for an independent test of 129 real burn samples.

16.
Anal Bioanal Chem ; 407(13): 3817-29, 2015 May.
Article in English | MEDLINE | ID: mdl-25801383

ABSTRACT

Post-polymerization photografting is a versatile tool to alter the surface chemistry of organic-based monoliths so as to obtain desired stationary-phase properties. In this study, 2-acrylamido-2-methyl-1-propanesulfonic acid was grafted onto a hydrophobic poly(butyl methacrylate-co-ethylene glycol dimethacrylate) monolith to create a strong cation-exchange stationary phase. Both single-step and two-step photografting were addressed, and the effects of the grafting conditions were assessed. An experimental design was applied to optimize three key parameters of the two-step photografting chemistry, i.e. the grafting time of the initiator, the monomer concentration, and the monomer irradiation time. The photografted columns were implemented in a comprehensive two-dimensional liquid chromatography (tLC × tLC) workflow and applied to the separation of intact proteins and peptides. A baseline separation of 11 intact proteins was obtained within 20 min by implementing a gradient across a limited RP composition window in the second dimension. tLC × tLC with UV detection was used for the separation of a cytochrome c digest, a bovine serum insulin digest, and a digest of a complex protein mixture. A semi-quantitative estimation of the occupation of the separation space (the orthogonality) of the tLC × tLC system yielded 75%. The tLC × tLC setup was hyphenated to a high-resolution Fourier-transform ion cyclotron resonance mass spectrometer to identify the bovine serum insulin tryptic peptides and to demonstrate compatibility with MS analysis.


Subjects
Chromatography, Ion Exchange/methods; Mass Spectrometry/methods; Methacrylates/chemistry; Proteins/chemistry; Proteins/isolation & purification; Photochemistry; Polymers/chemistry; Polymers/radiation effects
17.
Forensic Sci Int ; 248: 101-12, 2015 Mar.
Article in English | MEDLINE | ID: mdl-25602642

ABSTRACT

Ammonium nitrate (AN) is frequently encountered in explosives in forensic casework. It is widely available as fertilizer and easy to implement in explosive devices, for example by mixing it with a fuel. Forensic profiling methods to determine whether material found at a crime scene and material retrieved from a suspect arise from the same source are becoming increasingly important. In this work, we explored the possibility of using isotopic and elemental profiling to discriminate between different batches of AN. Variations within a production batch, between different batches from the same manufacturer, and between batches from different manufacturers were studied using a total of 103 samples from 19 fertilizer manufacturers. Isotope-ratio mass spectrometry (IRMS) was used to analyze the AN samples for their ¹⁵N and ¹⁸O isotopic composition, and their trace-elemental composition was studied using inductively coupled plasma-mass spectrometry (ICP-MS). All samples were analyzed for the occurrence of 66 elements, of which 32 were useful for differentiating AN samples; these include magnesium (Mg), calcium (Ca), iron (Fe), and strontium (Sr). Samples with a similar elemental profile may be differentiated based on their isotopic composition. Linear discriminant analysis (LDA) was used to calculate likelihood ratios and demonstrated the power of combining elemental and isotopic profiling for discriminating between different sources of AN.
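
A two-class LDA sketch on simulated elemental profiles: with equal class priors, the posterior odds returned by the model equal a likelihood ratio, which is loosely the kind of quantity discussed above (the paper's full likelihood-ratio model is not reproduced, and all concentrations are invented):

```python
# LDA on simulated trace-element profiles of two AN batches; with equal priors
# the ratio of posterior probabilities equals the likelihood ratio.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(6)
# Columns stand for Mg, Ca, Fe, Sr concentrations (arbitrary units, invented).
batch_a = rng.normal([50, 500, 20, 5], [3, 30, 2, 0.5], size=(30, 4))
batch_b = rng.normal([55, 450, 25, 7], [3, 30, 2, 0.5], size=(30, 4))
X = np.vstack([batch_a, batch_b])
y = np.array([0] * 30 + [1] * 30)

lda = LinearDiscriminantAnalysis(priors=[0.5, 0.5]).fit(X, y)
questioned = rng.normal([50, 500, 20, 5], [3, 30, 2, 0.5], size=(1, 4))
p = lda.predict_proba(questioned)[0]
print(f"likelihood ratio (batch A vs batch B): {p[0] / p[1]:.1f}")
```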


Subjects
Explosive Agents/chemistry; Nitrates/chemistry; Discriminant Analysis; Forensic Sciences; Likelihood Functions; Mass Spectrometry/methods; Nitrogen Isotopes/analysis; Oxygen Isotopes/analysis; Spectrum Analysis/methods
18.
J Chromatogr A ; 1368: 190-8, 2014 Nov 14.
Article in English | MEDLINE | ID: mdl-25441353

ABSTRACT

In this paper we present a model relating experimental factors (column lengths, diameters, and thicknesses; modulation times; pressures; and temperature programs) to retention times. Unfortunately, an analytical solution for retention in temperature-programmed GC × GC is impossible, making numerical integration necessary. The resulting computational physical model of GC × GC is capable of predicting retention times in both dimensions with high accuracy. Once fitted (i.e., calibrated), the model is used to make predictions, which are always subject to error; the prediction can therefore be expressed as a probability distribution of retention times rather than as a single (most likely) value. One of the most common problems when fitting unknown parameters to experimental data is overfitting. In order to detect overfitting and assess the error, the K-fold cross-validation technique was applied. Another error-assessment technique proposed in this article is error propagation using Jacobians. This method estimates the accuracy of the model from the partial derivatives of the predicted retention time with respect to the fitted parameters (in this case, the entropy and enthalpy of each component) under a given set of conditions. By treating the predictions of the model as intervals rather than as precise values, the robustness of any optimization algorithm based on them can be increased considerably.
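
The Jacobian-based error propagation can be sketched as follows; the retention "model" is a deliberately simplified stand-in (the real model requires numerical integration of the GC × GC pressure and temperature program), and the fitted values and covariance are assumed:

```python
# Error propagation with a numerical Jacobian: the variance of a predicted
# retention time is approximated as J @ cov(params) @ J.T.
import numpy as np

def predicted_retention(params, temperature=400.0):
    """Toy isothermal retention model t_R(dH, dS); a stand-in, not the paper's model."""
    dH, dS = params
    k = np.exp(-dH / (8.314 * temperature) + dS / 8.314)   # retention factor
    return 60.0 * (1.0 + k)                                # assumed dead time of 60 s

params = np.array([-30000.0, -60.0])          # assumed fitted enthalpy (J/mol), entropy (J/mol/K)
cov = np.diag([500.0**2, 1.0**2])             # assumed covariance of the fit

eps = np.array([1.0, 1e-3])                   # step sizes for central differences
jac = np.array([(predicted_retention(params + np.eye(2)[i] * eps[i])
                 - predicted_retention(params - np.eye(2)[i] * eps[i])) / (2 * eps[i])
                for i in range(2)])

sigma_t = float(np.sqrt(jac @ cov @ jac))
print(f"t_R = {predicted_retention(params):.1f} s +/- {sigma_t:.1f} s (1 sigma)")
```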


Subjects
Chromatography, Gas/methods; Algorithms; Calibration; Chromatography, Gas/instrumentation; Pressure; Temperature; Thermodynamics
19.
Anal Chim Acta ; 799: 29-35, 2013 Oct 17.
Article in English | MEDLINE | ID: mdl-24091371

ABSTRACT

Mathematical deconvolution methods can separate co-eluting peaks in samples for which (chromatographic) separation fails. However, these methods often rely heavily on manual user input and interpretation, which is not only time-consuming but also error-prone; automation is needed if such methods are to be applied routinely. One major hurdle when automating deconvolution methods is the selection of the correct number of components used for building the model. We propose a new method for automatically determining the optimum number of components when applying multivariate curve resolution (MCR) to comprehensive two-dimensional gas chromatography-mass spectrometry (GC×GC-MS) data. It is based on a two-fold cross-validation scheme: the overall cross-validation error decreases as components are added and increases again once over-fitting of the data starts to occur, so the turning point indicates that the optimum number of components has been reached. Overall, the method is at least as good as, and sometimes superior to, inspection of the eigenvalues from singular-value decomposition; its strong point is that it can be fully automated, making it more efficient and less prone to subjective interpretation. The developed method was applied to two different-sized regions of a GC×GC-MS chromatogram. In both regions, the cross-validation scheme selected the correct number of components for applying MCR. The pure concentration and mass-spectral profiles obtained can then be used for identification and/or quantification of the compounds. While the method was developed for applying MCR to GC×GC-MS data, transferring it to other deconvolution methods and other analytical systems should require only minor modifications.
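
As a rough illustration of rank selection by cross-validation, the sketch below uses an element-wise hold-out with truncated-SVD imputation on simulated rank-3 data; this is not the paper's two-fold MCR scheme, only the general idea that the held-out error stops improving once over-fitting begins:

```python
# Rank selection by element-wise cross-validation: hold out random matrix
# elements, impute them with a truncated SVD of increasing rank, and look for
# the rank at which the held-out error is lowest.
import numpy as np

rng = np.random.default_rng(7)
D = rng.random((80, 3)) @ rng.random((3, 40)) + 0.02 * rng.standard_normal((80, 40))
mask = rng.random(D.shape) < 0.1                     # 10% of elements held out

def cv_rmse(D, mask, rank, n_iter=50):
    work = D.copy()
    work[mask] = D[~mask].mean()                     # initial imputation
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(work, full_matrices=False)
        approx = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        work[mask] = approx[mask]                    # re-impute held-out entries
    return float(np.sqrt(np.mean((approx[mask] - D[mask]) ** 2)))

errors = {rank: cv_rmse(D, mask, rank) for rank in range(1, 7)}
print({rank: round(err, 4) for rank, err in errors.items()})
print("selected number of components:", min(errors, key=errors.get))
```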

20.
PLoS One ; 8(10): e76263, 2013.
Article in English | MEDLINE | ID: mdl-24146846

ABSTRACT

Tuberculosis (TB) remains a major international health problem. Rapid differentiation of the Mycobacterium tuberculosis complex (MTB) from non-tuberculous mycobacteria (NTM) is critical for decisions regarding patient management and the choice of therapeutic regimen. We recently developed a 20-compound model to distinguish between MTB and NTM, based on thermally assisted hydrolysis and methylation gas chromatography-mass spectrometry and partial least squares discriminant analysis. Here we report the validation of this model with two independent sample sets: one consisting of 39 MTB and 17 NTM isolates from the Netherlands, the other comprising 103 isolates (91 MTB and 12 NTM) from Stellenbosch, Cape Town, South Africa. All the MTB strains in the 56 Dutch samples were correctly identified, and the model had a sensitivity of 100% and a specificity of 94%. For the South African samples the model had a sensitivity of 88% and a specificity of 100%. Based on our model, we have developed a new decision tree that allows the differentiation of MTB from NTM with 100% accuracy. Encouraged by these findings, we will proceed with the development of a simple, rapid, affordable, high-throughput test to identify MTB directly in sputum.


Subjects
Biomarkers/metabolism; Gas Chromatography-Mass Spectrometry/methods; Mycobacterium tuberculosis/isolation & purification; Nontuberculous Mycobacteria/isolation & purification; Algorithms; Discriminant Analysis; Humans; Hydrolysis; Least-Squares Analysis; Methylation; Netherlands; Reproducibility of Results; South Africa; Temperature