Results 1 - 2 of 2
1.
Anal Chim Acta; 726: 9-21, 2012 May 13.
Article in English | MEDLINE | ID: mdl-22541008

ABSTRACT

Comprehensive two-dimensional gas chromatography coupled to mass spectrometry (GC×GC-MS) is a powerful tool for analyzing complex samples. To apply the technique in studies such as biomarker discovery, in which large sets of complex samples must be analyzed, extensive preprocessing is needed to align the data obtained from multiple injections (analyses). We developed new alignment and clustering algorithms for this type of data. New in the current procedures is the consistent treatment of the phenomenon referred to as wrap-around. The data analysis problems associated with this phenomenon are solved by treating the 2D display as the surface of a three-dimensional cylinder. Based on this transformation, we developed a new similarity metric for features as a function of both the cylindrical distance (reflecting similarity in chromatographic behavior) and the mass spectral correlation (reflecting similarity in chemical structure). The concepts are used in warping and clustering, and include protection against greedy warping. The methods were applied, as an example, to the analysis of 11 replicates of a human urine sample concentrated by solid-phase extraction. It is shown that the alignment is well protected against greedy warping, which is important with respect to analytical qualities such as robustness and repeatability. It is also demonstrated that chemically similar features are clustered together. The paper is organized as follows. First, a brief introduction addresses the background of the GC×GC-MS data structure, followed by a theoretical section with a conceptual description of the procedures and details of the algorithms. Finally, an example in the experimental section illustrates the application of the procedures.
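
The similarity metric described in this abstract combines two ingredients: a distance in retention-time space computed on a cylinder, so that features affected by wrap-around in the second dimension stay close to their true neighbors, and a mass spectral correlation. The Python sketch below illustrates the idea only; the function names, the Gaussian distance weighting, and the sigma scale parameter are assumptions for illustration, not the authors' exact formulation.

import numpy as np

def cylindrical_distance(rt1_a, rt2_a, rt1_b, rt2_b, modulation_period):
    # Distance between two GC x GC features. The first-dimension retention
    # time difference is an ordinary linear distance; the second-dimension
    # difference is the shortest arc around the cylinder, so a feature eluting
    # just after the modulation boundary is close to one eluting just before it.
    d1 = rt1_a - rt1_b
    raw = abs(rt2_a - rt2_b)
    d2 = min(raw, modulation_period - raw)
    return np.hypot(d1, d2)

def spectral_correlation(spec_a, spec_b):
    # Cosine similarity between two mass spectra given as intensity vectors
    # on a common m/z grid (1 = identical shape, 0 = orthogonal).
    spec_a = np.asarray(spec_a, dtype=float)
    spec_b = np.asarray(spec_b, dtype=float)
    denom = np.linalg.norm(spec_a) * np.linalg.norm(spec_b)
    return float(spec_a @ spec_b / denom) if denom else 0.0

def feature_similarity(f1, f2, modulation_period, sigma=2.0):
    # Combine chromatographic proximity and spectral correlation into one
    # similarity score in [0, 1]; sigma is an assumed distance scale.
    d = cylindrical_distance(f1["rt1"], f1["rt2"], f2["rt1"], f2["rt2"],
                             modulation_period)
    return np.exp(-(d / sigma) ** 2) * spectral_correlation(f1["spectrum"],
                                                            f2["spectrum"])

A similarity of this kind can then feed a standard clustering routine, grouping features that are both chromatographically close (on the cylinder) and spectrally alike.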


Subject(s)
Algorithms, Gas Chromatography-Mass Spectrometry, Biomarkers/urine, Cluster Analysis, Humans, Solid Phase Extraction
2.
Metabolomics; 2(2): 53-61, 2006.
Article in English | MEDLINE | ID: mdl-24489531

ABSTRACT

Statistical model validation tools such as cross-validation, jack-knifing of model parameters, and permutation tests are meant to provide an objective assessment of the performance and stability of a statistical model. However, little is known about the performance of these tools for megavariate data sets, which have, for instance, more than ten times as many variables as subjects. The performance is assessed here for megavariate metabolomics data, but the conclusions also carry over to proteomics, transcriptomics, and many other research areas. Partial least squares discriminant analysis (PLS-DA) models were built for several LC-MS lipidomic training data sets with various numbers of lean and obese subjects. The training data sets were compared on their modelling performance and their predictability using 10-fold cross-validation, a permutation test, and test data sets. A wide range of cross-validation error rates was found (from 7.5% to 16.3% for the largest training set and from 0% to 60% for the smallest training set), and the error rate increased as the number of subjects decreased. The test error rates varied from 5% to 50%. The smaller the number of subjects relative to the number of variables, the less the outcome of validation tools such as cross-validation, jack-knifing of model parameters, and permutation tests can be trusted. The result depends crucially on the specific sample of subjects used for modelling. These validation tools cannot be used as a warning mechanism for problems due to sample size or to the representativity of the sampling.
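
As a rough illustration of the validation exercise described in this abstract, the following Python sketch estimates a 10-fold cross-validation error rate and a simple permutation test for a PLS-DA-style classifier. It assumes scikit-learn's PLSRegression fitted to 0/1 class labels and thresholded at 0.5 as a stand-in for the authors' PLS-DA implementation; the synthetic 20-subject, 400-variable data set, the threshold, and all names are illustrative assumptions, not the paper's data or code.

import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import StratifiedKFold

def cv_error_rate(X, y, n_components=2, n_splits=10, seed=0):
    # 10-fold cross-validated error rate of a PLS-DA-style model:
    # PLSRegression on 0/1 labels, predictions thresholded at 0.5.
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    errors = []
    for train, test in skf.split(X, y):
        pls = PLSRegression(n_components=n_components)
        pls.fit(X[train], y[train])
        pred = (pls.predict(X[test]).ravel() > 0.5).astype(int)
        errors.append(np.mean(pred != y[test]))
    return float(np.mean(errors))

def permutation_test(X, y, n_permutations=200, seed=0, **kwargs):
    # Compare the observed CV error with the error distribution obtained
    # after randomly shuffling the class labels.
    rng = np.random.default_rng(seed)
    observed = cv_error_rate(X, y, **kwargs)
    null = [cv_error_rate(X, rng.permutation(y), **kwargs)
            for _ in range(n_permutations)]
    # p-value: fraction of label-permuted models doing at least as well.
    p = (1 + sum(e <= observed for e in null)) / (n_permutations + 1)
    return observed, p

# Megavariate regime: far more variables than subjects (assumed toy data).
rng = np.random.default_rng(1)
X = rng.normal(size=(20, 400))        # 20 subjects, 400 variables
y = np.array([0] * 10 + [1] * 10)     # e.g. lean vs. obese
print(cv_error_rate(X, y))            # can look deceptively good by chance

With so few subjects relative to the number of variables, repeating this on different random samples of subjects gives widely scattered error estimates, which is the behavior the abstract warns about.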
