Results 1-20 of 23
1.
Proteomics ; 24(10): e2300339, 2024 May.
Article in English | MEDLINE | ID: mdl-38299459

ABSTRACT

Detergent-based workflows incorporating sodium dodecyl sulfate (SDS) necessitate additional steps for detergent removal ahead of mass spectrometry (MS). These steps may lead to variable protein recovery, inconsistent enzyme digestion efficiency, and unreliable MS signals. To validate a detergent-based workflow for quantitative proteomics, we herein evaluate the precision of a bottom-up sample preparation strategy incorporating cartridge-based protein precipitation with organic solvent to deplete SDS. The variance of data-independent acquisition (SWATH-MS) data was isolated from sample preparation error by modelling the variance as a function of peptide signal intensity. Our SDS-assisted cartridge workflow yields a coefficient of variation (CV) of 13%-14%. By comparison, conventional (detergent-free) in-solution digestion increased the CV to 50%; in-gel digestion provided lower CVs, between 14% and 20%. By filtering peptides predicted to display lower precision, we further enhance the validity of data in global comparative proteomics. These results demonstrate that the detergent-based precipitation workflow is a reliable approach for in-depth, label-free quantitative proteome analysis.
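The precision metric used throughout this abstract can be sketched in a few lines of Python; the replicate intensities below are invented for illustration, not taken from the study.

```python
import statistics

def coefficient_of_variation(values):
    """Percent CV: sample standard deviation relative to the mean."""
    return 100.0 * statistics.stdev(values) / statistics.fmean(values)

# Hypothetical replicate intensities for one peptide across four injections.
replicate_intensities = [1.02e6, 0.95e6, 1.10e6, 0.99e6]
cv = coefficient_of_variation(replicate_intensities)
```

A workflow-level CV such as the 13%-14% reported here would aggregate per-peptide values like this across the whole data set.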


Subjects
Chemical Precipitation, Detergents, Proteomics, Sodium Dodecyl Sulfate, Workflow, Proteomics/methods, Sodium Dodecyl Sulfate/chemistry, Detergents/chemistry, Proteome/analysis, Proteome/chemistry, Humans, Peptides/chemistry, Peptides/analysis
2.
Molecules ; 28(4)2023 Feb 09.
Article in English | MEDLINE | ID: mdl-36838644

ABSTRACT

To address the growing concern of honey adulteration in Canada and globally, a quantitative NMR method was developed to analyze 424 honey samples collected across Canada as part of two surveys in 2018 and 2019 led by the Canadian Food Inspection Agency. Based on a robust and reproducible methodology, NMR data were recorded in triplicate on a 700 MHz NMR spectrometer equipped with a cryoprobe, and the data analysis led to the identification and quantification of 33 compounds characteristic of the chemical composition of honey. The high proportion of Canadian honey in the library provided a unique opportunity to apply multivariate statistical methods, including PCA, PLS-DA, and SIMCA, to differentiate Canadian samples from the rest of the world. Through satisfactory model validation, both PLS-DA as a discriminant modeling technique and SIMCA as a class modeling method proved reliable at differentiating Canadian honey from a diverse set of honeys of various countries of origin and floral types. The replacement method of optimization was successfully applied for variable selection, and trigonelline, proline, and, to a lesser extent, ethanol were identified as potential chemical markers for the discrimination of Canadian and non-Canadian honeys.
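A minimal sketch of marker-based discrimination, using invented concentrations for the three markers named above (trigonelline, proline, ethanol). The real study applied PLS-DA and SIMCA to full NMR profiles, so the nearest-centroid rule here is only a stand-in for the idea of separating classes by chemical markers.

```python
import math

# Invented marker concentrations [trigonelline, proline, ethanol] per sample.
canadian = [[0.9, 5.1, 0.2], [1.0, 4.8, 0.3], [0.8, 5.3, 0.2]]
other = [[0.3, 3.0, 0.6], [0.4, 2.7, 0.5], [0.2, 3.2, 0.7]]

def centroid(samples):
    """Mean vector of a class."""
    n = len(samples)
    return [sum(s[i] for s in samples) / n for i in range(len(samples[0]))]

def classify(x, centroids):
    """Nearest-centroid rule: assign x to the class with the closest mean."""
    return min(centroids, key=lambda label: math.dist(x, centroids[label]))

centroids = {"Canadian": centroid(canadian), "non-Canadian": centroid(other)}
label = classify([0.95, 5.0, 0.25], centroids)
```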


Subjects
Honey, Honey/analysis, Magnetic Resonance Spectroscopy/methods, Magnetic Resonance Imaging, Proline, Canada, Multivariate Analysis
3.
J Chromatogr A ; 1675: 463168, 2022 Jul 19.
Article in English | MEDLINE | ID: mdl-35667219

ABSTRACT

A two-step process for the purification of immunoglobulin G (IgG) from human blood plasma is investigated. The first step is precipitation with cold ethanol based on a modified Cohn method, and the second is chromatographic separation on DEAE-Sepharose FF resin, a weak anion exchanger. The presence of an interferent in region 3 of the chromatographic fractions, which co-elutes with IgG, restricts the application of a mechanistic chromatography model. Therefore, multivariate curve resolution-alternating least squares (MCR-ALS), a soft-modelling method, is applied to the absorbance data matrix measured from the eluted fractions to recover pure concentration and spectral profiles. Possible solutions for the resolved concentration and spectral profiles are also investigated. The reaction-dispersive model, a mechanistic hard model for the column, is used to evaluate the ion-exchange chromatography. Using a genetic algorithm as a global optimization method, the mobile phase modulator (MPM) adsorption model parameters β, kdes,0, and Keq,0 were fitted to the MCR-ALS concentration profiles as 1.96, 2.87×10⁻⁴ m³ mol⁻¹ s⁻¹, and 1883, respectively. Furthermore, a new resampling approach incorporating non-parametric statistics is used to assess parameter uncertainty: the estimated medians are 2.00, 1.10×10⁻³ m³ mol⁻¹ s⁻¹, and 549.80, and the calculated interquartile ranges (IQR) are 0.05, 2.5×10⁻³, and 691.00 for β, kdes,0, and Keq,0, respectively. Finally, the results show three outliers for β and two for kdes,0.
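The MCR-ALS step can be sketched as alternating non-negativity-constrained least squares on a simulated two-component data matrix. All values below are invented; the real data are absorbance measurements from eluted fractions.

```python
import numpy as np

rng = np.random.default_rng(0)
# Simulated two-component system: D = C_true @ S_true, all profiles non-negative.
C_true = rng.random((30, 2))   # concentration profiles (30 fractions x 2 species)
S_true = rng.random((2, 40))   # spectral profiles (2 species x 40 wavelengths)
D = C_true @ S_true            # "measured" absorbance data matrix

# MCR-ALS: alternate least-squares updates of C and S, clipped to >= 0.
C = rng.random((30, 2))        # random non-negative initial guess
for _ in range(200):
    S = np.clip(np.linalg.lstsq(C, D, rcond=None)[0], 0.0, None)
    C = np.clip(np.linalg.lstsq(S.T, D.T, rcond=None)[0].T, 0.0, None)

residual = np.linalg.norm(D - C @ S) / np.linalg.norm(D)
```

For noiseless rank-2 data the residual collapses to near zero; the recovered profiles are subject to the usual rotational and scaling ambiguities, which is why the abstract discusses the range of possible solutions.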


Subjects
Antibodies, Plasma, High-Pressure Liquid Chromatography, Ion-Exchange Chromatography, Humans, Least-Squares Analysis
4.
Anal Methods ; 13(37): 4188-4219, 2021 09 30.
Article in English | MEDLINE | ID: mdl-34473142

ABSTRACT

Multivariate data analysis tools have become an integral part of modern analytical chemistry, and principal component analysis (PCA) is perhaps foremost among these. PCA is central in approaching many problems in data exploration, classification, calibration, modelling, and curve resolution. However, PCA is only one form of a broader group of factor analysis (FA) methods that are rarely employed by chemists. The dominance of PCA in chemistry is primarily a consequence of history and convenience, but this has obscured the potential advantages of other FA tools that are widely used in other fields. The purpose of this article, which is intended for those who are already familiar with the mathematical foundations and applications of PCA, is to develop a framework to relate PCA to other commonly used FA methods from the perspective of chemical applications. Specifically, PCA is compared to maximum likelihood factor analysis (MLFA), principal axis factorization (PAF) and maximum likelihood PCA (MLPCA). Similarities and differences are highlighted with regard to the assumptions and constraints of the models, algorithms employed, and calculation of scores and loadings. Practical aspects such as data dimensionality, preprocessing, rank estimation, improper solutions (Heywood cases), and software implementation are considered. The performance of the four methods is compared using both simulated and experimental data sets. While PCA provides the most reliable estimates when measurement error variance is uniform (homoscedastic noise) and MLPCA works best when the error covariance matrix is explicitly known, MLFA and PAF have the distinct advantage of providing information about measurement uncertainty and adapting to situations of unknown heteroscedastic errors, eliminating the need for scaling. Moreover, MLFA in particular is shown to be tolerant to deviations from model linearity. These results make a strong case for increased application of other FA methods in chemistry.
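For readers comparing the methods, the PCA baseline the article starts from reduces to an SVD of the column-centred data matrix. This sketch uses simulated data with one dominant latent factor; the data, seed, and dimensions are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
# Simulated data: 50 samples, 8 variables, one dominant latent factor plus noise.
t = rng.normal(size=(50, 1))            # latent scores
p = rng.normal(size=(1, 8))             # latent loadings
X = t @ p + 0.05 * rng.normal(size=(50, 8))

# PCA by SVD of the column-centred data matrix.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U * s                          # PCA scores
loadings = Vt.T                         # PCA loadings (columns are PCs)
explained = s**2 / np.sum(s**2)         # fraction of variance per component
```

MLPCA, MLFA, and PAF differ from this baseline precisely in how measurement-error information enters the decomposition, which plain SVD ignores.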


Assuntos
Algoritmos , Software , Análise Fatorial , Análise Multivariada , Análise de Componente Principal
5.
Anal Chim Acta ; 1174: 338716, 2021 Aug 22.
Article in English | MEDLINE | ID: mdl-34247741

ABSTRACT

Kurtosis-based projection pursuit analysis (kPPA) has demonstrated the ability to visualize multivariate data in a way that complements other exploratory data analysis tools, such as principal components analysis (PCA). It is especially useful for partitioning binary data sets (2^k classes) with a balanced design. Since kPPA is not a variance-based method, it can often provide unsupervised class separation where other methods fail. However, when multiple classifications are possible (e.g., by gender, age, or disease state), the projection provided by kPPA (corresponding to the global minimum kurtosis) will not necessarily be the one of greatest interest to the researcher. Fortunately, the optimization algorithm for kPPA allows for interrogation of projections obtained from numerous local minima. This strategy provides the basis of a new method described here, referred to as combinatorial projection pursuit analysis (CombPPA) because it presents alternative combinations of class separation. The method is truly exploratory in that it allows the landscape of interesting projections to be more fully probed. The approach uses Procrustes rotation to map local minima among the kPPA solutions, whereupon the researcher can visualize different projections. To demonstrate the new method, the clustering of grape juice samples using visible spectroscopy is presented as a model problem. This problem is well-suited to this type of study because there are eight classes of samples symmetrically partitioned into two classes by type (organic/non-organic) or four classes by brand. Results presented show the different combinations of projections that can be obtained, including the desired partitions. In addition, this work describes new enhancements to the kPPA algorithm that improve the orthogonality of solutions obtained.
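The link between kurtosis and class separation can be illustrated numerically: a projection that splits two balanced classes is strongly bimodal and has low kurtosis, while a non-separating projection looks Gaussian (kurtosis near 3). The simulated values below are invented; the kurtosis definition is the standard fourth standardised moment.

```python
import random
import statistics

random.seed(7)

def kurtosis(x):
    """Standardised fourth moment: ~3 for Gaussian, ~1 for a balanced bimodal."""
    m = statistics.fmean(x)
    s2 = statistics.fmean([(v - m) ** 2 for v in x])
    return statistics.fmean([(v - m) ** 4 for v in x]) / s2**2

# Projection separating two balanced classes: strongly bimodal, low kurtosis.
separating = [random.gauss(-3, 0.5) for _ in range(200)] + \
             [random.gauss(3, 0.5) for _ in range(200)]
# Projection mixing the classes: a single Gaussian blob, kurtosis near 3.
mixing = [random.gauss(0, 1) for _ in range(400)]

k_sep, k_mix = kurtosis(separating), kurtosis(mixing)
```

Minimising kurtosis over candidate projection vectors therefore favours directions that split the data into balanced groups, which is the working principle behind kPPA.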


Subjects
Algorithms, Cluster Analysis, Principal Component Analysis
6.
Anal Chem ; 92(2): 1755-1762, 2020 01 21.
Article in English | MEDLINE | ID: mdl-31822056

ABSTRACT

Sparse projection pursuit analysis (SPPA), a new approach for the unsupervised exploration of high-dimensional chemical data, is proposed as an alternative to traditional exploratory methods such as principal components analysis (PCA) and hierarchical cluster analysis (HCA). Where traditional methods use variance and distance metrics for data compression and visualization, the proposed method incorporates the fourth statistical moment (kurtosis) to access interesting subspaces that can clarify relationships within complex data sets. The quasi-power algorithm used for projection pursuit is coupled with a genetic algorithm for variable selection to efficiently generate sparse projection vectors that improve the chemical interpretability of the results while at the same time mitigating the problem of overmodeling. Several multivariate chemical data sets are employed to demonstrate that SPPA can reveal meaningful clusters in the data where other unsupervised methods cannot.

7.
J Agric Food Chem ; 67(27): 7765-7774, 2019 Jul 10.
Article in English | MEDLINE | ID: mdl-31240917

ABSTRACT

One of the greatest challenges facing the functional food and natural health product (NHP) industries is sourcing high-quality, functional, natural ingredients for their finished products. Unfortunately, the lack of ingredient standards, modernized analytical methodologies, and industry oversight creates the potential for low quality and, in some cases, deliberate adulteration of ingredients. By exploring a diverse library of NHPs provided by the independent certification organization ISURA, we demonstrated that nuclear magnetic resonance (NMR) spectroscopy provides an innovative solution to authenticate botanicals and warrant the quality and safety of processed foods and manufactured functional ingredients. Two-dimensional NMR experiments were shown to be a robust and reproducible approach to capture the content of complex chemical mixtures, while a binary normalization step allows for emphasizing the chemical diversity in each sample, and unsupervised statistical methodologies provide key advantages to classify, authenticate, and highlight the potential presence of additives and adulterants.


Subjects
Drug Labeling/methods, Electronic Data Processing/methods, Food Labeling/methods, Functional Food/analysis, Magnetic Resonance Spectroscopy/methods, Plant Preparations/analysis, Food Handling, Food Labeling/standards, Food Quality, Food Safety, Multivariate Analysis, Quality Control
8.
Anal Chim Acta ; 959: 1-14, 2017 03 22.
Article in English | MEDLINE | ID: mdl-28159103

ABSTRACT

The error covariance matrix (ECM) is an important tool for characterizing the errors from multivariate measurements, representing both the variance and covariance in the errors across multiple channels. Such information is useful in understanding and minimizing sources of experimental error and in the selection of optimal data analysis procedures. Experimental ECMs, normally obtained through replication, are inherently noisy, inconvenient to obtain, and offer limited interpretability. Significant advantages can be realized by building a model for the ECM based on established error types. Such models are less noisy, reduce the need for replication, mitigate mathematical complications such as matrix singularity, and provide greater insights. While the fitting of ECM models using least squares has been previously proposed, the present work establishes that fitting based on the Wishart distribution offers a much better approach. Simulation studies show that the Wishart method results in parameter estimates with a smaller variance and also facilitates the statistical testing of alternative models using a parameterized bootstrap method. The new approach is applied to fluorescence emission data to establish the acceptability of various models containing error terms related to offset, multiplicative offset, shot noise and uniform independent noise. The implications of the number of replicates, as well as single vs. multiple replicate sets are also described.
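A sketch of how an experimental ECM is formed from replicates, simulating two of the error types considered in the abstract (an offset shared across channels plus uniform independent noise). All values are invented; fitting the Wishart-based model itself is beyond a short example.

```python
import numpy as np

rng = np.random.default_rng(2)
n_rep, n_chan = 100, 20
true_signal = np.sin(np.linspace(0, np.pi, n_chan))

# Simulated error model: an offset term shared across channels plus
# uniform independent noise.
offsets = rng.normal(0.0, 0.5, size=(n_rep, 1))
iid = rng.normal(0.0, 0.2, size=(n_rep, n_chan))
replicates = true_signal + offsets + iid

# Experimental ECM from mean-centred replicates.
errors = replicates - replicates.mean(axis=0)
ecm = errors.T @ errors / (n_rep - 1)
# Under this error model, ecm ~= 0.5**2 * ones((20, 20)) + 0.2**2 * eye(20),
# i.e. the parametric structure a modelled ECM would try to recover.
```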

9.
Anal Chim Acta ; 903: 51-60, 2016 Jan 15.
Article in English | MEDLINE | ID: mdl-26709298

ABSTRACT

Most of the current expressions used to calculate figures of merit in multivariate calibration have been derived assuming independent and identically distributed (iid) measurement errors. However, it is well known that this condition is not always valid for real data sets, where the existence of many external factors can lead to correlated and/or heteroscedastic noise structures. In this report, the influence of the deviations from the classical iid paradigm is analyzed in the context of error propagation theory. New expressions have been derived to calculate sample dependent prediction standard errors under different scenarios. These expressions allow for a quantitative study of the influence of the different sources of instrumental error affecting the system under analysis. Significant differences are observed when the prediction error is estimated in each of the studied scenarios using the most popular first-order multivariate algorithms, under both simulated and experimental conditions.

10.
Anal Chim Acta ; 877: 51-63, 2015 Jun 02.
Article in English | MEDLINE | ID: mdl-26002210

ABSTRACT

Projection pursuit (PP) is an effective exploratory data analysis tool because it optimizes the projection of high dimensional data using distributional characteristics rather than variance or distance metrics. The recent development of fast and simple PP algorithms based on minimization of kurtosis for clustering data has made this powerful tool more accessible, but under conditions where the sample-to-variable ratio is small, PP fails due to opportunistic overfitting of random correlations to limiting distributional targets. Therefore, some kind of variable compression or data regularization is required in these cases. However, this introduces an additional parameter whose optimization is manually time consuming and subject to bias. The present work describes the use of Procrustes analysis as diagnostic tool that can be used to evaluate the results of PP analysis in an efficient manner. Through Procrustes rotation, the similarity of different PP projections can be examined in an automated fashion with "Procrustes maps" to establish regions of stable projections as a function of the parameter to be optimized. The application of this diagnostic is demonstrated using principal components analysis to compress FTIR spectra from ink samples of ten different brands of pen, and also in conjunction with regularized PP for soybean disease classification.
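Procrustes rotation, used here to compare projections, can be sketched with the classical orthogonal Procrustes solution (SVD of the cross-product matrix). The two "projections" below are simulated, one being an exactly rotated copy of the other.

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=(25, 2))                      # scores from one projection
theta = 0.7                                       # hidden rotation angle
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta), np.cos(theta)]])
B = A @ R_true                                    # same configuration, rotated

# Orthogonal Procrustes: rotation R minimising ||A @ R - B||_F,
# obtained from the SVD of the cross-product matrix A.T @ B.
U, _, Vt = np.linalg.svd(A.T @ B)
R = U @ Vt
mismatch = np.linalg.norm(A @ R - B) / np.linalg.norm(B)
```

A small residual mismatch indicates two projections are essentially the same up to rotation, which is what a "Procrustes map" exploits to flag regions of stable projections.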

11.
Anal Chim Acta ; 847: 16-28, 2014 Oct 17.
Article in English | MEDLINE | ID: mdl-25261896

ABSTRACT

A method is described for the characterization of measurement errors with non-uniform variance (heteroscedastic noise) in contiguous signal vectors (e.g., spectra, chromatograms) that does not require the use of replicated measurements. High-pass digital filters based on inverted Blackman windowed sinc smoothing coefficients are employed to provide point estimates of noise from measurement vectors. Filter parameters (number of points, cutoff frequency) are selected based on the amplitude spectrum of the signal in the Fourier domain. Following this, noise estimates from multiple signals are partitioned into bins based on a variable that correlates with the noise amplitude, such as measurement channel or signal intensity. The noise estimates in each bin are combined to estimate the standard deviation and, where appropriate, a functional model of the noise can be obtained to characterize instrumental errors (e.g., shot noise, proportional noise). The proposed method is demonstrated and evaluated with both simulated and experimental data sets, and results are compared with replicated measurements. Experimental data includes fluorescence spectra, ion chromatograms from liquid chromatography/mass spectrometry, and UV-vis absorbance spectra. The limitations and advantages of the new method compared to replicate analysis are presented.
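A stripped-down version of the idea, assuming a plain second-difference high-pass filter in place of the inverted Blackman windowed sinc filters used in the paper: the filter passes noise while rejecting the smooth signal, and a known scale factor recovers the noise standard deviation. Signal shape and noise level are invented.

```python
import math
import random
import statistics

random.seed(4)
sigma = 0.05
# Smooth signal (a broad peak) plus iid noise of known standard deviation.
signal = [math.exp(-(((i - 100) / 40.0) ** 2)) for i in range(200)]
spectrum = [s + random.gauss(0, sigma) for s in signal]

# Second-difference high-pass filter: passes noise, rejects the smooth signal.
d2 = [spectrum[i - 1] - 2 * spectrum[i] + spectrum[i + 1]
      for i in range(1, len(spectrum) - 1)]
# For iid noise, Var(d2) = 6 * sigma**2, so rescale to recover sigma.
sigma_hat = statistics.pstdev(d2) / math.sqrt(6)
```

Binning such point estimates by channel or intensity, as the abstract describes, then yields a heteroscedastic noise model without replicates.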

12.
Article in English | MEDLINE | ID: mdl-23122402

ABSTRACT

The differential separation of deuterated and non-deuterated forms of isotopically substituted compounds in chromatography is a well-known but not well-understood phenomenon. This separation is relevant in comparative proteomics, where stable isotopes are used for differential labelling and the effect of isotope resolution on quantitation has been used to disqualify some deuterium labelling methods in favour of heavier isotopes. In this work, a detailed evaluation of the extent of isotopic separation and its impact on quantitation was performed for peptides labelled through dimethylation with H2/D2 formaldehyde. The chromatographic behaviour of 71 labelled peptide pairs from quadruplicate tryptic digests of bovine serum albumin was analysed, focusing on differences in median retention times, resolution, and relative quantitation for each peptide. For 94% of peptides, the retention time difference (heavy minus light) was less than 12 s, with a median value of 3.4 s. With the exception of a single anomalous pair, isotope resolution was below 0.6, with a median value of 0.11. Quantitative assessment indicates that the bias in ratio calculation introduced by retention time shifts is only about 3%, substantially smaller than the variation in the ratio measurements themselves. Computational studies on the dipole moments of deuterated labels indicate that these results are consistent with literature suggestions that retention time shifts are inversely related to the polarity of the label. This study suggests that the incorporation of deuterium isotopes through peptide dimethylation at amine residues is a viable route to proteome quantitation.
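The resolution figure quoted above follows the standard chromatographic definition. A sketch with invented retention times and peak widths (the 3.4 s shift echoes the reported median; the 60 s base widths are assumptions):

```python
def resolution(t_light, t_heavy, w_light, w_heavy):
    """Chromatographic resolution: Rs = 2 * |delta t| / (sum of base widths)."""
    return 2.0 * abs(t_heavy - t_light) / (w_light + w_heavy)

# A 3.4 s heavy/light shift with assumed 60 s base peak widths.
rs = resolution(1200.0, 1203.4, 60.0, 60.0)
```

With shifts this small relative to peak width, the heavy and light peaks overlap almost completely, which is why the quantitation bias stays near 3%.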


Subjects
Deuterium/chemistry, Formaldehyde/chemistry, Peptide Fragments/chemistry, Proteomics/methods, Animals, Cattle, High-Pressure Liquid Chromatography, Reverse-Phase Chromatography, Methylation, Peptide Fragments/analysis, Bovine Serum Albumin/chemistry, Tandem Mass Spectrometry
13.
OMICS ; 14(1): 99-107, 2010 Feb.
Article in English | MEDLINE | ID: mdl-20141332

ABSTRACT

In the analysis of data from high-throughput experiments, information regarding the underlying data structure provides the researcher with confidence in the appropriateness of various analysis methods. One extremely simple but powerful data visualization method is the correlation heat map, whereby correlations between experiments/conditions are calculated and represented using color. In this work, the use of correlation maps to shed light on transcription patterns from DNA microarray time course data prior to gene-level analysis is described. Using three different time course studies from the literature, it is shown how the patterns observed at the array level provide insights into the dynamics of the system under study and the experimental design.
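A correlation heat map is just the matrix of pairwise Pearson correlations between experiments, later rendered as colour. A pure-Python sketch with three invented time-point expression vectors:

```python
import statistics

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length vectors."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical expression vectors for three time points; t1 and t2 are similar,
# t3 shows the opposite trend.
t1 = [1.0, 2.0, 3.0, 4.0, 5.0]
t2 = [1.1, 2.1, 2.9, 4.2, 4.8]
t3 = [5.0, 3.9, 3.1, 2.2, 1.0]
corr_map = [[pearson(a, b) for b in (t1, t2, t3)] for a in (t1, t2, t3)]
```

Blocks of high correlation along the diagonal of such a map reveal groups of adjacent time points behaving coherently, before any gene-level analysis.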


Subjects
DNA/genetics, Oligonucleotide Array Sequence Analysis, Animals, Female, Male
14.
Brief Bioinform ; 10(3): 289-94, 2009 May.
Article in English | MEDLINE | ID: mdl-19279157

ABSTRACT

The increased need for multiple statistical comparisons under conditions of non-independence in bioinformatics applications, such as DNA microarray data analysis, has led to the development of alternatives to the conventional Bonferroni correction for adjusting P-values. The use of the false discovery rate (FDR), in particular, has grown considerably. However, the calculation of the FDR frequently depends on drawing random samples from a population, and inappropriate sampling will result in a bias in the calculated FDR. In this work, we demonstrate a bias due to incorrect random sampling in the widely used GO::TermFinder package. Both T² and permutation tests are used to confirm the bias for a test set of data, which leads to an overestimation of the FDR of about 10%. A simple fix to the random sampling method is proposed to remove the bias.
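For context, the standard Benjamini-Hochberg FDR adjustment (the quantity the biased sampling distorts) can be written compactly. This is the textbook procedure, not the GO::TermFinder code.

```python
def benjamini_hochberg(pvalues):
    """BH-adjusted p-values (q-values) controlling the false discovery rate."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    adjusted = [0.0] * m
    running_min = 1.0
    for rank in range(m, 0, -1):      # walk from the largest p to the smallest
        i = order[rank - 1]
        running_min = min(running_min, pvalues[i] * m / rank)
        adjusted[i] = running_min     # enforce monotone adjusted values
    return adjusted

qvals = benjamini_hochberg([0.001, 0.01, 0.04, 0.2, 0.9])
```

A 10% overestimate in the FDR, as reported above, directly inflates such q-values and can push borderline discoveries past the significance threshold.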


Subjects
Algorithms, Computational Biology/methods, Information Storage and Retrieval/methods, Software, Genetic Databases, Gene Expression Profiling/methods, Oligonucleotide Array Sequence Analysis/methods
15.
Anal Chim Acta ; 636(2): 163-74, 2009 Mar 23.
Article in English | MEDLINE | ID: mdl-19264164

ABSTRACT

NMR-based metabolomics is characterized by high-throughput measurements of the signal intensities of complex mixtures of metabolites in biological samples, typically by assaying bio-fluids or tissue homogenates. The ultimate goal is to obtain relevant biological information regarding the dissimilarity in patho-physiological conditions that the samples experience. Traditionally, this information has been obtained through the analysis of measured NMR signals via multivariate statistics. NMR data are quite complex, and the use of multivariate statistical methods such as principal components analysis (PCA) for their analysis assumes that the data are multivariate normal with errors that are independent, identically distributed, and normal (i.e., iid normal). There is a consensus that these assumptions are not always true for these data, and thus several methods have been devised to transform the data or weight them prior to analysis by PCA. However, neither the structure of NMR measurement noise nor the extent to which violations of error homoscedasticity affect PCA results has been characterized or investigated. A comprehensive characterization of measurement uncertainties in NMR-based metabolomics was achieved in this work using an experiment designed to capture contributions of several sources of error to the total variance in the measurements. The noise structure was found to be heteroscedastic and highly correlated, with spectral characteristics similar to the mean of the spectra and their standard deviation. A model was subsequently developed that potentially allows errors in NMR measurements to be accurately estimated without the need for extensive replication.


Subjects
Magnetic Resonance Spectroscopy/methods, Metabolomics, Algorithms, Animals, Female, Fishes, Male, Multivariate Analysis, Principal Component Analysis, Research Design
16.
Anal Bioanal Chem ; 389(7-8): 2125-41, 2007 Dec.
Article in English | MEDLINE | ID: mdl-17899024

ABSTRACT

DNA microarrays permit the measurement of gene expression across the entire genome of an organism, but the quality of the thousands of measurements is highly variable. For spotted dual-color microarrays the situation is complicated by the use of ratio measurements. Studies have shown that measurement errors can be described by multiplicative and additive terms, with the latter dominating for low-intensity measurements. In this work, a measurement-error model is presented that partitions the variance into general experimental sources and sources associated with the calculation of the ratio from noisy pixel data. The former is described by a proportional (multiplicative) structure, while the latter is estimated using a statistical bootstrap method. The model is validated using simulations and three experimental data sets. Monte-Carlo fits of the model to data from duplicate experiments are excellent, but suggest that the bootstrap estimates, while proportionately correct, may be underestimated. The bootstrap standard error estimates are particularly useful in determining the reliability of individual microarray spots without the need for replicate spotting. This information can be used in screening or weighting the measurements.
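The pixel-level bootstrap can be sketched as resampling pixels with replacement and recomputing the spot ratio. Pixel intensities below are simulated; the full model in the abstract combines this bootstrap term with a proportional (multiplicative) error term.

```python
import random
import statistics

random.seed(5)
# Hypothetical red/green pixel intensities for one microarray spot.
red = [random.gauss(200, 20) for _ in range(50)]
green = [random.gauss(100, 15) for _ in range(50)]

def spot_ratio(r, g):
    """Spot-level ratio of mean channel intensities."""
    return statistics.fmean(r) / statistics.fmean(g)

# Bootstrap: resample pixels with replacement and recompute the ratio.
boot = []
for _ in range(500):
    r_star = random.choices(red, k=len(red))
    g_star = random.choices(green, k=len(green))
    boot.append(spot_ratio(r_star, g_star))
se_ratio = statistics.stdev(boot)   # bootstrap standard error of the ratio
```

A per-spot standard error like this is what allows individual spots to be screened or down-weighted without replicate spotting.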


Subjects
Analytical Chemistry Techniques/methods, Statistical Models, Oligonucleotide Array Sequence Analysis/methods, Uncertainty, Analytical Chemistry Techniques/statistics & numerical data, Coloring Agents/chemistry, Reference Standards, Reproducibility of Results, Research Design
17.
OMICS ; 11(2): 186-99, 2007.
Article in English | MEDLINE | ID: mdl-17594237

ABSTRACT

The conceptual simplicity of DNA microarray technology often belies the complex nature of the measurement errors inherent in the methodology. As the technology has developed, the importance of understanding the sources of uncertainty in the measurements and developing ways to control their influence on the conclusions drawn has become apparent. In this review, strategies for modeling measurement errors and minimizing their effect on the outcome of experiments using a variety of techniques are discussed in the context of spotted, dual-color microarrays. First, methods designed to reduce the influence of random variability through data filtering, replication, and experimental design are introduced. This is followed by a review of data analysis methods that partition the variance into random effects and one or more systematic effects, specifically two-sample significance testing and analysis of variance (ANOVA) methods. Finally, the current state of measurement error models for spotted microarrays and their role in variance stabilizing transformations are discussed.


Subjects
Analysis of Variance, Computational Biology/methods, Oligonucleotide Array Sequence Analysis/methods, Complementary DNA/chemistry, Complementary DNA/genetics, Statistical Data Interpretation, Gene Expression Profiling, Linear Models
18.
BMC Bioinformatics ; 7: 343, 2006 Jul 13.
Article in English | MEDLINE | ID: mdl-16839419

ABSTRACT

BACKGROUND: Modeling of gene expression data from time course experiments often involves the use of linear models such as those obtained from principal component analysis (PCA), independent component analysis (ICA), or other methods. Such methods do not generally yield factors with a clear biological interpretation. Moreover, implicit assumptions about the measurement errors often limit the application of these methods to log-transformed data, destroying linear structure in the untransformed expression data. RESULTS: In this work, a method for the linear decomposition of gene expression data by multivariate curve resolution (MCR) is introduced. The MCR method is based on an alternating least-squares (ALS) algorithm implemented with a weighted least squares approach. The new method, MCR-WALS, extracts a small number of basis functions from untransformed microarray data using only non-negativity constraints. Measurement error information can be incorporated into the modeling process and missing data can be imputed. The utility of the method is demonstrated through its application to yeast cell cycle data. CONCLUSION: Profiles extracted by MCR-WALS exhibit a strong correlation with cell cycle-associated genes, but also suggest new insights into the regulation of those genes. The unique features of the MCR-WALS algorithm are its freedom from assumptions about the underlying linear model other than the non-negativity of gene expression, its ability to analyze non-log-transformed data, and its use of measurement error information to obtain a weighted model and accommodate missing measurements.


Subjects
Computational Biology/methods, Gene Expression Regulation, Oligonucleotide Array Sequence Analysis/methods, Algorithms, Cell Cycle, Statistical Data Interpretation, Fungal Proteins/chemistry, Gene Expression Profiling, Biological Models, Multivariate Analysis, Principal Component Analysis, Saccharomyces cerevisiae/metabolism, Time Factors
19.
J Microbiol Methods ; 65(2): 357-60, 2006 May.
Article in English | MEDLINE | ID: mdl-16198434

ABSTRACT

Here we describe an automated, pressure-driven, sampling device for harvesting 10 to 30 ml samples, in replicate, with intervals as short as 10 s. Correlation between biological replicate time courses measured by microarrays was extremely high. The sampler enables sampling at intervals within the range of many important biological processes.


Subjects
Microbiological Techniques/instrumentation, Yeasts, Automation, Culture Media, Equipment Design, Oligonucleotide Array Sequence Analysis, Fungal RNA/analysis, Fungal RNA/isolation & purification, Reproducibility of Results, Yeasts/genetics, Yeasts/growth & development, Yeasts/isolation & purification, Yeasts/metabolism
20.
Analyst ; 130(10): 1331-6, 2005 Oct.
Article in English | MEDLINE | ID: mdl-16172653

ABSTRACT

DNA microarrays, or "DNA chips", represent a relatively new technology that is having a profound impact on biology and medicine, yet analytical research into this area is somewhat sparse. This article presents an overview of DNA microarrays and their application to gene expression analysis from the perspective of analytical chemistry, treating aspects of array platforms, measurement, image analysis, experimental design, normalization, and data analysis. Typical approaches are described and unresolved issues are discussed, with a view to identifying some of the contributions that might be made by analytical chemists.


Subjects
Analytical Chemistry Techniques, Gene Expression Profiling, Oligonucleotide Array Sequence Analysis, Professional Role, Animals, Statistical Data Interpretation, Humans, Computer-Assisted Image Processing