Results 1 - 13 of 13
1.
PLoS One ; 11(10): e0164582, 2016.
Article in English | MEDLINE | ID: mdl-27736999

ABSTRACT

Restoring all facility operations after the 2001 Amerithrax attacks took years to complete, highlighting the need to reduce remediation time. Some of the most time-intensive tasks were environmental sampling and sample analyses. Composite sampling allows disparate samples to be combined, with only a single analysis needed, making it a promising method to reduce response times. We developed a statistical experimental design to test three different composite sampling methods: 1) single medium single pass composite (SM-SPC): a single cellulose sponge samples multiple coupons with a single pass across each coupon; 2) single medium multi-pass composite (SM-MPC): a single cellulose sponge samples multiple coupons with multiple passes across each coupon; and 3) multi-medium post-sample composite (MM-MPC): a single cellulose sponge samples a single surface, and then multiple sponges are combined during sample extraction. Five concentrations of Bacillus atrophaeus Nakamura spores were tested, ranging from 5 to 100 CFU/coupon (0.00775 to 0.155 CFU/cm2). Study variables included four clean surface materials (stainless steel, vinyl tile, ceramic tile, and painted dry wallboard) and three grime-coated (dirty) materials (stainless steel, vinyl tile, and ceramic tile). Analysis of variance for the clean study showed two significant factors: composite method (p < 0.0001) and coupon material (p = 0.0006). Recovery efficiency (RE) was higher overall using the MM-MPC method compared to the SM-SPC and SM-MPC methods. RE with the MM-MPC method for the concentrations tested (10 to 100 CFU/coupon) was similar for ceramic tile, dry wallboard, and stainless steel among clean materials. RE was lowest for vinyl tile with both composite approaches. Statistical tests for the dirty study showed RE was significantly higher for vinyl and stainless steel materials, but lower for ceramic tile.
These results suggest post-sample compositing can be used to reduce sample analysis time when responding to a Bacillus anthracis contamination event of clean or dirty surfaces.
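The factor comparison described above can be sketched with a one-way ANOVA over the composite-method factor alone. The recovery-efficiency values below are invented placeholders, not the study's data; `scipy` supplies the F-test.

```python
# Illustrative sketch (synthetic RE values, not the study's data): test whether
# mean recovery efficiency differs across the three composite sampling methods.
from scipy import stats

re_sm_spc = [0.21, 0.25, 0.19, 0.23, 0.22]   # single medium, single pass
re_sm_mpc = [0.24, 0.27, 0.22, 0.26, 0.25]   # single medium, multi-pass
re_mm_mpc = [0.35, 0.38, 0.33, 0.36, 0.37]   # multi-medium post-sample composite

f_stat, p_value = stats.f_oneway(re_sm_spc, re_sm_mpc, re_mm_mpc)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```

A small p-value here would echo the abstract's finding that the composite method is a significant factor; the full study used a two-factor design including coupon material.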


Subjects
Bacillus/isolation & purification, Microbiological Techniques/methods, Specimen Handling/instrumentation, Spores, Bacterial/isolation & purification, Bacillus/classification, Bacillus/physiology, Colony Count, Microbial, Environmental Microbiology, Microbiological Techniques/instrumentation, Models, Statistical, Specimen Handling/methods, Surface Properties, Time Factors
2.
Appl Radiat Isot ; 82: 181-7, 2013 Dec.
Article in English | MEDLINE | ID: mdl-24035928

ABSTRACT

The gamma-ray spectrum of spent nuclear fuel in the 3-6 MeV energy range is important for active interrogation since gamma rays emitted from nuclear decay are not expected to interfere with measurements in this energy region. There is, unfortunately, a dearth of empirical measurements from spent nuclear fuel in this region. This work is an initial attempt to partially fill this gap by presenting an analysis of gamma-ray spectra collected from a set of spent nuclear fuel sources using a high-purity germanium detector array. This multi-crystal array possesses a large collection volume, providing high energy resolution up to 16 MeV. The results of these measurements establish the continuum count-rate in the energy region between 3 and 6 MeV. Also assessed is the potential for peaks from passive emissions to interfere with peak measurements resulting from active interrogation delayed emissions. As one of the first documented empirical measurements of passive emissions from spent fuel for energies above 3 MeV, this work provides a foundation for active interrogation model validation and detector development.

3.
Radiat Prot Dosimetry ; 149(3): 251-67, 2012 Apr.
Article in English | MEDLINE | ID: mdl-21693467

ABSTRACT

In making low-level radioactivity measurements of populations, it is commonly observed that a substantial portion of net results is negative. Furthermore, the observed variance of the measurement results arises from a combination of measurement uncertainty and population variability. This paper presents a method for disaggregating measurement uncertainty from population variability to produce a probability density function (PDF) of possibly true results. To do this, simple, justifiable and reasonable assumptions are made about the relationship of the measurements to the measurands (the 'true values'). The measurements are assumed to be unbiased, that is, that their average value is the average of the measurands. Using traditional estimates of each measurement's uncertainty, a likelihood PDF for each individual's measurand is produced. Then using the same assumptions and all the data from the population of individuals, a prior PDF of measurands for the population is produced. The prior PDF is non-negative, and the average is equal to the average of the measurement results for the population. Using Bayes's theorem, posterior PDFs of each individual measurand are calculated. The uncertainty in these Bayesian posterior PDFs appears to be all Berkson with no remaining classical component. The method is applied to baseline bioassay data from the Hanford site. The data include (90)Sr urinalysis measurements of 128 people, (137)Cs in vivo measurements of 5337 people and (239)Pu urinalysis measurements of 3270 people. The method produces excellent results for the (90)Sr and (137)Cs measurements, since there are non-zero concentrations of these global fallout radionuclides in people who have not been occupationally exposed. The method does not work for the (239)Pu measurements in non-occupationally exposed people because the population average is essentially zero relative to the sensitivity of the measurement technique.
The method is shown to give results similar to classical statistical inference when the measurements have relatively small uncertainty.
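The core Bayesian step described above can be sketched on a grid: an assumed non-negative population prior is combined with a Gaussian measurement likelihood via Bayes's theorem, yielding a posterior whose support is non-negative even when the net measurement is negative. All numbers below are illustrative assumptions, not the Hanford data.

```python
# Minimal sketch (assumed prior shape and noise, not the paper's data): a
# non-negative exponential prior for the population of measurands is combined
# with a Gaussian likelihood for one individual's (possibly negative) net result.
import numpy as np

x = np.linspace(0.0, 10.0, 1001)             # candidate true values (non-negative)
dx = x[1] - x[0]
prior = np.exp(-x / 2.0)                     # assumed population prior shape
prior /= prior.sum() * dx                    # normalize to a PDF on the grid

measurement, sigma = -0.5, 1.0               # negative net result, known uncertainty
likelihood = np.exp(-0.5 * ((measurement - x) / sigma) ** 2)

posterior = prior * likelihood               # Bayes's theorem, up to normalization
posterior /= posterior.sum() * dx

posterior_mean = (x * posterior).sum() * dx  # non-negative by construction
```

The posterior mean is pulled toward zero but stays non-negative, which is the qualitative behavior the abstract describes for individuals with negative net results.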


Subjects
Biological Assay/methods, Radiometry/methods, Algorithms, Bayes Theorem, Cesium Radioisotopes/chemistry, Data Interpretation, Statistical, Environmental Exposure, Humans, Isotopes/analysis, Models, Statistical, Plutonium/analysis, Probability, Radiation Doses, Regression Analysis, Strontium Radioisotopes/chemistry, Uncertainty
4.
Stat Appl Genet Mol Biol ; 9: Article 14, 2010.
Article in English | MEDLINE | ID: mdl-20196749

ABSTRACT

Nuisance factors in a protein-array study add obfuscating variation to spot intensity measurements, diminishing the accuracy and precision of protein concentration predictions. The effects of nuisance factors may be reduced by design of experiments, and by estimating and then subtracting nuisance effects. Estimated nuisance effects also inform about the quality of the study and suggest refinements for future studies. We demonstrate a method to reduce nuisance effects by incorporating a non-interfering internal calibration in the study design and its complementary analysis of variance. We illustrate this method by applying a chip-level internal calibration in a biomarker discovery study. The variability of sample intensity estimates was reduced by 16% to 92%, with a median of 58%; confidence interval widths were reduced by 8% to 70%, with a median of 35%. Calibration diagnostics revealed processing nuisance trends potentially related to spot print order and chip location on a slide. The accuracy and precision of a protein-array study may be increased by incorporating a non-interfering internal calibration. Internal calibration modeling diagnostics improve confidence in study results and suggest process steps that may need refinement. Though developed for our protein-array studies, this internal calibration method is applicable to other targeted array-based studies.
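The estimate-and-subtract idea can be sketched in its simplest form: estimate a per-chip nuisance effect from replicated calibration spots, then remove it from sample intensities. The chip names, replicate counts, and intensities are invented, and the authors' analysis-of-variance model is richer than this mean-centering.

```python
# Hedged sketch of the general idea (invented values, not the authors' model):
# estimate a per-chip nuisance effect from a shared calibration standard and
# subtract it from sample spot intensities measured on the same chip.
import numpy as np

# log-intensities of the same calibration standard replicated on three chips
calib = {"chip1": [10.1, 10.2, 10.0],
         "chip2": [10.6, 10.7, 10.5],
         "chip3": [9.8, 9.9, 9.7]}
grand_mean = np.mean([v for vals in calib.values() for v in vals])
chip_effect = {c: np.mean(v) - grand_mean for c, v in calib.items()}

# correct a sample spot measured on chip2 by removing that chip's effect
sample_raw = 12.4
sample_corrected = sample_raw - chip_effect["chip2"]
```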


Subjects
Protein Array Analysis/statistics & numerical data, Analysis of Variance, Biostatistics, Enzyme-Linked Immunosorbent Assay/methods, Enzyme-Linked Immunosorbent Assay/statistics & numerical data, Humans, Models, Statistical, Protein Array Analysis/methods
5.
Sensors (Basel) ; 10(9): 8652-62, 2010.
Article in English | MEDLINE | ID: mdl-22163677

ABSTRACT

This paper describes a new method for predicting the detectability of thin gaseous plumes in hyperspectral images. The novelty of this method is the use of basis vectors for each of the spectral channels of a collection instrument to calculate noise-equivalent concentration-pathlengths instead of matching scene pixels to absorbance spectra of gases in a library. This method provides insight into regions of the spectrum where gas detection will be relatively easier or harder, as influenced by ground emissivity, temperature contrast, and the atmosphere. Our results show that data collection planning could be influenced by information about when potential plumes are likely to be over background segments that are most conducive to detection.


Subjects
Gases/analysis, Models, Chemical, Models, Statistical, Spectrum Analysis/methods, Image Processing, Computer-Assisted
6.
Stat Appl Genet Mol Biol ; 7(1): Article21, 2008.
Article in English | MEDLINE | ID: mdl-18673290

ABSTRACT

Making sound proteomic inferences using ELISA microarray assay requires both an accurate prediction of protein concentration and a credible estimate of its error. We present a method using monotonic spline statistical models (MS), penalized constrained least squares fitting (PCLS), and Monte Carlo simulation (MC) to predict ELISA microarray protein concentrations and estimate their prediction errors. We contrast the MSMC (monotone spline Monte Carlo) method with a LNLS (logistic nonlinear least squares) method using simulated and real ELISA microarray data sets. MSMC rendered good fits in almost all tests, including those with left and/or right clipped standard curves. MS predictions were nominally more accurate, especially at the extremes of the prediction curve. MC provided credible asymmetric prediction intervals for both MS and LNLS fits that were superior to LNLS propagation-of-error intervals in achieving the target statistical confidence. MSMC was more reliable when automated prediction across simultaneous assays was applied routinely with minimal user guidance.
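The Monte Carlo half of the approach can be sketched as follows, assuming an invented monotone standard curve and noise level (the paper's monotone splines and PCLS fitting are not reproduced here): resample the observed intensity, invert the curve for each draw, and read off an asymmetric prediction interval.

```python
# Illustrative sketch (invented curve and noise, simplified from the MSMC idea):
# invert a monotone standard curve and use Monte Carlo resampling of the
# measured intensity to obtain an asymmetric prediction interval.
import numpy as np

rng = np.random.default_rng(0)
conc = np.array([1.0, 2.0, 4.0, 8.0, 16.0, 32.0])   # standard concentrations
signal = np.log(conc) + 2.0                         # assumed monotone response

def invert(y):
    # monotone interpolation from signal back to concentration
    return np.interp(y, signal, conc)

y_obs, sigma = 3.4, 0.05                            # sample intensity and noise sd
draws = invert(y_obs + sigma * rng.standard_normal(20_000))
lo, mid, hi = np.percentile(draws, [2.5, 50.0, 97.5])
```

Because the curve is nonlinear, the resulting interval is asymmetric around the point prediction, which is the property the abstract credits to the MC step.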


Subjects
Enzyme-Linked Immunosorbent Assay, Models, Statistical, Protein Array Analysis, Proteomics/methods, Algorithms, Antigen-Antibody Reactions, Computer Simulation, Dose-Response Relationship, Immunologic, Gene Expression Profiling, Humans, Least-Squares Analysis, Monte Carlo Method, Osmolar Concentration, Protein Array Analysis/standards, Reference Standards
7.
Bioinformatics ; 24(13): 1554-5, 2008 Jul 01.
Article in English | MEDLINE | ID: mdl-18499697

ABSTRACT

UNLABELLED: The Bayesian Estimator of Protein-Protein Association Probabilities (BEPro3) is a software tool for estimating probabilities of protein-protein association between bait and prey protein pairs using data from multiple-bait, multiple-replicate, protein liquid chromatography tandem mass spectrometry (LC-MS/MS) affinity isolation experiments. AVAILABILITY: BEPro3 is public domain software, has been tested on Windows XP, Linux and Mac OS, and is freely available from http://www.pnl.gov/statistics/BEPro3. SUPPLEMENTARY INFORMATION: A user guide, example dataset with analysis and additional documentation are included with the BEPro3 download.


Subjects
Algorithms, Protein Interaction Mapping/methods, Proteins/chemistry, Software, Bayes Theorem, Binding Sites, Data Interpretation, Statistical, Models, Statistical, Protein Binding
8.
Appl Radiat Isot ; 66(3): 362-71, 2008 Mar.
Article in English | MEDLINE | ID: mdl-17980610

ABSTRACT

Large variation in ambient gamma-ray backgrounds challenges the search for radiation sources. Raising detection thresholds is a common response, but one that comes at the price of reduced detection sensitivity. In response to this challenge, we explore several trip-wire detection algorithms for gamma-ray spectrometers. We assess their ability to mitigate background variation and find that the best-performing algorithms focus on the spectral shape over several energy bins using spectral comparison ratios and dynamically predict background with the Kalman Filter.
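One ingredient named above, dynamic background prediction with a Kalman filter, can be sketched for a scalar count rate. The process noise, measurement noise, and flagging threshold below are invented values, not the study's tuned parameters.

```python
# Hedged sketch (invented noise parameters): a scalar Kalman filter tracks the
# slowly drifting background count rate and flags measurements whose
# innovation (prediction residual) is many standard deviations out.
def kalman_flags(counts, q=0.5, r=4.0, threshold=4.0):
    x, p = counts[0], 1.0              # state estimate and its variance
    flags = []
    for z in counts[1:]:
        p += q                          # predict: background drifts slowly
        s = p + r                       # innovation variance
        flags.append(abs(z - x) / s ** 0.5 > threshold)
        k = p / s                       # Kalman gain
        x += k * (z - x)                # update with the new measurement
        p *= (1 - k)
    return flags

background = [100, 102, 99, 101, 100, 135, 101]   # spike at index 5
flags = kalman_flags(background)
```

A real spectrometer algorithm would apply this per spectral-comparison-ratio channel rather than to a single gross count rate, as the abstract's shape-based methods do.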

9.
J Proteome Res ; 6(9): 3788-95, 2007 Sep.
Article in English | MEDLINE | ID: mdl-17691832

ABSTRACT

Affinity isolation of protein complexes followed by protein identification by LC-MS/MS is an increasingly popular approach for mapping protein interactions. However, systematic and random assay errors from multiple sources must be considered to confidently infer authentic protein-protein interactions. To address this issue, we developed a general, robust statistical method for inferring authentic interactions from protein prey-by-bait frequency tables using a binomial-based likelihood ratio test (LRT) coupled with Bayes' Odds estimation. We then applied our LRT-Bayes' algorithm experimentally using data from protein complexes isolated from Rhodopseudomonas palustris. Our algorithm, in conjunction with the experimental protocol, inferred with high confidence authentic interacting proteins from abundant, stable complexes, but few or no authentic interactions for lower-abundance complexes. The algorithm can discriminate against a background of prey proteins that are detected in association with a large number of baits as an artifact of the measurement. We conclude that the experimental protocol including the LRT-Bayes' algorithm produces results with high confidence but moderate sensitivity. We also found that Monte Carlo simulation is a feasible tool for checking modeling assumptions, estimating parameters, and evaluating the significance of results in protein association studies.
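A binomial likelihood-ratio test in the spirit of the one described above can be sketched as follows. This is my construction for illustration, not the authors' exact statistic, and it omits their Bayes' odds step.

```python
# Hedged sketch (not the authors' exact LRT): is a prey protein detected with a
# given bait more often than its background detection rate across all baits?
import math

def binom_loglik(k, n, p):
    p = min(max(p, 1e-12), 1 - 1e-12)   # clamp to avoid log(0)
    return k * math.log(p) + (n - k) * math.log(1 - p)

def lrt_statistic(k, n, p0):
    """2*log-likelihood ratio: k detections in n replicates vs background rate p0."""
    p_hat = k / n
    return 2 * (binom_loglik(k, n, p_hat) - binom_loglik(k, n, p0))

# prey seen in 9 of 10 replicates with this bait; background rate 10%
stat = lrt_statistic(9, 10, 0.10)
```

A large statistic supports an authentic interaction, while prey detected at roughly the background rate, the "sticky" artifact proteins the abstract mentions, yield a statistic near zero.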


Subjects
Proteins/chemistry, Proteomics/methods, Algorithms, Bacterial Proteins/chemistry, Bayes Theorem, Biological Assay, Chromatography, Liquid/methods, Mass Spectrometry/methods, Models, Statistical, Monte Carlo Method, Odds Ratio, Protein Interaction Mapping, Rhodopseudomonas/metabolism, Sensitivity and Specificity
10.
Bioinformatics ; 22(10): 1278-9, 2006 May 15.
Article in English | MEDLINE | ID: mdl-16595561

ABSTRACT

SUMMARY: ProMAT is a software tool for statistically analyzing data from enzyme-linked immunosorbent assay microarray experiments. The software estimates standard curves, sample protein concentrations and their uncertainties for multiple assays. ProMAT generates a set of comprehensive figures for assessing results and diagnosing process quality. The tool is available for Windows or Mac, and is distributed as open-source Java and R code. AVAILABILITY: ProMAT is available at http://www.pnl.gov/statistics/ProMAT. ProMAT requires Java version 1.5.0 and R version 1.9.1 (or newer) and runs on Windows XP or Mac OS 10.4 (or newer).


Subjects
Algorithms, Enzyme-Linked Immunosorbent Assay/methods, Protein Array Analysis/methods, Software, User-Computer Interface, Data Interpretation, Statistical
11.
Proteome Sci ; 4: 1, 2006 Feb 24.
Article in English | MEDLINE | ID: mdl-16504106

ABSTRACT

BACKGROUND: The field of proteomics involves the characterization of the peptides and proteins expressed in a cell under specific conditions. Proteomics has made rapid advances in recent years following the sequencing of the genomes of an increasing number of organisms. A prominent technology for high throughput proteomics analysis is the use of liquid chromatography coupled to Fourier transform ion cyclotron resonance mass spectrometry (LC-FTICR-MS). Meaningful biological conclusions can best be made when the peptide identities returned by this technique are accompanied by measures of accuracy and confidence. METHODS: After a tryptically digested protein mixture is analyzed by LC-FTICR-MS, the observed masses and normalized elution times of the detected features are statistically matched to the theoretical masses and elution times of known peptides listed in a large database. The probability of matching is estimated for each peptide in the reference database using statistical classification methods assuming bivariate Gaussian probability distributions on the uncertainties in the masses and the normalized elution times. RESULTS: A database of 69,220 features from 32 LC-FTICR-MS analyses of a tryptically digested bovine serum albumin (BSA) sample was matched to a database populated with 97% false positive peptides. The percentage of high confidence identifications was found to be consistent with other database search procedures. BSA database peptides were identified with high confidence on average in 14.1 of the 32 analyses. False positives were identified on average in just 2.7 analyses. CONCLUSION: Using a priori probabilities that contrast peptides from expected and unexpected proteins was shown to perform better in identifying target peptides than using equally likely a priori probabilities. This is because a large percentage of the target peptides were similar to unexpected peptides which were included to be false positives. 
The use of triplicate analyses with a "2 out of 3" reporting rule was shown to have excellent rejection of false positives.
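The "2 out of 3" reporting rule from the conclusion can be sketched directly: a peptide is reported only if it is identified in at least two of the three replicate analyses. The peptide identifiers below are invented.

```python
# Sketch of the "2 out of 3" reporting rule (invented peptide IDs): report a
# peptide only when it appears in at least two of three replicate analyses.
from collections import Counter

def report_two_of_three(runs):
    """runs: three collections of peptide IDs from triplicate analyses."""
    hits = Counter(pep for run in runs for pep in set(run))
    return {pep for pep, n in hits.items() if n >= 2}

runs = [{"PEP1", "PEP2"}, {"PEP1", "PEP3"}, {"PEP1", "PEP2", "PEP4"}]
reported = report_two_of_three(runs)
```

Sporadic false positives rarely repeat across replicates, which is why this simple consensus rule rejects them so effectively.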

12.
J Am Soc Mass Spectrom ; 16(8): 1239-49, 2005 Aug.
Article in English | MEDLINE | ID: mdl-15979333

ABSTRACT

The combination of mass and normalized elution time (NET) of a peptide identified by liquid chromatography-mass spectrometry (LC-MS) measurements can serve as a unique signature for that peptide. However, the specificity of an LC-MS measurement depends upon the complexity of the proteome (i.e., the number of possible peptides) and the accuracy of the LC-MS measurements. In this work, theoretical tryptic digests of all predicted proteins from the genomes of three organisms of varying complexity were evaluated for specificity. Accuracy of the LC-MS measurement of mass-NET pairs (on a 0 to 1.0 NET scale) was described by bivariate normal sampling distributions centered on the peptide signatures. Measurement accuracy (i.e., mass and NET standard deviations of +/-0.1, 1, 5, and 10 ppm, and +/-0.01 and 0.05, respectively) was varied to evaluate improvements in process quality. The spatially localized confidence score, a conditional probability of peptide uniqueness, formed the basis for the peptide identification. Application of this approach to organisms with comparatively small proteomes, such as Deinococcus radiodurans, shows that modest mass and elution time accuracies are generally adequate for confidently identifying most peptides. For more complex proteomes, more accurate measurements are required. However, the study suggests that the majority of proteins for even the human proteome should be identifiable with reasonable confidence by using LC-MS measurements with mass accuracies within +/-1 ppm and high efficiency separations having elution time measurements within +/-0.01 NET.


Subjects
Chromatography, Liquid/methods, Mass Spectrometry/methods, Proteome/analysis, Proteomics/methods, Animals, Computer Simulation, Deinococcus/chemistry, Humans, Saccharomyces cerevisiae/chemistry, Saccharomyces cerevisiae Proteins/analysis, Time Factors
13.
BMC Bioinformatics ; 6: 17, 2005 Jan 26.
Article in English | MEDLINE | ID: mdl-15673468

ABSTRACT

BACKGROUND: Enzyme-linked immunosorbent assay (ELISA) is a standard immunoassay to estimate a protein's concentration in a sample. Deploying ELISA in a microarray format permits simultaneous estimation of the concentrations of numerous proteins in a small sample. These estimates, however, are uncertain due to processing error and biological variability. Evaluating estimation error is critical to interpreting biological significance and improving the ELISA microarray process. Estimation error evaluation must be automated to realize a reliable high-throughput ELISA microarray system. In this paper, we present a statistical method based on propagation of error to evaluate concentration estimation errors in the ELISA microarray process. Although propagation of error is central to this method and the focus of this paper, it is most effective only when comparable data are available. Therefore, we briefly discuss the roles of experimental design, data screening, normalization, and statistical diagnostics when evaluating ELISA microarray concentration estimation errors. RESULTS: We use an ELISA microarray investigation of breast cancer biomarkers to illustrate the evaluation of concentration estimation errors. The illustration begins with a description of the design and resulting data, followed by a brief discussion of data screening and normalization. In our illustration, we fit a standard curve to the screened and normalized data, review the modeling diagnostics, and apply propagation of error. We summarize the results with a simple, three-panel diagnostic visualization featuring a scatterplot of the standard data with logistic standard curve and 95% confidence intervals, an annotated histogram of sample measurements, and a plot of the 95% concentration coefficient of variation, or relative error, as a function of concentration. 
CONCLUSIONS: This statistical method should be of value in the rapid evaluation and quality control of high-throughput ELISA microarray analyses. Applying propagation of error to a variety of ELISA microarray concentration estimation models is straightforward. Displaying the results in the three-panel layout succinctly summarizes both the standard and sample data while providing an informative critique of applicability of the fitted model, the uncertainty in concentration estimates, and the quality of both the experiment and the ELISA microarray process.
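Propagation of error through an inverted standard curve can be sketched with a generic four-parameter logistic (4PL) model; the parameters and noise level below are invented, not the study's fitted values. The concentration's relative error is approximated by |dx/dy| * sd(y) / x, evaluated with a numerical derivative.

```python
# Hedged sketch (generic 4PL with invented parameters, not the study's fit):
# propagate intensity uncertainty through the inverted standard curve to get
# the concentration coefficient of variation (relative error).
a, d, c, b = 0.1, 2.0, 5.0, 1.2        # 4PL: y = d + (a - d) / (1 + (x/c)**b)

def inverse_4pl(y):
    # solve the 4PL for concentration x given intensity y (a < y < d here)
    return c * ((a - d) / (y - d) - 1) ** (1 / b)

def concentration_cv(y, sd_y, eps=1e-6):
    x = inverse_4pl(y)
    dxdy = (inverse_4pl(y + eps) - inverse_4pl(y - eps)) / (2 * eps)
    return abs(dxdy) * sd_y / x         # relative error of the estimate

cv_mid = concentration_cv(1.0, 0.02)    # mid-curve intensity
cv_low = concentration_cv(0.25, 0.02)   # near the lower asymptote
```

Plotting this CV against concentration produces the third panel of the diagnostic visualization described above: relative error inflates sharply as the intensity approaches the curve's asymptotes.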


Subjects
Computational Biology/methods, Enzyme-Linked Immunosorbent Assay/methods, Oligonucleotide Array Sequence Analysis/methods, Algorithms, Biomarkers, Tumor, Breast Neoplasms/diagnosis, Breast Neoplasms/genetics, Calibration, Computer Simulation, Confidence Intervals, Data Interpretation, Statistical, Evaluation Studies as Topic, Gene Expression Profiling, Humans, Logistic Models, Models, Statistical, Pattern Recognition, Automated, Reproducibility of Results, Research Design, Sequence Alignment, Sequence Analysis, DNA, Sequence Analysis, Protein