Results 1 - 16 of 16
1.
bioRxiv ; 2023 Nov 17.
Article in English | MEDLINE | ID: mdl-38014000

ABSTRACT

Purpose: To improve the reliability of metabolite quantification at both 3 T and 7 T, we propose a novel parametrized macromolecule quantification model (PRaMM) for brain 1H MRS, in which the ratios of macromolecule peak intensities are used as soft constraints. Methods: Full and metabolite-nulled spectra were acquired in three different brain regions with different ratios of grey and white matter from six healthy volunteers, at both 3 T and 7 T. Metabolite-nulled spectra were used to identify highly correlated macromolecular signal contributions and to estimate the ratios of their intensities. These ratios were then used as soft constraints in the proposed PRaMM model for quantification of full spectra. The PRaMM model was validated by comparison with a single-component macromolecule model and a macromolecule subtraction technique. Moreover, the influence of the PRaMM model on repeatability and reproducibility, compared with those other methods, was investigated. Results: The PRaMM model performed better than the two other approaches in all three investigated brain regions. Several estimates of metabolite concentration and their Cramér-Rao lower bounds were affected by the PRaMM model. Reproducibility and repeatability of the obtained concentrations were tested by evaluating the method on a second, repeated-acquisitions dataset. While the observed effects on both metrics were not significant, the fit quality metrics were improved for the PRaMM method (p ≤ 0.0001). Minimal detectable changes are in the range 0.5-1.9 mM, and percent coefficients of variation are lower than 10% for almost all clinically relevant metabolites. Furthermore, potential overparameterization was ruled out. Conclusion: The PRaMM model, a method for improved quantification of metabolites, was developed, and a method to investigate the role of the MM background and its individual components from a clinical perspective is proposed.
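The soft-constraint idea above can be sketched numerically. The following is an illustrative toy model, not the PRaMM implementation: peak shapes, positions, the prior ratio R_PRIOR, and the penalty weight LAM are all invented for the example. Two fixed-shape macromolecule peaks are fitted to a noisy spectrum, with the ratio of their amplitudes softly constrained via an extra penalized least-squares row.

```python
import numpy as np

# Synthetic spectrum: two unit-amplitude Gaussian "macromolecule" peaks
x = np.linspace(0.0, 4.0, 400)                 # chemical-shift axis (a.u.)
g1 = np.exp(-((x - 1.3) / 0.20) ** 2)
g2 = np.exp(-((x - 2.0) / 0.25) ** 2)

rng = np.random.default_rng(0)
spec = 1.0 * g1 + 0.5 * g2 + rng.normal(0, 0.02, x.size)

R_PRIOR, LAM = 0.5, 5.0                        # prior ratio a2/a1 and penalty weight
A = np.column_stack([g1, g2])
# Soft constraint as an extra row: LAM * (R_PRIOR * a1 - a2) should be ~0
A_aug = np.vstack([A, [LAM * R_PRIOR, -LAM]])
y_aug = np.append(spec, 0.0)
(a1, a2), *_ = np.linalg.lstsq(A_aug, y_aug, rcond=None)
```

Increasing LAM moves the fit from an unconstrained solution towards a hard ratio constraint.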

2.
Phys Med ; 105: 102514, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36608390

ABSTRACT

PURPOSE: To assess and optimise acquisition parameters for continuous cardiac Magnetic Resonance Fingerprinting (MRF). METHODS: Different acquisition schemes (flip angle amplitude, lobe size, T2-preparation pulses) for cardiac MRF were assessed in simulations and phantom experiments and demonstrated in one healthy volunteer. Three different experimental designs were evaluated using central composite and fractional factorial designs. Relative errors for T1 and T2 were calculated for a wide range of realistic T1 and T2 value combinations. The effect of the different designs on the accuracy of T1 and T2 was assessed using response surface modelling and Cohen's f calculations. RESULTS: Larger flip angle amplitudes led to improved T2 accuracy and precision in simulations and phantom experiments. Similar effects were also shown qualitatively in in-vivo scans. Accuracy and precision of T1 were robust to the different design parameters, with improved values for faster flip angle variation. Cohen's f showed that T2-preparation pulses influence the accuracy of T2, with the number of pulses being the most important parameter. Without T2-preparation pulses, RMSEs were 3.0 ± 8.09% for T1 and 16.24 ± 14.47% for T2. Using those pulses reduced the RMSEs to 2.3 ± 8.4% for T1 and 14.11 ± 13.46% for T2. Nonetheless, even though the improvement is significant, the RMSEs remain too high for reliable quantification. CONCLUSION: In contrast to previous studies using triggered MRF sequences with <30° flip angles, large flip angle amplitudes led to better results for continuous cardiac MRF sequences. T2-preparation pulses can improve the accuracy of T2 estimation but lead to longer scan times.


Subject(s)
Brain , Magnetic Resonance Imaging , Humans , Magnetic Resonance Imaging/methods , Heart/diagnostic imaging , Magnetic Resonance Spectroscopy , Phantoms, Imaging , Image Processing, Computer-Assisted/methods
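Cohen's f, used in the study above to rank the influence of design factors, can be computed directly from grouped data. A minimal sketch; the group values in the test would be, for example, T2 RMSE values under different T2-preparation settings, and all numbers are synthetic rather than the study's measurements:

```python
import numpy as np

def cohens_f(groups):
    # Cohen's f for a one-way layout: between-group SD of the means
    # divided by the pooled within-group SD (population definitions)
    groups = [np.asarray(g, dtype=float) for g in groups]
    all_vals = np.concatenate(groups)
    grand = all_vals.mean()
    n = np.array([g.size for g in groups])
    means = np.array([g.mean() for g in groups])
    sigma_between = np.sqrt(np.sum(n * (means - grand) ** 2) / n.sum())
    sigma_within = np.sqrt(sum(np.sum((g - m) ** 2)
                               for g, m in zip(groups, means)) / n.sum())
    return sigma_between / sigma_within
```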
3.
Magn Reson Med ; 87(3): 1119-1135, 2022 03.
Article in English | MEDLINE | ID: mdl-34783376

ABSTRACT

PURPOSE: To introduce a study design and statistical analysis framework to assess the repeatability, reproducibility, and minimal detectable changes (MDCs) of metabolite concentrations determined by in vivo MRS. METHODS: An unbalanced nested study design was chosen to acquire in vivo MRS data within different repeatability and reproducibility scenarios. A spin-echo, full-intensity acquired localized (SPECIAL) sequence was employed at 7 T utilizing three different inversion pulses: a hyperbolic secant (HS), a gradient offset independent adiabaticity (GOIA), and a wideband, uniform rate, smooth truncation (WURST) pulse. Metabolite concentrations, Cramér-Rao lower bounds (CRLBs), and coefficients of variation (CVs) were calculated. Both Bland-Altman analysis and a restricted maximum-likelihood estimation (REML) analysis were performed to estimate the different variance contributions to the repeatability and reproducibility of the measured concentrations. A Bland-Altman analysis of the spectral shape was performed to assess the variance of the spectral shape, independent of quantification model influences. RESULTS: For the setup used, minimal detectable changes of brain metabolite concentrations were found to be between 0.40 µmol/g and 2.23 µmol/g. CRLBs account for only 16% to 74% of the total variance of the metabolite concentrations. The application of gradient-modulated inversion pulses in SPECIAL led to slightly improved repeatability, but overall reproducibility appeared to be limited by differences in positioning, calibration, and other day-to-day variations across sessions. CONCLUSION: A framework is introduced to estimate the precision of metabolite concentrations obtained by MRS in vivo, and the minimal detectable changes for 13 metabolite concentrations measured at 7 T using SPECIAL are obtained.


Subject(s)
Brain , Brain/diagnostic imaging , Humans , Magnetic Resonance Spectroscopy , Reproducibility of Results
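A minimal detectable change of the kind reported above is commonly derived from the repeatability (within-subject) SD as MDC95 = 1.96 · √2 · SD. A small helper, assuming this standard formula (the paper's exact derivation may differ):

```python
import math

def minimal_detectable_change(sd_repeat, z=1.96):
    # MDC at ~95% confidence: the smallest change between two measurements
    # that exceeds what repeatability noise alone would plausibly produce
    return z * math.sqrt(2.0) * sd_repeat
```

For example, a repeatability SD of 0.5 µmol/g corresponds to an MDC95 of about 1.39 µmol/g, within the 0.40-2.23 µmol/g range quoted above.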
4.
Anal Methods ; 11(20): 2639-2649, 2019 May 23.
Article in English | MEDLINE | ID: mdl-37828743

ABSTRACT

Selected robust estimators of covariance or correlation are presented. Their use in the identification of anomalous laboratory results in inter-laboratory data is illustrated. It is shown that robust estimators can substantially reduce the impact of outlying values on multivariate confidence regions and consequently lead to sharper identification of anomalies, even where traditional outlier detection may fail to locate anomalous results.
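The effect described above can be illustrated with a toy robust-estimation scheme. The sketch below is not one of the paper's estimators: it uses a crude distance-trimming step as a stand-in for robust covariance estimators such as MCD, and shows how one planted outlier drags the classical correlation while the robust refit flags the anomaly sharply.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.8], [0.8, 1.0]], size=50)
X[0] = [6.0, -6.0]                      # one anomalous "laboratory result"

def mahalanobis_sq(X, mean, cov):
    d = X - mean
    return np.einsum('ij,jk,ik->i', d, np.linalg.inv(cov), d)

# Classical estimates are pulled towards the outlier
mean_c, cov_c = X.mean(axis=0), np.cov(X.T)
# Crude robust alternative: refit on the 80% of points closest under
# the classical fit (a simplified stand-in for MCD-style estimators)
keep = np.argsort(mahalanobis_sq(X, mean_c, cov_c))[: int(0.8 * len(X))]
mean_r, cov_r = X[keep].mean(axis=0), np.cov(X[keep].T)

d_robust = mahalanobis_sq(X, mean_r, cov_r)
corr_c = cov_c[0, 1] / np.sqrt(cov_c[0, 0] * cov_c[1, 1])
corr_r = cov_r[0, 1] / np.sqrt(cov_r[0, 0] * cov_r[1, 1])
```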

6.
BMC Bioinformatics ; 17(1): 421, 2016 Oct 12.
Article in English | MEDLINE | ID: mdl-27733121

ABSTRACT

BACKGROUND: Digital PCR (dPCR) is a technique for estimating the concentration of a target nucleic acid by loading a sample into a large number of partitions, amplifying the target, and using a fluorescent marker to identify which partitions contain the target. The standard analysis uses only the proportion of partitions containing target to estimate the concentration and depends on the assumption that the initial distribution of molecules in partitions is Poisson. In this paper, we describe a way to extend this analysis using the quantification cycle (Cq) data that may also be available, but, rather than assuming the Poisson distribution, the more general Conway-Maxwell-Poisson distribution is used. RESULTS: A software package for the open-source language R has been created for performing the analysis. It was used to validate the method by analysing Cq data from dPCR experiments involving three types of DNA (attenuated, virulent, and plasmid) at three concentrations. Results indicate some deviation from the Poisson distribution, which is strongest for the virulent DNA sample. Theoretical calculations indicate that the deviation from the Poisson distribution results in a bias of around 5% for the analysed data if the standard analysis is used, but that it could be larger for higher concentrations. Compared to the estimates of subsequent efficiency, the estimates of first-cycle efficiency are much lower for the virulent DNA, moderately lower for the attenuated DNA, and close for the plasmid DNA. Further method validation using simulated data gave results closer to the true values and with lower standard deviations than the standard method, for concentrations up to approximately 2.5 copies/partition. CONCLUSIONS: The Cq-based method is effective at estimating DNA concentration and is not seriously affected by data issues such as outliers and moderately non-linear trends. The data analysis suggests that the Poisson assumption of the standard approach does lead to a bias that is fairly small, though more research is needed. Estimates of the first-cycle efficiency being lower than estimates of the subsequent efficiency may indicate samples that are mixtures of single-stranded and double-stranded DNA. The model can reduce or eliminate the resulting bias.


Subject(s)
DNA/analysis , Plasmids/genetics , Polymerase Chain Reaction/methods , DNA/genetics , Humans , Poisson Distribution
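The standard Poisson analysis mentioned in the background can be written down compactly: if a fraction p of partitions is positive, the mean number of copies per partition is λ = −ln(1 − p), and dividing by the partition volume gives the concentration.

```python
import math

def dpcr_concentration(n_positive, n_total, volume_per_partition):
    # Standard Poisson-based dPCR estimate: the fraction of negative
    # partitions is exp(-lambda), so lambda = -ln(1 - p)
    p = n_positive / n_total
    lam = -math.log(1.0 - p)              # mean copies per partition
    return lam / volume_per_partition     # copies per unit volume
```

This is the baseline that the Cq-based Conway-Maxwell-Poisson analysis above generalises.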
7.
Biomol Detect Quantif ; 8: 15-28, 2016 Jun.
Article in English | MEDLINE | ID: mdl-27335807

ABSTRACT

Measurement of RNA can be used to study and monitor a range of infectious and non-communicable diseases, with profiling of multiple mRNA transcripts being increasingly applied to cancer stratification and prognosis. An international comparison study (Consultative Committee for Amount of Substance (CCQM)-P103.1) was performed to evaluate the comparability of measurements of RNA copy number ratio for multiple gene targets between two samples. Six exogenous synthetic targets comprising External RNA Control Consortium (ERCC) standards were measured alongside transcripts for three endogenous gene targets present in the background of human cell line RNA. The study was carried out under the auspices of the Nucleic Acids (formerly Bioanalysis) Working Group of the CCQM. It was coordinated by LGC (United Kingdom) with the support of the National Institute of Standards and Technology (USA), and results were submitted by thirteen National Metrology Institutes and Designated Institutes. The majority of laboratories performed RNA measurements using RT-qPCR, with datasets also being submitted by two laboratories using reverse transcription digital polymerase chain reaction and one laboratory using a next-generation sequencing method. In RT-qPCR analysis, the RNA copy number ratios between the two samples were quantified using either a standard curve or a relative quantification approach. In general, good agreement was observed between the reported results of ERCC RNA copy number ratio measurements. Measurements of the RNA copy number ratios for endogenous genes between the two samples were also consistent between the majority of laboratories. Some differences in the reported values and confidence intervals ('measurement uncertainties') were noted, which may be attributable to the choice of measurement method or quantification approach. This highlights the need for standardised practices for the calculation of fold change ratios and uncertainties in the area of gene expression profiling.
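One common relative-quantification approach of the kind compared in the study is the ΔΔCq (Livak) method, in which the fold change between a test and a control sample, normalised to a reference gene, is efficiency^(−ΔΔCq). A sketch assuming ideal doubling per cycle (efficiency = 2); the Cq values in the example are invented:

```python
def fold_change_ddcq(cq_target_test, cq_ref_test,
                     cq_target_ctrl, cq_ref_ctrl, efficiency=2.0):
    # delta-Cq normalises the target to the reference gene in each sample;
    # delta-delta-Cq compares test against control
    ddcq = (cq_target_test - cq_ref_test) - (cq_target_ctrl - cq_ref_ctrl)
    return efficiency ** (-ddcq)
```

With a real assay, the amplification efficiency should be measured rather than assumed to be exactly 2.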

8.
Talanta ; 130: 462-9, 2014 Dec.
Article in English | MEDLINE | ID: mdl-25159436

ABSTRACT

Monte Carlo simulation of expert judgments on human errors in a chemical analysis was used to determine the distributions of error-quantification scores (scores of likelihood and severity, and scores of the effectiveness of a laboratory quality system in preventing the errors). The simulation was based on modelling expert behaviour: confident, reasonably doubting, and irresolute expert judgments were taken into account by means of different probability mass functions (pmfs). As a case study, 36 scenarios of human error which may occur in elemental analysis of geological samples by ICP-MS were examined. Characteristics of the score distributions for the three pmfs of expert behaviour were compared. Variability of the scores, expressed as the standard deviation of the simulated score values from the distribution mean, was used to assess score robustness. The range of score values, calculated directly from elicited data and simulated by the Monte Carlo method for the different pmfs, was also discussed from the robustness point of view. It was shown that the robustness of the scores obtained in the case study can be assessed as satisfactory for quality risk management and for improving a laboratory quality system against human errors.
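The simulation scheme above can be sketched as draws from different pmfs over a discrete score scale. The pmfs below are invented for illustration, not the elicited ones; the point is that a more irresolute behaviour spreads probability more evenly and therefore yields higher score variability.

```python
import numpy as np

rng = np.random.default_rng(42)
scores = np.array([1, 2, 3, 4, 5])            # discrete score scale
pmfs = {                                      # hypothetical expert behaviours
    "confident":  [0.00, 0.00, 0.10, 0.20, 0.70],
    "doubting":   [0.05, 0.10, 0.20, 0.30, 0.35],
    "irresolute": [0.20, 0.20, 0.20, 0.20, 0.20],
}
draws = {k: rng.choice(scores, size=10_000, p=v) for k, v in pmfs.items()}
sds = {k: d.std() for k, d in draws.items()}  # score variability per behaviour
```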

9.
J AOAC Int ; 95(5): 1433-9, 2012.
Article in English | MEDLINE | ID: mdl-23175977

ABSTRACT

Repeatability and reproducibility data for microbiological methods in food analysis were collated and assessed with a view to identifying useful or important trends. Generalized additive modelling for location, shape, and scale was used to model the distribution of variances. It was found that mean reproducibility for log10(CFU) data is largely independent of concentration, while the repeatability SD of log10(CFU) data decreases strongly and significantly with increasing enumeration. The model for reproducibility SD gave a mean of 0.44, with an upper 95th percentile of approximately 0.76. Repeatability variance could be described reasonably well by a simple dichotomous model: at enumerations below 10^5/g, the model for repeatability SD gave a mean of approximately 0.35 and an upper 95th percentile of 0.63; above 10^5/g, the model gave a mean of 0.2 and an upper 95th percentile of 0.36. A Horwitz-like function showed no appreciable advantage in describing the data set and gave an apparently worse fit. The relationship between repeatability and reproducibility of log10(CFU) is not constant across the concentration range studied. Both repeatability and reproducibility were found to depend on matrix class and organism.


Subject(s)
Bacteriological Techniques/standards , Food Microbiology/methods , Food Microbiology/standards , Animal Feed/microbiology , Reproducibility of Results , Sensitivity and Specificity
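The dichotomous repeatability model can be captured as a simple lookup using the summary values quoted in the abstract (mean and upper 95th percentile of the repeatability SD on the log10(CFU) scale, split at 10^5/g):

```python
def repeatability_sd_log10cfu(enumeration_per_g, statistic="mean"):
    # Dichotomous summary of repeatability SD for log10(CFU) data,
    # using the means and upper 95th percentiles stated in the abstract
    low = enumeration_per_g < 1e5
    if statistic == "mean":
        return 0.35 if low else 0.20
    return 0.63 if low else 0.36      # upper 95th percentile
```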
10.
Anal Bioanal Chem ; 401(10): 3221-7, 2011 Dec.
Article in English | MEDLINE | ID: mdl-22002559

ABSTRACT

A method of calibration for real-time quantitative polymerase chain reaction (qPCR) experiments, based on the method of standard additions combined with non-linear curve fitting, is described. The method is tested by comparing the results of a traditionally calibrated qPCR experiment with a standard-additions experiment in the presence of 2 mM EDTA, a known inhibitor chosen to provide an unambiguous test of the principle by inducing an approximately twofold bias in the apparent copy number calculated using traditional calibration. The standard additions method is shown to substantially reduce inhibitor-induced bias in real-time qPCR.


Subject(s)
Real-Time Polymerase Chain Reaction/standards , Brassica/genetics , DNA, Plant/genetics , Edetic Acid/pharmacology , Real-Time Polymerase Chain Reaction/methods
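The standard-additions-with-curve-fitting idea can be sketched for qPCR as follows. This is an idealized, noise-free illustration, not the paper's fitting function: Cq falls logarithmically with the total number of copies (initial plus spiked), and the unknown initial copy number n0 is recovered as a fitted parameter. All numbers are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

def cq_model(n_added, n0, cq_single, eff):
    # Cq for (n0 + n_added) starting copies with per-cycle efficiency eff;
    # cq_single is the notional Cq of a single starting copy
    return cq_single - np.log(n0 + n_added) / np.log(eff)

adds = np.array([0.0, 100.0, 200.0, 400.0, 800.0])   # spiked copies (synthetic)
cq_obs = cq_model(adds, 150.0, 38.0, 1.9)            # true n0 = 150 copies

(n0_est, cq1_est, eff_est), _ = curve_fit(cq_model, adds, cq_obs,
                                          p0=[100.0, 37.0, 2.0])
```

Because the same reaction mix (inhibitor included) is used for every spike level, the fitted n0 is largely insensitive to inhibition, which is the point of the standard additions design.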
11.
Analyst ; 133(8): 992-7, 2008 Aug.
Article in English | MEDLINE | ID: mdl-18645637

ABSTRACT

Standard additions is a calibration technique devised to eliminate rotational matrix effects in analytical measurement. Although the technique is presented in almost every textbook of analytical chemistry, its behaviour in practice is not well documented and has attracted misleading accounts. The most important limitation is that the method cannot deal with translational matrix effects, which need to be handled separately. In addition, because the method involves extrapolation from known data, it is often regarded as less precise than external calibration (interpolation) techniques. Here, using a generalised model of an analytical system, we examine the behaviour of the method of standard additions under a range of conditions and find that, if executed optimally, there is no noteworthy loss of precision.


Subject(s)
Calibration , Chemistry Techniques, Analytical/methods , Models, Statistical , Reference Standards
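The classic linear form of standard additions extrapolates the fitted signal-versus-spike line back to zero signal; the unknown concentration is intercept/slope. A noise-free sketch with synthetic numbers (sensitivity k and true concentration c0 are invented):

```python
import numpy as np

# signal = k * (c0 + c_add): same sensitivity k applies to sample and spikes,
# which is why rotational matrix effects cancel
c_add = np.array([0.0, 1.0, 2.0, 3.0, 4.0])   # added concentrations (synthetic)
signal = 0.8 * (2.5 + c_add)                  # k = 0.8, true c0 = 2.5

slope, intercept = np.polyfit(c_add, signal, 1)
c0 = intercept / slope                        # extrapolation to zero signal
```

The extrapolation distance (and hence the precision penalty the abstract discusses) shrinks when the spike levels bracket a signal range comparable to the sample's own signal.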
12.
Anal Bioanal Chem ; 390(1): 201-13, 2008 Jan.
Article in English | MEDLINE | ID: mdl-18026721

ABSTRACT

Consistent treatment of measurement bias, including the question of whether or not to correct for bias, is essential for the comparability of measurement results. The case for correcting for bias is discussed, and it is shown that instances in which bias is known or suspected, but in which a specific correction cannot be justified, are comparatively common. The ISO Guide to the Expression of Uncertainty in Measurement does not provide well for this situation. It is concluded that there is a need for guidance on handling cases of uncorrected bias. Several different published approaches to the treatment of uncorrected bias and its uncertainty are critically reviewed with regard to coverage probability and simplicity of execution. On the basis of current studies, and taking into account testing laboratory needs for a simple and consistent approach with a symmetric uncertainty interval, we conclude that for most cases with large degrees of freedom, linear addition of a bias term adjusted for exact coverage ("U(e)") as described by Synek is to be preferred. This approach does, however, become more complex if degrees of freedom are low. For modest bias and low degrees of freedom, summation of bias, bias uncertainty and observed value uncertainty in quadrature ("RSSu") provides a similar interval and is simpler to adapt to reduced degrees of freedom, at the cost of a more restricted range of application if accurate coverage is desired.
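The two combination rules compared above can be written in simplified form. These sketches omit the exact-coverage adjustment of Synek's U(e) and the reduced-degrees-of-freedom handling discussed in the paper; they show only the basic shape of each rule for large degrees of freedom.

```python
import math

def expanded_uncertainty_rssu(u_obs, bias, u_bias, k=2.0):
    # "RSSu": observed-value uncertainty, bias, and bias uncertainty
    # summed in quadrature, then expanded with coverage factor k
    return k * math.sqrt(u_obs**2 + bias**2 + u_bias**2)

def expanded_uncertainty_linear(u_obs, bias, u_bias, k=2.0):
    # Linear addition of the bias term (the Synek-style approach,
    # without the exact-coverage adjustment), giving a symmetric interval
    return k * math.sqrt(u_obs**2 + u_bias**2) + abs(bias)
```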

13.
BMC Biotechnol ; 6: 33, 2006 Jul 06.
Article in English | MEDLINE | ID: mdl-16824215

ABSTRACT

BACKGROUND: Accurate quantification of DNA at low levels using quantitative real-time PCR is increasingly important for clinical, environmental, and forensic applications. At low concentration levels (here referring to under 100 target copies) DNA quantification is sensitive to losses during preparation and suffers from appreciable valid non-detection rates for sampling reasons. This paper reports studies on a real-time quantitative PCR assay targeting a region of the human SRY gene over a concentration range of 0.5 to 1000 target copies. The effects of different sample preparation and calibration methods on quantitative accuracy were investigated. RESULTS: At very low target concentrations of 0.5-10 genome equivalents (g.e.), eliminating any replicates within each DNA standard concentration with no measurable signal (non-detects) compromised calibration. Improved calibration could be achieved by eliminating all calibration replicates for any calibration standard concentration with non-detects ('elimination by sample'). Test samples also showed positive bias if non-detects were removed prior to averaging; less biased results were obtained by converting to concentration, including non-detects as zero concentration, and averaging all values. Tube plastic proved to have a strongly significant effect on DNA quantitation at low levels (p = 1.8 × 10^-4). At low concentrations (under 10 g.e.), results for assays prepared in standard plastic were reduced by about 50% compared with low-retention plastic. Preparation solution (carrier DNA or stabiliser) was not found to have a significant effect in this study. Detection probabilities were calculated using logistic regression. Logistic regression over large concentration ranges proved sensitive to non-detected replicate reactions caused by amplification failure at high concentrations; the effect could be reduced by regression against log(concentration) or, better, by eliminating invalid responses.
CONCLUSION: Use of low-retention plastic tubes is advised for quantification of DNA solutions at levels below 100 g.e. For low-level calibration using linear least squares, it is better to eliminate the entire replicate group for any standard that shows non-detects reasonably attributable to sampling effects than to either eliminate non-detects or to assign arbitrary high Ct values. In calculating concentrations for low-level test samples with non-detects, concentrations should be calculated for each replicate, zero concentration assigned to non-detects, and all resulting concentration values averaged. Logistic regression is a useful method of estimating detection probability at low DNA concentrations.


Subject(s)
Artifacts , Microchemistry/methods , Models, Genetic , Oligonucleotide Array Sequence Analysis/methods , Reverse Transcriptase Polymerase Chain Reaction/methods , Sequence Analysis, DNA/methods , Specimen Handling/methods , Base Sequence , Computer Simulation , Computer Systems , Microchemistry/instrumentation , Molecular Sequence Data , Oligonucleotide Array Sequence Analysis/instrumentation , Quality Control , Reproducibility of Results , Reverse Transcriptase Polymerase Chain Reaction/instrumentation , Sensitivity and Specificity , Sequence Analysis, DNA/instrumentation , Specimen Handling/instrumentation
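Detection probability as a function of log concentration, as in the study's logistic regression, can be fitted in a few lines of iteratively reweighted least squares. The data and code below are an illustrative sketch with invented detect/non-detect outcomes, not the study's assay results:

```python
import numpy as np

# replicate outcomes (1 = detected, 0 = non-detect) at each concentration
conc = np.array([0.5, 0.5, 1, 1, 2, 2, 5, 5, 10, 10, 50, 50], dtype=float)
det  = np.array([0,   0,   0, 1, 1, 0, 1, 1, 1,  1,  1,  1], dtype=float)

# regression against log(concentration), as recommended in the abstract
X = np.column_stack([np.ones_like(conc), np.log10(conc)])
beta = np.zeros(2)
for _ in range(25):                       # Newton-Raphson (IRLS) iterations
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    W = p * (1.0 - p)
    beta += np.linalg.solve(X.T @ (X * W[:, None]), X.T @ (det - p))

p50 = 10 ** (-beta[0] / beta[1])          # concentration with 50% detection
```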
14.
Analyst ; 131(6): 710-7, 2006 Jun.
Article in English | MEDLINE | ID: mdl-16732358

ABSTRACT

Different methods of treating data which lie close to a natural limit in a feasible range, such as zero or 100% mass or mole fraction, are discussed and recommendations made concerning the most appropriate. The methods considered include discarding observations beyond the limit, shifting observations to the limit, truncation of a classical confidence interval based on Student's t (coupled with shifting the result to the limit if outside the feasible range), truncation and renormalisation of an assumed normal distribution, and the maximum density interval of a Bayesian estimate based on a normal measurement distribution and a uniform prior within the feasible range. Based on consideration of bias and simulation to assess coverage, it is recommended that for most purposes, a confidence interval near a natural limit should be constructed by first calculating the usual confidence interval based on Student's t, then truncating the out-of-range portion to leave an asymmetric interval and adjusting the reported value to within the resulting interval if required. It is suggested that the original standard uncertainty is retained for uncertainty propagation purposes.
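The recommended procedure (compute the usual t-based interval, truncate the out-of-range portion, and shift the reported value into the resulting interval if required) can be sketched as:

```python
def truncated_t_interval(mean, u, t_mult, low=0.0, high=100.0):
    # Usual t-based confidence interval, truncated to the feasible range;
    # t_mult is the t multiplier (e.g. 2.26 for 95% with 9 d.f.).
    # The reported value is shifted into the interval if it falls outside.
    half = t_mult * u
    lo = max(low, mean - half)
    hi = min(high, mean + half)
    value = min(max(mean, lo), hi)
    return value, (lo, hi)
```

Note that, as the abstract suggests, the original standard uncertainty u would still be carried forward unchanged for uncertainty propagation.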

15.
J AOAC Int ; 89(1): 232-9, 2006.
Article in English | MEDLINE | ID: mdl-16512253

ABSTRACT

The study considers data from 2 UK-based proficiency schemes and includes data from a total of 29 rounds and 43 test materials over a period of 3 years. The results from the 2 schemes are similar and reinforce each other. The amplification process used in quantitative polymerase chain reaction determinations predicts a mixture of normal, binomial, and lognormal distributions dominated by the latter 2. As predicted, the study results consistently follow a positively skewed distribution. Log-transformation prior to calculating z-scores is effective in establishing near-symmetric distributions that are sufficiently close to normal to justify interpretation on the basis of the normal distribution.


Subject(s)
Data Interpretation, Statistical , Organisms, Genetically Modified , Food Analysis , Food, Genetically Modified , Likelihood Functions , Models, Statistical , Normal Distribution , Polymerase Chain Reaction , Reproducibility of Results , Statistical Distributions
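Log-transformation before z-scoring, as recommended above, looks like this in outline. The assigned value and the SD on the log scale are placeholders, not parameters of either proficiency scheme:

```python
import numpy as np

def z_scores_log(results, assigned_value, sd_log):
    # z-scores computed on the log10 scale, where positively skewed
    # (lognormal-like) PCR results become approximately symmetric
    logs = np.log10(np.asarray(results, dtype=float))
    return (logs - np.log10(assigned_value)) / sd_log
```

A result equal to the assigned value scores 0, and a result a fixed multiplicative factor above or below scores symmetrically, which is the property the log transform buys.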
16.
Analyst ; 127(6): 818-24, 2002 Jun.
Article in English | MEDLINE | ID: mdl-12146917

ABSTRACT

The choice of an analytical procedure and the determination of an appropriate sampling strategy are here treated as a decision theory problem in which sampling and analytical costs are balanced against possible end-user losses due to measurement error. Measurement error is taken here to include both sampling and analytical variances, but systematic errors are not considered. The theory is developed in detail for the case exemplified by a simple accept or reject decision following an analytical measurement on a batch of material, and useful approximate formulae are given for this case. Two worked examples are given, one involving a batch production process and the other a land reclamation site.


Subject(s)
Chemistry Techniques, Analytical/economics , Decision Theory , Models, Chemical , Costs and Cost Analysis
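The cost balance described above can be illustrated with the simplest version of the trade-off: with per-analysis cost c and an end-user loss proportional to the measurement variance σ²/n of the mean of n replicates, the total expected cost n·c + L·σ²/n is minimized at n* = √(L·σ²/c). This is a generic textbook illustration, not the paper's exact formulae:

```python
import math

def optimal_replicates(cost_per_analysis, loss_coeff, sigma2):
    # minimize n*c + L*sigma^2/n over n  ->  n* = sqrt(L*sigma^2/c),
    # rounded to the nearest whole number of replicates (at least 1)
    n_star = math.sqrt(loss_coeff * sigma2 / cost_per_analysis)
    return max(1, round(n_star))
```

Cheaper analyses or larger end-user losses both push the optimum towards more replication, which is the qualitative message of the decision-theoretic treatment.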