Results 1-20 of 53
1.
Anal Biochem; 694: 115602, 2024 Nov.
Article in English | MEDLINE | ID: mdl-38977233

ABSTRACT

Modern isothermal titration calorimetry instruments offer high precision, but comparable accuracy requires chemical calibration. For the heat factor, one recommended procedure is titration of HCl into the weak base TRIS. In studying this reaction with a VP-ITC and two Nano-ITCs, we have encountered some problems, most importantly a titrant volume shortfall Δv ≈ 0.3 µL, which we attribute to diffusive loss of HCl in the syringe tip. This interpretation is supported by a mathematical treatment of the diffusion problem. The effect was discovered through a variable-v protocol, which should therefore be used to allow properly for it in any reaction that similarly approaches completion. We also find that the effects from carbonate contamination and from OH⁻ produced by weak-base hydrolysis can be more significant than previously thought. To facilitate proper weighting in the least-squares fitting of data, we have estimated data variance functions from replicate data. All three instruments have low-signal precision of σ ≈ 1 µJ; titrant volume uncertainty is a factor of ∼2 larger for the Nano-ITCs than for the VP-ITC. The final heat factors remain uncertain by more than the ∼1% precision of the instruments and are unduly sensitive to the HCl concentration.
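A minimal sketch of the variance-function idea mentioned above, in Python with made-up numbers (the replicate heats, the noise model, and the assumed VF form var(q) = a + b·q² are all illustrations, not the paper's data or code):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical replicate heats q (in uJ) at several mean signal levels; the
# noise model sigma^2 = 1 + (0.01*q)^2 is an assumption for illustration.
q_levels = np.array([0.0, 5.0, 20.0, 80.0, 200.0])
true_sigma = np.sqrt(1.0 + (0.01 * q_levels) ** 2)
replicates = rng.normal(q_levels, true_sigma, size=(10, q_levels.size))

# Sample variance at each signal level from the replicates
s2 = replicates.var(axis=0, ddof=1)

# Fit the two-parameter variance function s2 = a + b*q^2 (linear in q^2)
A = np.column_stack([np.ones_like(q_levels), q_levels ** 2])
a, b = np.linalg.lstsq(A, s2, rcond=None)[0]
print(f"estimated VF: var(q) ~ {a:.2f} + {b:.2e} * q^2")
# Weights for subsequent least-squares fits of the titration data are 1/var(q).
```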


Subjects
Calorimetry, Calorimetry/methods, Calibration, Hydrochloric Acid/chemistry
2.
Anal Chem; 94(46): 15997-16005, 2022 Nov 22.
Article in English | MEDLINE | ID: mdl-36343110

ABSTRACT

ANSWER: No. Most goodness-of-fit (GOF) tests attempt to discern a preferred weighting using either absolute or relative errors in the back-calculated calibration x values. However, the former are predisposed to select constant weighting and the latter 1/x² or 1/y² weighting, no matter what the true weighting should be. Here, I use Monte Carlo simulations to quantify the flaws in GOF tests and show why they falsely prefer their predisposition weighting. The weighting problem is solved properly through variance function (VF) estimation from replicate data, conveniently separating this from the problem of selecting a response function (RF). Any weighting other than inverse-variance must give loss of precision in the RF parameters and in the estimates of unknowns x0. In particular, the widely used 1/x² weighting, if wrong, not only sacrifices precision but, even worse, appears to give better precision at small x, leading to falsely optimistic estimates of detection and quantification limits. Realistic VFs typically become constant in the low-x, low-y limit. Thus, even when 1/x² weighting is correct at large signal, the neglect of the constant variance component at small signal again gives too-small detection and quantification limits. VF estimation has been disparaged as too demanding of data. Why this is not true is demonstrated with Monte Carlo simulations that show only a few percent increase in calibration parameter uncertainties when the VF is estimated from just three replicates at each of six calibration x values. This point is further demonstrated using examples from the recent literature.
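The central claim, that any weighting other than inverse-variance loses precision, is easy to check by Monte Carlo. A sketch under assumed conditions (a one-parameter line through the origin and a made-up variance function with a constant floor plus a proportional part; none of this is the paper's code):

```python
import numpy as np

rng = np.random.default_rng(7)
x = np.array([1.0, 2.0, 5.0, 10.0, 20.0, 50.0])
b_true = 2.0
# Assumed variance function: constant floor plus a component proportional to y
sigma = np.sqrt(0.5**2 + (0.03 * b_true * x) ** 2)

def wls_slope(y, w):
    # Weighted least-squares slope for the one-parameter model y = b*x
    return np.sum(w * x * y) / np.sum(w * x**2)

b_iv, b_x2 = [], []
for _ in range(10000):
    y = rng.normal(b_true * x, sigma)
    b_iv.append(wls_slope(y, 1.0 / sigma**2))  # correct inverse-variance weights
    b_x2.append(wls_slope(y, 1.0 / x**2))      # popular 1/x^2 weighting
print("sd(b), inverse-variance weights:", np.std(b_iv))
print("sd(b), 1/x^2 weights           :", np.std(b_x2))
```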


Subjects
Calibration, Least-Squares Analysis, Monte Carlo Method, Uncertainty
3.
Anal Biochem; 642: 114481, 2022 Apr 01.
Article in English | MEDLINE | ID: mdl-34843699

ABSTRACT

By conducting binding experiments at a range of temperatures T using isothermal titration calorimetry (ITC), one can obtain two estimates of the binding enthalpy: calorimetric (ΔH°cal) from the experiments at each T, and van't Hoff (ΔH°vH) from the T dependence of the binding constant K°. From thermodynamics it is clear that these two must be identical, but early efforts to demonstrate this for ITC data indicated significant inconsistency. In an extensive 2004 study of the Ba²⁺ + 18-crown-6 ether complexation used in prior comparisons, Mizoue and Tellinghuisen found modest (10-20%) but statistically significant differences, which were tentatively attributed to problems converting the calorimetric estimates to their standard-state values, as implied by the superscript ° in the notation. In the present work the 2004 results are reanalyzed using results obtained since then from temperature, heat, and volume calibration of the instrument, together with a better determination of the data variance function required for weighted least-squares fitting of the data. The new results show consistency for temperatures 5-30 °C but persistent statistically significant differences from 35 to 46 °C. Several possible explanations for the remaining discrepancies are examined, with methods that include fitting the K° and ΔH°cal data together.
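For reference, the van't Hoff estimate comes from the slope of ln K° vs 1/T. A small illustration with invented K(T) values (not the Ba²⁺/18-crown-6 data):

```python
import numpy as np

R = 8.314  # gas constant, J mol^-1 K^-1

# Invented K(T) values; the van't Hoff slope of ln K vs 1/T equals -dH/R
T = np.array([278.15, 288.15, 298.15, 308.15])   # K
K = np.array([9.5e3, 7.1e3, 5.5e3, 4.3e3])

slope, intercept = np.polyfit(1.0 / T, np.log(K), 1)
dH_vH = -R * slope
print(f"van't Hoff dH ~ {dH_vH / 1000:.1f} kJ/mol")   # ~ -19 kJ/mol here
```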


Subjects
Barium/chemistry, Calorimetry, Crown Ethers/chemistry, Thermodynamics, Calibration
4.
BMC Bioinformatics; 21(1): 291, 2020 Jul 08.
Article in English | MEDLINE | ID: mdl-32640980

ABSTRACT

BACKGROUND: A recently proposed method for estimating qPCR amplification efficiency E analyzes fluorescence intensity ratios from pairs of points deemed to lie in the exponential growth region of the amplification curves for all reactions in a dilution series. This method suffers from a serious problem: the resulting ratios are highly correlated, as they involve multiple use of the raw data, for example yielding ~250 E estimates from ~25 intensity readings. The resulting statistics for such estimates are falsely optimistic in their assessment of the estimation precision. RESULTS: Monte Carlo simulations confirm that the correlated-pairs method yields precision estimates that are better than actual by a factor of two or more. This result is further supported by estimating E by both pairwise and Cq calibration methods for the 16 replicate datasets from the critiqued work and then comparing the ensemble statistics for these methods. CONCLUSION: Contrary to assertions in the proposing work, the pairwise method does not yield E estimates a factor of 2 more precise than estimates from Cq calibration fitting (the standard curve method). On the other hand, the statistically correct direct fit of the data to the model behind the pairwise method can yield E estimates of comparable precision. Ways in which the approach might be improved are discussed briefly.
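The correlation effect is easy to reproduce by simulation. A sketch with assumed parameters (8 readings, 2% proportional noise, E = 1.90; none of these values come from the critiqued work): the naive standard error, which treats the pairwise estimates as independent, comes out noticeably smaller than the actual dispersion of the mean estimate.

```python
import numpy as np

rng = np.random.default_rng(3)
E_true, cycles = 1.90, np.arange(8)   # 8 readings; E and noise level are assumed

def pairwise_E(y):
    # All-pairs estimates E = (y_j / y_i)^(1/(j-i)) -- heavy reuse of the raw data
    est = [(y[j] / y[i]) ** (1.0 / (j - i))
           for i in range(len(y)) for j in range(i + 1, len(y))]
    return np.array(est)

mean_E, naive_se = [], []
for _ in range(2000):
    y = E_true**cycles * (1 + rng.normal(0, 0.02, cycles.size))
    est = pairwise_E(y)
    mean_E.append(est.mean())
    naive_se.append(est.std(ddof=1) / np.sqrt(est.size))  # assumes independence
print("actual sd of the mean E estimate:", np.std(mean_E))
print("average naive standard error    :", np.mean(naive_se))  # falsely optimistic
```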


Subjects
Real-Time Polymerase Chain Reaction, Data Correlation, Fluorescence, Monte Carlo Method
5.
Anal Chem; 92(16): 10863-10871, 2020 Aug 18.
Article in English | MEDLINE | ID: mdl-32678579

ABSTRACT

Methods for straight-line fitting of data having uncertainty in both x and y are compared through Monte Carlo simulations and application to specific data sets. Under special circumstances, the "ignorance" methods, which are typically used without information about the data errors σx and σy, are equivalent to the recommended best approach. The latter is numerical rather than formulaic but is easy to implement in programs that permit user-defined fit functions. It can handle any response function, linear or nonlinear, for any σxi and σyi. Estimates for the latter must be supplied and rightfully belong in any data analysis.
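One common numerical route of this kind is the "effective variance" treatment, in which the x error is projected onto y through the local slope. A sketch for the straight-line case, with invented data and assumed known σx and σy (this names one standard approach, not necessarily the exact method recommended in the paper):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
# Invented straight-line data with errors in both coordinates
sx, sy = 0.15, 0.30                     # assumed known error estimates
xt = np.linspace(0.0, 10.0, 8)
x = xt + rng.normal(0, sx, xt.size)
y = 1.0 + 0.5 * xt + rng.normal(0, sy, xt.size)

def chi2(p):
    a, b = p
    # Effective variance: x error projected onto y through the slope b
    return np.sum((y - a - b * x) ** 2 / (sy**2 + (b * sx) ** 2))

res = minimize(chi2, x0=[0.0, 1.0], method="Nelder-Mead")
print("intercept, slope =", res.x)
```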

6.
Anal Biochem; 611: 113946, 2020 Dec 15.
Article in English | MEDLINE | ID: mdl-32918867

ABSTRACT

The ultimate precision in both dPCR and qPCR experiments is limited by the Poisson statistics of the total number m of template molecules in the sample, giving relative standard deviation 1/√m. This means that precision is limited by sample volume at low concentrations. Accordingly, qPCR instruments, used in dPCR mode, can give better precision than dPCR instruments in this limit. For example, a 13% standard deviation can be achieved with a 96-well plate for number concentrations of ~20-5000 mL⁻¹. For fixed m, qPCR loses to dPCR by a factor of ~2 in precision when calibration is needed.
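A quick illustration of the Poisson limit (the plate geometry below, 96 wells of ~30 µL, is an assumption chosen for illustration, not taken from the paper):

```python
import numpy as np

# Poisson limit: relative sd of an estimate of m molecules is 1/sqrt(m).
# The plate geometry (96 wells of ~30 uL) is assumed purely for illustration.
V_mL = 96 * 0.030
for c in (20, 500, 5000):                 # number concentration, molecules/mL
    m = c * V_mL
    print(f"c = {c:5d}/mL -> m = {m:8.0f}, relative sd = {1 / np.sqrt(m):.1%}")
```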


Subjects
Chemical Models, Real-Time Polymerase Chain Reaction, Calibration, Humans
7.
Anal Chem; 91(14): 8715-8722, 2019 Jul 16.
Article in English | MEDLINE | ID: mdl-31180654

ABSTRACT

Inverse variance weighting ensures optimal parameter estimation in least-squares fitting, with exact parameter standard errors for linear least-squares with known data variance. In this Feature, I emphasize the virtues of numerical methods for estimating data variance functions and for determining these limits for any calibration model, linear or nonlinear.

8.
Anal Biochem; 563: 79-86, 2018 Dec 15.
Article in English | MEDLINE | ID: mdl-30149027

ABSTRACT

Isothermal titration calorimetry data recorded on a MicroCal/Malvern VP-ITC instrument for water-water blanks and for dilution of aqueous solutions of BaCl2 and Ba(NO3)2 are analyzed using Origin software, the freeware NITPIC program, and in-house algorithms, to compare precisions for estimating the heat per injection q. The data cover temperatures 6-47 °C, injection volumes 4-40 µL, and average heats 0-200 µcal. For water-water blanks, where baseline noise limits precision, NITPIC and the in-house algorithm achieve precisions of 0.05 µcal, which is better than Origin by a factor of 4. The precision differences decrease with increasing |q|, becoming insignificant for |q| > 200 µcal. In its default mode, NITPIC underestimates |q| for peaks with incomplete return to baseline, but the shortfall can be largely corrected by overriding the default injection time parameter. The variance estimates from 26 dilution experiments are used to assess the data variance function. The results determine the conditions under which weighted least squares should be used to estimate thermodynamic parameters from ITC data.


Subjects
Calorimetry/methods, Algorithms, Hot Temperature, Least-Squares Analysis, Temperature, Thermodynamics
9.
Biochim Biophys Acta Gen Subj; 1862(4): 886-894, 2018 Apr.
Article in English | MEDLINE | ID: mdl-29289616

ABSTRACT

BACKGROUND: Questions about the reliability of parametric standard errors (SEs) from nonlinear least squares (LS) algorithms have led to a general mistrust of these precision estimators that is often unwarranted. METHODS: The importance of non-Gaussian parameter distributions is illustrated by converting linear models to nonlinear by substituting e^A, ln A, and 1/A for a linear parameter a. Monte Carlo (MC) simulations characterize parameter distributions in more complex cases, including when data have varying uncertainty and should be weighted but weights are neglected. This situation leads to loss of precision and erroneous parametric SEs, as is illustrated for the Lineweaver-Burk analysis of enzyme kinetics data and the analysis of isothermal titration calorimetry data. RESULTS: Non-Gaussian parameter distributions are generally asymmetric and biased. However, when the parametric SE is <10% of the magnitude of the parameter, both the bias and the asymmetry can usually be ignored. Sometimes nonlinear estimators can be redefined to give more normal distributions and better convergence properties. CONCLUSION: Variable data uncertainty, or heteroscedasticity, can sometimes be handled by data transforms but more generally requires weighted LS, which in turn requires knowledge of the data variance. GENERAL SIGNIFICANCE: Parametric SEs are rigorously correct in linear LS under the usual assumptions and are a trustworthy approximation in nonlinear LS provided they are sufficiently small, a condition favored by the abundant, precise data routinely collected in many modern instrumental methods.
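The reparametrization effect is simple to see by simulation. A sketch for the a = e^A case (so A = ln a), with invented noise levels: when the parametric SE is small relative to the parameter, the distribution of the estimate of A is nearly normal; when it is large, bias and skew appear.

```python
import numpy as np

rng = np.random.default_rng(11)
a_true, n_pts = 1.0, 5   # linear parameter and replicate count (both invented)

for noise in (0.05, 0.5):                # small vs large parametric SE
    a_hat = rng.normal(a_true, noise, size=(50000, n_pts)).mean(axis=1)
    A_hat = np.log(a_hat[a_hat > 0])     # reparametrization a = e^A, i.e. A = ln a
    z = (A_hat - A_hat.mean()) / A_hat.std()
    print(f"noise {noise}: mean A = {A_hat.mean():+.4f} (true 0.0), "
          f"skew = {np.mean(z**3):+.2f}")
```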


Subjects
Algorithms, Computer Simulation, Least-Squares Analysis, Monte Carlo Method, Calorimetry/methods, Enzymes/metabolism, Humans, Kinetics, Reproducibility of Results
10.
Biochim Biophys Acta; 1860(5): 861-867, 2016 May.
Article in English | MEDLINE | ID: mdl-26477875

ABSTRACT

BACKGROUND: Successful ITC experiments require conversion of cell reagent (titrand M) to product and production or consumption of heat. These conditions are quantified for 1:1 binding, M + X ⇔ MX. METHODS: Nonlinear least squares is used in error-propagation mode to predict the precisions with which the key quantities (binding constant K, reaction enthalpy ΔH°, and stoichiometry number n) can be estimated over a wide range of the dimensionless quantity that governs isotherm shape, c = K[M]0. The measurement precision σq is estimated from analysis of water-water blanks. RESULTS: When the product conversion exceeds 90%, the parameter relative standard errors are proportional to σq/qtot, where the total heat qtot ≈ ΔH°[M]0V0. Specifically, (σK/K)(qtot/σq) ≈ 25 for c = 10⁻³ to 10 and ≈ 11c^(1/3) for c = 10 to 10⁴. For c > 1, n and ΔH° are more precise than K; this holds also at smaller c for the product n×ΔH° and for ΔH° when n can be held fixed. Use of as few as 10 titrant injections can outperform the customary 20-40 while also improving productivity. CONCLUSION: These principles are illustrated in experiment design using the program ITC-PLANNER15. GENERAL SIGNIFICANCE: Simple quantitative guidelines replace the "c rules" that have dominated the literature for decades.
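A worked example of the quoted precision rule, with illustrative values for ΔH°, [M]0, cell volume, and σq (none taken from the paper):

```python
# Worked example of the precision rule above; all values are illustrative.
dH = 30e3          # assumed |reaction enthalpy|, J/mol
M0 = 1e-4          # assumed titrand concentration, mol/L
V0 = 1.4e-3        # assumed cell volume, L
sigma_q = 4.2e-6   # assumed heat uncertainty per injection, J (~1 ucal)

q_tot = dH * M0 * V0                 # total heat: ~4.2 mJ here
rel_sd_K = 25 * sigma_q / q_tot      # sigma_K/K ~ 25 sigma_q/q_tot for c <= 10
print(f"q_tot = {q_tot * 1e3:.2f} mJ -> sigma_K/K ~ {rel_sd_K:.1%}")
```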


Subjects
Barium Compounds/chemistry, Calorimetry/standards, Chlorides/chemistry, Crown Ethers/chemistry, Software, Hot Temperature, Kinetics, Least-Squares Analysis, Nitrates/chemistry, Research Design, Temperature, Thermodynamics
11.
Anal Chem; 88(24): 12183-12187, 2016 Dec 20.
Article in English | MEDLINE | ID: mdl-28193077

ABSTRACT

The role of partition volume variability, or polydispersity, in digital polymerase chain reaction methods is examined through formal considerations and Monte Carlo simulations. Contrary to intuition, polydispersity causes little precision loss for low average copy number per partition µ and can actually improve precision when µ exceeds ∼4. It does this by negatively biasing the estimates of µ, thus increasing the number of negative (null) partitions N0. In keeping with binomial statistics, this increases the relative precision of N0 and hence of the biased estimate m of µ. Below µ = 1, the precision loss and the bias are both small enough to be negligible for many applications. For higher µ the bias becomes more important than the imprecision, making accuracy dependent on knowledge of the partition volume distribution function. This information can be gained with optical microscopy or through calibration with reference materials.
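A compact Monte Carlo sketch of the polydispersity effect (the lognormal volume distribution and all parameter values are assumptions for illustration): the standard estimator of µ, which assumes equal partition volumes, comes out low-biased under volume variability, while its run-to-run dispersion can actually shrink at µ = 4.

```python
import numpy as np

rng = np.random.default_rng(2)
N, mu, cv = 20000, 4.0, 0.2   # partitions, mean copies/partition, volume CV (assumed)

def estimate_mu(vol_rel):
    counts = rng.poisson(mu * vol_rel)       # occupancy scales with partition volume
    return -np.log(np.mean(counts == 0))     # usual estimator; assumes equal volumes

s = np.sqrt(np.log(1 + cv**2))               # lognormal shape for unit-mean volumes
mono = [estimate_mu(np.ones(N)) for _ in range(300)]
poly = [estimate_mu(rng.lognormal(-s**2 / 2, s, N)) for _ in range(300)]
print(f"monodisperse: mean {np.mean(mono):.3f}, sd {np.std(mono):.4f}")
print(f"polydisperse: mean {np.mean(poly):.3f}, sd {np.std(poly):.4f} (low-biased)")
```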


Subjects
Polymerase Chain Reaction/methods, Calibration, Statistical Models, Monte Carlo Method, Sample Size
12.
Anal Biochem; 513: 43-46, 2016 Nov 15.
Article in English | MEDLINE | ID: mdl-27567993

ABSTRACT

Isothermal titration calorimetry data for very low c (≡ K[M]0) must normally be analyzed with the stoichiometry parameter n fixed, either at its known value or, if the system is not well characterized, at any reasonable value. In the latter case, ΔH° (and hence n) can be estimated from the T-dependence of the binding constant K, using the van't Hoff (vH) relation. An alternative is global, or simultaneous, fitting of data at multiple temperatures. In this Note, global analysis of low-c data at two temperatures is shown to estimate ΔH° and n with double the precision of the vH method.


Subjects
Theoretical Models, Indirect Calorimetry/methods
13.
Anal Biochem; 496: 1-3, 2016 Mar 01.
Article in English | MEDLINE | ID: mdl-26562324

ABSTRACT

Relative expression ratios are commonly estimated in real-time qPCR studies by comparing the quantification cycle for the target gene with that for a reference gene in the treatment samples, normalized to the same quantities determined for a control sample. For the "standard curve" design, where data are obtained for all four of these at several dilutions, nonlinear least squares can be used to assess the amplification efficiencies (AE) and the adjusted ΔΔCq and its uncertainty, with automatic inclusion of the effect of uncertainty in the AEs. An algorithm is illustrated for the KaleidaGraph program.


Subjects
Least-Squares Analysis, Real-Time Polymerase Chain Reaction/methods, Uncertainty
14.
Anal Chem; 87(3): 1889-95, 2015 Feb 03.
Article in English | MEDLINE | ID: mdl-25582662

ABSTRACT

The quantification cycle (Cq) is widely used for calibration in real-time quantitative polymerase chain reaction (qPCR) to estimate the initial amount, or copy number (N0), of the target DNA. Cq may be defined in several ways, including the cycle where the detected fluorescence achieves a prescribed threshold level. For all methods of defining Cq, the standard deviation from replicate experiments is typically much greater than the estimated standard errors from the least-squares fits used to obtain Cq. For moderate-to-large copy number (N0 > 10²), pipet volume uncertainty and variability in the amplification efficiency (E) likely account for most of the excess variance in Cq. For small N0, the dispersion of Cq is determined by the Poisson statistics of N0, which means that N0 can be estimated directly from the variance of Cq. The estimation precision is determined by the statistical properties of χ², giving a relative standard deviation of ∼(2/n)^(1/2), where n is the number of replicates; for example, a 20% standard deviation in N0 from 50 replicates.
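The quoted (2/n)^(1/2) rule in a few lines (purely a restatement of the formula above):

```python
import numpy as np

# Relative sd of N0 estimated from the Cq variance across n replicates: ~ (2/n)^(1/2)
for n in (10, 50, 200):
    print(f"n = {n:3d} replicates -> relative sd ~ {np.sqrt(2 / n):.0%}")
```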


Subjects
Real-Time Polymerase Chain Reaction/methods, Analysis of Variance, Gene Dosage, Least-Squares Analysis
15.
Anal Chem; 87(17): 8925-31, 2015 Sep 01.
Article in English | MEDLINE | ID: mdl-26235706

ABSTRACT

Monte Carlo simulations are used to examine the bias and loss of precision that result from experimental error and analysis procedures in real-time quantitative polymerase chain reaction (PCR). In the limit of small copy numbers (N0), Poisson statistics govern the dispersion in estimates of the quantification cycle (Cq) for replicate experiments, permitting the estimation of N0 from the Cq variance, which is inversely proportional to N0. We derive corrections to expressions given previously for this determination. With increasing N0, the Poisson contribution decreases and other effects, like pipet volume uncertainty (typically >3%), dominate. Cycle-to-cycle variability in the amplification efficiency E produces scale dispersion similar to that for variability in the sensitivity of fluorescence detection. When this E variability is proportional to just the amplification (E - 1), there is insignificant effect on Cq if scale-independent definitions are used for this marker. Single-reaction analysis methods based on the exponential growth equation are inherently low-biased in E and high-biased in N0, and these biases can amount to factor-of-4 or greater error in N0. For estimating Cq, their greatest limitation is use of a constant absolute threshold, making them inefficient for data that exhibit scale variability.

16.
Anal Biochem; 464: 94-102, 2014 Nov 01.
Article in English | MEDLINE | ID: mdl-24991688

ABSTRACT

Most methods for analyzing real-time quantitative polymerase chain reaction (qPCR) data for single experiments estimate the hypothetical cycle 0 signal y0 by first estimating the quantification cycle (Cq) and amplification efficiency (E) from least-squares fits of fluorescence intensity data for cycles near the onset of the growth phase. The resulting y0 values are statistically equivalent to the corresponding Cq if and only if E is taken to be error free. But uncertainty in E usually dominates the total uncertainty in y0, making the latter much degraded in precision compared with Cq. Bias in E can be an even greater source of error in y0. So-called mechanistic models achieve higher precision in estimating y0 by tacitly assuming E=2 in the baseline region and so are subject to this bias error. When used in calibration, the mechanistic y0 is statistically comparable to Cq from the other methods. When a signal threshold yq is used to define Cq, best estimation precision is obtained by setting yq near the maximum signal in the range of fitted cycles, in conflict with common practice in the y0 estimation algorithms.


Subjects
Polymerase Chain Reaction/methods, Uncertainty, Calibration, Least-Squares Analysis
17.
Anal Biochem; 449: 76-82, 2014 Mar 15.
Article in English | MEDLINE | ID: mdl-24365068

ABSTRACT

New methods are used to compare seven qPCR analysis methods for their performance in estimating the quantification cycle (Cq) and amplification efficiency (E) for a large test data set (94 samples for each of 4 dilutions) from a recent study. Precision and linearity are assessed using chi-square (χ²), which is the minimized quantity in least-squares (LS) fitting, equivalent to the variance in unweighted LS, and commonly used to define statistical efficiency. All methods yield Cqs that vary strongly in precision with the starting concentration N0, requiring weighted LS for proper calibration fitting of Cq vs log(N0). Then χ² for cubic calibration fits compares the inherent precision of the Cqs, while increases in χ² for quadratic and linear fits show the significance of nonlinearity. Nonlinearity is further manifested in unphysical estimates of E from the same Cq data, results which also challenge a tenet of all qPCR analysis methods: that E is constant throughout the baseline region. Constant-threshold (Ct) methods underperform the other methods when the data vary considerably in scale, as these data do.


Subjects
Real-Time Polymerase Chain Reaction/methods, Calibration, Least-Squares Analysis, Linear Models
18.
Anal Biochem; 424(2): 211-20, 2012 May 15.
Article in English | MEDLINE | ID: mdl-22306472

ABSTRACT

Literature recommendations for designing isothermal titration calorimetry (ITC) experiments to study 1:1 binding, M + X ⇔ MX, are not consistent and have persisted through time with little quantitative justification. In particular, the "standard protocol" employed by most workers involves 20 to 30 injections of titrant to a final titrant/titrand mole ratio (Rm) of ~2, a scheme that can be far from optimal and can needlessly limit applicability of the ITC technique. These deficiencies are discussed here along with other misconceptions. Whether a specific binding process can be studied by ITC is determined less by c (the product of binding constant K and titrand concentration [M]0) than by the total detectable heat qtot and the extent to which M can be converted to MX. As guidelines, with 90% conversion to MX, K can be estimated within 5% over the range 10 to 10⁸ M⁻¹ when qtot/σq ≈ 700, where σq is the standard deviation for estimation of q. This ratio drops to ~150 when the stoichiometry parameter n is treated as known. A computer application for modeling 1:1 binding yields realistic estimates of parameter standard errors for use in protocol design and feasibility assessment.
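A feasibility check based on the qtot/σq ≈ 700 guideline, taking qtot ≈ |ΔH|[M]0V0 as in entry 10 above; the instrument values (σq ~ 1 µcal, 1.4 mL cell) and the reaction enthalpy are assumptions for illustration:

```python
# Required titrand concentration from the q_tot/sigma_q ~ 700 guideline.
# Instrument and reaction values below are assumptions for illustration.
sigma_q = 4.2e-6   # J per injection (~1 ucal)
V0 = 1.4e-3        # L, cell volume
dH = 40e3          # J/mol, assumed |reaction enthalpy|

M0 = 700 * sigma_q / (dH * V0)   # from q_tot = |dH| [M]0 V0 = 700 sigma_q
print(f"[M]0 ~ {M0 * 1e6:.0f} uM for ~5% precision in K")
```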


Subjects
Chemical Models, Statistical Models, Calorimetry, Humans, Kinetics, Research Design, Temperature, Thermodynamics, Titrimetry
19.
J Phys Chem A; 116(1): 391-8, 2012 Jan 12.
Article in English | MEDLINE | ID: mdl-22128887

ABSTRACT

Absorption spectra of I₂ dissolved in n-heptane and CCl₄ are analyzed with a quantum gas-phase model, in which spectra at four temperatures between 15 and 50 °C are least-squares fitted by bound-free spectral simulations to obtain estimates of the excited-state potential energy curves and transition moment functions for the three component bands: A ← X, B ← X, and C ← X. Compared with a phenomenological band-fitting model used previously on these spectra, the physical model (1) is better statistically and (2) yields component bands with less variability. The results support the earlier tentative conclusion that most of the ~20% gain in intensity in solution is attributable to the C ← X transition. The T-dependent changes in the spectrum are accounted for by potential energy shifts that are linear in T and negative (giving red shifts in the spectra) and about twice as large for CCl₄ as for heptane. The derived upper potentials resemble those in the gas phase, with one major exception: in the statistically best convergence mode, the A potential is much lower and steeper, with a strongly varying transition moment function. This observation leads to the realization that two markedly different potential curves can give nearly identical absorption spectra.

20.
Anal Biochem; 414(2): 297-9, 2011 Jul 15.
Article in English | MEDLINE | ID: mdl-21443854

ABSTRACT

In the study of 1:1 binding by isothermal titration calorimetry, reagent concentration errors are fully absorbed in the data analysis, giving incorrect values for the key parameters (K, ΔH, and n) with no effect on the least-squares statistics. Reanalysis of results from an interlaboratory study of a selected biochemical process demonstrates that concentration errors are likely responsible for most of the overall statistical error in these parameters. The concentration errors are approximately 10%, greatly exceeding expected levels. Furthermore, examination of selected data sets reveals a surprising sensitivity to the baseline, suggesting a need for great care in treating dilution heats.


Subjects
Calorimetry/methods, Animals, Carbonic Anhydrase II/chemistry, Cattle, Protein Binding, Sulfonamides/chemistry, Thermodynamics, Benzenesulfonamides