Results 1 - 20 of 53
1.
J Biopharm Stat ; : 1-19, 2024 Jun 18.
Article in English | MEDLINE | ID: mdl-38889012

ABSTRACT

BACKGROUND: Positive and negative likelihood ratios (PLR and NLR) are important metrics of accuracy for diagnostic devices with a binary output. However, the properties of Bayesian and frequentist interval estimators of PLR/NLR have not been extensively studied and compared. In this study, we explore the potential use of the Bayesian method for interval estimation of PLR/NLR and, more broadly, for interval estimation of the ratio of two independent proportions. METHODS: We develop a Bayesian approach for interval estimation of PLR/NLR for use as part of a diagnostic device performance evaluation. Our approach applies to the broader setting of interval estimation for any ratio of two independent proportions. We compare score and Bayesian interval estimators for the ratio of two proportions in terms of coverage probability (CP) and expected interval width (EW) via extensive experiments and applications to two case studies. A supplementary experiment was also conducted to assess the performance of the proposed exact Bayesian method under different priors. RESULTS: Our experimental results show that the overall mean CP for Bayesian interval estimation is consistent with that for the score method (0.950 vs. 0.952), while the overall mean EW for the Bayesian method is shorter than that for the score method (15.929 vs. 19.724). Application to the two case studies showed that the intervals estimated using the Bayesian and frequentist approaches are very similar. DISCUSSION: Our numerical results indicate that the proposed Bayesian approach has CP performance comparable to the score method while yielding higher precision (i.e., a shorter EW).
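For illustration only, not the authors' exact method: a minimal Monte Carlo sketch of a Bayesian credible interval for the PLR, assuming independent Jeffreys Beta(0.5, 0.5) priors on sensitivity and specificity; all counts and names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)

def plr_credible_interval(tp, fn, fp, tn, level=0.95, draws=200_000):
    """Monte Carlo credible interval for the positive likelihood ratio.

    Under independent Jeffreys Beta(0.5, 0.5) priors, sensitivity and
    specificity have Beta posteriors; PLR = sensitivity / (1 - specificity)
    is then a ratio of two independent proportions.
    """
    sens = rng.beta(tp + 0.5, fn + 0.5, size=draws)  # posterior of sensitivity
    spec = rng.beta(tn + 0.5, fp + 0.5, size=draws)  # posterior of specificity
    plr = sens / (1.0 - spec)
    lo, hi = np.quantile(plr, [(1 - level) / 2, (1 + level) / 2])
    return lo, hi

# Hypothetical 2x2 counts: 90 TP, 10 FN, 15 FP, 185 TN
print(plr_credible_interval(90, 10, 15, 185))
```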

2.
Multivariate Behav Res ; 59(5): 1058-1076, 2024.
Article in English | MEDLINE | ID: mdl-39042102

ABSTRACT

While Bayesian methodology is increasingly favored in behavioral research for its clear probabilistic inference and model structure, its widespread acceptance as a standard meta-analysis approach remains limited. Although some conventional Bayesian hierarchical models are frequently used for analysis, their performance has not been thoroughly examined. This study evaluates two commonly used Bayesian models for meta-analysis of standardized mean differences and identifies significant issues with these models. In response, we introduce a new Bayesian model equipped with novel features that address these concerns as well as a broader limitation of current Bayesian meta-analysis. Furthermore, we introduce a simple computational approach for constructing simultaneous credible intervals for the summary effect and the between-study heterogeneity from their joint posterior samples. This fully captures the joint uncertainty in these parameters, a task that is challenging or impractical with frequentist models. Through simulation studies rooted in a joint Bayesian/frequentist paradigm, we compare our model's performance against existing ones under conditions that mirror realistic research scenarios. The results reveal that our new model outperforms the others and shows enhanced statistical properties. We also demonstrate the practicality of our models using real-world examples, highlighting how our approach strengthens the robustness of inferences regarding the summary effect.
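The abstract does not spell out the construction; one plausible sample-based reading (widen the two marginal intervals together until their rectangle jointly covers the target posterior mass) can be sketched as follows. The variable names and the stand-in draws are assumptions, not the authors' code.

```python
import numpy as np

def simultaneous_credible_rectangle(mu, tau, level=0.95):
    """One way to build simultaneous intervals for (mu, tau) from joint
    posterior draws: widen the per-parameter quantile level until the
    product rectangle contains at least `level` of the joint sample."""
    for marginal in np.linspace(level, 0.9999, 500):
        a = (1 - marginal) / 2
        m_lo, m_hi = np.quantile(mu, [a, 1 - a])
        t_lo, t_hi = np.quantile(tau, [a, 1 - a])
        inside = (mu >= m_lo) & (mu <= m_hi) & (tau >= t_lo) & (tau <= t_hi)
        if inside.mean() >= level:
            return (m_lo, m_hi), (t_lo, t_hi)
    return (mu.min(), mu.max()), (tau.min(), tau.max())

# Fake draws standing in for MCMC output of (summary effect, heterogeneity)
rng = np.random.default_rng(7)
mu = rng.normal(0.4, 0.1, 20_000)
tau = np.abs(rng.normal(0.2, 0.05, 20_000))
print(simultaneous_credible_rectangle(mu, tau))
```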


Subjects
Bayes Theorem; Computer Simulation; Meta-Analysis as Topic; Models, Statistical; Humans; Data Interpretation, Statistical; Behavioral Research/methods; Behavioral Research/statistics & numerical data; Behavioral Research/standards
3.
Entropy (Basel) ; 26(6)2024 Jun 12.
Article in English | MEDLINE | ID: mdl-38920519

ABSTRACT

Ensuring that the proposed probabilistic model accurately represents the problem is a critical step in statistical modeling, as choosing a poorly fitting model can have significant repercussions on the decision-making process. The primary objective of statistical modeling often revolves around predicting new observations, highlighting the importance of assessing the model's accuracy. However, current methods for evaluating predictive ability typically involve model comparison, which may not guarantee a good model selection. This work presents an accuracy measure designed for evaluating a model's predictive capability. This measure, which is straightforward and easy to understand, includes a decision criterion for model rejection. The development of this proposal adopts a Bayesian perspective of inference, elucidating the underlying concepts and outlining the necessary procedures for application. To illustrate its utility, the proposed methodology was applied to real-world data, facilitating an assessment of its practicality in real-world scenarios.

4.
Mol Phylogenet Evol ; 180: 107689, 2023 03.
Article in English | MEDLINE | ID: mdl-36587884

ABSTRACT

Phylogenetic trees constructed from molecular sequence data rely on largely arbitrary assumptions about the substitution model, the distribution of substitution rates across sites, the version of the molecular clock, and, in the case of Bayesian inference, the prior distribution. Those assumptions affect results reported in the form of clade probabilities and error bars on divergence times and substitution rates. Overlooking the uncertainty in the assumptions leads to overly confident conclusions in the form of inflated clade probabilities and overly short confidence or credible intervals. This paper demonstrates how to propagate that uncertainty by combining the models considered along with all of their assumptions, including their prior distributions. The combined models incorporate much more of the uncertainty than Bayesian model averages, since the latter tend to settle on a single model due to the higher-level assumption that one of the models is true. The proposed model combination method is illustrated with nucleotide sequence data.
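As a loose illustration of the idea only (the fixed-weight pooling rule below is an assumption, not taken from the paper): pooling posterior draws across models with equal, data-independent weights, in contrast to the data-driven weights of Bayesian model averaging that tend to concentrate on one model.

```python
import numpy as np

def combine_posteriors(draws_per_model, weights=None):
    """Pool posterior draws from several models into one mixture sample.

    The weights are fixed up front (equal by default), so no single model
    can dominate the combination the way it can under data-driven BMA weights.
    """
    k = len(draws_per_model)
    weights = np.full(k, 1.0 / k) if weights is None else np.asarray(weights)
    n = min(len(d) for d in draws_per_model)
    parts = [d[: int(round(w * n))] for d, w in zip(draws_per_model, weights)]
    return np.concatenate(parts)

# Fake divergence-time posteriors from three substitution models
rng = np.random.default_rng(0)
pooled = combine_posteriors([rng.normal(10.0, 1.0, 5000),
                             rng.normal(11.0, 2.0, 5000),
                             rng.normal(9.5, 1.5, 5000)])
print(np.quantile(pooled, [0.025, 0.975]))  # interval reflecting across-model uncertainty
```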


Subjects
Evolution, Molecular; Models, Genetic; Phylogeny; Uncertainty; Bayes Theorem; Probability
5.
Stat Med ; 42(17): 2928-2943, 2023 07 30.
Article in English | MEDLINE | ID: mdl-37158167

ABSTRACT

Surveillance research is of great importance for effective and efficient epidemiological monitoring of case counts and disease prevalence. Taking specific motivation from ongoing efforts to identify recurrent cases based on the Georgia Cancer Registry, we extend the recently proposed "anchor stream" sampling design and estimation methodology. Our approach offers a more efficient and defensible alternative to traditional capture-recapture (CRC) methods by leveraging a relatively small random sample of participants whose recurrence status is obtained through a principled application of medical records abstraction. This sample is combined with one or more existing signaling data streams, which may yield data based on arbitrarily non-representative subsets of the full registry population. The key extension developed here accounts for the common problem of false positive or false negative diagnostic signals from the existing data stream(s). In particular, we show that the design only requires documentation of positive signals in these non-anchor surveillance streams, and permits valid estimation of the true case count based on an estimable positive predictive value (PPV) parameter. We borrow ideas from the multiple imputation paradigm to provide accompanying standard errors, and develop an adapted Bayesian credible interval approach that yields favorable frequentist coverage properties. We demonstrate the benefits of the proposed methods through simulation studies, and provide a data example targeting estimation of the breast cancer recurrence case count among Metro Atlanta area patients from the Georgia Cancer Registry-based Cancer Recurrence Information and Surveillance Program (CRISP) database.


Subjects
Breast Neoplasms; Neoplasm Recurrence, Local; Humans; Female; Bayes Theorem; Registries; Breast Neoplasms/diagnosis; Breast Neoplasms/epidemiology; Epidemiological Monitoring
6.
Int J Mol Sci ; 24(20)2023 Oct 11.
Article in English | MEDLINE | ID: mdl-37894754

ABSTRACT

We compare several different methods to quantify the uncertainty of binding parameters estimated from isothermal titration calorimetry data: the asymptotic standard error from maximum likelihood estimation, error propagation based on a first-order Taylor series expansion, and the Bayesian credible interval. When the methods are applied to simulated experiments and to measurements of Mg(II) binding to EDTA, the asymptotic standard error underestimates the uncertainty in the free energy and enthalpy of binding. Error propagation overestimates the uncertainty for both quantities, except in the simulations, where it underestimates the uncertainty of enthalpy for confidence intervals less than 70%. In both datasets, Bayesian credible intervals are much closer to observed confidence intervals.
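For reference, the first-order (delta-method) error propagation used here as a comparator takes the generic form below, stated for an arbitrary function f of the fitted parameters; the paper applies the idea to the binding free energy and enthalpy.

```latex
\operatorname{Var}\!\left[f(\hat{\theta})\right] \approx
\sum_{i}\left(\frac{\partial f}{\partial \theta_i}\right)^{2}\operatorname{Var}[\hat{\theta}_i]
+ 2\sum_{i<j}\frac{\partial f}{\partial \theta_i}\,\frac{\partial f}{\partial \theta_j}\,
\operatorname{Cov}[\hat{\theta}_i,\hat{\theta}_j]
```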


Subjects
Uncertainty; Bayes Theorem; Calorimetry/methods; Thermodynamics; Protein Binding
7.
Behav Res Methods ; 55(3): 1069-1078, 2023 04.
Article in English | MEDLINE | ID: mdl-35581436

ABSTRACT

The current practice of reliability analysis is both uniform and troublesome: most reports consider only Cronbach's α, and almost all reports focus exclusively on a point estimate, disregarding the impact of sampling error. In an attempt to improve the status quo we have implemented Bayesian estimation routines for five popular single-test reliability coefficients in the open-source statistical software program JASP. Using JASP, researchers can easily obtain Bayesian credible intervals to indicate a range of plausible values and thereby quantify the precision of the point estimate. In addition, researchers may use the posterior distribution of the reliability coefficients to address practically relevant questions such as "What is the probability that the reliability of my test is larger than a threshold value of .80?". In this tutorial article, we outline how to conduct a Bayesian reliability analysis in JASP and correctly interpret the results. By making available a computationally complex procedure in an easy-to-use software package, we hope to motivate researchers to include uncertainty estimates whenever reporting the results of a single-test reliability analysis.
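JASP performs this through its interface; the posterior summaries it reports reduce to computations like the following sketch, assuming posterior draws of the reliability coefficient are already available. The names and stand-in draws are illustrative, not JASP's internals.

```python
import numpy as np

def summarize_reliability(posterior_draws, threshold=0.80, level=0.95):
    """Credible interval plus an exceedance probability for a reliability
    coefficient, computed from posterior draws (e.g., MCMC output)."""
    a = (1 - level) / 2
    lo, hi = np.quantile(posterior_draws, [a, 1 - a])
    p_above = float(np.mean(posterior_draws > threshold))
    return {"credible_interval": (lo, hi),
            f"P(reliability > {threshold})": p_above}

# Stand-in draws; in practice these come from the Bayesian reliability model
rng = np.random.default_rng(3)
draws = rng.beta(80, 15, 10_000)
print(summarize_reliability(draws))
```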


Subjects
Software; Humans; Bayes Theorem; Reproducibility of Results; Uncertainty
8.
Am J Epidemiol ; 191(3): 487-498, 2022 02 19.
Article in English | MEDLINE | ID: mdl-34718388

ABSTRACT

Estimating incidence of rare cancers is challenging for exceptionally rare entities and in small populations. In a previous study, investigators in the Information Network on Rare Cancers (RARECARENet) provided Bayesian estimates of expected numbers of rare cancers and 95% credible intervals for 27 European countries, using data collected by population-based cancer registries. In that study, slightly different results were found by implementing a Poisson model in integrated nested Laplace approximation/WinBUGS platforms. In this study, we assessed the performance of a Poisson modeling approach for estimating rare cancer incidence rates, oscillating around an overall European average and using small-count data in different scenarios/computational platforms. First, we compared the performance of frequentist, empirical Bayes, and Bayesian approaches for providing 95% confidence/credible intervals for the expected rates in each country. Second, we carried out an empirical study using 190 rare cancers to assess different lower/upper bounds of a uniform prior distribution for the standard deviation of the random effects. For obtaining a reliable measure of variability for country-specific incidence rates, our results suggest the suitability of using 1 as the lower bound for that prior distribution and selecting the random-effects model through an averaged indicator derived from 2 Bayesian model selection criteria: the deviance information criterion and the Watanabe-Akaike information criterion.


Subjects
Neoplasms; Bayes Theorem; Europe/epidemiology; Humans; Incidence; Neoplasms/epidemiology; Registries
9.
Mol Phylogenet Evol ; 167: 107357, 2022 02.
Article in English | MEDLINE | ID: mdl-34785383

ABSTRACT

Confidence intervals of divergence times and branch lengths do not reflect uncertainty about their clades or about the prior distributions and other model assumptions on which they are based. Uncertainty about the clade may be propagated to a confidence interval by multiplying its confidence level by the bootstrap proportion of its clade or by another probability that the clade is correct. (If the confidence level is 95% and the bootstrap proportion is 90%, then the uncertainty-adjusted confidence level is (0.95)(0.90) = 0.855, or about 86%.) Uncertainty about the model can be propagated to the confidence interval by reporting the union of the confidence intervals from all the plausible models. Provided the intervals overlap, this union is itself an interval whose lower and upper limits are the most extreme limits across the models. The proposed methods of uncertainty quantification may be used together.
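Both adjustments described above are essentially one-liners; a literal sketch (function names are illustrative):

```python
def adjusted_confidence(level, clade_support):
    """Discount a confidence level by the probability the clade is correct,
    e.g. 0.95 * 0.90 = 0.855 (about 86%)."""
    return level * clade_support

def union_of_intervals(intervals):
    """Model-uncertainty adjustment: take the most extreme limits across
    the plausible models' intervals (a single interval when they overlap)."""
    lows, highs = zip(*intervals)
    return min(lows), max(highs)

print(adjusted_confidence(0.95, 0.90))  # 0.855
print(union_of_intervals([(10.2, 12.1), (11.0, 13.4), (10.8, 12.9)]))
```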


Subjects
Models, Statistical; Confidence Intervals; Phylogeny; Probability; Uncertainty
10.
Prev Med ; 164: 107127, 2022 11.
Article in English | MEDLINE | ID: mdl-35787846

ABSTRACT

It is well known that the statistical analyses in health-science and medical journals are frequently misleading or even wrong. Despite many decades of reform efforts by hundreds of scientists and statisticians, attempts to fix the problem by avoiding obvious error and encouraging good practice have not altered this basic situation. Statistical teaching and reporting remain mired in damaging yet editorially enforced jargon of "significance", "confidence", and imbalanced focus on null (no-effect or "nil") hypotheses, leading to flawed attempts to simplify descriptions of results in ordinary terms. A positive development amidst all this has been the introduction of interval estimates alongside or in place of significance tests and P-values, but intervals have been beset by similar misinterpretations. Attempts to remedy this situation by calling for replacement of traditional statistics with competitors (such as pure-likelihood or Bayesian methods) have had little impact. Thus, rather than ban or replace P-values or confidence intervals, we propose to replace traditional jargon with more accurate and modest ordinary-language labels that describe these statistics as measures of compatibility between data and hypotheses or models, which have long been in use in the statistical modeling literature. Such descriptions emphasize the full range of possibilities compatible with observations. Additionally, a simple transform of the P-value called the surprisal or S-value provides a sense of how much or how little information the data supply against those possibilities. We illustrate these reforms using some examples from a highly charged topic: trials of ivermectin treatment for Covid-19.
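The S-value mentioned at the end is the negative base-2 logarithm of the P-value; a worked example:

```python
import math

def s_value(p):
    """Surprisal (S-value): bits of information the data supply against
    the test model. p = 0.05 gives about 4.3 bits, roughly as surprising
    as getting four heads in four fair coin tosses."""
    return -math.log2(p)

print(s_value(0.05))   # ~4.32 bits
print(s_value(0.005))  # ~7.64 bits
```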


Subjects
COVID-19; Humans; Data Interpretation, Statistical; Bayes Theorem; COVID-19/prevention & control; Probability; Models, Statistical; Confidence Intervals
11.
Geoderma ; 405: 115396, 2022 Jan 01.
Article in English | MEDLINE | ID: mdl-34980929

ABSTRACT

A crucial decision in designing a spatial sample for soil survey is the number of sampling locations required to answer, with sufficient accuracy and precision, the questions posed by decision makers at different levels of geographic aggregation. In the Indian Soil Health Card (SHC) scheme, many thousands of locations are sampled per district. In this paper the SHC data are used to estimate the mean of a soil property within a defined study area, e.g., a district, or the areal fraction of the study area where some condition is satisfied, e.g., exceedance of a critical level. The central question is whether this large sample size is needed for this aim. The sample size required for a given maximum length of a confidence interval can be computed with formulas from classical sampling theory, using a prior estimate of the variance of the property of interest within the study area. Similarly, for the areal fraction a prior estimate of this fraction is required. In practice we are uncertain about these prior estimates, and our uncertainty is not accounted for in classical sample size determination (SSD). This deficiency can be overcome with a Bayesian approach, in which the prior estimate of the variance or areal fraction is replaced by a prior distribution. Once new data from the sample are available, this prior distribution is updated to a posterior distribution using Bayes' rule. The apparent problem with a Bayesian approach prior to a sampling campaign is that the data are not yet available. This dilemma can be solved by computing, for a given sample size, the predictive distribution of the data, given a prior distribution on the population and design parameters. Thus we do not have a single vector of data values, but a finite or infinite set of possible data vectors. As a consequence, we have as many posterior distribution functions as we have data vectors. This leads to a probability distribution of lengths or coverages of Bayesian credible intervals, from which various criteria for SSD can be derived. Besides the fully Bayesian approach, a mixed Bayesian-likelihood approach for SSD is available. This is of interest when, after the data have been collected, we prefer to estimate the mean from these data only, using the frequentist approach and ignoring the prior distribution. The fully Bayesian and mixed Bayesian-likelihood approaches are illustrated for estimating the mean of log-transformed Zn and the areal fraction with Zn deficiency, defined as a Zn concentration < 0.9 mg kg⁻¹, in the thirteen districts of Andhra Pradesh state. The SHC data from 2015-2017 are used to derive prior distributions. For all districts the Bayesian and mixed Bayesian-likelihood sample sizes are much smaller than the current sample sizes. The hyperparameters of the prior distributions have a strong effect on the sample sizes, and we discuss methods to deal with this. Even at the mandal (sub-district) level the sample size can almost always be reduced substantially. Clearly the SHC scheme over-sampled, and here we show how to reduce the effort while still providing the information required for decision-making. R scripts for SSD are provided as supplementary material.
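A stripped-down sketch of the fully Bayesian SSD logic for the mean, under a conjugate normal model in which the posterior interval width depends only on n and the drawn variance, so only the prior on the standard deviation needs to be sampled; with an unknown-variance model one would also simulate data from the prior predictive distribution. All hyperparameters below are assumptions for illustration, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(11)

def bayes_ssd_mean(prior_sd_of_mean, sigma_prior_draws, max_len,
                   assurance=0.90, sims=2000, n_grid=range(10, 2001, 10)):
    """Smallest n for which P(length of 95% credible interval for the mean
    <= max_len) >= assurance, the probability being taken over the prior
    draws of the within-area standard deviation sigma."""
    for n in n_grid:
        sigmas = rng.choice(sigma_prior_draws, size=sims)
        # Conjugate normal update: posterior variance of the mean
        post_var = 1.0 / (1.0 / prior_sd_of_mean**2 + n / sigmas**2)
        lengths = 2 * 1.96 * np.sqrt(post_var)
        if np.mean(lengths <= max_len) >= assurance:
            return n
    return None

# Illustrative prior draws for the SD of log-Zn within a district
sigma_draws = rng.lognormal(mean=0.0, sigma=0.3, size=2000)
print(bayes_ssd_mean(prior_sd_of_mean=1.0, sigma_prior_draws=sigma_draws,
                     max_len=0.2))
```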

12.
Entropy (Basel) ; 23(9)2021 Aug 24.
Article in English | MEDLINE | ID: mdl-34573724

ABSTRACT

This paper investigates statistical inference for the parameters of the inverse power Lomax distribution under progressive first-failure censored samples. The maximum likelihood estimates (MLEs) and the asymptotic confidence intervals are derived based on an iterative procedure and the asymptotic normality theory of MLEs, respectively. Bayesian estimates of the parameters under the squared error loss and generalized entropy loss functions are obtained using independent gamma priors. For Bayesian computation, the Tierney-Kadane approximation method is used. In addition, highest posterior density (HPD) credible intervals of the parameters are constructed based on an importance sampling procedure. A Monte Carlo simulation study is carried out to compare the behavior of the various estimates developed in this paper. Finally, a real data set is analyzed for illustration purposes.
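The importance-sampling construction here is model-specific; the generic final step, turning posterior draws into a highest-posterior-density interval, is standard and can be sketched as follows (a common sample-based HPD routine, not the authors' code):

```python
import numpy as np

def hpd_interval(samples, level=0.95):
    """Shortest interval containing `level` of the sorted posterior draws,
    a sample-based approximation to the HPD interval for unimodal posteriors."""
    x = np.sort(np.asarray(samples))
    n = len(x)
    m = int(np.ceil(level * n))          # points each candidate interval must hold
    widths = x[m - 1:] - x[: n - m + 1]  # width of every m-point window
    i = int(np.argmin(widths))           # shortest window wins
    return x[i], x[i + m - 1]

rng = np.random.default_rng(5)
print(hpd_interval(rng.gamma(2.0, 1.5, 50_000)))  # skewed posterior: HPD != equal-tailed
```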

13.
Multivariate Behav Res ; 55(2): 188-210, 2020.
Article in English | MEDLINE | ID: mdl-31179751

ABSTRACT

Complex mediation models, such as a two-mediator sequential model, have become more prevalent in the literature. To test an indirect effect in a two-mediator model, we conducted a large-scale Monte Carlo simulation study of the Type I error, statistical power, and confidence interval coverage rates of 10 frequentist and Bayesian confidence/credible intervals (CIs) for normally and nonnormally distributed data. The simulation included never-studied methods and conditions (e.g., Bayesian CI with flat and weakly informative prior methods, two model-based bootstrap methods, and two nonnormality conditions) as well as understudied methods (e.g., profile-likelihood, Monte Carlo with maximum likelihood standard error [MC-ML] and robust standard error [MC-Robust]). The popular BC bootstrap showed inflated Type I error rates and CI under-coverage. We recommend different methods depending on the purpose of the analysis. For testing the null hypothesis of no mediation, we recommend MC-ML, profile-likelihood, and two Bayesian methods. To report a CI, if data has a multivariate normal distribution, we recommend MC-ML, profile-likelihood, and the two Bayesian methods; otherwise, for multivariate nonnormal data we recommend the percentile bootstrap. We argue that the best method for testing hypotheses is not necessarily the best method for CI construction, which is consistent with the findings we present.
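Of the recommended methods, MC-ML is the simplest to sketch: simulate each path coefficient from a normal distribution centered at its ML estimate with its ML standard error, and take quantiles of the simulated products. Shown here for a two-mediator sequential indirect effect a·d·b; all estimates below are hypothetical.

```python
import numpy as np

def mc_ml_ci(a, se_a, d, se_d, b, se_b, level=0.95, draws=100_000, seed=0):
    """Monte Carlo (MC-ML) confidence interval for the sequential indirect
    effect a*d*b: each coefficient is drawn from N(estimate, SE^2) and the
    interval is taken from the quantiles of the simulated products."""
    rng = np.random.default_rng(seed)
    prod = (rng.normal(a, se_a, draws)
            * rng.normal(d, se_d, draws)
            * rng.normal(b, se_b, draws))
    alpha = (1 - level) / 2
    return tuple(np.quantile(prod, [alpha, 1 - alpha]))

# Hypothetical estimates: X -> M1 (a), M1 -> M2 (d), M2 -> Y (b)
print(mc_ml_ci(0.40, 0.10, 0.35, 0.09, 0.30, 0.08))
```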


Subjects
Behavioral Research/methods; Confidence Intervals; Models, Statistical; Multivariate Analysis; Bayes Theorem; Computer Simulation; Humans; Monte Carlo Method
14.
Stat Neerl ; 73(3): 351-372, 2019 Aug.
Article in English | MEDLINE | ID: mdl-31341338

ABSTRACT

We propose to use the squared multiple correlation coefficient as an effect size measure for experimental analysis-of-variance designs and to use Bayesian methods to estimate its posterior distribution. We provide the expressions for the squared multiple, semipartial, and partial correlation coefficients corresponding to four commonly used analysis-of-variance designs and illustrate our contribution with two worked examples.

15.
J Biopharm Stat ; 28(5): 824-839, 2018.
Article in English | MEDLINE | ID: mdl-29172970

ABSTRACT

Frequentist design for two-arm randomized Phase II clinical trials with outcomes from the exponential dispersion family was proposed previously, where the total sample sizes are minimized under multiple constraints on the standard errors of the estimated group means and their difference. This design was generalized from an approach specific for dichotomous outcomes. The two previous approaches measure the central tendency of each group and treatment effect based on mean and difference in means. Other measures such as median or hazard ratio are more appropriate under certain situations. In addition, the frequentist approaches assume that unknown parameters are fixed values. This does not reflect the reality that uncertainty always exists for unknowns. Compared to the frequentist methods, the Bayesian approach offers a flexible way to measure central tendency and treatment effect, and incorporate uncertainty in parameters of interest into considerations. In this article, we generalize a Bayesian design for Phase II clinical trials with endpoints in the exponential family from the two previously developed frequentist approaches. The proposed design minimizes the total sample sizes under pre-specified constraints on the expected length of posterior credible intervals for measures of treatment effect and central tendency in each group. The design is applicable for trials with fixed or optimal randomization allocation ratio and can be applied under adaptive procedure. Examples of method implementations are provided for different types of endpoints from the exponential family in both fixed and adaptive settings.
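For a binary endpoint the expected-length constraint is directly computable: with a Beta prior, average the posterior interval length over the Beta-binomial prior predictive distribution of responses and take the smallest n meeting the cap. The sketch below is a deliberately reduced single-arm version with an illustrative uniform prior, not the paper's full two-arm procedure.

```python
import numpy as np
from scipy import stats

def min_n_for_expected_width(a, b, max_width, level=0.95, n_grid=range(10, 301)):
    """Smallest n whose expected 95% posterior-interval width for a response
    rate is <= max_width, averaging over the Beta-binomial prior predictive."""
    alpha = (1 - level) / 2
    for n in n_grid:
        x = np.arange(n + 1)
        pred = stats.betabinom.pmf(x, n, a, b)  # prior predictive P(X = x)
        width = (stats.beta.ppf(1 - alpha, a + x, b + n - x)
                 - stats.beta.ppf(alpha, a + x, b + n - x))
        if np.sum(pred * width) <= max_width:
            return n
    return None

# Uniform Beta(1, 1) prior; cap the expected 95% interval width at 0.25
print(min_n_for_expected_width(a=1, b=1, max_width=0.25))
```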


Subjects
Clinical Trials, Phase II as Topic/statistics & numerical data; Endpoint Determination/statistics & numerical data; Randomized Controlled Trials as Topic/statistics & numerical data; Bayes Theorem; Clinical Trials, Phase II as Topic/methods; Endpoint Determination/methods; Humans; Neoplasms/diagnosis; Neoplasms/mortality; Randomized Controlled Trials as Topic/methods; Survival Rate/trends; Tumor Burden
17.
J Clin Epidemiol ; 173: 111464, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39019349

ABSTRACT

BACKGROUND: Cardiovascular disease (CVD) risk scores provide point estimates of individual risk without uncertainty quantification. The objective of the current study was to demonstrate the feasibility and clinical utility of calculating the uncertainty surrounding individual CVD-risk predictions using Bayesian methods. STUDY DESIGN AND SETTING: Individuals with established atherosclerotic CVD were included from the Utrecht Cardiovascular Cohort-Secondary Manifestations of ARTerial disease (UCC-SMART). In 8,355 individuals, followed for a median of 8.2 years (IQR 4.2-12.5), a Bayesian Weibull model was derived to predict the 10-year risk of recurrent CVD events. RESULTS: Model coefficients and individual predictions from the Bayesian model were very similar to those of a traditional ('frequentist') model, but the Bayesian model also predicted 95% credible intervals (CrIs) surrounding individual risk estimates. The median width of the individual 95% CrI was 5.3% (IQR 3.6-6.5), and 17% of the population had a 95% CrI width of 10% or greater. The uncertainty decreased with increasing sample size used for derivation of the model. Combining the Bayesian Weibull model with sampled hazard ratios based on trial reports may be used to estimate individual absolute risk reductions with uncertainty measures, and the probability that a treatment option will result in a clinically relevant risk reduction. CONCLUSION: Estimating the uncertainty surrounding individual CVD risk predictions using Bayesian methods is feasible. The uncertainty regarding individual risk predictions could have several applications in clinical practice, such as comparing different treatment options or calculating the probability that an individual's risk lies below a certain treatment threshold. However, as the individual uncertainty measures reflect only sampling error and not biases in risk prediction, physicians should be familiar with their interpretation before widespread clinical adoption.


Subjects
Bayes Theorem; Cardiovascular Diseases; Humans; Uncertainty; Cardiovascular Diseases/mortality; Female; Male; Middle Aged; Risk Assessment/methods; Survival Analysis; Aged; Feasibility Studies; Heart Disease Risk Factors
18.
Am Stat ; 78(2): 192-198, 2024.
Article in English | MEDLINE | ID: mdl-38645436

ABSTRACT

Epidemiologic screening programs often make use of tests with small, but non-zero probabilities of misdiagnosis. In this article, we assume the target population is finite with a fixed number of true cases, and that we apply an imperfect test with known sensitivity and specificity to a sample of individuals from the population. In this setting, we propose an enhanced inferential approach for use in conjunction with sampling-based bias-corrected prevalence estimation. While ignoring the finite nature of the population can yield markedly conservative estimates, direct application of a standard finite population correction (FPC) conversely leads to underestimation of variance. We uncover a way to leverage the typical FPC indirectly toward valid statistical inference. In particular, we derive a readily estimable extra variance component induced by misclassification in this specific but arguably common diagnostic testing scenario. Our approach yields a standard error estimate that properly captures the sampling variability of the usual bias-corrected maximum likelihood estimator of disease prevalence. Finally, we develop an adapted Bayesian credible interval for the true prevalence that offers improved frequentist properties (i.e., coverage and width) relative to a Wald-type confidence interval. We report the simulation results to demonstrate the enhanced performance of the proposed inferential methods.
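The "usual bias-corrected maximum likelihood estimator" referenced here is the classical Rogan-Gladen correction; the paper's extra misclassification variance component and adapted credible interval are not reproduced in this sketch.

```python
def rogan_gladen(apparent_prev, sensitivity, specificity):
    """Classical bias-corrected prevalence estimate for an imperfect test:
    p_hat = (q + Sp - 1) / (Se + Sp - 1), where q is the apparent prevalence.
    The raw estimate is conventionally truncated to [0, 1]."""
    p = (apparent_prev + specificity - 1) / (sensitivity + specificity - 1)
    return min(max(p, 0.0), 1.0)

# 12% of sampled individuals test positive; Se = 0.90, Sp = 0.95
print(rogan_gladen(0.12, 0.90, 0.95))  # ~0.082
```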

19.
PeerJ ; 11: e16397, 2023.
Article in English | MEDLINE | ID: mdl-38025676

ABSTRACT

Thailand is a country that is prone to both floods and droughts, and these natural disasters have significant impacts on the country's people, economy, and environment. Estimating rainfall is an important part of flood and drought prevention. Rainfall data typically contain both zero and positive observations, and the distribution of rainfall often follows the delta-lognormal distribution. However, it is important to note that rainfall data can be censored, meaning that some values may be missing or truncated. An interval estimator for the ratio of means is useful when comparing the means of two samples. The purpose of this article was to compare the performance of several approaches for statistically analyzing left-censored data. The performance of the confidence intervals was evaluated using the coverage probability and average length, assessed through Monte Carlo simulation. The approaches examined included several variations of the generalized confidence interval, the Bayesian, the parametric bootstrap, and the method of variance estimates recovery approaches. For (ξ1, ξ2) = (0.10, 0.10), simulations showed that the Bayesian approach would be a suitable choice for constructing the credible interval for the ratio of means of delta-lognormal distributions based on left-censored data. For (ξ1, ξ2) = (0.10, 0.25), the parametric bootstrap approach was a strong alternative for constructing the confidence interval, although the generalized confidence interval approach can be considered when the sample sizes increase. Practical applications demonstrating the use of these techniques on rainfall data showed that the confidence interval based on the generalized confidence interval approach covered the ratio of population means and had the smallest length. The proposed approaches' effectiveness was illustrated using daily rainfall datasets from the provinces of Chiang Rai and Chiang Mai in Thailand.


Subjects
Confidence Intervals; Humans; Bayes Theorem; Thailand; Computer Simulation; Statistical Distributions
20.
Stat Methods Med Res ; 32(11): 2158-2171, 2023 11.
Article in English | MEDLINE | ID: mdl-37674462

ABSTRACT

This article presents an objective Bayesian approach to estimating the binomial parameter in group sequential experiments with a binary endpoint. The idea of deriving design-dependent priors was first introduced using the Jeffreys criterion. Another class of priors was developed based on reference prior theory. A theoretical framework was established showing that explicit reference to the experimental design in the prior is fully justified from a Bayesian standpoint. Using a design-dependent prior which generalizes the reference prior, I propose a comprehensive and unified approach to point and interval estimation in group sequential experiments, and I demonstrate the good frequentist properties of the posterior estimators through comparative studies with existing methods. The effect of the prior correction on the posterior estimates is studied in three classical designs of clinical trials. Finally, I discuss the idea of using this approach as a default choice for estimation upon sequential experiment termination.
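For a fixed-n binomial the Jeffreys-prior interval is standard, and the paper's point is precisely that under a group sequential stopping rule the appropriate design-dependent prior differs from this fixed-n default. Sketch for the fixed-n case only:

```python
from scipy import stats

def jeffreys_interval(x, n, level=0.95):
    """Equal-tailed credible interval for a binomial proportion under the
    Jeffreys Beta(1/2, 1/2) prior: the posterior is Beta(x + 1/2, n - x + 1/2)."""
    a = (1 - level) / 2
    post = stats.beta(x + 0.5, n - x + 0.5)
    return post.ppf(a), post.ppf(1 - a)

print(jeffreys_interval(17, 60))  # e.g., 17 responses among 60 patients
```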


Subjects
Research Design; Bayes Theorem