1.
Philos Trans A Math Phys Eng Sci ; 381(2247): 20220142, 2023 May 15.
Article in English | MEDLINE | ID: mdl-36970827

ABSTRACT

Prediction has a central role in the foundations of Bayesian statistics and is now the main focus in many areas of machine learning, in contrast to the more classical focus on inference. We discuss how, in the basic setting of random sampling (that is, in the Bayesian approach, exchangeability), uncertainty expressed by the posterior distribution and credible intervals can indeed be understood in terms of prediction. The posterior law on the unknown distribution is centred on the predictive distribution, and we prove that it is marginally asymptotically Gaussian with variance depending on the predictive updates, i.e. on how the predictive rule incorporates information as new observations become available. This allows us to obtain asymptotic credible intervals based only on the predictive rule (without having to specify the model and the prior law), sheds light on frequentist coverage as related to the predictive learning rule, and, we believe, opens a new perspective towards a notion of predictive efficiency that seems to call for further research. This article is part of the theme issue 'Bayesian inference: challenges, perspectives, and prospects'.
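The predictive-rule viewpoint can be made concrete with a toy Beta-Bernoulli sketch (illustrative only, not the authors' construction): for exchangeable binary data, the predictive probability after n observations is a simple update rule, and an approximate credible interval for the long-run frequency can be read off from it via a normal approximation.

```python
import math
import random

def predictive_probs(a, b, data):
    """Beta-Bernoulli predictive rule: after n observations with s successes,
    the predictive probability of the next success is (a + s) / (a + b + n)."""
    probs = []
    s = 0
    for n, x in enumerate(data, start=1):
        s += x
        probs.append((a + s) / (a + b + n))
    return probs

random.seed(0)
data = [1 if random.random() < 0.3 else 0 for _ in range(2000)]
probs = predictive_probs(1.0, 1.0, data)   # uniform prior
p_n, n = probs[-1], len(data)

# asymptotic 95% credible interval built from the predictive alone
# (normal approximation, for illustration)
half = 1.96 * math.sqrt(p_n * (1 - p_n) / n)
lo, hi = p_n - half, p_n + half
```

The interval shrinks at the usual root-n rate, and only the predictive rule was used, never an explicit prior-to-posterior computation.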

2.
Biom J ; 64(4): 681-695, 2022 04.
Article in English | MEDLINE | ID: mdl-34889467

ABSTRACT

In Bayesian inference, prior distributions formalize preexperimental information and uncertainty on model parameters. Sometimes different sources of knowledge are available, possibly leading to divergent posterior distributions and inferences. Research has been recently devoted to the development of sample size criteria that guarantee agreement of posterior information in terms of credible intervals when multiple priors are available. In these articles, the goals of reaching consensus and evidence are typically kept separated. Adopting a Bayesian performance-based approach, the present article proposes new sample size criteria for superiority trials that jointly control the achievement of both minimal evidence and consensus, measured by appropriate functions of the posterior distributions. We develop both an average criterion and a more stringent criterion that accounts for the entire predictive distributions of the selected measures of minimal evidence and consensus. Methods are developed and illustrated via simulation for trials involving binary outcomes. A real clinical trial example on Covid-19 vaccine data is presented.


Subjects
COVID-19 Vaccines, COVID-19, Bayes Theorem, Consensus, Humans, Research Design, Sample Size
3.
J Anesth ; 36(4): 524-531, 2022 08.
Article in English | MEDLINE | ID: mdl-35641661

ABSTRACT

PURPOSE: We aimed to provide clinicians with introductory guidance for interpreting and assessing confidence in network meta-analysis (NMA) results. METHODS: We reviewed the current literature on NMA and summarized the key points. RESULTS: NMA is a statistical method for comparing the efficacy of three or more interventions simultaneously in a single analysis by synthesizing both direct and indirect evidence across a network of randomized clinical trials. It has become increasingly popular in healthcare, since direct evidence (head-to-head randomized clinical trials) is not always available. NMA methods are categorized as either Bayesian or frequentist; while the two mostly provide similar results, the approaches are theoretically different and require different interpretations of the results. CONCLUSIONS: We recommend a careful approach to interpreting NMA results, as the validity of an NMA depends on its underlying statistical assumptions and the quality of the evidence used in the NMA.


Subjects
Network Meta-Analysis, Bayes Theorem
4.
Entropy (Basel) ; 24(6)2022 May 27.
Article in English | MEDLINE | ID: mdl-35741478

ABSTRACT

Estimates based on expert judgements of quantities of interest are commonly used to supplement or replace measurements when the latter are too expensive or impossible to obtain. Such estimates are commonly accompanied by information about the uncertainty of the estimate, such as a credible interval. To be considered well-calibrated, an expert's credible intervals should cover the true (but unknown) values a certain percentage of the time, equal to the percentage specified by the expert. To assess expert calibration, so-called calibration questions may be asked in an expert elicitation exercise; these are questions with known answers used to assess and compare experts' performance. An approach commonly applied to assess experts' performance using these questions is to directly compare the stated coverage percentage with the actual coverage. We show that this approach has statistical drawbacks when considered in a rigorous hypothesis-testing framework. We generalize the test to an equivalence-testing framework and discuss the properties of this new proposal. We show that comparisons made on even a modest number of calibration questions have poor power, which suggests that formal testing of the calibration of experts in an experimental setting may be prohibitively expensive. We contextualise the theoretical findings with a couple of applications and discuss the implications of our findings.
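The classical comparison being criticised can be sketched as an exact binomial test of stated versus actual coverage, using only the standard library (the equivalence-testing generalization is not shown here):

```python
import math

def binom_pmf(k, n, p):
    return math.comb(n, k) * p ** k * (1 - p) ** (n - k)

def exact_coverage_test(covered, n, stated=0.8):
    """Two-sided exact binomial p-value for H0: true coverage equals the
    stated level, summing all outcomes no more probable than the observed one."""
    obs = binom_pmf(covered, n, stated)
    return sum(binom_pmf(k, n, stated) for k in range(n + 1)
               if binom_pmf(k, n, stated) <= obs * (1 + 1e-9))

# An expert's 80% intervals cover only 10 of 20 calibration questions:
p_bad = exact_coverage_test(10, 20)
# ...while covering 16 of 20 (exactly 80%) is fully consistent with H0:
p_ok = exact_coverage_test(16, 20)
```

With only 20 questions, moderate miscalibration (e.g. true coverage 0.7) is rarely detected by such a test, which is the power problem the abstract refers to.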

5.
Stat Med ; 38(23): 4566-4573, 2019 10 15.
Article in English | MEDLINE | ID: mdl-31297825

ABSTRACT

Many sample size criteria exist. These include power calculations and methods based on confidence interval widths from a frequentist viewpoint, and Bayesian methods based on credible interval widths or decision theory. Bayesian methods account for the inherent uncertainty of inputs to sample size calculations through the use of prior information rather than the point estimates typically used by frequentist methods. However, the choice of prior density can be problematic because there will almost always be different appreciations of the past evidence. Such differences can be accommodated a priori by robust methods for Bayesian design, for example, using mixtures or ϵ-contaminated priors. This would then ensure that the prior class includes divergent opinions. However, one may prefer to report several posterior densities arising from a "community of priors," which cover the range of plausible prior densities, rather than forming a single class of priors. To date, however, there are no corresponding sample size methods that specifically account for a community of prior densities in the sense of ensuring a large enough sample size for the data to sufficiently overwhelm the priors and ensure consensus across widely divergent prior views. In this paper, we develop methods that account for the variability in prior opinions by providing the sample size required to induce posterior agreement to a prespecified degree. Prototypic examples for one- and two-sample binomial outcomes are included. We compare sample sizes from criteria that consider a family of priors to those that would result from previous interval-based Bayesian criteria.
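A minimal sketch of the consensus idea for a binomial outcome, under the assumption (mine, for illustration) that agreement is measured by the spread of posterior means across a community of Beta priors:

```python
def posterior_means(priors, x, n):
    """Posterior mean of a binomial proportion under each Beta(a, b) prior,
    after observing x successes in n trials."""
    return [(a + x) / (a + b + n) for a, b in priors]

def min_n_for_consensus(priors, p_true, delta, n_max=100000):
    """Smallest n at which the posterior means under all priors in the
    community agree to within delta, assuming data arrive at rate p_true."""
    for n in range(1, n_max):
        x = round(p_true * n)
        means = posterior_means(priors, x, n)
        if max(means) - min(means) < delta:
            return n
    return None

community = [(1, 1), (10, 2), (2, 10)]   # widely divergent prior opinions
n_loose = min_n_for_consensus(community, 0.5, 0.10)
n_tight = min_n_for_consensus(community, 0.5, 0.01)
```

Tightening the agreement threshold by a factor of ten inflates the required sample size by roughly the same factor here, because the spread of posterior means decays like 1/n.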


Subjects
Bayes Theorem, Clinical Trials as Topic, Sample Size, Binomial Distribution, Humans, Sensitivity and Specificity
6.
Biotechnol Bioeng ; 114(11): 2668-2684, 2017 11.
Article in English | MEDLINE | ID: mdl-28695999

ABSTRACT

13C Metabolic Flux Analysis (13C MFA) remains the most powerful approach for determining intracellular metabolic reaction rates. Decisions on strain engineering and experimentation rely heavily upon the certainty with which these fluxes are estimated. For uncertainty quantification, the vast majority of 13C MFA studies rely on confidence intervals from the paradigm of frequentist statistics. However, it is well known that the confidence intervals for a given experimental outcome are not uniquely defined. As a result, confidence intervals produced by different methods can be different, but nevertheless equally valid. This is of high relevance to 13C MFA, since practitioners regularly use three different approximate approaches for calculating confidence intervals. By means of a computational study with a realistic model of the central carbon metabolism of E. coli, we provide strong evidence that the confidence intervals used in the field depend strongly on the technique with which they were calculated and that their use thus leads to misinterpretation of the flux uncertainty. To provide a better alternative to confidence intervals in 13C MFA, we demonstrate that credible intervals from the paradigm of Bayesian statistics give more reliable flux uncertainty quantifications, which can be readily computed with high accuracy using Markov chain Monte Carlo. In addition, the widely applied chi-square test, as a means of testing whether the model reproduces the data, is examined more closely.
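A generic random-walk Metropolis sketch of how credible intervals are computed from posterior draws (the log-density is a stand-in toy, not the actual 13C MFA model):

```python
import math
import random

def metropolis(log_post, x0, n_samples=20000, burn=2000, step=1.0, seed=42):
    """Random-walk Metropolis sampler; a credible interval is then read off
    as quantiles (or an HPD interval) of the retained draws."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    draws = []
    for i in range(n_samples + burn):
        prop = x + rng.gauss(0.0, step)
        lp_prop = log_post(prop)
        if math.log(rng.random()) < lp_prop - lp:   # accept/reject
            x, lp = prop, lp_prop
        if i >= burn:
            draws.append(x)
    return draws

# toy stand-in "flux" posterior: standard normal log-density
draws = sorted(metropolis(lambda v: -0.5 * v * v, 0.0))
lo, hi = draws[int(0.025 * len(draws))], draws[int(0.975 * len(draws))]
```

Unlike the three approximate confidence-interval recipes mentioned above, the quantile pair (lo, hi) has a single, unambiguous definition once the posterior is fixed; for this toy target it approximates (-1.96, 1.96).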


Subjects
Carbon/metabolism, Escherichia coli/metabolism, Metabolic Flux Analysis/methods, Metabolic Networks and Pathways/physiology, Biological Models, Statistical Models, Bayes Theorem, Carbon Isotopes/pharmacokinetics, Computer Simulation, Escherichia coli Proteins/metabolism, Reproducibility of Results, Sensitivity and Specificity
7.
Stat Med ; 36(24): 3858-3874, 2017 Oct 30.
Article in English | MEDLINE | ID: mdl-28762546

ABSTRACT

Recently developed methods of longitudinal discriminant analysis allow for the classification of subjects into prespecified prognostic groups using the longitudinal history of both continuous and discrete biomarkers. The classification uses Bayesian estimates of the group membership probabilities for each prognostic group. These estimates are derived from a multivariate generalised linear mixed model of the biomarker's longitudinal evolution in each of the groups and can be updated each time new data become available for a patient, providing a dynamic (over time) allocation scheme. However, the precision of the estimated group probabilities differs for each patient and also over time. This precision can be assessed by looking at credible intervals for the group membership probabilities. In this paper, we propose a new allocation rule that incorporates credible intervals for use in the context of dynamic longitudinal discriminant analysis and show that this can decrease the number of false positives in a prognostic test, improving the positive predictive value. We also establish that by leaving some patients unclassified for a certain period, the classification accuracy of those patients who are classified can be improved, giving increased confidence to clinicians in their decision making. Finally, we show that determining a stopping rule dynamically can be more accurate than specifying a set time point at which to decide on a patient's status. We illustrate our methodology using data from patients with epilepsy and show how patients who fail to achieve adequate seizure control are more accurately identified using credible intervals compared to existing methods.


Subjects
Bayes Theorem, Classification/methods, Probability, Computer Simulation, Decision Making, Discriminant Analysis, Epilepsy/diagnosis, Epilepsy/therapy, Humans, Linear Models, Longitudinal Studies, Multivariate Analysis, Prognosis, Remission Induction, Sensitivity and Specificity
8.
Biometrics ; 72(1): 136-45, 2016 Mar.
Article in English | MEDLINE | ID: mdl-26224325

ABSTRACT

The problem of estimating discovery probabilities originated in the context of statistical ecology, and in recent years it has become popular due to its frequent appearance in challenging applications arising in genetics, bioinformatics, linguistics, designs of experiments, machine learning, etc. A full range of statistical approaches, parametric and nonparametric as well as frequentist and Bayesian, has been proposed for estimating discovery probabilities. In this article, we investigate the relationships between the celebrated Good-Turing approach, which is a frequentist nonparametric approach developed in the 1940s, and a Bayesian nonparametric approach recently introduced in the literature. Specifically, under the assumption of a two parameter Poisson-Dirichlet prior, we show that Bayesian nonparametric estimators of discovery probabilities are asymptotically equivalent, for a large sample size, to suitably smoothed Good-Turing estimators. As a by-product of this result, we introduce and investigate a methodology for deriving exact and asymptotic credible intervals to be associated with the Bayesian nonparametric estimators of discovery probabilities. The proposed methodology is illustrated through a comprehensive simulation study and the analysis of Expressed Sequence Tags data generated by sequencing a benchmark complementary DNA library.
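The Good-Turing estimator discussed above has a one-line core: the probability that the next observation is a previously unseen species is estimated by the fraction of singletons in the sample. A sketch:

```python
from collections import Counter

def good_turing_new_species(sample):
    """Good-Turing estimate of the probability that the next draw is a
    previously unseen species: m1 / n, with m1 the number of species
    observed exactly once and n the sample size."""
    counts = Counter(sample)
    m1 = sum(1 for c in counts.values() if c == 1)
    return m1 / len(sample)

sample = list("abracadabra")   # counts: a:5, b:2, r:2, c:1, d:1
p_new = good_turing_new_species(sample)   # two singletons in 11 draws -> 2/11
```

The Bayesian nonparametric estimators in the article are, asymptotically, smoothed versions of this quantity under a two-parameter Poisson-Dirichlet prior.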


Subjects
Bayes Theorem, Expressed Sequence Tags, Machine Learning, Statistical Models, Automated Pattern Recognition/methods, DNA Sequence Analysis/methods, Algorithms, Computer Simulation, Statistical Data Interpretation
9.
Stat Methods Med Res ; 33(7): 1197-1210, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38767225

ABSTRACT

In disease surveillance, capture-recapture methods are commonly used to estimate the number of diseased cases in a defined target population. Since the number of cases never identified by any surveillance system cannot be observed, estimation of the case count typically requires at least one crucial assumption about the dependency between surveillance systems. However, such assumptions are generally unverifiable based on the observed data alone. In this paper, we advocate a modeling framework hinging on the choice of a key population-level parameter that reflects dependencies among surveillance streams. With the key dependency parameter as the focus, the proposed method offers the benefits of (a) incorporating expert opinion in the spirit of prior information to guide estimation; (b) providing accessible bias corrections, and (c) leveraging an adapted credible interval approach to facilitate inference. We apply the proposed framework to two real human immunodeficiency virus surveillance datasets exhibiting three-stream and four-stream capture-recapture-based case count estimation. Our approach enables estimation of the number of human immunodeficiency virus positive cases for both examples, under realistic assumptions that are under the investigator's control and can be readily interpreted. The proposed framework also permits principled uncertainty analyses through which a user can acknowledge their level of confidence in assumptions made about the key non-identifiable dependency parameter.
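One common two-stream parametrization of the key-dependency idea can be sketched as follows (illustrative; the paper's exact framework and its multi-stream extensions may differ). A user-chosen dependency ratio psi adjusts the classical Lincoln-Petersen estimator:

```python
def capture_recapture_estimate(n1, n2, m, psi=1.0):
    """Two-stream case-count estimate with a key dependency parameter:
    psi = P(caught by stream 2 | caught by 1) / P(caught by 2 | not caught by 1).
    psi = 1 (independence) recovers the Lincoln-Petersen estimator n1*n2/m.
    n1, n2 = cases found by each stream; m = cases found by both."""
    return n1 + psi * n1 * (n2 - m) / m

# independence: 200 * 150 / 30 = 1000 estimated cases
n_hat = capture_recapture_estimate(200, 150, 30)
# positive dependence between streams (psi > 1) inflates the estimate
n_hat_dep = capture_recapture_estimate(200, 150, 30, psi=1.5)
```

Because psi is not identifiable from the observed counts alone, reporting the estimate across a plausible range of psi values is one way to run the kind of principled uncertainty analysis the abstract describes.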


Subjects
Statistical Models, Humans, HIV Infections/epidemiology, Population Surveillance/methods, Expert Testimony
10.
Psychon Bull Rev ; 30(5): 1759-1781, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37170004

ABSTRACT

We examined the relationship between the Bayes factor and the separation of credible intervals in between- and within-subject designs under a range of effect and sample sizes. For the within-subject case, we considered five intervals: (1) the within-subject confidence interval of Loftus and Masson (1994); (2) the within-subject Bayesian interval developed by Nathoo et al. (2018), whose derivation conditions on estimated random effects; (3) and (4) two modifications of (2) based on a proposal by Heck (2019) to allow for shrinkage and account for uncertainty in the estimation of random effects; and (5) the standard Bayesian highest-density interval. We derived and observed through simulations a clear and consistent relationship between the Bayes factor and the separation of credible intervals. Remarkably, for a given sample size, this relationship is described well by a simple quadratic exponential curve and is most precise in case (4). In contrast, interval (5) is relatively wide due to between-subjects variability and is likely to obscure effects when used in within-subject designs, rendering its relationship with the Bayes factor unclear in that case. We discuss how the separation percentage of (4), combined with knowledge of the sample size, could provide evidence in support of either a null or an alternative hypothesis. We also present a case study with example data and provide an R package 'rmBayes' to enable computation of each of the within-subject credible intervals investigated here using a number of possible prior distributions.


Subjects
Bayes Theorem, Humans, Sample Size, Uncertainty
11.
J Appl Stat ; 50(5): 1152-1177, 2023.
Article in English | MEDLINE | ID: mdl-37009595

ABSTRACT

We introduce a new family of distributions via the log mean of an underlying distribution, taking the proportional hazards model as the baseline, and derive some important properties. A special model is proposed by taking the Weibull distribution for the baseline. We derive several properties of the sub-model, such as moments, order statistics, the hazard function, survival regression, and certain characterization results. We estimate the parameters using frequentist and Bayesian approaches. Further, Bayes estimators, posterior risks, credible intervals, and highest posterior density intervals are obtained under different symmetric and asymmetric loss functions. A Monte Carlo simulation study examines the biases and mean square errors of the maximum likelihood estimators. For illustrative purposes, we consider heart transplant and bladder cancer data sets and investigate the efficiency of the proposed model.

12.
Lancet Reg Health West Pac ; 31: 100637, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36879780

ABSTRACT

Background: We aimed to estimate the future burden of coronary heart disease (CHD) and stroke mortality by sex in all 47 prefectures of Japan until 2040, accounting for the effects of age, period, and cohort, and integrating the prefecture-level projections to the national level to account for regional differences among prefectures. Methods: We projected future CHD and stroke mortality by developing Bayesian age-period-cohort (BAPC) models on the population and the numbers of CHD and stroke deaths by age, sex, and prefecture observed from 1995 to 2019, then applying these models to official future population estimates until 2040. Participants were all men and women aged over 30 years residing in Japan. Findings: In the BAPC models, the predicted number of national-level cardiovascular deaths from 2020 to 2040 would decrease (from 39,600 [95% credible interval: 32,200-47,900] to 36,200 [21,500-58,900] CHD deaths in men, and from 27,400 [22,000-34,000] to 23,600 [12,700-43,800] in women; and from 50,400 [41,900-60,200] to 40,800 [25,200-67,800] stroke deaths in men, and from 52,200 [43,100-62,800] to 47,400 [26,800-87,200] in women). Interpretation: After adjusting for these factors, future CHD and stroke deaths will decline until 2040 at the national level and in most prefectures. Funding: This research was supported by the Intramural Research Fund of Cardiovascular Diseases of the National Cerebral and Cardiovascular Center (21-1-6, 21-6-8), JSPS KAKENHI Grant Number JP22K17821, and the Ministry of Health, Labour and Welfare Comprehensive Research on Life-Style Related Diseases (Cardiovascular Diseases and Diabetes Mellitus) Program, Grant Number 22FA1015.

13.
EClinicalMedicine ; 56: 101777, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36578882

ABSTRACT

Background: Immune thrombocytopenia is an autoimmune disease characterised by decreased platelet count. In recent years, novel therapeutic regimens have been investigated in randomised controlled trials (RCTs). We aimed to compare the efficacy and safety of different treatments in newly diagnosed adult primary immune thrombocytopenia. Methods: We did a systematic review and network meta-analysis of RCTs involving treatments for newly diagnosed primary immune thrombocytopenia. PubMed, Embase, the Cochrane Central Register of Controlled Trials, and ClinicalTrials.gov databases were searched up to April 31, 2022. The primary outcomes were 6-month sustained response and early response. Secondary outcome was grade 3 or higher adverse events. This study is registered with PROSPERO (CRD42022296179). Findings: Eighteen RCTs (n = 1944) were included in this study. Pairwise meta-analysis showed that the percentage of patients achieving early response was higher in the dexamethasone-containing doublet group than in the dexamethasone group (79.7% vs 68.7%, odds ratio [OR] 1.82, 95% CI 1.10-3.02). The difference was more profound for sustained response (60.5% vs 37.4%, OR 2.57, 95% CI 1.95-3.40). Network meta-analysis showed that dexamethasone plus recombinant human thrombopoietin ranked first for early response, followed by dexamethasone plus oseltamivir or tacrolimus. Rituximab plus prednisolone achieved highest sustained response, followed by dexamethasone plus all-trans retinoic acid or rituximab. Rituximab plus dexamethasone showed 15.3% of grade 3 or higher adverse events, followed by prednis(ol)one (4.8%) and all-trans retinoic acid plus dexamethasone (4.7%). Interpretation: Our findings suggested that compared with monotherapy dexamethasone or prednis(ol)one, the combined regimens had better early and sustained responses. 
rhTPO plus dexamethasone ranked top for early response, while rituximab plus corticosteroids obtained the best sustained response, but with more adverse events. Adding oseltamivir, all-trans retinoic acid, or tacrolimus to dexamethasone achieved similarly encouraging sustained responses without compromising the safety profile. Although this network meta-analysis compared all the therapeutic regimens to date, more head-to-head RCTs with larger sample sizes are warranted to make direct comparisons among these strategies. Funding: National Natural Science Foundation of China, Major Research Plan of National Natural Science Foundation of China, Shandong Provincial Natural Science Foundation and Young Taishan Scholar Foundation of Shandong Province.

14.
Inquiry ; 59: 469580221082356, 2022.
Article in English | MEDLINE | ID: mdl-35373630

ABSTRACT

Hypertension has become a major public health challenge and a crucial area of research due to its high prevalence across the world, including sub-Saharan Africa. No previous study in South Africa has investigated the impact of blood pressure risk factors on different conditional quantile functions of systolic and diastolic blood pressure using Bayesian quantile regression. Therefore, this study presents a comparative analysis of the classical and Bayesian inference approaches to quantile regression. Both techniques were demonstrated on a sample of secondary data obtained from the South African National Income Dynamics Study (2017-2018). Age, BMI, male gender, cigarette consumption, and exercise showed statistically significant associations with both SBP and DBP across the upper quantiles (τ∈{0.75,0.95}). The white noise phenomenon was observed in the convergence diagnostics used in the study. The results suggest that the Bayesian approach to quantile regression yields more precise estimates than the frequentist approach, as the 95% credible intervals were narrower than the corresponding 95% confidence intervals. It is therefore suggested that Bayesian quantile regression be used to model hypertension risk factors.


Subjects
Hypertension, Bayes Theorem, Blood Pressure/physiology, Exercise, Humans, Hypertension/epidemiology, Male, South Africa/epidemiology
15.
PeerJ ; 10: e13465, 2022.
Article in English | MEDLINE | ID: mdl-35607452

ABSTRACT

Precipitation and flood forecasting are difficult due to rainfall variability. The mean of a delta-gamma distribution can be used to analyze rainfall data for predicting future rainfall, thereby reducing the risks of future disasters due to excessive or too little rainfall. In this study, we construct credible and highest posterior density (HPD) intervals for the mean and the difference between the means of delta-gamma distributions by using Bayesian methods based on Jeffreys' rule and uniform priors, along with a confidence interval based on fiducial quantities. The results of a simulation study indicate that the Bayesian HPD interval based on the Jeffreys' rule prior performed well in terms of coverage probability and provided the shortest expected length. Rainfall data from Chiang Mai province, Thailand, are also used to illustrate the efficacies of the proposed methods.
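Given posterior draws from any sampler, an HPD interval is simply the shortest interval containing the desired posterior mass, which is why it beats the equal-tailed interval for right-skewed posteriors such as a (delta-)gamma mean. A sketch with stand-in draws (not the paper's model):

```python
import random

def hpd_interval(draws, cred=0.95):
    """Highest posterior density interval from posterior draws: the shortest
    interval that contains a `cred` fraction of the samples."""
    xs = sorted(draws)
    n = len(xs)
    k = max(1, int(round(cred * n)))
    i = min(range(n - k + 1), key=lambda j: xs[j + k - 1] - xs[j])
    return xs[i], xs[i + k - 1]

random.seed(1)
# stand-in posterior draws for a positive, right-skewed mean parameter
draws = [random.gammavariate(2.0, 3.0) for _ in range(20000)]
hpd_lo, hpd_hi = hpd_interval(draws)

xs = sorted(draws)
et_lo, et_hi = xs[int(0.025 * len(xs))], xs[int(0.975 * len(xs))]
```

By construction the HPD interval is never wider than the equal-tailed interval computed from the same draws; for a skewed posterior it is strictly shorter, matching the "shortest expected length" finding above.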


Subjects
Bayes Theorem, Thailand, Computer Simulation, Probability, Statistical Distributions
16.
J Appl Stat ; 49(11): 2891-2912, 2022.
Article in English | MEDLINE | ID: mdl-36093036

ABSTRACT

This paper presents methods for estimating the parameters and acceleration factor of the Nadarajah-Haghighi distribution based on constant-stress partially accelerated life tests. Under progressive Type-II censoring, maximum likelihood and Bayes estimates of the model parameters and acceleration factor are established, respectively. In addition, approximate confidence intervals are constructed via the asymptotic variance-covariance matrix, and Bayesian credible intervals are obtained based on an importance sampling procedure. For comparison purposes, alternative bootstrap confidence intervals for the unknown parameters and acceleration factor are also presented. Finally, extensive simulation studies are conducted to investigate the performance of the proposed methods, and two data sets are analyzed to show their applicability.

17.
One Health ; 10: 100167, 2020 Dec.
Article in English | MEDLINE | ID: mdl-33117879

ABSTRACT

In February 2020, the exponential growth of COVID-19 cases in Wuhan city posed a huge economic burden on local medical systems. Consequently, Wuhan established Fangcang Shelter hospitals as a One Health approach to responding to and containing the COVID-19 outbreak by isolating and caring for mild-to-moderate cases. However, it is unclear to what degree the hospitals contained COVID-19. This study performed an interrupted time series analysis to compare the number of new confirmed cases of COVID-19 before and after the opening of the Fangcang Shelter hospitals. The initial number of confirmed cases in Wuhan increased significantly, by 68.54 cases per day, prior to February 4, 2020. Compared with the 20 days before the use of the Fangcang Shelter hospitals, a sustained reduction in the number of confirmed cases (trend change, -125.57; P < 0.0001) was noted in the 41 days after their use. An immediate level change was observed for confirmed cases (level change, 725.97; P = 0.025). These changes led to an estimated 5148 fewer confirmed cases (P < 0.0001). Given the mean of 395.71 confirmed cases per day before the intervention, we estimated that Wuhan advanced the terminal phase of COVID-19 by 13 days. Furthermore, immediately after the introduction of the Fangcang Shelter hospitals on February 5, the reproduction number dropped rapidly, from a pre-introduction rate of 4.0 to 2.0. The Fangcang Shelter hospitals most likely reversed the epidemic trend of COVID-19 while a containment strategy was implemented in Wuhan. From a One Health perspective, the Fangcang Shelter hospitals combined the isolation and treatment of confirmed COVID-19 patients with the engagement of professionals from many disciplines, such as medicine, engineering, architecture, psychology, environmental health, and the social sciences. The results of this study provide a valuable reference for health policy makers in other countries.

18.
Spat Spatiotemporal Epidemiol ; 24: 53-62, 2018 02.
Article in English | MEDLINE | ID: mdl-29413714

ABSTRACT

The purpose of this study is to identify regions with a diabetes health-service shortage. American Diabetes Association (ADA)-accredited diabetes self-management education (DSME) is recommended for all those with diabetes. In this study, we focus on demographic patterns and geographic regionalization of the disease by including accessibility and availability of diabetes education resources as a critical component in understanding and confronting differences in diabetes prevalence, as well as addressing regional or sub-regional differences in awareness, treatment and control. We conducted an ecological county-level study utilizing publicly available secondary data on 3,109 counties in the continental U.S. We used a Bayesian spatial cluster model that enabled spatial heterogeneities across the continental U.S. to be addressed. We used the American Diabetes Association (ADA) website to identify 2012 DSME locations and national 2010 county-level diabetes rates estimated by the Centers for Disease Control and Prevention and identified regions with low DSME program availability relative to their diabetes rates and population density. Only 39.8% of the U.S. counties had at least one ADA-accredited DSME program location. Based on our 95% credible intervals, age-adjusted diabetes rates and DSME program locations were associated in only seven out of thirty-five identified clusters. Out of these seven, only two clusters had a positive association. We identified clusters that were above the 75th percentile of average diabetes rates, but below the 25th percentile of average DSME location counts and found that these clusters were all located in the Southeast portion of the country. Overall, there was a lack of relationship between diabetes rates and DSME center locations in the U.S., suggesting resources could be more efficiently placed according to need.
Clusters that were high in diabetes rates and low in DSME placements, all in the southeast, should particularly be considered for additional DSME programming.


Subjects
Type 2 Diabetes Mellitus/epidemiology, Health Education, Health Services Accessibility, Self-Management, Age Factors, Aged, Cluster Analysis, Type 2 Diabetes Mellitus/prevention & control, Female, Humans, Male, Spatio-Temporal Analysis, United States/epidemiology
19.
Vision Res ; 122: 105-123, 2016 05.
Article in English | MEDLINE | ID: mdl-27013261

ABSTRACT

The psychometric function describes how an experimental variable, such as stimulus strength, influences the behaviour of an observer. Estimation of psychometric functions from experimental data plays a central role in fields such as psychophysics, experimental psychology and the behavioural neurosciences. Experimental data may exhibit substantial overdispersion, which may result from non-stationarity in the behaviour of observers. Here we extend the standard binomial model, which is typically used for psychometric function estimation, to a beta-binomial model. We show that the use of the beta-binomial model makes it possible to determine accurate credible intervals even in data which exhibit substantial overdispersion. This goes beyond classical goodness-of-fit measures for overdispersion, which can detect overdispersion but provide no method for correct inference on overdispersed data. We use Bayesian inference methods for estimating the posterior distribution of the parameters of the psychometric function. Unlike previous Bayesian psychometric inference methods, our software implementation, psignifit 4, performs numerical integration of the posterior within automatically determined bounds. This avoids the use of Markov chain Monte Carlo (MCMC) methods, which typically require expert knowledge. Extensive numerical tests show the validity of the approach, and we discuss implications of overdispersion for experimental design. A comprehensive MATLAB toolbox implementing the method is freely available; a python implementation providing the basic capabilities is also available.
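The beta-binomial extension can be sketched directly: letting the success probability itself be Beta-distributed inflates the variance relative to a plain binomial, which is exactly the overdispersion being modelled (the parameter values below are arbitrary, not from the paper):

```python
import math

def beta_binom_pmf(k, n, a, b):
    """Beta-binomial pmf: a binomial whose success probability is itself
    Beta(a, b) distributed, which adds overdispersion."""
    def log_beta(x, y):
        return math.lgamma(x) + math.lgamma(y) - math.lgamma(x + y)
    return math.exp(math.log(math.comb(n, k))
                    + log_beta(k + a, n - k + b) - log_beta(a, b))

n, a, b = 40, 4.0, 4.0     # mean accuracy 0.5, but it drifts between blocks
pmf = [beta_binom_pmf(k, n, a, b) for k in range(n + 1)]
mean = sum(k * p for k, p in enumerate(pmf))
var = sum((k - mean) ** 2 * p for k, p in enumerate(pmf))
binom_var = n * 0.5 * 0.5  # variance if there were no overdispersion
```

Here the variance is n·p·q·[1 + (n-1)/(a+b+1)], more than five times the binomial value; fitting a plain binomial to such data would produce credible intervals that are far too narrow.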


Subjects
Bayes Theorem, Statistical Data Interpretation, Psychometrics/methods, Psychophysics/methods, Humans, Statistical Models, Sensory Threshold
20.
Hum Vaccin Immunother ; 11(6): 1557-63, 2015.
Article in English | MEDLINE | ID: mdl-25909683

ABSTRACT

In the evaluation of vaccine seroresponse rates and adverse reaction rates, extreme test results often occur, such as adverse event rates of 0% and/or seroresponse rates of 100%, which poses several challenges for the data analysis. Few studies have used both the Bayesian and frequentist methods on the same sets of data containing extreme test cases to evaluate vaccine safety and immunogenicity. In this study, Bayesian methods were introduced and compared with frequentist methods based on practical cases from randomized controlled vaccine trials and a simulation experiment to examine the rationality of the Bayesian methods. The results demonstrated that the Bayesian non-informative method obtained lower limits (for extreme cases of 100%) and upper limits (for extreme cases of zero) which were similar to the limits identified with the frequentist method. The frequentist rate estimates and corresponding confidence intervals (CIs) for extreme cases of 0 or 100% always equaled and included 0 or 100%, respectively, whereas the Bayesian estimates varied depending on the sample size, with none equaling zero or 100%. The Bayesian method obtained more reasonable interval estimates of the rates with extreme data compared with the frequentist method, whereas the frequentist method objectively expressed the outcomes of clinical vaccine trials. The two types of statistical results are complementary, and we propose that the Bayesian and frequentist methods be combined to more comprehensively evaluate clinical vaccine trials.
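The contrast for extreme outcomes can be sketched with a non-informative Jeffreys Beta(0.5, 0.5) prior: for 0 events in n subjects the posterior is Beta(0.5, n + 0.5), and its 95% quantile gives a non-degenerate upper limit even though the point estimate is 0. The crude grid quantile below is for illustration only; a real analysis would use a statistics library.

```python
import math

def beta_quantile(q, a, b, grid=200001):
    """Crude numeric quantile of Beta(a, b): normalise the density on a grid
    and walk the cumulative sum (illustration only; prefer a stats library)."""
    xs = [i / (grid - 1) for i in range(1, grid - 1)]   # open interval (0, 1)
    logs = [(a - 1) * math.log(x) + (b - 1) * math.log(1 - x) for x in xs]
    m = max(logs)
    dens = [math.exp(v - m) for v in logs]
    total = sum(dens)
    acc = 0.0
    for x, d in zip(xs, dens):
        acc += d
        if acc >= q * total:
            return x
    return xs[-1]

# 0 adverse events in 30 subjects, Jeffreys Beta(0.5, 0.5) prior:
upper = beta_quantile(0.95, 0.5, 30.5)   # one-sided 95% upper credible limit
# compare: the exact frequentist upper bound 1 - 0.05**(1/30) is about 0.095
```

The Bayesian upper limit (roughly 0.06 here) depends on the sample size and never collapses to the point estimate 0, which is the behaviour the abstract highlights.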


Subjects
Antibodies/blood, Biostatistics/methods, Vaccines/adverse effects, Vaccines/immunology, Humans, Randomized Controlled Trials as Topic, Treatment Outcome