1 - 20 of 39
1.
Behav Res Methods ; 56(4): 4085-4102, 2024 Apr.
Article En | MEDLINE | ID: mdl-38532062

Synthesizing results across multiple studies is a popular way to increase the robustness of scientific findings. The most well-known method for doing this is meta-analysis. However, because meta-analysis requires conceptually comparable effect sizes with the same statistical form, meta-analysis may not be possible when studies are highly diverse in terms of their research design, participant characteristics, or operationalization of key variables. In these situations, Bayesian evidence synthesis may constitute a flexible and feasible alternative, as this method combines studies at the hypothesis level rather than at the level of the effect size. This method therefore imposes fewer constraints on the studies to be combined. In this study, we introduce Bayesian evidence synthesis and show through simulations when this method diverges from what would be expected in a meta-analysis to help researchers correctly interpret the synthesis results. As an empirical demonstration, we also apply Bayesian evidence synthesis to a published meta-analysis on statistical learning in people with and without developmental language disorder. We highlight the strengths and weaknesses of the proposed method and offer suggestions for future research.


Bayes Theorem , Meta-Analysis as Topic , Humans , Computer Simulation , Research Design
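As a hedged illustration of the aggregation step described in the abstract above (not code from the article itself), the sketch below combines hypothetical study-level Bayes factors for an informative hypothesis against its complement. Assuming independent studies, the combined Bayes factor is the product of the study-specific Bayes factors; the numbers are made up.

```python
import numpy as np

# Hypothetical Bayes factors BF(H_i vs H_c) from four diverse studies,
# each computed within that study's own model (assumed values).
study_bfs = np.array([3.2, 1.8, 0.9, 4.5])

# Bayesian evidence synthesis aggregates evidence at the hypothesis level:
# under independence, the combined Bayes factor is the product of the
# study-specific Bayes factors.
bf_combined = np.prod(study_bfs)

# With equal prior odds, convert to a posterior probability for H_i.
posterior_prob_hi = bf_combined / (bf_combined + 1.0)

print(f"Combined BF = {bf_combined:.2f}")
print(f"P(H_i | all studies) = {posterior_prob_hi:.3f}")
```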
2.
Psychol Methods ; 2023 Sep 07.
Article En | MEDLINE | ID: mdl-37676166

To establish a theory one needs cleverly designed and well-executed studies with appropriate and correctly interpreted statistical analyses. Equally important, one also needs replications of such studies and a way to combine the results of several replications into an accumulated state of knowledge. An approach that provides an appropriate and powerful analysis for studies targeting prespecified theories is the use of Bayesian informative hypothesis testing. An additional advantage of the use of this Bayesian approach is that combining the results from multiple studies is straightforward. In this article, we discuss the behavior of Bayes factors in the context of evaluating informative hypotheses with multiple studies. By using simple models and (partly) analytical solutions, we introduce and evaluate Bayesian evidence synthesis (BES) and compare its results to Bayesian sequential updating. By doing so, we clarify how different replications or updating questions can be evaluated. In addition, we illustrate BES with two simulations, in which multiple studies are generated to resemble conceptual replications. The studies in these simulations are too heterogeneous to be aggregated with conventional research synthesis methods. (PsycInfo Database Record (c) 2023 APA, all rights reserved).
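To make the contrast drawn in this abstract concrete, here is a hedged notational sketch (not taken from the article): with independent studies y_1, ..., y_T, Bayesian evidence synthesis multiplies study-specific Bayes factors for an informative hypothesis H_i against an alternative H_a, whereas sequential updating carries the parameter posterior forward as the prior for the next study.

```latex
% BES: aggregate evidence at the hypothesis level
\mathrm{BF}^{\mathrm{BES}}_{ia} = \prod_{t=1}^{T} \mathrm{BF}^{(t)}_{ia},
\qquad
\mathrm{BF}^{(t)}_{ia} = \frac{p(y_t \mid H_i)}{p(y_t \mid H_a)}.

% Sequential updating: carry the posterior forward at the parameter level
p(\theta \mid y_1, \dots, y_T) \propto p(\theta) \prod_{t=1}^{T} p(y_t \mid \theta).
```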

3.
Prev Vet Med ; 217: 105972, 2023 Aug.
Article En | MEDLINE | ID: mdl-37499309

Estimation of the accuracy of diagnostic tests in the absence of a gold standard is an important research subject in epidemiology (Dohoo et al., 2009). One of the most widely used methods over the last few decades is the Bayesian Hui-Walter (HW) latent class model (Hui and Walter, 1980). However, classic HW models aggregate the observed individual test results to the population level, and as a result, potentially valuable information from the lower level(s) is not fully incorporated. An alternative approach is the Bayesian logistic regression (LR) latent class model, which allows inclusion of individual-level covariates (McInturff et al., 2004). In this study, we explored both classic HW and individual-level LR latent class models using Bayesian methodology within a simulation context where the true disease status and true test properties were predefined. Population prevalences and test characteristics that were realistic for paratuberculosis in cattle (Toft et al., 2005) were used for the simulation. Individual animals were generated to be clustered within herds in two regions. Two tests with binary outcomes were simulated with constant test characteristics across the two regions. In addition to the prevalence and test characteristics, one animal-level binary risk factor was added to the data. The main objective was to compare the performance of the Bayesian HW and LR approaches in estimating test sensitivity and specificity in simulated datasets with different population characteristics. Results from various settings showed that the LR models provided posterior estimates that were closer to the true values. The LR models that incorporated herd-level clustering effects provided the most accurate estimates, in terms of being closest to the true values and having smaller estimation intervals. This work illustrates that individual-level LR models are in many situations preferable to classic HW models for estimation of test characteristics in the absence of a gold standard.


Cattle Diseases , Paratuberculosis , Animals , Cattle , Latent Class Analysis , Logistic Models , Bayes Theorem , Paratuberculosis/diagnosis , Paratuberculosis/epidemiology , Cattle Diseases/epidemiology , Sensitivity and Specificity , Prevalence , Diagnostic Tests, Routine/veterinary , Diagnostic Tests, Routine/methods
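For readers unfamiliar with the Hui-Walter setup, the sketch below (a simplified illustration under assumed parameter values, not the authors' simulation code) shows how the cell probabilities for two conditionally independent binary tests in one population are built from prevalence, sensitivities, and specificities, and how cross-classified counts can then be simulated from them.

```python
import numpy as np

rng = np.random.default_rng(1)

def hui_walter_cell_probs(prev, se1, sp1, se2, sp2):
    """2x2 cell probabilities (T1+,T2+), (T1+,T2-), (T1-,T2+), (T1-,T2-)
    for two tests assumed conditionally independent given true status."""
    p_pp = prev * se1 * se2 + (1 - prev) * (1 - sp1) * (1 - sp2)
    p_pn = prev * se1 * (1 - se2) + (1 - prev) * (1 - sp1) * sp2
    p_np = prev * (1 - se1) * se2 + (1 - prev) * sp1 * (1 - sp2)
    p_nn = prev * (1 - se1) * (1 - se2) + (1 - prev) * sp1 * sp2
    return np.array([p_pp, p_pn, p_np, p_nn])

# Assumed values only, loosely in the range discussed for paratuberculosis testing.
probs = hui_walter_cell_probs(prev=0.20, se1=0.30, sp1=0.99, se2=0.50, sp2=0.95)
counts = rng.multinomial(n=1000, pvals=probs)
print(dict(zip(["++", "+-", "-+", "--"], counts)))
```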
4.
Psychol Methods ; 28(3): 558-579, 2023 Jun.
Article En | MEDLINE | ID: mdl-35298215

The last 25 years have shown a steady increase in attention for the Bayes factor as a tool for hypothesis evaluation and model selection. The present review highlights the potential of the Bayes factor in psychological research. We discuss six types of applications: Bayesian evaluation of point null, interval, and informative hypotheses, Bayesian evidence synthesis, Bayesian variable selection and model averaging, and Bayesian evaluation of cognitive models. We elaborate what each application entails, give illustrative examples, and provide an overview of key references and software with links to other applications. The article is concluded with a discussion of the opportunities and pitfalls of Bayes factor applications and a sketch of corresponding future research lines. (PsycInfo Database Record (c) 2023 APA, all rights reserved).


Bayes Theorem , Behavioral Research , Psychology , Humans , Behavioral Research/methods , Psychology/methods , Software , Research Design
5.
Eur J Psychotraumatol ; 12(1): 1883924, 2021 Apr 09.
Article En | MEDLINE | ID: mdl-33889309

Background: Visual Schema Displacement Therapy (VSDT) is a novel therapy for the treatment of fears and trauma-related mental health problems, including PTSD. In two previous randomized controlled trials, VSDT proved effective in reducing the emotionality of aversive memories in healthy individuals and outperformed both a non-active control condition (CC) and an abbreviated version of EMDR therapy, a well-established first-line treatment for posttraumatic stress disorder. Objectives: To enhance understanding of the efficacy of VSDT and to determine its active components, a dismantling study was conducted in individuals with disturbing memories, testing the effects of VSDT against EMDR therapy, a non-active CC, and three different VSDT protocols that each excluded or altered a hypothesized active component. Method: Participants (N = 144) were asked to recall an emotionally aversive event and were randomly assigned to one of these six interventions, each lasting 8 minutes. Emotional disturbance and vividness of participants' memories were rated before and after the intervention and at one- and four-week follow-up. Results: Replicatory Bayesian analyses supported hypotheses in which VSDT was superior to the CC and the EMDR condition in reducing emotionality, both directly after the intervention and at one-week follow-up. At four-week follow-up, however, VSDT proved equal to EMDR, while both treatments were superior to the CC. Concerning vividness, the data also supported hypotheses predicting that VSDT would be equal to EMDR and that both would be superior to the CC. Further analyses comparing the abbreviated VSDT protocols detected no differences between these conditions. Conclusion: It remains unclear how VSDT yields its positive effects. Because VSDT appears to be unique and effective in decreasing the emotionality of aversive memories, replication of these results in clinical samples is needed.



6.
PLoS One ; 16(1): e0244752, 2021.
Article En | MEDLINE | ID: mdl-33444385

Random effects regression models are routinely used for clustered data in etiological and intervention research. However, in prediction models, the random effects are either neglected or conventionally substituted with zero for new clusters after model development. In this study, we applied a Bayesian prediction modelling method to the subclinical ketosis data previously collected by Van der Drift et al. (2012). Using a dataset of 118 randomly selected Dutch dairy farms participating in a regular milk recording system, the authors proposed a prediction model with milk measures as well as available test-day information as predictors for the diagnosis of subclinical ketosis in dairy cows. While their original model included random effects to correct for the clustering, the random effect term was removed for their final prediction model. With the Bayesian prediction modelling approach, we first used non-informative priors for the random effects for model development as well as for prediction. This approach was evaluated by comparing it to the original frequentist model. In addition, herd-level expert opinion was elicited from a bovine health specialist using three different scales of precision and incorporated in the prediction as informative priors for the random effects, resulting in three more Bayesian prediction models. Results showed that the Bayesian approach could naturally take the clustering structure of the data into account by keeping the random effects in the prediction model. Expert opinion could be explicitly combined with individual-level data for prediction. However, in this dataset, when elicited expert opinion was incorporated, little improvement was seen at the individual level as well as at the herd level. When the prediction models were applied to the 118 herds, at the individual cow level the original frequentist approach yielded a sensitivity of 82.4% and a specificity of 83.8% at the optimal cutoff, while the three Bayesian models with elicited expert opinion yielded sensitivities ranging from 78.7% to 84.6% and specificities ranging from 75.0% to 83.6%. At the herd level, 30 out of 118 within-herd prevalences were correctly predicted by the original frequentist approach, and 31 to 44 herds were correctly predicted by the three Bayesian models with elicited expert opinion. Further investigation of the expert opinion and the distributional assumption for the random effects was carried out and discussed.


Cattle Diseases/diagnosis , Ketosis/veterinary , Animals , Bayes Theorem , Cattle , Cattle Diseases/epidemiology , Cluster Analysis , Dairying , Female , Ketosis/diagnosis , Ketosis/epidemiology , Prevalence , Prognosis
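A minimal sketch of the core idea above, under assumed parameter values rather than the fitted model from the article: for a cow in a new herd, the frequentist convention plugs in a herd effect of zero, whereas a Bayesian prediction averages the predicted risk over the distribution of the herd-level random effect. The values of `linear_predictor` and `sigma_herd` are assumptions for illustration.

```python
import numpy as np
from scipy.special import expit  # inverse-logit

rng = np.random.default_rng(7)

# Assumed values for illustration only.
linear_predictor = -2.0   # fixed-effects part x'beta for one cow
sigma_herd = 0.8          # SD of the herd-level random intercept

# Frequentist convention: set the random effect of a new herd to zero.
p_plugin = expit(linear_predictor + 0.0)

# Bayesian prediction: integrate over the random-effect distribution
# (here a weakly informative N(0, sigma_herd^2)) by Monte Carlo.
herd_effects = rng.normal(0.0, sigma_herd, size=100_000)
p_marginal = expit(linear_predictor + herd_effects).mean()

print(f"plug-in risk:  {p_plugin:.3f}")
print(f"marginal risk: {p_marginal:.3f}")
```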
7.
Assessment ; 28(2): 585-600, 2021 03.
Article En | MEDLINE | ID: mdl-31257905

This article describes a new way to analyze data from the interpersonal circumplex (IPC) for interpersonal behavior. Instead of analyzing Agency and Communion separately or analyzing the IPC's octants, we propose using a circular regression model that allows us to investigate effects on a blend of Agency and Communion. The proposed circular model is called a projected normal (PN) model. We illustrate the use of a PN mixed-effects model on three repeated measures data sets with circumplex measurements from interpersonal and educational psychology. This model allows us to detect different types of patterns in the data and provides a more valid analysis of circumplex data. In addition to being able to investigate the effect on the location (mean) of scores on the IPC, we can also investigate effects on the spread (variance) of scores on the IPC. We also introduce new tools that help interpret the fixed and random effects of PN models.


Interpersonal Relations , Social Behavior , Humans
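A hedged sketch of the projected normal idea (illustrative only, not the authors' mixed-effects model): a circular score on the IPC can be represented as the angle of a bivariate normal vector, so that the direction of the mean vector governs the location on the circle and its length governs how concentrated the scores are. The mean vectors below are assumed values.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_projected_normal(mu, n, rng):
    """Draw circular observations: angles of bivariate normal draws N(mu, I)."""
    xy = rng.normal(loc=mu, scale=1.0, size=(n, 2))
    return np.arctan2(xy[:, 1], xy[:, 0])  # angles in (-pi, pi]

# Assumed mean vectors: same direction (45 degrees), different lengths.
theta_spread = sample_projected_normal(mu=np.array([1.0, 1.0]), n=5000, rng=rng)
theta_tight = sample_projected_normal(mu=np.array([4.0, 4.0]), n=5000, rng=rng)

def resultant_length(theta):
    """Mean resultant length R (closer to 1 = more concentrated)."""
    return np.hypot(np.cos(theta).mean(), np.sin(theta).mean())

print(f"R with short mean vector: {resultant_length(theta_spread):.2f}")
print(f"R with long mean vector:  {resultant_length(theta_tight):.2f}")
```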
8.
Pediatr Phys Ther ; 32(4): 375-380, 2020 10.
Article En | MEDLINE | ID: mdl-32991564

PURPOSE: To create a motor growth curve based on the Test of Basic Motor Skills for Children with Down Syndrome (BMS) and to estimate the age of achieving BMS milestones. METHODS: A multilevel exponential model was applied to create a motor growth curve based on BMS data from 119 children with Down syndrome (DS) aged 2 months to 5 years. Logistic regression was applied to estimate the age at which there is a 50% probability of achieving BMS milestones. RESULTS: The BMS growth curve showed the largest increase during infancy, with smaller increases as children approached the predicted maximum score. The age at which children with DS have a 50% probability of achieving the milestone was 22 months for sitting, 25 months for crawling, and 38 months for walking. CONCLUSIONS: The creation of a BMS growth curve provides a standardization of the gross motor development of children with DS. Physical therapists may then monitor a child's individual progress and improve clinical decisions.


Child Development/physiology , Down Syndrome/physiopathology , Growth Charts , Motor Skills Disorders/physiopathology , Motor Skills/physiology , Walking/physiology , Child, Preschool , Female , Humans , Infant , Male , Netherlands
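The age at which the probability of milestone achievement crosses 50% follows directly from a logistic regression of achievement on age: it is the age where the linear predictor equals zero. A minimal sketch with made-up coefficients, chosen here so the answer happens to match the 38 months reported for walking (the article's actual estimates are not reproduced):

```python
import numpy as np

# Hypothetical logistic regression for "child walks" vs age in months:
# logit(p) = b0 + b1 * age_months
b0, b1 = -7.6, 0.2

# 50% probability is reached where the linear predictor is zero.
age_50 = -b0 / b1
print(f"estimated age at 50% probability: {age_50:.0f} months")

# Sanity check: the predicted probability at that age is 0.5.
p = 1.0 / (1.0 + np.exp(-(b0 + b1 * age_50)))
print(f"P(walking | age = {age_50:.0f} months) = {p:.2f}")
```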
9.
Res Synth Methods ; 11(3): 413-425, 2020 May.
Article En | MEDLINE | ID: mdl-32104971

In mixed methods reviewing, data from quantitative and qualitative studies are combined at the review level. One possible way to combine findings of quantitative and qualitative studies is to quantitize the qualitative findings prior to their incorporation in a quantitative review. There are only a few examples of the quantification of qualitative findings within this context. This study adds to current research on mixed methods review methodology by reporting the pilot implementation of a new four-step quantitizing approach. We report how we extract and quantitize the strength of relationships found in qualitative studies by assigning correlations to vague quantifiers in text fragments. This article describes (a) how the analysis is prepared; (b) how vague quantifiers in text fragments are organized and transformed into numerical values; (c) how qualitative studies as a whole are assigned effect sizes; and (d) how the overall mean effect size and variance can be calculated. The pilot implementation shows how findings from 26 primary qualitative studies are transformed into mean effect sizes and corresponding variances.


Qualitative Research , Research Design , Bayes Theorem , Data Interpretation, Statistical , Humans , Models, Statistical , Models, Theoretical , Pilot Projects
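A hedged sketch of the general quantitizing idea described above, with an entirely hypothetical mapping from vague quantifiers to correlations and fictional studies; the article's actual mapping and pooling rules are not reproduced here.

```python
import numpy as np

# Hypothetical mapping of vague quantifiers in qualitative text fragments
# to correlation-sized effect estimates.
quantifier_to_r = {"almost all": 0.50, "most": 0.30, "some": 0.10, "few": -0.10}

# Quantifiers extracted from text fragments of three fictional primary studies.
studies = [["most", "some"], ["almost all", "most", "most"], ["few", "some"]]

# Step 1: assign each study a single effect size (here: mean of its fragment values).
study_r = np.array([np.mean([quantifier_to_r[q] for q in frags]) for frags in studies])

# Step 2: pool on Fisher's z scale and report an overall mean effect and variance.
study_z = np.arctanh(study_r)
mean_r = np.tanh(study_z.mean())
var_z = study_z.var(ddof=1)

print(f"study-level r values: {np.round(study_r, 2)}")
print(f"pooled mean r = {mean_r:.2f}, between-study variance (z scale) = {var_z:.3f}")
```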
11.
J Abnorm Psychol ; 128(6): 517-527, 2019 Aug.
Article En | MEDLINE | ID: mdl-31368731

Data analysis in psychopathology research typically entails multiple stages of data preprocessing (e.g., coding of physiological measures), statistical decisions (e.g., inclusion of covariates), and reporting (e.g., selecting which variables best answer the research questions). The complexity and lack of transparency of these procedures have resulted in two troubling trends: the central hypotheses and analytical approaches are often selected after observing the data, and the research data are often not properly indexed. These practices are particularly problematic for (experimental) psychopathology research because the data are often hard to gather due to the target populations (e.g., individuals with mental disorders), and because the standard methodological approaches are challenging and time consuming (e.g., longitudinal studies). Here, we present a workflow that covers study preregistration, data anonymization, and the easy sharing of data and experimental material with the rest of the research community. This workflow is tailored to both original studies and secondary statistical analyses of archival data sets. In order to facilitate the implementation of the described workflow, we have developed a free and open-source software program. We argue that this workflow will result in more transparent and easily shareable psychopathology research, eventually increasing replicability and reproducibility in our research field. (PsycINFO Database Record (c) 2019 APA, all rights reserved).


Biomedical Research/standards , Guidelines as Topic/standards , Information Dissemination , Psychopathology/standards , Reproducibility of Results , Humans
12.
J Behav Ther Exp Psychiatry ; 63: 48-56, 2019 06.
Article En | MEDLINE | ID: mdl-30514434

BACKGROUND AND OBJECTIVES: Visual Schema Displacement Therapy (VSDT) is a novel therapy which has been described as a treatment for stress and dysfunction caused by a traumatic event. Although its developers claim this therapy is quicker and more beneficial than other forms of trauma therapy, its effectiveness has not been tested. METHODS: We compared the efficacy of VSDT to an abbreviated EMDR protocol and a non-active control condition (CC) in two studies. In Study 1, participants (N = 30) were asked to recall three negative emotional memories under three conditions: VSDT, EMDR, and a CC, each lasting 8 min. Emotional disturbance and vividness of the memories were rated before and after the (within-group) conditions. The experiment was replicated using a between-group design. In Study 2, participants (N = 75) were assigned to one of the three conditions, and a follow-up after 6-8 days was added. RESULTS: In both studies VSDT and EMDR were superior to the CC in reducing emotional disturbance, and VSDT was superior to EMDR. VSDT and EMDR outperformed the CC in terms of reducing vividness. LIMITATION: Results need to be replicated in clinical samples. CONCLUSIONS: It is unclear how VSDT yields positive effects, but irrespective of its causal mechanisms, VSDT warrants clinical exploration.


Emotions , Eye Movement Desensitization Reprocessing/methods , Healthy Volunteers/psychology , Memory, Short-Term , Psychotherapy/methods , Bayes Theorem , Female , Humans , Male , Mental Recall , Young Adult
13.
BMC Med Res Methodol ; 18(1): 174, 2018 12 22.
Article En | MEDLINE | ID: mdl-30577773

BACKGROUND: Observational studies of medical interventions or risk factors are potentially biased by unmeasured confounding. In this paper we propose a Bayesian approach, in which an informative prior is defined for the confounder-outcome relation, to reduce bias due to unmeasured confounding. This approach was motivated by the phenomenon that the presence of unmeasured confounding may be reflected in observed confounder-outcome relations being unexpected in terms of direction or magnitude. METHODS: The approach was tested using simulation studies and was illustrated in an empirical example of the relation between LDL cholesterol levels and systolic blood pressure. In simulated data, a comparison of the estimated exposure-outcome relation was made between two frequentist multivariable linear regression models and three Bayesian multivariable linear regression models, which varied in the precision of the prior distributions. Simulated data contained information on a continuous exposure, a continuous outcome, and two continuous confounders (one considered measured, one unmeasured), under various scenarios. RESULTS: In various scenarios the proposed Bayesian analysis with a correctly specified informative prior for the confounder-outcome relation substantially reduced bias due to unmeasured confounding and was less biased than the frequentist model with covariate adjustment for one of the two confounding variables. Also, in general the MSE was smaller for the Bayesian model with the informative prior, compared to the other models. CONCLUSIONS: As incorporating (informative) prior information for the confounder-outcome relation may reduce the bias due to unmeasured confounding, we consider this approach one of many possible sensitivity analyses of unmeasured confounding.


Algorithms , Bayes Theorem , Confounding Factors, Epidemiologic , Outcome Assessment, Health Care/statistics & numerical data , Blood Pressure/physiology , Cholesterol, LDL/metabolism , Computer Simulation , Humans , Linear Models , Multivariate Analysis , Outcome Assessment, Health Care/methods , Reproducibility of Results
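A minimal sketch of the mechanics only, not the authors' exact model: in a conjugate Bayesian linear regression with the residual variance treated as known, an informative normal prior can be placed on a single coefficient (here the confounder-outcome relation). Whether and when this reduces bias in the exposure effect depends on the confounding structure, which is what the article's simulations investigate; all data and prior values below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy data: exposure x, measured confounder c, continuous outcome y.
n = 50
c = rng.normal(size=n)
x = 0.5 * c + rng.normal(size=n)
y = 1.0 * x + 0.8 * c + rng.normal(size=n)

X = np.column_stack([np.ones(n), x, c])
sigma2 = 1.0  # residual variance treated as known for the conjugate update

# Normal prior on (intercept, beta_x, beta_c): vague except for the
# confounder-outcome coefficient, which is centred on an external expectation.
m0 = np.array([0.0, 0.0, 0.5])          # assumed prior means
V0 = np.diag([100.0, 100.0, 0.02])      # assumed prior variances (tight for beta_c)

# Conjugate normal posterior for the coefficients (known-variance case):
# precision_post = X'X/sigma2 + V0^-1
# mean_post = precision_post^-1 (X'y/sigma2 + V0^-1 m0)
prec_post = X.T @ X / sigma2 + np.linalg.inv(V0)
mean_post = np.linalg.solve(prec_post, X.T @ y / sigma2 + np.linalg.inv(V0) @ m0)

beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
print(f"OLS:       beta_x = {beta_ols[1]:.2f}, beta_c = {beta_ols[2]:.2f}")
print(f"Posterior: beta_x = {mean_post[1]:.2f}, beta_c = {mean_post[2]:.2f} (prior mean 0.5)")
```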
14.
Front Psychol ; 9: 2040, 2018.
Article En | MEDLINE | ID: mdl-30425670

Circular data is data that is measured on a circle, in degrees or radians. It is fundamentally different from linear data due to its periodic nature (0° = 360°). Circular data arises in a large variety of research fields, including ecology, the medical sciences, personality measurement, educational science, sociology, and political science. The most direct examples of circular data within the social sciences arise in cognitive and experimental psychology. However, despite numerous examples of circular data being collected in different areas of cognitive and experimental psychology, knowledge of this type of data is not widespread, and literature in which these types of data are analyzed using methods for circular data is relatively scarce. This paper therefore aims to give a tutorial on working with and analyzing circular data for researchers in cognitive psychology and the social sciences in general. It does so by focusing on data inspection, model fit, estimation, and hypothesis testing for two specific models for circular data, using packages from the statistical programming language R.
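As a small, self-contained illustration of why circular data need their own methods (not code from the tutorial itself): the arithmetic mean of angles can be badly wrong near the 0°/360° boundary, whereas the circular mean computed from sines and cosines is not.

```python
import numpy as np

# Two directions just either side of north (in degrees).
angles_deg = np.array([350.0, 10.0])
angles_rad = np.deg2rad(angles_deg)

# Naive arithmetic mean: 180 degrees (due south) -- clearly wrong.
naive_mean = angles_deg.mean()

# Circular mean: average the unit vectors, then take the resulting angle.
circ_mean_rad = np.arctan2(np.sin(angles_rad).mean(), np.cos(angles_rad).mean())
circ_mean_deg = np.rad2deg(circ_mean_rad) % 360

print(f"arithmetic mean: {naive_mean:.0f} degrees")
print(f"circular mean:   {circ_mean_deg:.0f} degrees")
```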

15.
BMC Med Res Methodol ; 18(1): 83, 2018 08 06.
Article En | MEDLINE | ID: mdl-30081875

BACKGROUND: Random effects modelling is routinely used for clustered data, but for prediction models, random effects are commonly substituted with the mean zero after model development. In this study, we proposed a novel approach of including prior knowledge through the random effects distribution and investigated to what extent this could improve the predictive performance. METHODS: Data were simulated on the basis of a random effects logistic regression model. Five prediction models were specified: a frequentist model that set the random effects to zero for all new clusters, a Bayesian model with weakly informative priors for the random effects of new clusters, and three Bayesian models with expert opinion incorporated as low, medium, and highly informative priors for the random effects. Expert opinion at the cluster level was elicited in the form of a truncated area of the random effects distribution. The predictive performance of the five models was assessed. In addition, the impact of suboptimal expert opinion that deviated from the true quantity was explored, as was the inclusion of expert opinion by means of a categorical variable in the frequentist approach. The five models were further investigated in various sensitivity analyses. RESULTS: The Bayesian prediction model using weakly informative priors for the random effects showed similar results to the frequentist model. Bayesian prediction models using expert opinion as informative priors showed smaller Brier scores, better overall discrimination and calibration, as well as better within-cluster calibration. Results also indicated that incorporation of more precise expert opinion led to better predictions. Predictive performance from the frequentist models with expert opinion incorporated as a categorical variable showed similar patterns as the Bayesian models with informative priors. When suboptimal expert opinion was used as prior information, results indicated that prediction still improved in certain settings. CONCLUSIONS: The prediction models that incorporated cluster-level information showed better performance than the models that did not. The Bayesian prediction models we proposed, with cluster-specific expert opinion incorporated as priors for the random effects, showed better predictive ability in new data, compared to the frequentist method that replaced random effects with zero after model development.


Algorithms , Cluster Analysis , Data Interpretation, Statistical , Models, Theoretical , Bayes Theorem , Calibration , Computer Simulation , Humans , Reproducibility of Results
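A hedged sketch of how cluster-level expert opinion could enter prediction for a new cluster, under assumed values rather than the article's elicitation scheme: the expert's belief about the cluster is expressed as a truncated prior for the cluster's random intercept, and the predicted risk averages over that prior. The linear predictor, random-effect SD, and truncation bounds are all assumptions.

```python
import numpy as np
from scipy.special import expit
from scipy.stats import truncnorm

rng = np.random.default_rng(3)

linear_predictor = -1.5   # fixed-effects part x'beta for one individual (assumed)
sigma_u = 1.0             # SD of the random-intercept distribution (assumed)

# Expert opinion for this new cluster: "an above-average cluster", encoded here
# as the random intercept lying in the upper part of its distribution.
lower, upper = 0.0, 2.0 * sigma_u          # truncated region, in random-effect units
a, b = lower / sigma_u, upper / sigma_u    # bounds standardised for truncnorm
u_expert = truncnorm.rvs(a, b, loc=0.0, scale=sigma_u, size=100_000, random_state=rng)

# Comparison: no cluster information, integrate over the full N(0, sigma_u^2).
u_default = rng.normal(0.0, sigma_u, size=100_000)

p_expert = expit(linear_predictor + u_expert).mean()
p_default = expit(linear_predictor + u_default).mean()

print(f"predicted risk, default prior:         {p_default:.3f}")
print(f"predicted risk, expert-informed prior: {p_expert:.3f}")
```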
16.
Br J Math Stat Psychol ; 71(1): 75-95, 2018 02.
Article En | MEDLINE | ID: mdl-28868792

The interpretation of the effect of predictors in projected normal regression models is not straightforward. The main aim of this paper is to make this interpretation easier so that these models can be employed more readily by social scientific researchers. We introduce three new measures: the slope at the inflection point (b_c), the average slope (AS), and the slope at the mean (SAM), which help assess the marginal effect of a predictor in a Bayesian projected normal regression model. The SAM or AS is preferably used in situations where the data for a specific predictor do not lie close to the inflection point of the circular regression curve; in that case, b_c is an unstable, extrapolated effect. In addition, we outline how the projected normal regression model allows us to distinguish between an effect on the mean and an effect on the spread of a circular outcome variable. We call these location and accuracy effects, respectively. The performance of the three new measures and of the methods to distinguish between location and accuracy effects is investigated in a simulation study. We conclude that the new measures and methods to distinguish between accuracy and location effects work well in situations with a clear location effect. In situations where the location effect is not clearly distinguishable from an accuracy effect, not all measures work equally well, and we recommend the use of the SAM.


Computer Simulation , Data Interpretation, Statistical , Psychometrics/methods , Regression Analysis , Bayes Theorem , Humans , Logistic Models , Models, Statistical
17.
Eur J Psychotraumatol ; 8(sup1): 1314782, 2017.
Article En | MEDLINE | ID: mdl-29038683

Threat conditioning procedures have allowed the experimental investigation of the pathogenesis of Post-Traumatic Stress Disorder. The findings of these procedures have also provided stable foundations for the development of relevant intervention programs (e.g. exposure therapy). Statistical inference of threat conditioning procedures is commonly based on p-values and Null Hypothesis Significance Testing (NHST). Nowadays, however, there is a growing concern about this statistical approach, as many scientists point to the various limitations of p-values and NHST. As an alternative, the use of Bayes factors and Bayesian hypothesis testing has been suggested. In this article, we apply this statistical approach to threat conditioning data. In order to enable the easy computation of Bayes factors for threat conditioning data we present a new R package named condir, which can be used either via the R console or via a Shiny application. This article provides both a non-technical introduction to Bayesian analysis for researchers using the threat conditioning paradigm, and the necessary tools for computing Bayes factors easily.
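For readers new to Bayes factors in this setting, here is a minimal, self-contained sketch (not the condir implementation): for differential conditioning scores (e.g., CS+ minus CS- responses), with the variance treated as known for simplicity, the Bayes factor for a point null against a normal alternative is a ratio of marginal likelihoods of the sample mean. The data, prior scale, and known variance below are assumptions.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical differential conditioning scores (CS+ minus CS-) for 20 participants.
rng = np.random.default_rng(11)
scores = rng.normal(loc=0.4, scale=1.0, size=20)

n = len(scores)
xbar = scores.mean()
sigma = 1.0   # treated as known for this simple normal-normal illustration
tau = 0.5     # prior SD of the mean effect under H1: delta ~ N(0, tau^2)

# Marginal density of the sample mean under each hypothesis:
# H0: xbar ~ N(0, sigma^2 / n); H1: xbar ~ N(0, sigma^2 / n + tau^2).
m0 = norm.pdf(xbar, loc=0.0, scale=np.sqrt(sigma**2 / n))
m1 = norm.pdf(xbar, loc=0.0, scale=np.sqrt(sigma**2 / n + tau**2))

bf10 = m1 / m0
print(f"mean differential score = {xbar:.2f}")
print(f"BF10 (evidence for an effect over the null) = {bf10:.2f}")
```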

18.
J Clin Epidemiol ; 86: 51-58.e2, 2017 Jun.
Article En | MEDLINE | ID: mdl-28428139

OBJECTIVES: The objective of this systematic review is to investigate the use of Bayesian data analysis in epidemiology in the past decade and particularly to evaluate the quality of research papers reporting the results of these analyses. STUDY DESIGN AND SETTING: Complete volumes of five major epidemiological journals in the period 2005-2015 were searched via PubMed. In addition, we performed an extensive within-manuscript search using a specialized Java application. Details of reporting on Bayesian statistics were examined in the original research papers with primary Bayesian data analyses. RESULTS: The number of studies in which Bayesian techniques were used for primary data analysis remains constant over the years. Though many authors presented thorough descriptions of the analyses they performed and the results they obtained, several reports presented incomplete method sections and even some incomplete result sections. Especially, information on the process of prior elicitation, specification, and evaluation was often lacking. CONCLUSION: Though available guidance papers concerned with reporting of Bayesian analyses emphasize the importance of transparent prior specification, the results obtained in this systematic review show that these guidance papers are often not used. Additional efforts should be made to increase the awareness of the existence and importance of these checklists to overcome the controversy with respect to the use of Bayesian techniques. The reporting quality in epidemiological literature could be improved by updating existing guidelines on the reporting of frequentist analyses to address issues that are important for Bayesian data analyses.


Bayes Theorem , Epidemiologic Research Design , Epidemiologic Studies , Research Report/standards , Humans
19.
Front Med (Lausanne) ; 4: 228, 2017.
Article En | MEDLINE | ID: mdl-29520360

The current system of harm assessment of medicines has been criticized for relying on intuitive expert judgment. There is a call for more quantitative approaches and transparency in decision-making. Illustrated with the case of cardiovascular safety concerns for rosiglitazone, we aimed to explore a structured procedure for the collection, quality assessment, and statistical modeling of safety data from observational and randomized studies. We distinguished five stages in the synthesis process. In Stage I, the general research question, population and outcome, and general inclusion and exclusion criteria are defined and a systematic search is performed. Stage II focuses on the identification of sub-questions examined in the included studies and the classification of the studies into the different categories of sub-questions. In Stage III, the quality of the identified studies is assessed. Coding and data extraction are performed in Stage IV. Finally, meta-analyses on the study results per sub-question are performed in Stage V. A PubMed search identified 30 randomized and 14 observational studies meeting our search criteria. From these studies, we identified 4 higher-level sub-questions and 4 lower-level sub-questions. We were able to categorize 29 individual treatment comparisons into one or more of the sub-question categories, and selected study duration as an important covariate. We extracted covariate, outcome, and sample size information at the treatment-arm level of the studies. We extracted absolute numbers of myocardial infarctions from the randomized studies, and adjusted risk estimates with 95% confidence intervals from the observational studies. Overall, few events were observed in the randomized studies, which were frequently of relatively short duration. The large observational studies provided more information, since these were often of longer duration. A Bayesian random effects meta-analysis on these data showed no significant increase in risk for rosiglitazone for any of the sub-questions. The proposed procedure can be of additional value for drug safety assessment because it provides a stepwise approach that guides decision-making while increasing process transparency. The procedure allows for the inclusion of results from both randomized and observational studies, which is especially relevant for this type of research.
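A hedged sketch of the final synthesis step only, with made-up log odds ratios rather than the rosiglitazone data: a Bayesian normal-normal random-effects meta-analysis evaluated on a grid, yielding the posterior for the pooled effect after marginalising over between-study heterogeneity. The inputs `yi` and `sei` and the uniform grid priors are assumptions.

```python
import numpy as np

# Hypothetical study-level log odds ratios and their standard errors.
yi = np.array([0.35, 0.10, -0.05, 0.25, 0.15])
sei = np.array([0.40, 0.25, 0.30, 0.20, 0.35])

# Random-effects model: yi ~ N(mu, sei^2 + tau^2), with uniform priors on a grid.
mu_grid = np.linspace(-1.0, 1.0, 401)
tau_grid = np.linspace(0.0, 1.0, 201)
M, T = np.meshgrid(mu_grid, tau_grid, indexing="ij")

var_total = sei[None, None, :] ** 2 + T[..., None] ** 2
loglik = -0.5 * np.sum(np.log(2 * np.pi * var_total)
                       + (yi[None, None, :] - M[..., None]) ** 2 / var_total, axis=-1)

post = np.exp(loglik - loglik.max())      # unnormalised joint posterior
dmu = mu_grid[1] - mu_grid[0]
post_mu = post.sum(axis=1)                # marginalise over tau
post_mu /= post_mu.sum() * dmu            # normalise as a density on the mu grid

mean_mu = (mu_grid * post_mu).sum() * dmu
cdf = np.cumsum(post_mu) * dmu
lo, hi = np.interp([0.025, 0.975], cdf, mu_grid)
print(f"posterior mean log OR = {mean_mu:.2f}, 95% credible interval = ({lo:.2f}, {hi:.2f})")
```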

...