Results 1 - 20 of 63,376
1.
BMC Med Res Methodol ; 24(1): 110, 2024 May 07.
Article in English | MEDLINE | ID: mdl-38714936

ABSTRACT

Bayesian statistics plays a pivotal role in advancing medical science by enabling healthcare companies, regulators, and stakeholders to assess the safety and efficacy of new treatments, interventions, and medical procedures. The Bayesian framework offers a unique advantage over the classical framework, especially when prior information from high-quality external data, such as historical data or another source of co-data, is incorporated into a new trial. In recent years, there has been a significant increase in regulatory submissions using Bayesian statistics due to its flexibility and ability to provide valuable insights for decision-making, addressing the modern complexity of clinical trials where frequentist approaches are inadequate. For regulatory submissions, companies often need to consider the frequentist operating characteristics of the Bayesian analysis strategy, regardless of the design complexity. In particular, the focus is on the frequentist type I error rate and power for all realistic alternatives. This tutorial review aims to provide a comprehensive overview of the use of Bayesian statistics in sample size determination, control of the type I error rate, multiplicity adjustments, external data borrowing, and related topics in the regulatory environment of clinical trials. Fundamental concepts of Bayesian sample size determination and illustrative examples are provided to serve as a valuable resource for researchers, clinicians, and statisticians seeking to develop more complex and innovative designs.
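As a concrete illustration of checking the frequentist operating characteristics of a Bayesian decision rule, consider a minimal sketch (not an example from the review; the single-arm beta-binomial design, the 0.975 posterior cutoff, and all numbers are assumptions): the type I error rate and power are simply Monte Carlo rejection rates under the null and an alternative.

import numpy as np
from scipy.stats import beta

def rejection_rate(p_true, n=50, p0=0.3, a=1.0, b=1.0,
                   cutoff=0.975, n_sims=100_000, seed=1):
    # Simulate n_sims single-arm trials of size n at true response rate p_true
    rng = np.random.default_rng(seed)
    x = rng.binomial(n, p_true, size=n_sims)
    # Conjugate update: the posterior is Beta(a + x, b + n - x);
    # beta.sf gives Pr(p > p0 | data)
    post_prob = beta.sf(p0, a + x, b + n - x)
    return float(np.mean(post_prob > cutoff))

print("type I error at p0:", rejection_rate(0.3))   # frequentist type I error
print("power at p = 0.5:  ", rejection_rate(0.5))   # a realistic alternative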


Subject(s)
Bayes Theorem , Clinical Trials as Topic , Humans , Clinical Trials as Topic/methods , Clinical Trials as Topic/statistics & numerical data , Research Design/standards , Sample Size , Data Interpretation, Statistical , Models, Statistical
2.
Elife ; 122024 May 09.
Article in English | MEDLINE | ID: mdl-38722146

ABSTRACT

Imputing data is a critical issue for machine learning practitioners, including in the life sciences domain, where missing clinical data is a typical situation and the reliability of the imputation is of great importance. Currently, there is no canonical approach for imputation of clinical data and widely used algorithms introduce variance in the downstream classification. Here we propose novel imputation methods based on determinantal point processes (DPP) that enhance popular techniques such as the multivariate imputation by chained equations and MissForest. Their advantages are twofold: improving the quality of the imputed data demonstrated by increased accuracy of the downstream classification and providing deterministic and reliable imputations that remove the variance from the classification results. We experimentally demonstrate the advantages of our methods by performing extensive imputations on synthetic and real clinical data. We also perform quantum hardware experiments by applying the quantum circuits for DPP sampling since such quantum algorithms provide a computational advantage with respect to classical ones. We demonstrate competitive results with up to 10 qubits for small-scale imputation tasks on a state-of-the-art IBM quantum processor. Our classical and quantum methods improve the effectiveness and robustness of clinical data prediction modeling by providing better and more reliable data imputations. These improvements can add significant value in settings demanding high precision, such as in pharmaceutical drug trials where our approach can provide higher confidence in the predictions made.
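The variance problem the authors target can be reproduced with standard tools (a minimal sketch of the baseline issue only, not of the proposed DPP method; the dataset choice and the 20% missingness rate are arbitrary): stochastic MICE-style imputation shifts downstream classification accuracy from seed to seed.

import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)
rng = np.random.default_rng(0)
X_miss = X.copy()
X_miss[rng.random(X.shape) < 0.2] = np.nan   # remove 20% of entries at random

scores = []
for seed in range(10):   # identical data, different imputation seeds
    X_imp = IterativeImputer(sample_posterior=True,
                             random_state=seed).fit_transform(X_miss)
    clf = LogisticRegression(max_iter=5000)
    scores.append(cross_val_score(clf, X_imp, y, cv=5).mean())

print(f"downstream accuracy ranges over {min(scores):.3f}-{max(scores):.3f}")

A deterministic imputation, which is what the DPP-based approach aims to provide, would collapse that range to a single reproducible value.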


Subject(s)
Algorithms , Machine Learning , Humans , Data Interpretation, Statistical , Reproducibility of Results
3.
West J Nurs Res ; 46(6): 403, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38733127
4.
Elife ; 122024 May 13.
Article in English | MEDLINE | ID: mdl-38739437

ABSTRACT

In several large-scale replication projects, statistically non-significant results in both the original and the replication study have been interpreted as a 'replication success.' Here, we discuss the logical problems with this approach: Non-significance in both studies does not ensure that the studies provide evidence for the absence of an effect and 'replication success' can virtually always be achieved if the sample sizes are small enough. In addition, the relevant error rates are not controlled. We show how methods, such as equivalence testing and Bayes factors, can be used to adequately quantify the evidence for the absence of an effect and how they can be applied in the replication setting. Using data from the Reproducibility Project: Cancer Biology, the Experimental Philosophy Replicability Project, and the Reproducibility Project: Psychology we illustrate that many original and replication studies with 'null results' are in fact inconclusive. We conclude that it is important to also replicate studies with statistically non-significant results, but that they should be designed, analyzed, and interpreted appropriately.
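The equivalence-testing approach mentioned here can be sketched with the two one-sided tests (TOST) procedure (a minimal sketch; the normal approximation, the equivalence margin, and the example numbers are assumptions, not values from the paper):

from scipy.stats import norm

def tost_p(estimate, se, margin):
    # H0: |theta| >= margin vs H1: |theta| < margin (equivalence)
    p_lower = norm.sf((estimate + margin) / se)   # one-sided test of theta <= -margin
    p_upper = norm.cdf((estimate - margin) / se)  # one-sided test of theta >= +margin
    return max(p_lower, p_upper)                  # TOST p-value

# Hypothetical effect estimates on a standardized mean difference scale:
for label, est, se in [("original   ", 0.10, 0.12), ("replication", 0.04, 0.09)]:
    print(label, round(tost_p(est, se, margin=0.3), 4))

Unlike "both studies non-significant," a small TOST p-value in both studies is actual evidence that the effect lies within the equivalence margin.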


Subject(s)
Bayes Theorem , Reproducibility of Results , Humans , Research Design , Sample Size , Data Interpretation, Statistical
5.
Cogn Res Princ Implic ; 9(1): 27, 2024 05 03.
Article in English | MEDLINE | ID: mdl-38700660

ABSTRACT

The .05 boundary within Null Hypothesis Statistical Testing (NHST) "has made a lot of people very angry and been widely regarded as a bad move" (to quote Douglas Adams). Here, we move past meta-scientific arguments and ask an empirical question: What is the psychological standing of the .05 boundary for statistical significance? We find that graduate students in the psychological sciences show a boundary effect when relating p-values across the .05 boundary. We propose that this psychological boundary is learned through statistical training in NHST and reading a scientific literature replete with "statistical significance." Consistent with this proposal, undergraduates do not show the same sensitivity to the .05 boundary. Additionally, the size of a graduate student's boundary effect is not associated with their explicit endorsement of questionable research practices. These findings suggest that training creates distortions in the initial processing of p-values, but these might be dampened through scientific processes operating over longer timescales.


Subject(s)
Statistics as Topic , Humans , Adult , Young Adult , Data Interpretation, Statistical , Male , Psychology , Female
6.
Trials ; 25(1): 312, 2024 May 09.
Article in English | MEDLINE | ID: mdl-38725072

ABSTRACT

BACKGROUND: Clinical trials often involve some form of interim monitoring to determine futility before planned trial completion. While many options for interim monitoring exist (e.g., alpha-spending, conditional power), nonparametric interim monitoring methods are also needed to account for more complex trial designs and analyses. The upstrap is one recently proposed nonparametric method that may be applied for interim monitoring. METHODS: Upstrapping is motivated by the case resampling bootstrap and involves repeatedly sampling with replacement from the interim data to simulate thousands of fully enrolled trials. The p-value is calculated for each upstrapped trial, and the proportion of upstrapped trials for which the p-value criteria are met is compared with a pre-specified decision threshold. To evaluate the potential utility of upstrapping as a form of interim futility monitoring, we conducted a simulation study considering different sample sizes with several different proposed calibration strategies for the upstrap. We first compared trial rejection rates across a selection of threshold combinations to validate the upstrapping method. Then, we applied upstrapping methods to simulated clinical trial data, directly comparing their performance with more traditional alpha-spending and conditional power interim monitoring methods for futility. RESULTS: The method validation demonstrated that upstrapping is much more likely to find evidence of futility in the null scenario than the alternative across a variety of simulation settings. Our three proposed approaches for calibration of the upstrap had different strengths depending on the stopping rules used. Compared to O'Brien-Fleming group sequential methods, upstrapped approaches had type I error rates that differed by at most 1.7% and expected sample size was 2-22% lower in the null scenario, while in the alternative scenario power fluctuated between 15.7% lower and 0.2% higher and expected sample size was 0-15% lower. CONCLUSIONS: In this proof-of-concept simulation study, we evaluated the potential for upstrapping as a resampling-based method for futility monitoring in clinical trials. The trade-offs in expected sample size, power, and type I error rate control indicate that the upstrap can be calibrated to implement futility monitoring with varying degrees of aggressiveness and that performance similarities can be identified relative to the considered alpha-spending and conditional power futility monitoring methods.
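The core resampling step is easy to sketch (a minimal illustration assuming a two-arm continuous outcome and a two-sample t-test at the final analysis; the 0.20 decision threshold is an arbitrary placeholder, not a calibrated value from the study):

import numpy as np
from scipy.stats import ttest_ind

def upstrap_futility(x_interim, y_interim, n_full_per_arm,
                     threshold=0.20, alpha=0.05, n_upstraps=1000, seed=0):
    # Resample each interim arm with replacement up to the full target size,
    # run the final analysis on each upstrapped trial, and record the
    # proportion of upstrapped trials meeting the p-value criterion.
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_upstraps):
        x_full = rng.choice(x_interim, size=n_full_per_arm, replace=True)
        y_full = rng.choice(y_interim, size=n_full_per_arm, replace=True)
        hits += ttest_ind(x_full, y_full).pvalue < alpha
    prop = hits / n_upstraps
    return prop, prop < threshold   # stop for futility if below threshold

rng = np.random.default_rng(42)
x, y = rng.normal(0.0, 1.0, 100), rng.normal(0.1, 1.0, 100)   # interim data
print(upstrap_futility(x, y, n_full_per_arm=250))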


Subject(s)
Clinical Trials as Topic , Computer Simulation , Medical Futility , Research Design , Humans , Clinical Trials as Topic/methods , Sample Size , Data Interpretation, Statistical , Models, Statistical , Treatment Outcome
7.
Trials ; 25(1): 296, 2024 May 02.
Article in English | MEDLINE | ID: mdl-38698442

ABSTRACT

BACKGROUND: The optimal amount and timing of protein intake in critically ill patients are unknown. The REPLENISH (Replacing Protein via Enteral Nutrition in a Stepwise Approach in Critically Ill Patients) trial evaluates whether supplemental enteral protein added to standard enteral nutrition to achieve a high enteral protein intake, given from ICU day 5 until ICU discharge or ICU day 90, compared with no supplemental enteral protein (a moderate enteral protein intake), reduces all-cause 90-day mortality in adult critically ill mechanically ventilated patients. METHODS: In this multicenter randomized trial, critically ill patients will be randomized to receive supplemental enteral protein (1.2 g/kg/day) added to standard enteral nutrition to achieve a high enteral protein intake (2-2.4 g/kg/day) or no supplemental enteral protein, for a moderate enteral protein intake (0.8-1.2 g/kg/day). The primary outcome is 90-day all-cause mortality; other outcomes include functional and health-related quality-of-life assessments at 90 days. The study sample size of 2502 patients will have 80% power to detect a 5% absolute risk reduction in 90-day mortality, from 30% to 25%. Consistent with international guidelines, this statistical analysis plan specifies the methods for evaluating primary and secondary outcomes and subgroups. Applying this statistical analysis plan to the REPLENISH trial will facilitate unbiased analyses of clinical data. CONCLUSION: Ethics approval was obtained from the institutional review board, Ministry of National Guard Health Affairs, Riyadh, Saudi Arabia (RC19/414/R). Approvals were also obtained from the institutional review boards of each participating institution. Our findings will be disseminated in an international peer-reviewed journal and presented at relevant conferences and meetings. TRIAL REGISTRATION: ClinicalTrials.gov, NCT04475666. Registered on July 17, 2020.
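The stated sample size can be checked against the standard two-proportion formula, using only numbers given in the abstract (a verification sketch, not the trial's own calculation):

import math
from scipy.stats import norm

# Verify the protocol's sample size: 80% power, two-sided alpha = 0.05,
# 90-day mortality 30% (control) vs 25% (intervention).
p1, p2, alpha, power = 0.30, 0.25, 0.05, 0.80
z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)
p_bar = (p1 + p2) / 2
n_per_arm = math.ceil(
    (z_a * math.sqrt(2 * p_bar * (1 - p_bar))
     + z_b * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    / (p1 - p2) ** 2
)
print(n_per_arm, 2 * n_per_arm)   # 1251 per arm, 2502 in total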


Subject(s)
Critical Illness , Dietary Proteins , Enteral Nutrition , Multicenter Studies as Topic , Randomized Controlled Trials as Topic , Humans , Enteral Nutrition/methods , Dietary Proteins/administration & dosage , Data Interpretation, Statistical , Intensive Care Units , Quality of Life , Treatment Outcome , Respiration, Artificial , Time Factors
8.
Biometrics ; 80(2)2024 Mar 27.
Article in English | MEDLINE | ID: mdl-38768225

ABSTRACT

Conventional supervised learning usually operates under the premise that data are collected from the same underlying population. However, challenges may arise when integrating new data from different populations, resulting in a phenomenon known as dataset shift. This paper focuses on prior probability shift, where the distribution of the outcome varies across datasets but the conditional distribution of features given the outcome remains the same. To tackle the challenges posed by such shift, we propose an estimation algorithm that can efficiently combine information from multiple sources. Unlike existing methods that are restricted to discrete outcomes, the proposed approach accommodates both discrete and continuous outcomes. It also handles high-dimensional covariate vectors through variable selection using an adaptive least absolute shrinkage and selection operator (LASSO) penalty, producing efficient estimates that possess the oracle property. Moreover, a novel semiparametric likelihood ratio test is proposed to check the validity of prior probability shift assumptions by embedding the null conditional density function into Neyman's smooth alternatives (Neyman, 1937) and testing study-specific parameters. We demonstrate the effectiveness of our proposed method through extensive simulations and a real data example. The proposed methods serve as a useful addition to the repertoire of tools for dealing with dataset shift.
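The adaptive LASSO step can be sketched with the usual reweighting trick (a generic illustration, not the authors' estimator; the simulated data, the gamma = 1 weighting, and the penalty level are arbitrary): penalizing coefficient j by 1/|beta_init_j| is equivalent to a plain lasso on rescaled features followed by back-scaling.

import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso, LinearRegression

X, y, true_coef = make_regression(n_samples=200, n_features=50,
                                  n_informative=5, noise=1.0,
                                  coef=True, random_state=0)
beta_init = LinearRegression().fit(X, y).coef_   # initial consistent fit
w = 1.0 / (np.abs(beta_init) + 1e-8)             # adaptive weights, gamma = 1
lasso = Lasso(alpha=0.5).fit(X / w, y)           # plain lasso on rescaled features
beta_adaptive = lasso.coef_ / w                  # undo the rescaling
print("selected:", np.flatnonzero(beta_adaptive))
print("truth:   ", np.flatnonzero(true_coef))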


Subject(s)
Algorithms , Computer Simulation , Models, Statistical , Probability , Humans , Likelihood Functions , Biometry/methods , Data Interpretation, Statistical , Supervised Machine Learning
9.
Biom J ; 66(4): e2300084, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38775273

ABSTRACT

The cumulative incidence function is the standard method for estimating the marginal probability of a given event in the presence of competing risks. One basic but important goal in the analysis of competing risk data is the comparison of these curves, for which limited literature exists. We propose a new procedure that lets us not only test the equality of these curves but also group them if they are not equal. The proposed method determines the composition of the groups as well as automatically selecting their number. Simulation studies show the good numerical behavior of the proposed methods in finite samples. The applicability of the proposed method is illustrated using real data.
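The curves being compared are standard nonparametric estimates; a minimal sketch of the Aalen-Johansen cumulative incidence estimator under right censoring follows (the grouping and testing procedure is the paper's contribution and is not reproduced here; the toy data are made up):

import numpy as np

def cumulative_incidence(time, status, cause=1):
    # Aalen-Johansen estimate of the CIF for `cause`;
    # status: 0 = censored, positive integers = competing event types.
    order = np.argsort(time)
    time, status = np.asarray(time)[order], np.asarray(status)[order]
    n = len(time)
    times, cif = [], []
    surv, F, i = 1.0, 0.0, 0
    while i < n:
        t = time[i]
        at_risk = n - i
        tied = time == t
        F += surv * np.sum(tied & (status == cause)) / at_risk   # CIF increment
        surv *= 1.0 - np.sum(tied & (status > 0)) / at_risk      # overall survival
        times.append(t)
        cif.append(F)
        i += int(np.sum(tied))
    return np.array(times), np.array(cif)

t = [2, 3, 3, 5, 8, 9, 12, 13]
s = [1, 0, 2, 1, 0, 1, 2, 0]      # toy data: two competing causes plus censoring
print(*cumulative_incidence(t, s, cause=1))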


Subject(s)
Models, Statistical , Humans , Incidence , Biometry/methods , Risk Assessment , Computer Simulation , Data Interpretation, Statistical
10.
Stat Med ; 43(11): 2062-2082, 2024 May 20.
Article in English | MEDLINE | ID: mdl-38757695

ABSTRACT

This paper discusses regression analysis of interval-censored failure time data arising from semiparametric transformation models in the presence of missing covariates. Although some methods have been developed for the problem, they either apply only to limited situations or may have computational issues. To address these limitations, we propose a new and unified two-step inference procedure that can be easily implemented using existing or standard software. The proposed method makes use of a set of working models to extract partial information from incomplete observations and yields a consistent estimator of the regression parameters assuming data are missing at random. An extensive simulation study indicates that the method performs well in practical situations. Finally, we apply the proposed approach to the Alzheimer's disease study that motivated this work.


Subject(s)
Alzheimer Disease , Computer Simulation , Models, Statistical , Humans , Regression Analysis , Data Interpretation, Statistical
12.
Elife ; 132024 May 16.
Article in English | MEDLINE | ID: mdl-38752987

ABSTRACT

We discuss 12 misperceptions, misstatements, or mistakes concerning the use of covariates in observational or nonrandomized research. Additionally, we offer advice to help investigators, editors, reviewers, and readers make more informed decisions about conducting and interpreting research where the influence of covariates may be at issue. We primarily address misperceptions in the context of statistical management of the covariates through various forms of modeling, although we also emphasize design and model or variable selection. Other approaches to addressing the effects of covariates, including matching, have logical extensions from what we discuss here but are not dwelled upon heavily. The misperceptions, misstatements, or mistakes we discuss include accurate representation of covariates, effects of measurement error, overreliance on covariate categorization, underestimation of power loss when controlling for covariates, misinterpretation of significance in statistical models, and misconceptions about confounding variables, selecting on a collider, and p value interpretations in covariate-inclusive analyses. This condensed overview serves to correct common errors and improve research quality in general and in nutrition research specifically.
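One of the listed pitfalls, selecting on a collider, is easy to demonstrate by simulation (a minimal sketch; the variables and the selection cutoff are arbitrary): two independent variables become correlated once analysis is restricted to units selected on their common effect.

import numpy as np

rng = np.random.default_rng(0)
n = 100_000
exposure = rng.normal(size=n)
outcome = rng.normal(size=n)                   # independent of exposure by design
collider = exposure + outcome + rng.normal(size=n)

kept = collider > 1.0                          # selection on the common effect
print(np.corrcoef(exposure, outcome)[0, 1])                    # approximately 0
print(np.corrcoef(exposure[kept], outcome[kept])[0, 1])        # clearly negative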


Subject(s)
Observational Studies as Topic , Research Design , Humans , Research Design/standards , Models, Statistical , Data Interpretation, Statistical
13.
PLoS One ; 19(5): e0303262, 2024.
Article in English | MEDLINE | ID: mdl-38753677

ABSTRACT

In recent years, concern has grown about the inappropriate application and interpretation of P values, especially the use of P<0.05 to denote "statistical significance" and the practice of P-hacking to produce results below this threshold and selectively reporting these in publications. Such behavior is said to be a major contributor to the large number of false and non-reproducible discoveries found in academic journals. In response, it has been proposed that the threshold for statistical significance be changed from 0.05 to 0.005. The aim of the current study was to use an evolutionary agent-based model composed of researchers who test hypotheses and strive to increase their publication rates to explore the impact of a 0.005 P value threshold on P-hacking and published false positive rates. Three scenarios were examined: one in which researchers tested a single hypothesis, one in which they tested multiple hypotheses using a P<0.05 threshold, and one in which they tested multiple hypotheses using a P<0.005 threshold. Effect sizes were varied across models, and output was assessed in terms of researcher effort, number of hypotheses tested and number of publications, and the published false positive rate. The results supported the view that a more stringent P value threshold can serve to reduce the rate of published false positive results. Researchers still engaged in P-hacking with the new threshold, but the effort they expended increased substantially and their overall productivity was reduced, resulting in a decline in the published false positive rate. Compared to other proposed interventions to improve the academic publishing system, changing the P value threshold has the advantage of being relatively easy to implement and could be monitored and enforced with minimal effort by journal editors and peer reviewers.
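The paper's central mechanism can be caricatured in a few lines (a toy simulation, far simpler than the authors' evolutionary agent-based model; the sample size, number of researchers, and give-up limit are arbitrary): lowering the threshold raises the effort required to P-hack and cuts the number of false positives that get published.

import numpy as np
from scipy.stats import ttest_ind

def simulate(threshold, n_researchers=1000, n=30, max_tries=50, seed=0):
    # Every hypothesis is null, so every 'discovery' is a false positive.
    rng = np.random.default_rng(seed)
    effort, published = [], 0
    for _ in range(n_researchers):
        for k in range(1, max_tries + 1):
            p = ttest_ind(rng.normal(size=n), rng.normal(size=n)).pvalue
            if p < threshold:                  # P-hack until 'significant'
                published += 1
                break
        effort.append(k)
    return np.mean(effort), published

for thr in (0.05, 0.005):
    tests, pubs = simulate(thr)
    print(f"threshold {thr}: mean tests per researcher {tests:.1f}, "
          f"false-positive publications {pubs}")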


Subject(s)
Models, Statistical , False Positive Reactions , Humans , Data Interpretation, Statistical
14.
Biometrics ; 80(2)2024 Mar 27.
Article in English | MEDLINE | ID: mdl-38742907

ABSTRACT

We propose a new non-parametric conditional independence test for a scalar response and a functional covariate over a continuum of quantile levels. We build a Cramer-von Mises type test statistic based on an empirical process indexed by random projections of the functional covariate, effectively avoiding the "curse of dimensionality" under the projected hypothesis, which is almost surely equivalent to the null hypothesis. The asymptotic null distribution of the proposed test statistic is obtained under some mild assumptions. The asymptotic global and local power properties of our test statistic are then investigated. We specifically demonstrate that the statistic is able to detect a broad class of local alternatives converging to the null at the parametric rate. Additionally, we recommend a simple multiplier bootstrap approach for estimating the critical values. The finite-sample performance of our statistic is examined through several Monte Carlo simulation experiments. Finally, an analysis of an EEG data set is used to show the utility and versatility of our proposed test statistic.


Subject(s)
Computer Simulation , Models, Statistical , Monte Carlo Method , Humans , Electroencephalography/statistics & numerical data , Data Interpretation, Statistical , Biometry/methods , Statistics, Nonparametric
15.
Trials ; 25(1): 317, 2024 May 14.
Article in English | MEDLINE | ID: mdl-38741218

ABSTRACT

BACKGROUND: Surgical left atrial appendage (LAA) closure concomitant to open-heart surgery prevents thromboembolism in high-risk patients. Nevertheless, high-level evidence does not exist for LAA closure performed in patients with any CHA2DS2-VASc score and any preoperative atrial fibrillation or flutter (AF) status; the current trial attempts to provide such evidence. METHODS: The study is designed as a randomized, open-label, blinded-outcome-assessor, multicenter trial of adult patients undergoing first-time elective open-heart surgery. Patients with and without AF and any CHA2DS2-VASc score will be enrolled. The primary exclusion criteria are planned LAA closure, planned AF ablation, or ongoing endocarditis. Before randomization, a three-step stratification process will sort patients by site, surgery type, and preoperative or expected oral anticoagulation treatment. Patients will undergo balanced randomization (1:1) to LAA closure on top of the planned cardiac surgery or standard care. Block sizes vary from 8 to 16. Neurologists blinded to randomization will adjudicate the primary outcome of stroke, including transient ischemic attack (TIA). The secondary outcomes include a composite outcome of stroke, including TIA, and silent cerebral infarcts; an outcome of ischemic stroke, including TIA; and a composite outcome of stroke and all-cause mortality. LAA closure is expected to provide a 60% relative risk reduction. In total, 1500 patients will be randomized and followed for 2 years. DISCUSSION: The trial is expected to help inform future guidelines on surgical LAA closure. This statistical analysis plan ensures transparency of analyses and limits potential reporting biases. TRIAL REGISTRATION: ClinicalTrials.gov, NCT03724318. Registered 26 October 2018, https://clinicaltrials.gov/study/NCT03724318. PROTOCOL VERSION: https://doi.org/10.1016/j.ahj.2023.06.003.
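The allocation scheme described (1:1 within strata, block sizes varying from 8 to 16) can be sketched as permuted-block randomization (a generic illustration, not the trial's actual implementation; restricting to even block sizes is an assumption made here to keep 1:1 balance within each block):

import random

def permuted_block_sequence(n, seed=1):
    # 1:1 permuted-block randomization with block sizes varying from 8 to 16;
    # in the trial, one such sequence would be generated per stratum
    # (site x surgery type x anticoagulation status).
    rng = random.Random(seed)
    seq = []
    while len(seq) < n:
        b = rng.choice([8, 10, 12, 14, 16])    # even sizes keep 1:1 balance
        block = ["LAA closure"] * (b // 2) + ["standard care"] * (b // 2)
        rng.shuffle(block)
        seq.extend(block)
    return seq[:n]

print(permuted_block_sequence(16))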


Subject(s)
Atrial Appendage , Atrial Fibrillation , Cardiac Surgical Procedures , Multicenter Studies as Topic , Randomized Controlled Trials as Topic , Stroke , Humans , Atrial Appendage/surgery , Atrial Fibrillation/surgery , Atrial Fibrillation/complications , Stroke/prevention & control , Stroke/etiology , Cardiac Surgical Procedures/adverse effects , Risk Factors , Treatment Outcome , Risk Assessment , Data Interpretation, Statistical , Ischemic Attack, Transient/prevention & control , Ischemic Attack, Transient/etiology , Male , Female , Left Atrial Appendage Closure
16.
Sci Rep ; 14(1): 8593, 2024 04 13.
Article in English | MEDLINE | ID: mdl-38615051

ABSTRACT

Previous studies have indicated brain functional plasticity and reorganization in patients with degenerative cervical myelopathy (DCM). However, the effects of cervical cord compression on functional integration and separation between and/or within modules remain unclear. This study aimed to address these questions using graph theory. Functional MRI was conducted on 46 DCM patients and 35 healthy controls (HCs). The intra- and inter-modular connectivity properties of the whole-brain functional network and nodal topological properties were then calculated using graph-theoretical analysis. Differences in categorical variables between groups were compared using a chi-squared test, and differences in continuous variables were evaluated using a two-sample t-test. Correlation analysis was conducted between modular connectivity properties and clinical parameters. Module interaction analyses showed that the DCM group had significantly greater inter-module connections than the HC group (DMN-FPN: t = 2.38, p = 0.02); conversely, the DCM group had significantly lower intra-module connections than the HC group (SMN: t = -2.13, p = 0.036). Compared to HCs, DCM patients exhibited higher nodal topological properties in the default-mode network and frontal-parietal network. In contrast, DCM patients exhibited lower nodal topological properties in the sensorimotor network. The Japanese Orthopedic Association (JOA) score was positively correlated with inter-module connections (r = 0.330, FDR p = 0.029) but not with intra-module connections. This study reported alterations in modular connections and nodal centralities in DCM patients. Decreased nodal topological properties and intra-modular connections in sensory-motor regions may indicate sensory-motor dysfunction. Additionally, increased nodal topological properties and inter-modular connections in the default-mode network and frontal-parietal network may serve as a compensatory mechanism for sensory-motor dysfunction in DCM patients. These findings provide a neural basis for better understanding alterations in brain networks and patterns of brain plasticity changes in DCM patients.


Subject(s)
Neck , Spinal Cord Diseases , Humans , Brain/diagnostic imaging , Spinal Cord Diseases/diagnostic imaging , Data Interpretation, Statistical , Neuronal Plasticity , Transforming Growth Factor beta
17.
Biom J ; 66(3): e2200326, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38637322

ABSTRACT

In the context of missing data, the identifiability or "recoverability" of the average causal effect (ACE) depends not only on the usual causal assumptions but also on missingness assumptions that can be depicted by adding variable-specific missingness indicators to causal diagrams, creating missingness directed acyclic graphs (m-DAGs). Previous research described canonical m-DAGs, representing typical multivariable missingness mechanisms in epidemiological studies, and examined mathematically the recoverability of the ACE in each case. However, this work assumed no effect modification and did not investigate methods for estimation across such scenarios. Here, we extend this research by determining the recoverability of the ACE in settings with effect modification and conducting a simulation study to evaluate the performance of widely used missing data methods when estimating the ACE using correctly specified g-computation. Methods assessed were complete case analysis (CCA) and various implementations of multiple imputation (MI) with varying degrees of compatibility with the outcome model used in g-computation. Simulations were based on an example from the Victorian Adolescent Health Cohort Study (VAHCS), where interest was in estimating the ACE of adolescent cannabis use on mental health in young adulthood. We found that the ACE is recoverable when no incomplete variable (exposure, outcome, or confounder) causes its own missingness, and nonrecoverable otherwise, in simplified versions of 10 canonical m-DAGs that excluded unmeasured common causes of missingness indicators. Despite this nonrecoverability, simulations showed that MI approaches that are compatible with the outcome model in g-computation may enable approximately unbiased estimation across all canonical m-DAGs considered, except when the outcome causes its own missingness or causes the missingness of a variable that causes its own missingness. In the latter settings, researchers may need to consider sensitivity analysis methods incorporating external information (e.g., delta-adjustment methods). The VAHCS case study illustrates the practical implications of these findings.
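The estimation target here is the ACE via correctly specified g-computation; a minimal sketch of the g-computation step alone follows (generic code, not the authors' implementation; in their setting it would be run within each multiply imputed dataset and the estimates pooled, and the data below are simulated stand-ins for the VAHCS variables):

import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def g_computation_ace(df, exposure, outcome, confounders):
    # Fit the outcome model, then contrast average predictions with the
    # exposure set to 1 versus 0 for everyone (standardization).
    X = df[[exposure] + confounders]
    model = LogisticRegression(C=1e6, max_iter=1000).fit(X, df[outcome])  # ~unpenalized
    X1, X0 = X.copy(), X.copy()
    X1[exposure], X0[exposure] = 1, 0
    return (model.predict_proba(X1)[:, 1] - model.predict_proba(X0)[:, 1]).mean()

rng = np.random.default_rng(0)                # simulated stand-in data
n = 5000
conf = rng.normal(size=n)
expo = rng.binomial(1, 1 / (1 + np.exp(-conf)))
outc = rng.binomial(1, 1 / (1 + np.exp(-(0.5 * expo + conf))))
df = pd.DataFrame({"cannabis": expo, "mental_health": outc, "conf": conf})
print(g_computation_ace(df, "cannabis", "mental_health", ["conf"]))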


Subject(s)
Cohort Studies , Humans , Young Adult , Adult , Adolescent , Data Interpretation, Statistical , Causality , Computer Simulation
18.
Trials ; 25(1): 286, 2024 Apr 27.
Article in English | MEDLINE | ID: mdl-38678289

ABSTRACT

BACKGROUND: The fragility index is a statistical measure of the robustness or "stability" of a statistically significant result. It has been adapted to assess the robustness of statistically significant outcomes from randomized controlled trials. By hypothetically switching some non-responders to responders, for instance, this metric measures how many individuals would need to have responded for a statistically significant finding to become non-significant. The purpose of this study is to assess the fragility index of randomized controlled trials evaluating opioid substitution and antagonist therapies for opioid use disorder. This provides an indication of the robustness of trials in the field and of the confidence that should be placed in their outcomes, potentially identifying ways to improve clinical research in the field. This is especially important as opioid use disorder has become a global epidemic, and the incidence of opioid-related fatalities has climbed 500% in the past two decades. METHODS: Six databases were searched from inception to September 25, 2021, for randomized controlled trials evaluating opioid substitution and antagonist therapies for opioid use disorder and meeting the necessary requirements for fragility index calculation. Specifically, we included all parallel-arm or two-by-two factorial design RCTs that assessed the effectiveness of any opioid substitution and antagonist therapies using a binary primary outcome and reported a statistically significant result. The fragility index of each study was calculated using methods described by Walsh and colleagues. The risk of bias of included studies was assessed using the Revised Cochrane Risk of Bias tool for randomized trials. RESULTS: Ten studies with a median sample size of 82.5 (interquartile range (IQR) 58, 179; range 52-226) were eligible for inclusion. Overall risk of bias was deemed to be low in seven studies, to raise some concerns in two studies, and to be high in one study. The median fragility index was 7.5 (IQR 4, 12; range 1-26). CONCLUSIONS: Our results suggest that approximately eight participants are needed to overturn the conclusions of the majority of trials in opioid use disorder. Future work should focus on maximizing transparency in reporting of study results by reporting confidence intervals and fragility indexes and by emphasizing the clinical relevance of findings. TRIAL REGISTRATION: PROSPERO CRD42013006507. Registered on November 25, 2013.
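A minimal sketch of the fragility index computation in the style of Walsh and colleagues (the 2x2 example counts are hypothetical; real calculations work on each trial's reported primary-outcome table):

from scipy.stats import fisher_exact

def fragility_index(events_a, n_a, events_b, n_b, alpha=0.05):
    # Convert non-events to events in the lower-event arm, one patient at a
    # time, until the two-sided Fisher exact p-value is no longer < alpha.
    def p():
        return fisher_exact([[events_a, n_a - events_a],
                             [events_b, n_b - events_b]])[1]
    if p() >= alpha:
        return 0                       # not significant to begin with
    flip_a = events_a / n_a < events_b / n_b
    flips = 0
    while p() < alpha:
        flips += 1
        if flip_a:
            events_a += 1
        else:
            events_b += 1
    return flips

print(fragility_index(30, 80, 15, 80))   # hypothetical trial counts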


Subject(s)
Narcotic Antagonists , Opiate Substitution Treatment , Opioid-Related Disorders , Randomized Controlled Trials as Topic , Humans , Analgesics, Opioid/therapeutic use , Analgesics, Opioid/adverse effects , Data Interpretation, Statistical , Narcotic Antagonists/therapeutic use , Narcotic Antagonists/adverse effects , Opiate Substitution Treatment/methods , Opioid-Related Disorders/drug therapy , Research Design , Treatment Outcome
19.
Stat Med ; 43(12): 2421-2438, 2024 May 30.
Article in English | MEDLINE | ID: mdl-38589978

ABSTRACT

Identifying predictive factors for an outcome of interest via a multivariable analysis is often difficult when the data set is small. Combining data from different medical centers into a single (larger) database would alleviate this problem, but is in practice challenging due to regulatory and logistic problems. Federated learning (FL) is a machine learning approach that aims to construct from local inferences in separate data centers what would have been inferred had the data sets been merged. It seeks to harvest the statistical power of larger data sets without actually creating them. The FL strategy is not always efficient and precise. Therefore, in this paper we refine and implement an alternative Bayesian federated inference (BFI) framework for multicenter data with the same aim as FL. The BFI framework is designed to cope with small data sets by inferring locally not only the optimal parameter values, but also additional features of the posterior parameter distribution, capturing information beyond what is used in FL. BFI has the additional benefit that a single inference cycle across the centers is sufficient, whereas FL needs multiple cycles. We quantify the performance of the proposed methodology on simulated and real life data.
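The core idea, recovering from local posterior summaries what a merged-data analysis would have given in a single communication cycle, has a simple closed form when each center's posterior is approximated as Gaussian. The sketch below shows only that special case (it is not the authors' full BFI procedure, and all numbers are made up): multiply the K local posteriors and divide out the prior, which each local posterior counts once.

import numpy as np

def bfi_combine(means, covs, prior_mean, prior_cov):
    # Multiply K local Gaussian posterior approximations and divide out the
    # prior (K - 1) times, since each local posterior already contains it.
    K = len(means)
    prior_prec = np.linalg.inv(prior_cov)
    precs = [np.linalg.inv(c) for c in covs]
    comb_prec = sum(precs) - (K - 1) * prior_prec
    comb_cov = np.linalg.inv(comb_prec)
    eta = sum(P @ m for P, m in zip(precs, means)) - (K - 1) * (prior_prec @ prior_mean)
    return comb_cov @ eta, comb_cov

# Two centers, one parameter (the scalar case written with 1x1 matrices):
m, C = bfi_combine([np.array([1.2]), np.array([0.8])],
                   [np.eye(1) * 0.20, np.eye(1) * 0.10],
                   prior_mean=np.zeros(1), prior_cov=np.eye(1) * 100.0)
print(m, C)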


Subject(s)
Bayes Theorem , Models, Statistical , Multicenter Studies as Topic , Humans , Machine Learning , Computer Simulation , Data Interpretation, Statistical , Multivariate Analysis
20.
Stat Med ; 43(12): 2452-2471, 2024 May 30.
Article in English | MEDLINE | ID: mdl-38599784

ABSTRACT

Many longitudinal studies are designed to monitor participants for major events related to the progression of diseases. Data arising from such longitudinal studies are usually subject to interval censoring since the events are only known to occur between two monitoring visits. In this work, we propose a new method to handle interval-censored multistate data within a proportional hazards model framework where the hazard rate of events is modeled by a nonparametric function of time and the covariates affect the hazard rate proportionally. The main idea of this method is to simplify the likelihood functions of a discrete-time multistate model through an approximation and the application of data augmentation techniques, where the assumed presence of censored information facilitates a simpler parameterization. Then the expectation-maximization algorithm is used to estimate the parameters in the model. The performance of the proposed method is evaluated by numerical studies. Finally, the method is employed to analyze a dataset on tracking the advancement of coronary allograft vasculopathy following heart transplantation.


Subject(s)
Algorithms , Heart Transplantation , Proportional Hazards Models , Humans , Likelihood Functions , Heart Transplantation/statistics & numerical data , Longitudinal Studies , Computer Simulation , Models, Statistical , Data Interpretation, Statistical