Results 1 - 20 of 351
1.
BMC Plant Biol ; 24(1): 306, 2024 Apr 22.
Article in English | MEDLINE | ID: mdl-38644480

ABSTRACT

Linkage maps are essential for genetic mapping of phenotypic traits, gene map-based cloning, and marker-assisted selection in breeding applications. Construction of a high-quality saturated map requires high-quality genotypic data on a large number of molecular markers. Genotyping errors cannot be completely avoided, no matter which platform is used. When the genotyping error rate exceeds a threshold level, it seriously affects the accuracy of the constructed map and the reliability of consequent genetic studies. In this study, repeated genotyping of two recombinant inbred line (RIL) populations derived from the crosses Yangxiaomai × Zhongyou 9507 and Jingshuang 16 × Bainong 64 was used to investigate the effect of genotyping errors on linkage map construction. Inconsistent data points between the two replications were regarded as genotyping errors and were classified into three types. Genotyping errors were then treated as missing values to generate a non-erroneous data set. Firstly, linkage maps were constructed using the two replicates as well as the non-erroneous data set. Secondly, the error-correction methods implemented in the software packages QTL IciMapping (EC) and Genotype-Corrector (GC) were applied to the two replicates. Linkage maps were then constructed from the corrected genotypes and compared with those from the non-erroneous data set. A simulation study considering different levels of genotyping error was performed to investigate the impact of errors and the accuracy of the error-correction methods. Results indicated that map length and marker order differed among the two replicates and the non-erroneous data sets in both RIL populations. For both actual and simulated populations, map length expanded as the error rate increased, and the correlation coefficient between linkage and physical maps became lower. Map quality can be improved by repeated genotyping and error-correction algorithms. When it is impossible to genotype the whole mapping population repeatedly, repeated genotyping of 30% of the population is recommended. The EC method had a much lower false-positive rate than the GC method across error rates. This study systematically characterizes the impact of genotyping errors on linkage analysis, providing practical guidelines for improving the accuracy of linkage maps in the presence of genotyping errors.
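
The map-expansion effect reported above can be illustrated with a small, self-contained simulation (not from the paper): random genotyping errors inflate the apparent recombination fraction between two linked markers, which is what stretches the map. The RIL recombination model below is deliberately simplified, with a single switch probability standing in for the full RIL process, so treat it as a sketch of the mechanism only.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_ril_pair(n_lines, true_rf, error_rate):
    """Estimate the recombination fraction between two linked markers in an
    RIL population, before and after random genotyping errors are added."""
    m1 = rng.integers(0, 2, n_lines)              # homozygous genotypes at marker 1
    switch = rng.random(n_lines) < true_rf        # simplified recombination events
    m2 = np.where(switch, 1 - m1, m1)             # linked marker 2
    rf_clean = np.mean(m1 != m2)
    # Flip genotype calls independently at both markers with the given error rate
    e1 = rng.random(n_lines) < error_rate
    e2 = rng.random(n_lines) < error_rate
    rf_observed = np.mean(np.where(e1, 1 - m1, m1) != np.where(e2, 1 - m2, m2))
    return rf_clean, rf_observed

for err in (0.0, 0.01, 0.05, 0.10):
    clean, observed = simulate_ril_pair(200, true_rf=0.05, error_rate=err)
    print(f"error rate {err:.2f}: clean RF {clean:.3f}, observed RF {observed:.3f}")
```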


Subject(s)
Chromosome Mapping, Genotype, Triticum, Triticum/genetics, Chromosome Mapping/methods, Quantitative Trait Loci, Genetic Linkage, Genotyping Techniques/methods, Oligonucleotide Array Sequence Analysis/methods
2.
BMC Med Res Methodol ; 24(1): 5, 2024 01 06.
Article in English | MEDLINE | ID: mdl-38184529

ABSTRACT

BACKGROUND: In recent decades, medical research fields studying rare conditions such as spinal cord injury (SCI) have made extensive efforts to collect large-scale data. However, most analysis methods rely on complete data. This is particularly troublesome when studying clinical data, as they are prone to missingness. Often, researchers mitigate this problem by removing patients with missing data from the analyses. Less commonly, imputation methods to infer likely values are applied. OBJECTIVE: Our objective was to study how the handling of missing data influences the reported results, using SCI registries as an example. We aimed to raise awareness of the effects of missing data and to provide guidelines for future research projects, in SCI research and beyond. METHODS: Using the Sygen clinical trial data (n = 797), we analyzed the impact of the type of variable in which data are missing, the pattern according to which data are missing, and the imputation strategy (e.g., mean imputation, last observation carried forward, multiple imputation). RESULTS: Our simulations show that mean imputation may lead to results strongly deviating from the underlying expected results. For repeated measures missing at late stages (≥ 6 months after injury in this simulation study), carrying the last observation forward appears to be the preferable imputation option. This simulation study shows that a one-size-fits-all imputation strategy falls short in SCI data sets. CONCLUSIONS: Data-tailored imputation strategies are required (e.g., characterisation of the missingness pattern, last observation carried forward for repeated measures evolving to a plateau over time). Therefore, systematically reporting the extent and kind of missing data, and the decisions made in handling it, is essential to improve the interpretation, transparency, and reproducibility of the research presented.
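
The contrast between imputation strategies can be sketched with a toy repeated-measures example (hypothetical plateau-shaped scores, not the Sygen data): dropout at late waves depends on earlier scores, so the observed wave mean and a per-wave mean imputation stay biased, while carrying each patient's last observation forward works reasonably well once values have plateaued.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)

# Hypothetical repeated-measures outcome rising to a plateau (e.g., a motor score)
n, waves = 500, 5
time = np.arange(waves)
subject_level = rng.normal(0, 8, n)[:, None]
scores = 20 + 30 * (1 - np.exp(-1.2 * time)) + subject_level + rng.normal(0, 3, (n, waves))
df = pd.DataFrame(scores, columns=[f"t{t}" for t in time])

# Dropout at the last two waves, more likely for patients with low scores
p_drop = 1 / (1 + np.exp((df["t2"] - 40) / 4))
dropout = rng.random(n) < p_drop.to_numpy()
df.loc[dropout, ["t3", "t4"]] = np.nan

print("true final-wave mean  :", round(scores[:, 4].mean(), 2))
print("complete case         :", round(df["t4"].mean(), 2))                        # biased up
print("per-wave mean imputed :", round(df["t4"].fillna(df["t4"].mean()).mean(), 2))
print("LOCF                  :", round(df.ffill(axis=1)["t4"].mean(), 2))           # closer
```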


Subject(s)
Biomedical Research, Spinal Cord Injuries, Humans, Reproducibility of Results, Spinal Cord Injuries/epidemiology, Spinal Cord Injuries/therapy, Computer Simulation, Rare Diseases
3.
BMC Med Res Methodol ; 24(1): 2, 2024 01 03.
Article in English | MEDLINE | ID: mdl-38172688

ABSTRACT

Mortality rates and the mortality rate ratio (MRR) of diseased versus non-diseased individuals are core metrics of disease impact used in chronic disease epidemiology. Estimation of mortality rates is often conducted through retrospective linkage of information from nationwide surveys such as the National Health Interview Survey (NHIS) and death registries. These surveys usually collect information on disease status at only one study visit. As a consequence, disease information is missing (with right-censored survival times) for deceased individuals who were disease-free at study participation, and estimation of the MRR may be biased because disease onset after study participation can go undetected. This occurrence is called "misclassification of disease status at death (MicDaD)" and it is a potentially common source of bias in epidemiologic studies. In this study, we conducted a simulation analysis with a high- and a low-incidence setting to assess the extent of MicDaD bias in the estimated mortality. For the simulated populations, MRRs for diseased and non-diseased individuals with and without MicDaD were calculated and compared. The magnitude of MicDaD bias depends on and is driven by the incidence of the chronic disease under consideration; our analysis revealed a noticeable shift towards underestimation for high incidences when MicDaD is present. The impact of MicDaD was smaller for lower incidences (but associated with greater uncertainty in the estimation of the MRR in general). Further research could consider the amount of missing information and potential influencing factors such as disease duration and risk factors.
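
The mechanism can be made concrete with a toy cohort simulation (constant rates and made-up numbers, not the paper's setup): when disease status is fixed at the study visit, deaths and person-time after an undetected onset are attributed to the non-diseased group, which inflates the non-diseased mortality rate and pulls the MRR toward the null.

```python
import numpy as np

rng = np.random.default_rng(3)

n, follow_up = 300_000, 10.0           # cohort size, years of follow-up after the visit
prevalence, incidence = 0.10, 0.03     # diseased at the visit; yearly incidence afterwards
rate_nd, rate_d = 0.01, 0.03           # true mortality rates (per person-year)

diseased_at_visit = rng.random(n) < prevalence
onset = np.where(diseased_at_visit, 0.0, rng.exponential(1 / incidence, n))
# Piecewise-exponential death time: rate_nd before onset, rate_d after onset
t_nd = rng.exponential(1 / rate_nd, n)
death_time = np.where(t_nd < onset, t_nd, onset + rng.exponential(1 / rate_d, n))
died = death_time < follow_up
time_obs = np.minimum(death_time, follow_up)

def rate(deaths, person_years):
    return deaths.sum() / person_years.sum()

# Truth: split person-time and deaths at the (unobserved) disease onset
pt_d = np.clip(time_obs - onset, 0, None)
mrr_true = rate(died & (death_time >= onset), pt_d) / rate(died & (death_time < onset),
                                                           time_obs - pt_d)

# MicDaD: disease status is taken from the single study visit only
mrr_micdad = (rate(died & diseased_at_visit, np.where(diseased_at_visit, time_obs, 0)) /
              rate(died & ~diseased_at_visit, np.where(diseased_at_visit, 0, time_obs)))

print(f"true MRR        : {mrr_true:.2f}")    # close to rate_d / rate_nd = 3.0
print(f"MRR with MicDaD : {mrr_micdad:.2f}")  # shifted toward underestimation
```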


Subject(s)
Retrospective Studies, Humans, Bias, Risk Factors, Registries, Chronic Disease
4.
BMC Med Res Methodol ; 24(1): 49, 2024 Feb 27.
Article in English | MEDLINE | ID: mdl-38413862

ABSTRACT

BACKGROUND: Several approaches are commonly used to estimate the effect of diet on changes in various intermediate disease markers in prospective studies, including "change-score analysis", "concurrent change-change analysis" and "lagged change-change analysis". Although empirical evidence suggests that concurrent change-change analysis is the most robust, consistent, and biologically plausible, an in-depth dissection and comparison of these approaches from a causal inference perspective is lacking. We explicitly elucidate and compare the underlying causal model, causal estimand and interpretation of these approaches, illustrate them intuitively with directed acyclic graphs (DAGs), and further clarify the strengths and limitations of the recommended concurrent change-change analysis through simulations. METHODS: Causal models and DAGs are deployed to clarify the causal estimand and interpretation of each approach theoretically. Monte Carlo simulation is used to explore the performance of the distinct approaches under different extents of time-invariant heterogeneity and the performance of concurrent change-change analysis when its causal identification assumptions are violated. RESULTS: Concurrent change-change analysis targets the contemporaneous effect of exposure on outcome (measured at the same survey wave), which is more relevant and plausible when studying the associations of diet and intermediate biomarkers in prospective studies, while change-score analysis and lagged change-change analysis target the effect of exposure on outcome after a one-period timespan (typically several years). Concurrent change-change analysis always yields unbiased estimates even with severe unobserved time-invariant confounding, while the other two approaches are always biased even without time-invariant heterogeneity. However, concurrent change-change analysis produces estimation bias that increases almost linearly as violations of its causal identification assumptions become more serious. CONCLUSIONS: Concurrent change-change analysis appears to be the preferred method for studying diet and intermediate biomarkers in prospective studies, as it targets the most plausible estimand and circumvents bias from unobserved individual heterogeneity. Importantly, its key identification assumptions should be examined carefully before applying this promising method.
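
A compact way to see the difference in estimands and bias behaviour is a toy panel simulation (my own illustration, not the paper's Monte Carlo design): the outcome depends contemporaneously on the exposure plus an unobserved time-invariant confounder, so regressing within-person outcome changes on same-interval exposure changes recovers the effect, while change-score and lagged change-change regressions do not.

```python
import numpy as np

rng = np.random.default_rng(11)

n, waves, beta = 5000, 3, 0.5          # true contemporaneous effect of exposure on outcome
u = rng.normal(0, 1, n)[:, None]       # unobserved time-invariant heterogeneity

x = u + rng.normal(0, 1, (n, waves))                 # exposure confounded by u
y = beta * x + 2.0 * u + rng.normal(0, 1, (n, waves))

dx, dy = np.diff(x, axis=1), np.diff(y, axis=1)      # within-person changes

def ols_slope(a, b):
    a, b = a.ravel(), b.ravel()
    return np.cov(a, b)[0, 1] / np.var(a, ddof=1)

print("change-score        :", round(ols_slope(x[:, :-1], dy), 3))      # biased
print("lagged change-change:", round(ols_slope(dx[:, 0], dy[:, 1]), 3))  # biased
print("concurrent change   :", round(ols_slope(dx, dy), 3))             # ~0.5 (unbiased)
```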


Subject(s)
Theoretical Models, Humans, Prospective Studies, Causality, Bias, Biomarkers
5.
Mol Divers ; 2024 May 22.
Article in English | MEDLINE | ID: mdl-38775995

ABSTRACT

FtsZ, a crucial GTPase in bacterial cell division that is remarkably conserved among Gram-positive and Gram-negative bacteria, has emerged as a promising drug target for combating antibacterial resistance. Several coordinated efforts have been made to develop inhibitors against FtsZ, which can also serve as potential candidates for future antibiotics. In the present study, a natural product-like library (≈50,000 compounds) was employed to conduct high-throughput virtual screening (HTVS) against the Staphylococcus aureus FtsZ protein (PDB ID: 6KVP). Additionally, molecular docking was carried out in two modes, SP and XP docking, using the Schrödinger suite. The Glide scores of the ligands obtained by XP docking were summarized and compared with those of the control ligands (ZI1, the co-crystallized ligand, and PC190723, a compound undergoing clinical trials). Using the Prime MM-GBSA approach, binding free energy (BFE) calculations were performed on the top XP-scoring ligands (≈598 compounds). These hits were also evaluated for ADMET parameters using the Qikprop algorithm and SwissADME, and for in silico carcinogenicity using Carcinopred-El. Based on the results, the ligand 4-FtsZ complex was subjected to a 300 ns molecular dynamics simulation (MDS) analysis to gain insight into its binding modes within the catalytic pocket of the FtsZ protein. The analysis revealed that the amide linkage sandwiched between the triazole and 1-oxa-8-azaspirodecan-8-ium moieties (Val203), as well as the aminoethyl group at the 1-position of the triazole moiety (Leu209, Leu200, Asp210, and Ala202), were responsible for the FtsZ inhibitory activity, owing to their crucial interactions with key amino acid residues. Further, the complex also displayed good protein-ligand stability, ultimately predicting ligand 4 as a potent lead compound for the inhibition of FtsZ. Thus, our in silico findings will serve as a framework for in-depth in vitro and in vivo investigations, encouraging the development of FtsZ inhibitors as a new generation of antibacterial agents.

6.
Knee Surg Sports Traumatol Arthrosc ; 32(5): 1332-1343, 2024 May.
Article in English | MEDLINE | ID: mdl-38520187

ABSTRACT

PURPOSE: This study aimed to elucidate the characteristics of varus knee deformities in the Japanese population, the prevalence of various around-knee osteotomy procedures, and the influence of femoral and tibial bowing. METHODS: Varus knee deformity was defined as a weight-bearing line ratio of <50%. A total of 1010 varus knees were selected, based on exclusion criteria, from 1814 varus knees with weight-bearing full-length radiographs obtained at two facilities. Various parameters were measured, and around-knee osteotomy simulations based on the deformity centre were conducted using digital planning tools. Bowing of the femoral and tibial shafts was measured, with bowing defined as follows: ≤ -0.6° indicating lateral bowing and ≥ 0.6° indicating medial bowing. Statistical analysis was performed to investigate age-related correlations and their impact on surgical techniques. RESULTS: The study revealed that the proximal tibia was the centre of deformity in Japanese varus knees (42.8%), and high tibial osteotomy was frequently indicated (81.6%). Age demonstrated a mild correlation with femoral shaft bowing (r = -0.29), leading to an increase in the mechanical lateral distal femoral angle and a decrease in the hip-knee-ankle angle and weight-bearing line ratio (r = -0.29, 0.221, 0.219). Tibial shaft bowing was unaffected by age (r = -0.022). CONCLUSION: A significant proportion of Japanese individuals with varus knees exhibit a deformity centre located in the proximal tibia, making them suitable candidates for high tibial osteotomy. No age-related alterations were discerned in tibial morphology, indicating that the occurrence of constitutional varus knees is attributable to tibial deformities in the Japanese patient cohort. LEVEL OF EVIDENCE: Level IV.


Subject(s)
Knee Joint, Osteotomy, Tibia, Adult, Aged, Female, Humans, Male, Middle Aged, East Asian People, Femur/surgery, Femur/abnormalities, Femur/diagnostic imaging, Japan, Knee Joint/surgery, Knee Joint/diagnostic imaging, Knee Joint/abnormalities, Osteotomy/methods, Radiography, Tibia/surgery, Tibia/abnormalities, Tibia/diagnostic imaging, Weight-Bearing, Aged 80 and over
7.
Multivariate Behav Res ; 59(3): 461-481, 2024.
Article in English | MEDLINE | ID: mdl-38247019

ABSTRACT

Network analysis has gained popularity as an approach to investigate psychological constructs. However, there are currently no guidelines for applied researchers on how to handle missing values. In this simulation study, we compared the performance of a two-step EM algorithm with separate steps for missing-data handling and regularization, a combined direct EM algorithm, and pairwise deletion. We investigated conditions with varying network sizes, numbers of observations, missing data mechanisms, and percentages of missing values. These approaches are evaluated with regard to recovering population networks in terms of loss in the precision matrix, edge-set identification, and network statistics. The simulation showed adequate performance only in conditions with large samples (n ≥ 500) or small networks (p = 10). Comparing the missing-data approaches, the direct EM appears to be more sensitive and superior in nearly all chosen conditions. The two-step EM yields better results when the ratio n/p is very large, being less sensitive but more specific. Pairwise deletion failed to converge across numerous conditions and yielded inferior results overall. Overall, direct EM is recommended in most cases, as it is able to mitigate the impact of missing data quite well, while modifications to the two-step EM could improve its performance.
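
For orientation, the two-step idea can be sketched as follows (my own minimal version, not the exact algorithms compared in the study): an EM step estimates the mean and covariance of the incomplete data, and a regularization step (here scikit-learn's graphical lasso) then estimates a sparse precision matrix; a pairwise-deletion covariance is shown for contrast, and it may not even be positive definite.

```python
import numpy as np
from sklearn.covariance import graphical_lasso

rng = np.random.default_rng(5)

# Gaussian data from a sparse precision matrix (a chain graph), 15% of values missing
p, n = 10, 600
prec = np.eye(p) + np.diag(np.full(p - 1, 0.4), 1) + np.diag(np.full(p - 1, 0.4), -1)
X = rng.multivariate_normal(np.zeros(p), np.linalg.inv(prec), size=n)
X[rng.random(X.shape) < 0.15] = np.nan

def em_mean_cov(X, n_iter=50):
    """EM estimates of the mean and covariance of multivariate normal data with missing values."""
    mu, sigma = np.nanmean(X, axis=0), np.diag(np.nanvar(X, axis=0))
    for _ in range(n_iter):
        M, S = np.zeros(X.shape[1]), np.zeros((X.shape[1], X.shape[1]))
        for row in X:
            obs, mis = ~np.isnan(row), np.isnan(row)
            x_hat, C = row.copy(), np.zeros_like(sigma)
            if mis.any():
                reg = sigma[np.ix_(mis, obs)] @ np.linalg.inv(sigma[np.ix_(obs, obs)])
                x_hat[mis] = mu[mis] + reg @ (row[obs] - mu[obs])
                C[np.ix_(mis, mis)] = sigma[np.ix_(mis, mis)] - reg @ sigma[np.ix_(mis, obs)].T
            M += x_hat
            S += np.outer(x_hat, x_hat) + C
        mu = M / len(X)
        sigma = S / len(X) - np.outer(mu, mu)
    return mu, sigma

# Two-step approach: EM for the covariance, then graphical lasso for the sparse network
_, sigma_em = em_mean_cov(X)
_, precision_em = graphical_lasso(sigma_em, alpha=0.05)
print("edges recovered (nonzero off-diagonals):", int((np.abs(precision_em) > 1e-4).sum() - p))

# Pairwise deletion for contrast: the resulting covariance may break the glasso step
sigma_pw = np.ma.cov(np.ma.masked_invalid(X), rowvar=False).data
try:
    graphical_lasso(sigma_pw, alpha=0.05)
    print("pairwise-deletion covariance: glasso converged")
except Exception:
    print("pairwise-deletion covariance: glasso failed")
```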


Subject(s)
Algorithms, Computer Simulation, Humans, Computer Simulation/statistics & numerical data, Statistical Data Interpretation, Statistical Models
8.
Multivariate Behav Res ; 59(2): 187-205, 2024.
Article in English | MEDLINE | ID: mdl-37524119

ABSTRACT

Propensity score analyses (PSA) of continuous treatments often operationalize the treatment as a multi-indicator composite, and its composite reliability is unreported. Latent variables or factor scores accounting for this unreliability are seldom used as alternatives to composites. This study examines the effects of the unreliability of indicators of a latent treatment in PSA using the generalized propensity score (GPS). A Monte Carlo simulation study was conducted varying composite reliability, continuous treatment representation, variability of factor loadings, sample size, and number of treatment indicators to assess whether Average Treatment Effect (ATE) estimates differed in their relative bias, Root Mean Squared Error, and coverage rates. Results indicate that low composite reliability leads to underestimation of the ATE of latent continuous treatments, while the number of treatment indicators and variability of factor loadings show little effect on ATE estimates, after controlling for overall composite reliability. The results also show that, in correctly specified GPS models, the effects of low composite reliability can be somewhat ameliorated by using factor scores that were estimated including covariates. An illustrative example is provided using survey data to estimate the effect of teacher adoption of a workbook related to a virtual learning environment in the classroom.
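
The central finding, that low composite reliability attenuates the estimated effect of a latent continuous treatment, can be illustrated with a plain covariate-adjusted regression (a deliberate simplification standing in for the full GPS workflow in the paper; all variable names and numbers are hypothetical). As the abstract notes, factor scores estimated with covariates are one way to push the estimate back toward the latent-treatment value.

```python
import numpy as np

rng = np.random.default_rng(13)

n, true_effect = 20_000, 0.7
x = rng.normal(size=(n, 3))                                       # observed covariates
latent_t = x @ np.array([0.5, -0.3, 0.2]) + rng.normal(size=n)    # latent continuous treatment
y = true_effect * latent_t + x @ np.array([0.4, 0.4, 0.4]) + rng.normal(size=n)

def adjusted_slope(treatment, covariates, outcome):
    """OLS slope of the treatment term, adjusting for the covariates."""
    design = np.column_stack([np.ones(len(outcome)), treatment, covariates])
    return np.linalg.lstsq(design, outcome, rcond=None)[0][1]

print("latent treatment          :", round(adjusted_slope(latent_t, x, y), 3))  # ~0.70
for k in (2, 4, 8):
    # Composite of k unreliable indicators, each = latent treatment + unit-variance noise
    composite = (latent_t[:, None] + rng.normal(0, 1, (n, k))).mean(axis=1)
    attenuation = 1 / (1 + 1 / k)        # residual-variance attenuation factor, k/(k+1)
    print(f"composite of {k} indicators:",
          round(adjusted_slope(composite, x, y), 3),
          f"(expected ≈ {true_effect * attenuation:.2f})")
```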


Subject(s)
Propensity Score, Reproducibility of Results, Computer Simulation, Bias, Monte Carlo Method
9.
Biom J ; 66(4): e2200334, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38747086

ABSTRACT

Many data sets exhibit a natural group structure due to contextual similarities or high correlations of variables, such as lipid markers that are interrelated based on biochemical principles. Knowledge of such groupings can be used through bi-level selection methods to identify relevant feature groups and highlight their predictive members. One of the best-known approaches of this kind combines the classical Least Absolute Shrinkage and Selection Operator (LASSO) with the Group LASSO, resulting in the Sparse Group LASSO (SGL). We propose the Sparse Group Penalty (SGP) framework, which allows for a flexible combination of different SGL-style shrinkage conditions. Analogous to the SGL, we investigated the combination of the Smoothly Clipped Absolute Deviation (SCAD), the Minimax Concave Penalty (MCP) and the Exponential Penalty (EP) with their group versions, resulting in the Sparse Group SCAD, the Sparse Group MCP, and the novel Sparse Group EP (SGE). These shrinkage operators provide refined control of the effect of group formation on the selection process through a tuning parameter. In simulation studies, the SGPs were compared with other bi-level selection methods (Group Bridge, composite MCP, and Group Exponential LASSO) for variable and group selection, evaluated with the Matthews correlation coefficient. We demonstrated the advantages of the new SGE in identifying parsimonious models, but also identified scenarios that highlight the limitations of the approach. The performance of the techniques was further investigated in a real-world use case for the selection of regulated lipids in a randomized clinical trial.
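
For reference, the building blocks combined here can be written down explicitly; the Sparse Group LASSO penalty and the MCP below are the standard published forms, while the exact way the SGP framework swaps the concave penalties into the group and individual terms follows the paper and is not reproduced here.

```latex
% Sparse Group LASSO penalty, mixing parameter \alpha \in [0,1], groups g = 1,\dots,G:
P_{\mathrm{SGL}}(\beta) = \lambda \Big[ (1-\alpha) \sum_{g=1}^{G} \sqrt{p_g}\, \lVert \beta_g \rVert_2 + \alpha \lVert \beta \rVert_1 \Big]

% Minimax Concave Penalty (MCP) for a single coefficient, tuning parameters \lambda, \gamma > 1:
P_{\mathrm{MCP}}(\theta; \lambda, \gamma) =
\begin{cases}
\lambda \lvert\theta\rvert - \dfrac{\theta^{2}}{2\gamma}, & \lvert\theta\rvert \le \gamma\lambda, \\
\dfrac{\gamma \lambda^{2}}{2}, & \lvert\theta\rvert > \gamma\lambda.
\end{cases}
```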


Subject(s)
Biometry, Biometry/methods, Humans
10.
Biom J ; 66(1): e2200095, 2024 Jan.
Article in English | MEDLINE | ID: mdl-36642811

ABSTRACT

Statistical simulation studies are becoming increasingly popular to demonstrate the performance or superiority of new computational procedures and algorithms. Despite this status quo, previous surveys of the literature have shown that the reporting of statistical simulation studies often lacks relevant information and structure. The latter applies in particular to Bayesian simulation studies, and in this paper the Bayesian simulation study framework (BASIS) is presented as a step towards improving the situation. The BASIS framework provides a structured skeleton for planning, coding, executing, analyzing, and reporting Bayesian simulation studies in biometrical research and computational statistics. It encompasses various features of previous proposals and recommendations in the methodological literature and aims to promote neutral comparison studies in statistical research. Computational aspects covered in the BASIS include algorithmic choices, Markov-chain-Monte-Carlo convergence diagnostics, sensitivity analyses, and Monte Carlo standard error calculations for Bayesian simulation studies. Although the BASIS framework focuses primarily on methodological research, it also provides useful guidance for researchers who rely on the results of Bayesian simulation studies or analyses, as current state-of-the-art guidelines for Bayesian analyses are incorporated into the BASIS.
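
One concrete item from that list, the Monte Carlo standard error (MCSE), quantifies how much a reported performance measure would wobble if the simulation were rerun; the sketch below uses the standard formulas for the MCSE of bias and of the empirical standard error (a generic illustration, not code from the BASIS itself).

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy simulation: n_sim replications of estimating a mean whose true value is 1.0
n_sim, n_obs, theta = 2000, 50, 1.0
estimates = rng.normal(theta, 1.0, (n_sim, n_obs)).mean(axis=1)

bias = estimates.mean() - theta
mcse_bias = estimates.std(ddof=1) / np.sqrt(n_sim)      # MCSE of the bias estimate

emp_se = estimates.std(ddof=1)                          # empirical SE of the estimator
mcse_emp_se = emp_se / np.sqrt(2 * (n_sim - 1))         # MCSE of the empirical SE

print(f"bias         = {bias: .4f} (MCSE {mcse_bias:.4f})")
print(f"empirical SE = {emp_se: .4f} (MCSE {mcse_emp_se:.4f})")
```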


Subject(s)
Algorithms, Bayes Theorem, Computer Simulation, Markov Chains, Monte Carlo Method
11.
J Environ Manage ; 356: 120692, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38547828

ABSTRACT

Accurate characterization of soil contaminant concentrations is often crucial for assessing risks to human and ecological health. However, fine-scale assessments of large tracts of land can be cost prohibitive due to the number of samples needed. One solution to this problem is to extrapolate sampling results from one area to another unsampled area. In the absence of a validated extrapolation methodology, regulatory agencies have employed policy-based techniques for large sites, but the likelihood of decision errors resulting from these extrapolations is largely unexplored. This study describes the results of a simulation study aimed at guiding environmental sampling for sites where extrapolation concepts are of interest. The objective of this study is to provide practical recommendations to regulatory agencies for extrapolating sampling results on large tracts of land while minimizing errors that are detrimental to human health. A variety of site investigation scenarios representative of environmental conditions and sampling schemes were tested using adaptive sampling when collecting discrete samples or applying incremental sampling methodology (ISM). These simulations address extrapolation uncertainty in cases where a Pilot Study might result in either false noncompliance or false compliance conclusions. A wide range of plausible scenarios were used that reflect the variety of heterogeneity seen at large sites. This simulation study demonstrates that ISM can be reliably applied in a Pilot Study for purposes of extrapolating the outcome to a large area site because it decreases the likelihood of false non-compliance errors while also providing reliable estimates of true compliance across unsampled areas. The results demonstrate how errors depend on the magnitude of the 95% upper confidence limit for the mean concentration (95UCL) relative to the applicable action level, and that error rates are highest when the 95UCL is within 10%-40% of the action level. The false compliance rate can be reduced to less than 5% when 30% or more of the site is characterized with ISM. False compliance error rates using ISM are insensitive to the fraction of the decision units (DUs) that are characterized with three replicates (with a minimum of 10 percent), so long as 95UCLs are calculated for the DUs with one replicate using the average coefficient of variation from the three replicate DUs.
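
The decision rule that drives the reported error rates compares a 95% upper confidence limit (95UCL) for a decision unit's mean against the action level. The sketch below uses a Student's-t UCL computed from three ISM replicates of a compliant DU (hypothetical numbers; the study's actual scenarios and UCL calculations are more elaborate) to estimate a false non-compliance rate.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)

action_level = 100.0          # hypothetical soil action level (mg/kg)
true_mean = 85.0              # true DU mean concentration, below the action level
cv = 0.4                      # coefficient of variation of ISM replicate results
n_rep = 3                     # ISM replicates per decision unit

def ucl95(replicates):
    """Student's-t 95% upper confidence limit for the mean of ISM replicates."""
    m, s, n = replicates.mean(), replicates.std(ddof=1), len(replicates)
    return m + stats.t.ppf(0.95, n - 1) * s / np.sqrt(n)

# Probability that a compliant DU is flagged (false non-compliance) over many DUs
n_du = 20_000
flags = 0
for _ in range(n_du):
    reps = rng.lognormal(mean=np.log(true_mean / np.sqrt(1 + cv**2)),
                         sigma=np.sqrt(np.log(1 + cv**2)), size=n_rep)
    flags += ucl95(reps) > action_level
print(f"false non-compliance rate: {flags / n_du:.3f}")
```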


Subject(s)
Uncertainty, Humans, Pilot Projects
12.
Emerg Infect Dis ; 29(11): 2292-2297, 2023 11.
Article in English | MEDLINE | ID: mdl-37877559

ABSTRACT

Earlier global detection of novel SARS-CoV-2 variants gives governments more time to respond. However, few countries can implement timely national surveillance, resulting in gaps in monitoring. The United Kingdom implemented large-scale community and hospital surveillance, but experience suggests that new variants might be detected faster by testing arrivals to England for surveillance. We developed simulations of the emergence and importation into the United Kingdom of novel variants with a range of infection-hospitalization rates. We compared the time taken to detect the variant through testing of arrivals at England's borders, hospital admissions, and the general community. We found that sampling 10%-50% of arrivals at England's borders could confer a speed advantage of 3.5-6 weeks over existing community surveillance and of 1.5-5 weeks (depending on infection-hospitalization rates) over hospital testing. Directing limited global capacity for surveillance to highly connected ports could speed up global detection of novel SARS-CoV-2 variants.


Subject(s)
COVID-19, Humans, COVID-19/diagnosis, SARS-CoV-2/genetics, England/epidemiology, United Kingdom/epidemiology
13.
Stat Med ; 42(3): 331-352, 2023 02 10.
Article in English | MEDLINE | ID: mdl-36546512

ABSTRACT

This review condenses the knowledge on variable selection methods implemented in R and appropriate for datasets with grouped features. The focus is on regularized regressions identified through a systematic review of the literature, following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. A total of 14 methods are discussed, most of which use penalty terms to perform group variable selection. Depending on how the methods account for the group structure, they can be classified into knowledge-driven and data-driven approaches. The first category encompasses group-level and bi-level selection methods, while two-step approaches and collinearity-tolerant methods constitute the second. The identified methods are briefly explained and their performance compared in a simulation study. This comparison demonstrated that group-level selection methods, such as the group minimax concave penalty, are superior to other methods in selecting relevant variable groups but are inferior in identifying important individual variables in scenarios where not all variables in the groups are predictive. This can be better achieved by bi-level selection methods such as group bridge. Two-step and collinearity-tolerant approaches, such as the elastic net and the ordered homogeneity pursuit least absolute shrinkage and selection operator, are inferior to knowledge-driven methods but provide results without requiring prior knowledge. Possible applications in proteomics are considered, leading to suggestions on which method to use depending on the existing prior knowledge and the research question.


Subject(s)
Computer Simulation, Humans
14.
BMC Med Res Methodol ; 23(1): 255, 2023 10 31.
Article in English | MEDLINE | ID: mdl-37907863

ABSTRACT

BACKGROUND: Assessing treatment-by-subset interaction for a right-censored outcome in observational data using propensity-score (PS) modeling is of interest. However, issues remain regarding its implementation, notably when the subsets are very imbalanced in terms of prognostic features and treatment prevalence. METHODS: We conducted a simulation study to compare two main PS estimation strategies, performed either once on the whole sample ("across subsets") or in each subset separately ("within subsets"). Several PS models and estimands are also investigated. We then illustrated these approaches on the motivating example, namely, evaluating the benefits of facial nerve resection in patients with parotid cancer in contact with the nerve, according to pretreatment facial palsy. RESULTS: Our simulation study demonstrated that both strategies provide close results in terms of bias and variance of the estimated treatment effect, with a slight advantage for the "across subsets" strategy in very small samples, provided that interaction terms between the subset variable and the other covariates influencing the choice of treatment are incorporated. PS matching without replacement resulted in biased estimates and should be avoided in the case of very imbalanced subsets. CONCLUSIONS: When assessing heterogeneity of the treatment effect in small samples, the "across subsets" strategy of PS estimation is preferred. Then, either PS matching with replacement or a weighting method must be used to estimate the average treatment effect in the treated or in the overlap population. In contrast, PS matching without replacement should be avoided in this setting.


Subject(s)
Propensity Score, Humans, Monte Carlo Method, Computer Simulation, Bias
15.
BMC Med Res Methodol ; 23(1): 116, 2023 05 13.
Article in English | MEDLINE | ID: mdl-37179343

ABSTRACT

BACKGROUND: Effectiveness-implementation hybrid designs are a relatively new approach for evaluating efficacious interventions in real-world settings while concurrently gathering information on the implementation. Intervention fidelity can significantly influence the effectiveness of an intervention during implementation. However, little guidance exists for applied researchers conducting effectiveness-implementation hybrid trials regarding the impact of fidelity on intervention effects and power. METHODS: We conducted a simulation study based on parameters from a clinical example study. For the simulation, we explored parallel and stepped-wedge cluster randomized trials (CRTs) and hypothetical patterns of fidelity increase during implementation: slow, linear, and fast. Based on fixed design parameters, i.e., the number of clusters (C = 6), time points (T = 7), and patients per cluster (n = 10), we used linear mixed models to estimate the intervention effect and calculated the power for different fidelity patterns. Further, we conducted a sensitivity analysis to compare outcomes based on different assumptions for the intracluster correlation coefficient and the cluster size. RESULTS: Ensuring high fidelity from the beginning is central to achieving accurate intervention effect estimates in stepped-wedge and parallel CRTs. The importance of high fidelity in the earlier stages is more pronounced in stepped-wedge designs than in parallel CRTs. In contrast, if the increase in fidelity is too slow despite relatively high starting levels, the study will likely be underpowered and the intervention effect estimates will also be biased. This effect is more accentuated in parallel CRTs, where reaching 100% fidelity within the next few measurement points is crucial. CONCLUSIONS: This study discusses the importance of intervention fidelity for a study's power and highlights recommendations for dealing with low fidelity in parallel and stepped-wedge CRTs from a design perspective. Applied researchers should consider the detrimental effect of low fidelity in their evaluation design. Overall, there are fewer options to adjust the trial design after the fact in parallel CRTs than in stepped-wedge CRTs. Particular emphasis should be placed on the selection of contextually relevant implementation strategies.


Subject(s)
Research Design, Humans, Computer Simulation, Sample Size, Linear Models, Cluster Analysis
16.
BMC Med Res Methodol ; 23(1): 19, 2023 01 17.
Article in English | MEDLINE | ID: mdl-36650428

ABSTRACT

BACKGROUND: The advantages of meta-analysis depend on the assumptions underlying the statistical procedures used being met. One of the main assumptions that is usually taken for granted is the normality of the population of true effects in a random-effects model, even though the available evidence suggests that this assumption is often not met. This paper examines how 21 frequentist and 24 Bayesian methods, including several novel procedures, for computing a point estimate of the heterogeneity parameter (τ²) perform when the distribution of random effects departs from normality, compared with normal scenarios, in meta-analysis of standardized mean differences. METHODS: A Monte Carlo simulation was carried out using the R software, generating data for meta-analyses using the standardized mean difference. The simulation factors were the number and average sample size of primary studies, the amount of heterogeneity, and the shape of the random-effects distribution. The point estimators were compared in terms of absolute bias and variance, although results regarding mean squared error were also discussed. RESULTS: Although not all the estimators were affected to the same extent, there was a general tendency to obtain lower and more variable τ² estimates as the random-effects distribution departed from normality. However, the ranking of the estimators in terms of absolute bias and variance did not change: estimators with lower bias also showed greater variance. Finally, a large number and sample size of primary studies acted as a bias-protective factor against a lack of normality for several procedures, whereas only a high number of studies was a variance-protective factor for most of the estimators analyzed. CONCLUSIONS: Although the estimation and inference of the combined effect have proven to be sufficiently robust, our work highlights, through the simulation results and numerical examples included here, the role that deviations from normality may play in meta-analytic conclusions. To encourage caution in the interpretation of results obtained from random-effects models, the tau2() R function is made available for obtaining the range of τ² values computed from the 45 estimators analyzed in this work, as well as for assessing how the pooled effect and its confidence and prediction intervals vary according to the estimator chosen.
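
For context, τ² is the between-study variance in the random-effects model, and the DerSimonian-Laird moment estimator below is one classical member of the family of point estimators compared in the paper (a generic implementation with made-up example data, not the paper's tau2() function).

```python
import numpy as np

def tau2_dl(effects, variances):
    """DerSimonian-Laird moment estimator of the between-study variance tau^2."""
    effects, variances = np.asarray(effects, float), np.asarray(variances, float)
    w = 1.0 / variances                       # fixed-effect (inverse-variance) weights
    pooled = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - pooled) ** 2)   # Cochran's Q statistic
    k = len(effects)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    return max(0.0, (q - (k - 1)) / c)

# Example: standardized mean differences and their sampling variances from 6 studies
d = [0.30, 0.10, 0.55, 0.20, 0.45, 0.05]
v = [0.04, 0.03, 0.05, 0.02, 0.06, 0.03]
print(f"tau^2 (DL) = {tau2_dl(d, v):.4f}")
```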


Subject(s)
Software, Humans, Bayes Theorem, Monte Carlo Method, Computer Simulation, Bias
17.
BMC Med Res Methodol ; 23(1): 277, 2023 11 24.
Article in English | MEDLINE | ID: mdl-38001462

ABSTRACT

The interrupted time series (ITS) design is widely used to examine the effects of large-scale public health interventions and has the highest level of evidence validity. However, there is a notable gap regarding methods that account for lag effects of interventions. To address this, we introduced activation functions (ReLU and Sigmoid) into the classic segmented regression (CSR) of the ITS design during the lag period, leading to the proposed optimized segmented regression (OSR) models, namely OSR-ReLU and OSR-Sig. To compare the performance of the models, we simulated data under multiple scenarios, including positive or negative impacts of interventions, linear or nonlinear lag patterns, different lag lengths, and different degrees of fluctuation in the outcome time series. Based on the simulated data, we examined the bias, mean relative error (MRE), mean square error (MSE), mean width of the 95% confidence interval (CI), and coverage rate of the 95% CI for the long-term impact estimates of interventions across the different models. OSR-ReLU and OSR-Sig yielded approximately unbiased estimates of the long-term impacts across all scenarios, whereas CSR did not. In terms of accuracy, OSR-ReLU and OSR-Sig outperformed CSR, exhibiting lower MRE and MSE values. With increasing lag length, the optimized models provided robust estimates of long-term impacts. Regarding precision, OSR-ReLU and OSR-Sig surpassed CSR, demonstrating narrower mean 95% CI widths and higher coverage rates. Our optimized models are powerful tools, as they can model the lag effects of interventions and provide more accurate and precise estimates of the long-term impact of interventions. The introduction of activation functions provides new ideas for improving the CSR model.
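
One plausible reading of the ReLU idea is to let the intervention's level change phase in over the lag period via a clipped ramp instead of an immediate step; the sketch below contrasts that ramp-based fit with classic segmented regression on simulated data. The exact OSR-ReLU and OSR-Sig parameterizations are defined in the paper, so treat this only as an illustration of why a lag-aware term helps.

```python
import numpy as np

rng = np.random.default_rng(21)

T, T0, lag = 60, 30, 6                         # time points, intervention start, lag length
t = np.arange(T)
step = (t >= T0).astype(float)                 # classic post-intervention indicator
ramp = np.clip((t - T0) / lag, 0, 1)           # ReLU-style ramp reaching 1 after `lag` points

# Simulate an outcome whose level change phases in over the lag period
y = 50 + 0.2 * t - 8 * ramp + rng.normal(0, 1.5, T)

def fit(design, y):
    coef, *_ = np.linalg.lstsq(design, y, rcond=None)
    return coef

# Classic segmented regression (CSR): immediate level and slope change at T0
X_csr = np.column_stack([np.ones(T), t, step, step * (t - T0)])
# Ramp-based variant (a sketch in the spirit of OSR-ReLU): gradual level change
X_osr = np.column_stack([np.ones(T), t, ramp, np.clip(t - T0 - lag, 0, None)])

b_csr, b_osr = fit(X_csr, y), fit(X_osr, y)
print("CSR level-change estimate :", round(b_csr[2], 2))   # attributes the gradual drop to a jump (biased)
print("ramp level-change estimate:", round(b_osr[2], 2))   # close to the true -8, up to noise
```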


Subject(s)
Abdominal Aortic Aneurysm, Humans, Time Factors, Interrupted Time Series Analysis, Treatment Outcome
18.
BMC Med Res Methodol ; 23(1): 287, 2023 12 07.
Article in English | MEDLINE | ID: mdl-38062377

ABSTRACT

BACKGROUND: Case-cohort studies are conducted within cohort studies, with the defining feature that collection of exposure data is limited to a subset of the cohort, leading to a large proportion of missing data by design. Standard analysis uses inverse probability weighting (IPW) to address this intended missing data, but little research has been conducted into how best to perform the analysis when there is also unintended missingness. Multiple imputation (MI) has become a default standard for handling unintended missingness and is typically used in combination with IPW to handle the intended missingness due to the case-cohort sampling. Alternatively, MI could be used to handle both the intended and unintended missingness. While the performance of an MI-only approach has been investigated in the context of a case-cohort study with a time-to-event outcome, it is unclear how this approach performs with a binary outcome. METHODS: We conducted a simulation study to assess and compare the performance of approaches using only MI, only IPW, and a combination of MI and IPW, for handling intended and unintended missingness in the case-cohort setting. We also applied the approaches to a case study. RESULTS: Our results show that the combined approach is approximately unbiased for estimation of the exposure effect when the sample size is large, and was the least biased with small sample sizes, while MI-only and IPW-only exhibited larger biases in both sample-size settings. CONCLUSIONS: These findings suggest that a combined MI/IPW approach should be preferred for handling intended and unintended missing data in case-cohort studies with binary outcomes.
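
The IPW half of the combined approach can be sketched directly: exposure is measured for all cases plus a random subcohort, and sampled non-cases are up-weighted by the inverse of the subcohort sampling fraction in a weighted logistic regression (hand-rolled Newton-Raphson below to keep it self-contained; the MI step for unintended missingness would be layered on top of this and is not simulated here).

```python
import numpy as np

rng = np.random.default_rng(17)

# Full cohort with a binary outcome depending on exposure and a fully observed covariate
n, beta_exposure = 20_000, 0.8
z = rng.normal(size=n)
exposure = 0.5 * z + rng.normal(size=n)           # expensive exposure measurement
logit = -3.0 + beta_exposure * exposure + 0.5 * z
case = rng.random(n) < 1 / (1 + np.exp(-logit))

# Case-cohort sampling: exposure measured for all cases plus a random subcohort
f = 0.10                                          # subcohort sampling fraction
subcohort = rng.random(n) < f
measured = case | subcohort
weights = np.where(case, 1.0, 1.0 / f)[measured]  # IPW for the intended missingness

def weighted_logistic(X, y, w, n_iter=25):
    """Weighted logistic regression via Newton-Raphson."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1 / (1 + np.exp(-X @ beta))
        grad = X.T @ (w * (y - p))
        hess = (X * (w * p * (1 - p))[:, None]).T @ X
        beta += np.linalg.solve(hess, grad)
    return beta

X = np.column_stack([np.ones(measured.sum()), exposure[measured], z[measured]])
beta_hat = weighted_logistic(X, case[measured].astype(float), weights)
print("IPW estimate of the exposure log-odds ratio:", round(beta_hat[1], 3))  # ~0.8
```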


Subject(s)
Cohort Studies, Humans, Statistical Data Interpretation, Probability, Bias, Computer Simulation
19.
BMC Med Res Methodol ; 23(1): 168, 2023 07 13.
Article in English | MEDLINE | ID: mdl-37442979

ABSTRACT

Safety is an essential part of the evaluation of new medications, and the competing risks that occur in most clinical trials are a well-identified challenge in the analysis of adverse events. Two statistical frameworks exist to handle competing risks: the cause-specific and the subdistribution framework. To date, the application of the cause-specific framework is the standard practice in safety analyses. Here we analyze how the safety analysis results for new medications would be affected if the subdistribution framework were chosen instead of the cause-specific framework. We conducted a simulation study with 600 participants, equally allocated to verum and control groups, with a 30-month follow-up period. Simulated trials were analyzed for safety in a competing-risk (death) setting using both the cause-specific and subdistribution frameworks. Results show that comparing safety profiles in a subdistribution setting is always more pessimistic than in a cause-specific setting. For the group with the longest survival and a safety advantage in the cause-specific setting, the advantage either disappeared or turned into a disadvantage in the subdistribution analysis setting. These observations are not contradictory but reflect different perspectives. To evaluate the safety of a new medication relative to its comparator, one needs to understand the origin of both the risks and the benefits associated with each therapy. These requirements are best met with a cause-specific framework. The subdistribution framework seems better suited for clinical prediction, and is therefore more relevant for providers or payers, for example.
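
The two frameworks differ in which hazard they model. Written out in standard notation (event type k, e.g. the adverse event of interest, with death as the competing event; these are textbook definitions, not notation from the paper), the contrast in risk sets helps explain why subdistribution-based comparisons look more pessimistic for the group with the longest survival.

```latex
% Cause-specific hazard: instantaneous rate of event k among those still event-free
\lambda_k^{\mathrm{cs}}(t) \;=\; \lim_{\Delta t \to 0} \frac{P\!\left(t \le T < t+\Delta t,\; D = k \mid T \ge t\right)}{\Delta t}

% Subdistribution hazard (Fine-Gray): subjects who already failed from a competing
% cause remain in the risk set, which is why it links directly to the cumulative incidence
\lambda_k^{\mathrm{sd}}(t) \;=\; \lim_{\Delta t \to 0} \frac{P\!\left(t \le T < t+\Delta t,\; D = k \mid T \ge t \;\cup\; (T < t \wedge D \ne k)\right)}{\Delta t}
```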


Subject(s)
Computer Simulation, Humans, Proportional Hazards Models, Clinical Trials as Topic
20.
BMC Med Res Methodol ; 23(1): 191, 2023 08 21.
Article in English | MEDLINE | ID: mdl-37605171

ABSTRACT

BACKGROUND: The aggregation of a series of N-of-1 trials presents an innovative and efficient study design as an alternative to traditional randomized clinical trials. Challenges for the statistical analysis arise when there is carry-over or complex dependencies of the treatment effect of interest. METHODS: In this study, we evaluate and compare methods for the analysis of aggregated N-of-1 trials in different scenarios with carry-over and complex dependencies of treatment effects on covariates. For this, we simulate data from a series of N-of-1 trials for chronic nonspecific low back pain based on assumed causal relationships parameterized by directed acyclic graphs. In addition to existing statistical methods such as regression models, Bayesian networks, and G-estimation, we introduce a carry-over-adjusted parametric model (COAPM). RESULTS: The results show that all evaluated existing models perform well when there is no carry-over and no treatment dependence. When there is carry-over, COAPM yields unbiased and more efficient estimates, while all other methods show some bias in the estimation. When there is known treatment dependence, all approaches that are capable of modeling it yield unbiased estimates. Finally, the efficiency of all methods decreases slightly when there are missing values, and the bias in the estimates can also increase. CONCLUSIONS: This study presents a systematic evaluation of existing and novel approaches for the statistical analysis of a series of N-of-1 trials. We derive practical recommendations on which methods may be best in which scenarios.
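
The carry-over problem can be seen in a minimal single-series example (a generic one-period carry-over term and made-up effect sizes, not the paper's COAPM): ignoring carry-over inflates the naive treatment-effect estimate, while adding a lagged-treatment covariate recovers the direct effect.

```python
import numpy as np

rng = np.random.default_rng(9)

# One simulated N-of-1 trial: alternating treatment blocks of 4 periods each
n_blocks, block_len = 20, 4
treat = np.tile(np.repeat([1.0, 0.0], block_len), n_blocks // 2)
T = len(treat)

direct, carry = 1.0, 0.5                          # direct effect and one-period carry-over
carryover = np.concatenate([[0.0], treat[:-1]])   # previous period's treatment
y = 2.0 + direct * treat + carry * carryover + rng.normal(0, 0.5, T)

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

naive = ols(np.column_stack([np.ones(T), treat]), y)
adjusted = ols(np.column_stack([np.ones(T), treat, carryover]), y)

print("naive treatment effect    :", round(naive[1], 2))     # inflated by carry-over
print("carry-over-adjusted effect:", round(adjusted[1], 2))  # close to the true 1.0
```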


Subject(s)
Research Design, Humans, Linear Models, Bayes Theorem, Causality