Results 1 - 20 of 72

1.
Eur J Epidemiol ; 39(6): 587-603, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38879863

ABSTRACT

Epidemiological researchers often examine associations between risk factors and health outcomes in non-experimental designs. Observed associations may be causal or confounded by unmeasured factors. Sibling and co-twin control studies account for familial confounding by comparing exposure levels among siblings (or twins). If the exposure-outcome association is causal, the siblings should also differ regarding the outcome. However, such studies may sometimes introduce more bias than they alleviate. Measurement error in the exposure may bias results and lead to erroneous conclusions that truly causal exposure-outcome associations are confounded by familial factors. The current study used Monte Carlo simulations to examine bias due to measurement error in sibling control models when the observed exposure-outcome association is truly causal. The results showed that decreasing exposure reliability and increasing sibling-correlations in the exposure led to deflated exposure-outcome associations and inflated associations between the family mean of the exposure and the outcome. The risk of falsely concluding that causal associations were confounded was high in many situations. For example, when exposure reliability was 0.7 and the observed sibling-correlation was r = 0.4, about 30-90% of the samples (n = 2,000) provided results supporting a false conclusion of confounding, depending on how p-values were interpreted as evidence for a family effect on the outcome. The current results have practical importance for epidemiological researchers conducting or reviewing sibling and co-twin control studies and may improve our understanding of observed associations between risk factors and health outcomes. We have developed an app (SibSim) providing simulations of many situations not presented in this paper.
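The mechanism described in this abstract can be reproduced in a few lines. The sketch below is my own minimal construction, not the authors' SibSim code: sibling pairs share a family component of a truly causal exposure, the exposure is observed with classical measurement error (reliability 0.7, sibling correlation 0.4 as in the abstract's example), and a within/between regression then shows the deflated within-pair estimate and inflated family-mean estimate that invite a false conclusion of confounding.

```python
import numpy as np

rng = np.random.default_rng(1)
n_fam = 5000         # number of sibling pairs (assumed)
beta_true = 0.5      # truly causal exposure effect (assumed)
reliability = 0.7    # exposure reliability, as in the abstract's example
fam_r = 0.4          # sibling correlation in the true exposure

# True exposure: shared family component + individual component
shared = np.sqrt(fam_r) * rng.normal(size=(n_fam, 1))
x_true = shared + np.sqrt(1 - fam_r) * rng.normal(size=(n_fam, 2))

# Outcome is caused by the true exposure only: no familial confounding
y = beta_true * x_true + rng.normal(size=(n_fam, 2))

# Observed exposure with classical measurement error
err_var = (1 - reliability) / reliability
x_obs = x_true + rng.normal(scale=np.sqrt(err_var), size=(n_fam, 2))

# Sibling-control (within/between) regression
fam_mean = x_obs.mean(axis=1, keepdims=True)
within = (x_obs - fam_mean).ravel()
between = np.repeat(fam_mean, 2, axis=1).ravel()
X = np.column_stack([np.ones_like(within), within, between])
coef, *_ = np.linalg.lstsq(X, y.ravel(), rcond=None)
b_within, b_between = coef[1], coef[2]
print(f"within-pair estimate: {b_within:.2f} (true effect {beta_true})")
print(f"family-mean estimate: {b_between:.2f}")
```

Even though the association is entirely causal, the within-pair coefficient is attenuated below the family-mean coefficient, the pattern usually read as evidence of familial confounding.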


Subject(s)
Bias , Confounding Factors, Epidemiologic , Monte Carlo Method , Siblings , Humans , Twins/statistics & numerical data , Reproducibility of Results , Risk Factors , Twin Studies as Topic , Female , Causality
2.
Biom J ; 66(1): e2200107, 2024 Jan.
Article in English | MEDLINE | ID: mdl-36932050

ABSTRACT

Developing new imputation methodology has become a very active field. Unfortunately, there is no consensus on how to perform simulation studies to evaluate the properties of imputation methods. In part, this may be due to different aims between fields and studies. For example, when evaluating imputation techniques aimed at prediction, different aims may be formulated than when statistical inference is of interest. The lack of consensus may also stem from different personal preferences or scientific backgrounds. All in all, the lack of common ground in evaluating imputation methodology may lead to suboptimal use in practice. In this paper, we propose a move toward a standardized evaluation of imputation methodology. To demonstrate the need for standardization, we highlight a set of possible pitfalls that bring forth a chain of potential problems in the objective assessment of the performance of imputation routines. Additionally, we suggest a course of action for simulating and evaluating missing data problems. Our suggested course of action is by no means meant to serve as a complete cookbook, but rather meant to incite critical thinking and a move to objective and fair evaluations of imputation methodology. We invite the readers of this paper to contribute to the suggested course of action.
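One pitfall in this vein is easy to demonstrate: scoring imputation methods by per-value accuracy (RMSE against the true values) rewards deterministic imputations that distort the distribution of the completed data and thus downstream inference. The toy data and both imputation rules below are my own illustration, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
x = rng.normal(size=n)
y = x + rng.normal(size=n)          # true var(y) = 2
miss = rng.random(n) < 0.4          # MCAR missingness in y

# (a) Regression-mean imputation: minimizes per-value error
beta = np.polyfit(x[~miss], y[~miss], 1)
y_mean_imp = y.copy()
y_mean_imp[miss] = np.polyval(beta, x[miss])

# (b) Stochastic regression imputation: adds residual noise back
resid_sd = np.std(y[~miss] - np.polyval(beta, x[~miss]))
y_stoch_imp = y.copy()
y_stoch_imp[miss] = np.polyval(beta, x[miss]) + rng.normal(scale=resid_sd, size=miss.sum())

rmse_mean = np.sqrt(np.mean((y_mean_imp[miss] - y[miss]) ** 2))
rmse_stoch = np.sqrt(np.mean((y_stoch_imp[miss] - y[miss]) ** 2))
var_mean, var_stoch = y_mean_imp.var(), y_stoch_imp.var()
print(f"RMSE: mean-imp {rmse_mean:.2f} vs stochastic {rmse_stoch:.2f}")
print(f"variance of completed y: mean-imp {var_mean:.2f} vs stochastic {var_stoch:.2f} (truth 2.0)")
```

Mean imputation "wins" on RMSE yet shrinks the variance of the completed variable, which would bias any inference built on it; an evaluation focused only on reconstruction accuracy would pick the wrong method.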


Subject(s)
Computer Simulation
3.
Biom J ; 66(1): e2200212, 2024 Jan.
Article in English | MEDLINE | ID: mdl-36810737

ABSTRACT

Method comparisons are essential to provide recommendations and guidance for applied researchers, who often have to choose from a plethora of available approaches. While many comparisons exist in the literature, these are often not neutral but favor a novel method. Apart from the choice of design and a proper reporting of the findings, there are different approaches concerning the underlying data for such method comparison studies. Most manuscripts on statistical methodology rely on simulation studies and provide a single real-world data set as an example to motivate and illustrate the methodology investigated. In the context of supervised learning, in contrast, methods are often evaluated using so-called benchmarking data sets, that is, real-world data that serve as gold standard in the community. Simulation studies, on the other hand, are much less common in this context. The aim of this paper is to investigate differences and similarities between these approaches, to discuss their advantages and disadvantages, and ultimately to develop new approaches to the evaluation of methods picking the best of both worlds. To this aim, we borrow ideas from different contexts such as mixed methods research and Clinical Scenario Evaluation.


Subject(s)
Benchmarking , Computer Simulation
4.
Biom J ; 66(1): e2200102, 2024 Jan.
Article in English | MEDLINE | ID: mdl-36642800

ABSTRACT

When comparing the performance of two or more competing tests, simulation studies commonly focus on statistical power. However, if the sizes of the tests being compared differ either from one another or from the nominal size, comparing tests based on power alone may be misleading. By analogy with diagnostic accuracy studies, we introduce relative positive and negative likelihood ratios to factor in both power and size in the comparison of multiple tests. We derive sample size formulas for a comparative simulation study. As an example, we compared the performance of six statistical tests for small-study effects in meta-analyses of randomized controlled trials: Begg's rank correlation, Egger's regression, Schwarzer's method for sparse data, the trim-and-fill method, the arcsine-Thompson test, and Lin and Chu's combined test. We illustrate that comparing power alone, or power adjusted or penalized for size, can be misleading, and show how the proposed likelihood ratio approach enables accurate comparison of the trade-off between power and size between competing tests.
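The diagnostic analogy can be sketched directly: treating rejection under the alternative as sensitivity (power) and rejection under the null as 1 - specificity (size) gives LR+ = power/size and LR- = (1 - power)/(1 - size). The two tests below are my own toy construction (a nominal z-type test versus a deliberately liberal cut-off), not the six tests compared in the paper:

```python
import numpy as np

rng = np.random.default_rng(42)
n, reps = 30, 20_000

def rejection_rate(mu, crit):
    """Monte Carlo rejection rate of a two-sided one-sample z-type test."""
    x = rng.normal(loc=mu, size=(reps, n))
    z = x.mean(axis=1) / (x.std(axis=1, ddof=1) / np.sqrt(n))
    return float(np.mean(np.abs(z) > crit))

# Test A: nominal critical value; Test B: liberal cut-off (inflated size)
size_a, power_a = rejection_rate(0.0, 1.96), rejection_rate(0.5, 1.96)
size_b, power_b = rejection_rate(0.0, 1.50), rejection_rate(0.5, 1.50)

lr_pos_a, lr_pos_b = power_a / size_a, power_b / size_b
print(f"test A: size={size_a:.3f} power={power_a:.3f} LR+={lr_pos_a:.1f}")
print(f"test B: size={size_b:.3f} power={power_b:.3f} LR+={lr_pos_b:.1f}")
```

The liberal test dominates on raw power, yet its positive likelihood ratio is worse: once size is factored in, its rejections carry less evidential value, which is the paper's central point.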


Subject(s)
Publication Bias , Computer Simulation , Sample Size
5.
Biom J ; 66(1): e2200095, 2024 Jan.
Article in English | MEDLINE | ID: mdl-36642811

ABSTRACT

Statistical simulation studies are becoming increasingly popular to demonstrate the performance or superiority of new computational procedures and algorithms. Despite this status quo, previous surveys of the literature have shown that the reporting of statistical simulation studies often lacks relevant information and structure. The latter applies in particular to Bayesian simulation studies, and in this paper the Bayesian simulation study framework (BASIS) is presented as a step towards improving the situation. The BASIS framework provides a structured skeleton for planning, coding, executing, analyzing, and reporting Bayesian simulation studies in biometrical research and computational statistics. It encompasses various features of previous proposals and recommendations in the methodological literature and aims to promote neutral comparison studies in statistical research. Computational aspects covered in the BASIS include algorithmic choices, Markov-chain-Monte-Carlo convergence diagnostics, sensitivity analyses, and Monte Carlo standard error calculations for Bayesian simulation studies. Although the BASIS framework focuses primarily on methodological research, it also provides useful guidance for researchers who rely on the results of Bayesian simulation studies or analyses, as current state-of-the-art guidelines for Bayesian analyses are incorporated into the BASIS.


Subject(s)
Algorithms , Bayes Theorem , Computer Simulation , Markov Chains , Monte Carlo Method
6.
Brief Bioinform ; 22(3)2021 05 20.
Article in English | MEDLINE | ID: mdl-34020546

ABSTRACT

A gene regulatory network is a complicated set of interactions among genetic materials that dictates how cells develop in living organisms and react to their surrounding environment. Robust comprehension of these interactions would help explain how cells function as well as predict their reactions to external factors. This knowledge can benefit both developmental biology and clinical research, such as drug development or epidemiology. Recently, the rapid advance of single-cell sequencing technologies, which pushed the limit of transcriptomic profiling to the individual-cell level, has opened up an entirely new area for regulatory network research. To exploit this abundant new source of data at single-cell resolution, a number of computational methods have been proposed to uncover the interactions hidden by the averaging process in standard bulk sequencing. In this article, we review 15 such network inference methods developed for single-cell data. We discuss their underlying assumptions, inference techniques, usability, and pros and cons. In an extensive analysis using simulation, we also assess the methods' performance, sensitivity to dropout, and time complexity. The main objective of this survey is to assist not only life scientists in selecting suitable methods for their data and analysis purposes but also computational scientists in developing new methods, by highlighting outstanding challenges in the field that remain to be addressed.


Subject(s)
Computational Biology/methods , Gene Expression Profiling/methods , Gene Regulatory Networks , Sequence Analysis, RNA/methods , Single-Cell Analysis/methods , Algorithms , Humans , Models, Genetic , Reproducibility of Results , Software
7.
BMC Med Res Methodol ; 23(1): 300, 2023 12 16.
Article in English | MEDLINE | ID: mdl-38104108

ABSTRACT

INTRODUCTION: Non-compliance is a common challenge for researchers and may reduce the power of an intention-to-treat analysis. Whilst a per-protocol approach attempts to deal with this issue, it can result in biased estimates. Several methods to resolve this issue have been identified in previous reviews, but there is limited evidence supporting their use. This review aimed to identify simulation studies which compare such methods, assess the extent to which certain methods have been investigated and determine their performance under various scenarios. METHODS: A systematic search of several electronic databases, including MEDLINE and Scopus, was carried out from inception to 30th November 2022. Included papers were published in a peer-reviewed journal, readily available in the English language and focused on comparing relevant methods in a superiority randomised controlled trial under a simulation study. Articles were screened using these criteria and a predetermined extraction form used to identify relevant information. A quality assessment appraised the risk of bias in individual studies. Extracted data were synthesised using tables, figures and a narrative summary. Both screening and data extraction were performed by two independent reviewers, with disagreements resolved by consensus. RESULTS: Of 2325 papers identified, 267 full texts were screened and 17 studies finally included. Twelve methods were identified across papers. Instrumental variable methods were commonly considered, but many authors found them to be biased in some settings. Non-compliance was generally assumed to be all-or-nothing and only occurring in the intervention group, although some methods considered it as time-varying. Simulation studies commonly varied the level and type of non-compliance and factors such as effect size and strength of confounding. The quality of papers was generally good, although some lacked detail and justification; their conclusions were therefore deemed less reliable. CONCLUSIONS: It is common for papers to consider instrumental variable methods, but more studies are needed that consider G-methods and compare a wide range of methods in realistic scenarios. It is difficult to draw conclusions about the best method to deal with non-compliance due to a limited body of evidence and the difficulty of combining results from independent simulation studies. PROSPERO REGISTRATION NUMBER: CRD42022370910.


Subject(s)
Bias , Humans , Randomized Controlled Trials as Topic
8.
Int J Mol Sci ; 24(14)2023 Jul 14.
Article in English | MEDLINE | ID: mdl-37511239

ABSTRACT

Cytochromes CYP1A1, CYP1A2, and CYP1B1, the members of the cytochrome P450 family 1, catalyze the metabolism of endogenous compounds, drugs, and non-drug xenobiotics, which include substances involved in the process of carcinogenesis, cancer chemoprevention, and therapy. In the present study, the interactions of three selected polymethoxy-trans-stilbenes, analogs of the bioactive polyphenol trans-resveratrol (3,5,4'-trihydroxy-trans-stilbene), with the binding sites of CYP1 isozymes were investigated with molecular dynamics (MD) simulations. The most pronounced structural changes in the CYP1 binding sites were observed in two substrate recognition sites (SRS): SRS2 (helix F) and SRS3 (helix G). The MD simulations show that the number and position of water molecules differ between the apo CYP1 structures and the ligand-bound complexes. The presence of water in the binding sites results in the formation of water-protein, water-ligand, and bridging ligand-water-protein hydrogen bonds. Analysis of the solvent and substrate channels opening during the MD simulations showed significant differences between the cytochromes with respect to the solvent channel and the substrate channels 2c, 2ac, and 2f. The results of this investigation lead to a deeper understanding of the molecular processes that occur in the CYP1 binding sites and may be useful for further molecular studies of CYP1 functions.


Subject(s)
Cytochrome P-450 CYP1A1 , Cytochrome P-450 CYP1A2 , Humans , Cytochrome P-450 CYP1A1/metabolism , Cytochrome P-450 CYP1A2/metabolism , Molecular Dynamics Simulation , Catalytic Domain , Ligands , Cytochrome P-450 CYP1B1/metabolism
9.
Behav Res Methods ; 55(6): 3218-3240, 2023 09.
Article in English | MEDLINE | ID: mdl-36085545

ABSTRACT

Longitudinal processes often unfold concurrently, so that the growth patterns of two or more longitudinal outcomes are associated. Additionally, if the study under investigation is long, the growth curves may exhibit nonconstant change with respect to time. Multiple existing studies have developed multivariate growth models with nonlinear functional forms to explore joint development where two longitudinal records are correlated over time. However, the relationship between multiple longitudinal outcomes may also be unidirectional. Accordingly, it is of interest to estimate the regression coefficients of such unidirectional paths. One statistical tool for such analyses is the longitudinal mediation model. In this study, we develop two models to evaluate mediational processes in which a linear-linear piecewise functional form is utilized to capture the change patterns. We define the mediational process as either the baseline covariate or the change in the covariate influencing the change in the mediator, which, in turn, affects the change in the outcome. We demonstrate the proposed models through simulation studies and real-world data analyses. Our simulation studies show that the proposed mediational models provide unbiased and accurate point estimates, with 95% confidence intervals attaining the target coverage probability. The empirical analyses demonstrate that the proposed models can estimate covariates' direct and indirect effects on the change in the outcome. We also provide the corresponding code for the proposed models.


Subject(s)
Models, Statistical , Humans , Linear Models , Computer Simulation , Probability , Longitudinal Studies
10.
Behav Res Methods ; 2023 Aug 14.
Article in English | MEDLINE | ID: mdl-37580631

ABSTRACT

Growth mixture modeling (GMM) is an analytical tool for identifying multiple unobserved sub-populations in longitudinal processes. In particular, it describes change patterns within each latent sub-population and investigates between-individual differences in within-individual change for each sub-group. A key research interest in using GMMs is examining how covariates influence the heterogeneity in change patterns. Liu and Perera (2022b) extended mixture-of-experts (MoE) models, which primarily focus on time-invariant covariates, to allow covariates to account for both within-group and between-group differences and to investigate the heterogeneity in nonlinear trajectories. The present study further extends that work by examining the effects of time-varying covariates (TVCs) on trajectory heterogeneity. Specifically, we propose methods to decompose a TVC into an initial trait (the baseline value of the TVC) and a set of temporal states (interval-specific slopes or changes of the TVC). The initial trait is allowed to account for within-group differences in growth factors of trajectories (i.e., a baseline effect), while the temporal states are allowed to impact observed values of a longitudinal process (i.e., temporal effects). We evaluate the proposed models using a simulation study and a real-world data analysis. The simulation study demonstrates that the proposed models are capable of separating trajectories into several clusters and generally produce unbiased and accurate estimates with target coverage probabilities. The proposed models reveal heterogeneity in the initial trait and temporal states of reading ability across latent classes of students' mathematics performance. Additionally, the baseline and temporal effects of reading ability on mathematics development are heterogeneous across the clusters of students.

11.
Am J Epidemiol ; 191(1): 173-181, 2022 01 01.
Article in English | MEDLINE | ID: mdl-34642734

ABSTRACT

Use of computed tomography (CT) scanning has increased substantially since its introduction in the 1990s. Several authors have reported increased risk of leukemia and brain tumors associated with radiation exposure from CT scans. However, reverse causation is a concern, particularly for brain cancer; in other words, the CT scan may have been taken because of preexisting cancer and therefore not have been a cause. We assessed the possibility of reverse causation via a simulation study focused on brain tumors, using a simplified version of the data structure for recent CT studies. Five-year-lagged and unlagged analyses implied an observed excess risk per scan up to 70% lower than the true excess risk per scan, particularly when more than 10% of persons with latent cancer had increased numbers of scans or the extra scanning rate after development of latent cancer was greater than 2 scans/year; less extreme values of these parameters imply little risk attenuation. Without a lag and when more than 20% of persons with latent cancer had increased scans-an arguably implausible scenario-the excess risk per scan was increased over the true excess risk per scan by up to 35%-40%. This study suggests that with a realistic lag, reverse causation results in downwardly biased risk, a result of induced classical measurement error, and is therefore unlikely to produce a spurious positive association between cancer and radiation dose from CT scans.
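The no-lag inflation mechanism in this abstract can be illustrated with a deliberately simplified linear-probability sketch. All parameter values below are my own assumptions, not the paper's, and the sketch shows only the unlagged inflation (not the lag-induced downward bias): persons who develop cancer accumulate extra, non-causal scans, which inflates the unlagged dose-response slope, while an analysis restricted to pre-onset scans recovers the true excess risk per scan.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200_000
baseline_risk, excess_per_scan = 0.05, 0.01      # assumed values

scans_pre = rng.poisson(2.0, size=n)             # scans before any latent cancer
cancer = rng.random(n) < baseline_risk + excess_per_scan * scans_pre

# Reverse causation: 20% of latent-cancer cases receive extra (non-causal) scans
extra = np.where(cancer & (rng.random(n) < 0.2), rng.poisson(3.0, size=n), 0)
scans_total = scans_pre + extra

# Linear-probability slopes: estimated excess risk per scan
slope_unlagged = np.polyfit(scans_total, cancer.astype(float), 1)[0]
slope_lagged = np.polyfit(scans_pre, cancer.astype(float), 1)[0]
print(f"true excess risk/scan: {excess_per_scan}")
print(f"lagged estimate:       {slope_lagged:.4f}")
print(f"unlagged estimate:     {slope_unlagged:.4f}")
```

Excluding the post-onset scans (the role a lag plays) removes the reverse-causation inflation in this toy setup; the paper's fuller simulation additionally shows how a lag combined with measurement error biases the estimate downward.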


Subject(s)
Brain Neoplasms/etiology , Causality , Neoplasms, Radiation-Induced/etiology , Tomography, X-Ray Computed/adverse effects , Computer Simulation , Epidemiologic Methods , Humans , Risk Assessment
12.
Eur J Epidemiol ; 37(5): 477-494, 2022 May.
Article in English | MEDLINE | ID: mdl-35347538

ABSTRACT

BACKGROUND: Several studies have examined maternal health behavior during pregnancy and child outcomes. Negative control variables have been used to address unobserved confounding in such studies. This approach assumes that confounders affect the exposure and the negative control to the same degree. The current study introduces a novel latent variable approach that relaxes this assumption by accommodating repeated measures of maternal health behavior during pregnancy. METHODS: Monte Carlo simulations were used to examine the performance of the latent variable approach. A real-life example is also provided, using data from the Norwegian Mother, Father, and Child Study (MoBa). RESULTS: Simulations: Regular regression analyses without a negative control variable worked poorly in the presence of unobserved confounding. Including a negative control variable improved results substantially. The latent variable approach provided unbiased results in several situations where the other analysis models worked poorly. Real-life data: Maternal alcohol use in the first trimester was associated with increased ADHD symptoms in the child in the standard regression model. This association was not present in the latent variable approach. CONCLUSION: The current study showed that a latent variable approach with a negative control provided unbiased estimates of causal associations between repeated measures of maternal health behavior during pregnancy and child outcomes, even when the effect of the confounder differed in magnitude between the negative control and the exposures. The real-life example showed that inferences from the latent variable approach were incompatible with those from the standard regression approach. Limitations of the approach are discussed.


Subject(s)
Mothers , Prenatal Exposure Delayed Effects , Alcohol Drinking/adverse effects , Alcohol Drinking/epidemiology , Causality , Child , Female , Humans , Pregnancy , Prenatal Exposure Delayed Effects/epidemiology , Regression Analysis , Risk Factors
13.
Biom J ; 2022 Sep 02.
Article in English | MEDLINE | ID: mdl-36053253

ABSTRACT

Many methodological comparison studies aim at identifying a single or a few "best performing" methods over a certain range of data sets. In this paper we take a different viewpoint by asking whether the research question of identifying the best performing method is what we should be striving for in the first place. We will argue that this research question implies assumptions which we do not consider warranted in methodological research, that a different research question would be more informative, and how this research question can be fruitfully investigated.

14.
Molecules ; 27(24)2022 Dec 08.
Article in English | MEDLINE | ID: mdl-36557829

ABSTRACT

In the present work, a series of new 1-{5-[2,5-bis(2,2,2-trifluoroethoxy)phenyl]-1,3,4-oxadiazol-3-acetyl-2-aryl-2H/methyl derivatives were synthesized through a multistep reaction sequence. The compounds were synthesized by the condensation of various aldehydes and acetophenones with the laboratory-synthesized acid hydrazide, which afforded the Schiff bases. Cyclization of the Schiff bases yielded the 1,3,4-oxadiazole derivatives. The structures of the newly synthesized compounds were elucidated by spectral analysis, and their anti-cancer and anti-diabetic properties were investigated. To examine the dynamic behavior of the candidates at the binding site of the protein, molecular docking experiments on the synthesized compounds were performed, followed by a molecular dynamics simulation. ADMET (chemical absorption, distribution, metabolism, excretion, and toxicity) prediction revealed that most of the synthesized compounds follow Lipinski's rule of five. These results were further correlated with biological studies. Using a cytotoxic assay, the newly synthesized 1,3,4-oxadiazoles were screened for their in vitro cytotoxic efficacy against the LN229 glioblastoma cell line. Based on the cytotoxic assay, compounds 5b, 5d, and 5m were selected for colony formation and TUNEL assays, which showed significant cell apoptosis through DNA damage in the cancer cells. In vivo studies using a genetically modified diabetic model, Drosophila melanogaster, indicated that compounds 5d and 5f have the best anti-diabetic activity among the synthesized compounds; these compounds significantly lowered glucose levels in the tested model.


Subject(s)
Antineoplastic Agents , Oxadiazoles , Animals , Molecular Docking Simulation , Molecular Structure , Oxadiazoles/chemistry , Drosophila melanogaster , Antineoplastic Agents/chemistry , Hypoglycemic Agents/pharmacology , Structure-Activity Relationship
15.
BMC Med Res Methodol ; 21(1): 130, 2021 06 24.
Article in English | MEDLINE | ID: mdl-34162350

ABSTRACT

BACKGROUND: An increasing number of randomized controlled trials (RCTs) have measured the impact of interventions on work productivity loss. The productivity loss outcome is inflated at the zero and max loss values. Our study aimed to compare the performance of five commonly used methods in the analysis of productivity loss outcomes in RCTs. METHODS: We conducted a simulation study to compare Ordinary Least Squares (OLS), Negative Binomial (NB), two-part models (with the non-zero part following a truncated NB distribution or a gamma distribution) and a three-part model (with the middle part, between the zero and max values, following a Beta distribution). The numbers of observations per arm, Nobs, that we considered were 50, 100 and 200. Baseline productivity loss was included as a covariate. RESULTS: All models performed similarly well when baseline productivity loss was set at the mean value. When baseline productivity loss was set at other values and Nobs = 50 with ≤5 subjects having max loss, two-part models performed best if the proportion of zero loss was > 50% in at least one arm; otherwise, OLS performed best. When Nobs = 100 or 200, the three-part model performed best if the two arms had equal scale parameters for their productivity loss outcome distributions between the zero and max values. CONCLUSIONS: Our findings suggest that when the treatment effect at any given value of a single covariate is of interest, the model selection depends on the sample size, the proportions of zero loss and max loss, and the scale parameter for the productivity loss outcome distribution between zero and max loss in each arm of the RCT.


Subject(s)
Absenteeism , Efficiency , Computer Simulation , Humans , Randomized Controlled Trials as Topic , Sample Size
16.
Sensors (Basel) ; 21(3)2021 Jan 25.
Article in English | MEDLINE | ID: mdl-33504025

ABSTRACT

This paper presents a methodology for assessing the co-channel interference that arises between multi-beam transmitting and receiving antennas used in fifth-generation (5G) systems. This evaluation is essential for economizing spectral resources, as it allows the same frequency bands to be reused in angularly separated antenna beams of a 5G base station (gNodeB). In the developed methodology, a multi-ellipsoidal propagation model (MPM) provides a mapping of the multipath propagation phenomenon and considers the directivity of the antenna beams. To demonstrate the procedure for determining the interference level, we use simulation tests. For exemplary scenarios in the downlink and uplink, we show the changes in the signal-to-interference ratio versus the separation angle between the serving (useful) and interfering beams and the distance between the gNodeB and the user equipment. This evaluation is the basis for determining the minimum separation angle that ensures an acceptable interference level. The analysis was carried out for the lower millimeter-wave band, which is planned for use in 5G micro-cell base stations.

17.
BMC Med Res Methodol ; 20(1): 276, 2020 11 12.
Article in English | MEDLINE | ID: mdl-33183230

ABSTRACT

BACKGROUND: Growth Mixture Modeling (GMM) is commonly used to group individuals on their development over time, but convergence issues and impossible values are common. This can result in unreliable model estimates. Constraining variance parameters across classes or over time can solve these issues, but can also seriously bias estimates if the variances differ. We aimed to determine which variance parameters can best be constrained in Growth Mixture Modeling. METHODS: To identify the variance constraints that lead to the best performance for different sample sizes, we conducted a simulation study and then verified our results with the TRacking Adolescents' Individual Lives Survey (TRAILS) cohort. RESULTS: If variance parameters differed across classes and over time, fitting a model without constraints led to the best results. No constrained model consistently performed well. However, the model that constrained the random effect variance and residual variances across classes consistently performed very poorly. For a small sample size (N = 100), all models showed issues. In TRAILS, this same model showed substantially different results from the other models and performed poorly in terms of model fit. CONCLUSIONS: If possible, a Growth Mixture Model should be fit without any constraints on variance parameters. If not, we recommend trying different variance specifications and not relying solely on the default model, which constrains random effect variances and residual variances across classes. The variance structure must always be reported. Researchers should carefully follow the GRoLTS-Checklist when analyzing and reporting trajectory analyses.


Subject(s)
Computer Simulation , Adolescent , Cohort Studies , Humans , Sample Size , Surveys and Questionnaires
18.
J Biopharm Stat ; 30(1): 197-215, 2020.
Article in English | MEDLINE | ID: mdl-31246135

ABSTRACT

In this paper, we assess the effect of tuberculous pericarditis treatment (prednisolone) on CD4 count changes over time and draw inferences in the presence of missing data. We accounted for the missing data and performed sensitivity analyses to assess the robustness of inferences, from a model that assumes that the data are missing at random to models that assume that the data are not missing at random. Our sensitivity approaches are within the shared-parameter model framework. We applied the approach of Creemers and colleagues to the CD4 count data and performed simulation studies to evaluate its performance. We also assessed the influence of potentially influential subjects on parameter estimates via the global influence approach. Our results revealed that inferences from the missing-at-random analysis model are robust to the not-missing-at-random models, and influential subjects did not overturn the study conclusions about the prednisolone effect and the missing data mechanism. Prednisolone was found to have no significant effect on CD4 count changes over time and did not interact with anti-retroviral therapy. The simulation studies produced unbiased estimates of the prednisolone effect, with low mean square errors and coverage probabilities approximately equal to the nominal coverage probability.


Subject(s)
Multicenter Studies as Topic/statistics & numerical data , Randomized Controlled Trials as Topic/statistics & numerical data , Research Design/statistics & numerical data , CD4 Lymphocyte Count , Data Interpretation, Statistical , Glucocorticoids/therapeutic use , Humans , Longitudinal Studies , Models, Statistical , Pericarditis, Tuberculous/drug therapy , Pericarditis, Tuberculous/immunology , Time Factors , Treatment Outcome
19.
J Am Soc Nephrol ; 30(9): 1756-1769, 2019 09.
Article in English | MEDLINE | ID: mdl-31292198

ABSTRACT

BACKGROUND: Randomized trials of CKD treatments traditionally use clinical events late in CKD progression as end points. This requires costly studies with large sample sizes and long follow-up. Surrogate end points like GFR slope may speed up the evaluation of new therapies by enabling smaller studies with shorter follow-up. METHODS: We used statistical simulations to identify trial situations where GFR slope provides increased statistical power compared with the clinical end point of doubling of serum creatinine or kidney failure. We simulated GFR trajectories based on data from 47 randomized treatment comparisons. We evaluated the sample size required for adequate statistical power based on GFR slopes calculated from baseline and from 3 months follow-up. RESULTS: In most scenarios where the treatment has no acute effect, analyses of GFR slope provided similar or improved statistical power compared with the clinical end point, often allowing investigators to shorten follow-up by at least half while simultaneously reducing sample size. When patients' GFRs are higher, the power advantages of GFR slope increase. However, acute treatment effects within several months of randomization can increase the risk of false conclusions about therapies based on GFR slope. Care is needed in study design and analysis to avoid such false conclusions. CONCLUSIONS: Use of GFR slope can substantially increase statistical power compared with the clinical end point, particularly when baseline GFR is high and there is no acute effect. The optimum GFR-based end point depends on multiple factors including the rate of GFR decline, type of treatment effect and study design.
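The power gap described here is easy to reproduce in a stripped-down trial simulator. Everything below is an assumed toy parameterization, not the authors' 47-comparison simulation: with a high baseline GFR, almost no patient reaches the clinical end point (GFR halving, roughly a doubling of serum creatinine) within a two-year follow-up, so a comparison of estimated GFR slopes retains far more power.

```python
import numpy as np

rng = np.random.default_rng(7)
n_arm, n_trials, follow_up = 150, 400, 2.0     # assumed trial dimensions (years)

def one_trial():
    base = rng.normal(75.0, 10.0, size=(2, n_arm))        # high baseline GFR (assumed)
    slope = np.stack([rng.normal(-5.0, 3.0, n_arm),       # control decline (ml/min/yr)
                      rng.normal(-3.5, 3.0, n_arm)])      # treated decline
    slope_hat = slope + rng.normal(0.0, 2.0, slope.shape) # slope-estimation noise
    # Surrogate end point: two-sample z-test on estimated GFR slopes
    diff = slope_hat[1].mean() - slope_hat[0].mean()
    se = np.sqrt(slope_hat[0].var(ddof=1) / n_arm + slope_hat[1].var(ddof=1) / n_arm)
    slope_sig = diff / se > 1.96
    # Clinical end point: GFR halving within follow-up, compared between arms
    p = ((base + slope * follow_up) < 0.5 * base).mean(axis=1)
    se_p = np.sqrt(p[0] * (1 - p[0]) / n_arm + p[1] * (1 - p[1]) / n_arm)
    clin_sig = (p[0] - p[1]) / max(se_p, 1e-9) > 1.96
    return slope_sig, clin_sig

results = np.array([one_trial() for _ in range(n_trials)])
power_slope, power_clinical = results.mean(axis=0)
print(f"power, GFR-slope end point: {power_slope:.2f}")
print(f"power, clinical end point:  {power_clinical:.2f}")
```

Note the sketch builds in no acute treatment effect; as the abstract cautions, an acute effect on early GFR would make the slope end point misleading rather than merely more powerful.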


Subject(s)
Glomerular Filtration Rate , Models, Statistical , Renal Insufficiency, Chronic/physiopathology , Biomarkers , Computer Simulation , Disease Progression , Endpoint Determination , Humans , Kidney Failure, Chronic/physiopathology , Randomized Controlled Trials as Topic , Renal Insufficiency, Chronic/therapy , Time Factors
20.
BMC Med Genet; 20(1): 9, 2019 Jan 11.
Article in English | MEDLINE | ID: mdl-30634949

ABSTRACT

BACKGROUND: Interactions between IGF pathway genes and the environment may contribute to childhood obesity, and such gene-environment interactions can take complex forms. Detecting these relationships in longitudinal family studies requires simultaneously accounting for correlations within individuals and within families. METHODS: We studied three methods for detecting interaction effects in longitudinal family studies. The twin model and the nonparametric partition-based score test used individual outcome averages, whereas the linear mixed model used all available longitudinal data points. Simulation experiments were performed to evaluate each method's power to detect different gene-environment interaction relationships. The methods were then applied to Quebec Newborn Twin Study data to test for interaction effects between IGF pathway genes (IGF-1, IGFALS) and environmental factors (physical activity, daycare attendance, and sleep duration) on body mass index outcomes. RESULTS: For the simulated data, the twin model with the mean-over-time summary statistic performed well overall. Modelling an interaction as linear when the true relationship took a different form affected power; for certain non-linear interactions, none of the three methods was effective. Our analysis of the IGF pathway genes showed a suggestive association for the joint effect of the IGF-1 variant at position 102,791,894 of chromosome 12 and physical activity; however, this association was not statistically significant after correction for multiple testing. CONCLUSIONS: The analytical approaches considered in this study were not robust to different forms of gene-environment interaction. Methodological innovations are needed to improve the current methods' performance in detecting non-linear interactions, and more studies are needed to better understand the IGF pathway's role in the development of childhood obesity.
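The core idea of testing a gene-environment interaction can be sketched in a few lines. This is a deliberately simplified stand-in for the twin and mixed-model analyses in the paper: it fits an ordinary least-squares model y ~ G + E + G*E to simulated cross-sectional data and applies a Wald t-test to the interaction coefficient. The simulated genotype, exposure, and effect sizes are hypothetical.

```python
import numpy as np
from scipy import stats

def gxe_interaction_pvalue(genotype, exposure, outcome):
    """Wald t-test on the gene-by-environment interaction coefficient in
    the OLS model: outcome ~ intercept + G + E + G*E (all assumptions
    illustrative; real family data would require mixed models)."""
    X = np.column_stack([np.ones_like(exposure), genotype, exposure,
                         genotype * exposure])
    beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)
    n, k = X.shape
    resid = outcome - X @ beta
    sigma2 = resid @ resid / (n - k)          # residual variance
    se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[3, 3])
    t = beta[3] / se
    return 2 * stats.t.sf(abs(t), n - k)

# Simulated example with a true interaction of 0.5 per risk allele per SD:
rng = np.random.default_rng(1)
G = rng.binomial(2, 0.3, 2000).astype(float)  # additive genotype coding 0/1/2
E = rng.normal(0.0, 1.0, 2000)                # standardized exposure
y = 0.2 * G + 0.3 * E + 0.5 * G * E + rng.normal(0.0, 1.0, 2000)
```

With a non-linear true interaction (e.g. a threshold effect of E), this linear test loses power, which is the kind of mis-specification issue the abstract highlights.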


Subject(s)
Carrier Proteins/genetics , Carrier Proteins/metabolism , Gene-Environment Interaction , Glycoproteins/genetics , Glycoproteins/metabolism , Insulin-Like Growth Factor I/genetics , Insulin-Like Growth Factor I/metabolism , Pediatric Obesity/genetics , Pediatric Obesity/metabolism , Body Mass Index , Child , Child, Preschool , Chromosomes, Human, Pair 12 , Female , Follow-Up Studies , Humans , Infant , Infant, Newborn , Linear Models , Longitudinal Studies , Male , Quebec , Statistics, Nonparametric