Results 1 - 20 of 74
1.
Eur J Epidemiol; 39(6): 587-603, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38879863

ABSTRACT

Epidemiological researchers often examine associations between risk factors and health outcomes in non-experimental designs. Observed associations may be causal or confounded by unmeasured factors. Sibling and co-twin control studies account for familial confounding by comparing exposure levels among siblings (or twins). If the exposure-outcome association is causal, siblings who differ in exposure should also differ in the outcome. However, such studies may sometimes introduce more bias than they alleviate. Measurement error in the exposure may bias results and lead to erroneous conclusions that truly causal exposure-outcome associations are confounded by familial factors. The current study used Monte Carlo simulations to examine bias due to measurement error in sibling control models when the observed exposure-outcome association is truly causal. The results showed that decreasing exposure reliability and increasing sibling correlations in the exposure led to deflated exposure-outcome associations and inflated associations between the family mean of the exposure and the outcome. The risk of falsely concluding that causal associations were confounded was high in many situations. For example, when exposure reliability was 0.7 and the observed sibling correlation was r = 0.4, about 30-90% of the samples (n = 2,000) provided results supporting a false conclusion of confounding, depending on how p-values were interpreted as evidence for a family effect on the outcome. The current results have practical importance for epidemiological researchers conducting or reviewing sibling and co-twin control studies and may improve our understanding of observed associations between risk factors and health outcomes. We have developed an app (SibSim) that simulates many situations not presented in this paper.
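For illustration, a minimal sketch of the bias mechanism described above, assuming two siblings per family, a purely causal linear exposure effect, and classical measurement error in the exposure (this is not the authors' SibSim app; all parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n_fam, beta, reliability, r_sib = 2000, 0.5, 0.7, 0.4

# True exposure: family-shared + individual parts give sibling correlation r_sib.
shared = rng.normal(size=(n_fam, 1)) * np.sqrt(r_sib)
x_true = shared + rng.normal(size=(n_fam, 2)) * np.sqrt(1 - r_sib)
# Classical measurement error scaled so that the exposure reliability is 0.7.
x_obs = x_true + rng.normal(size=(n_fam, 2)) * np.sqrt((1 - reliability) / reliability)
y = beta * x_true + rng.normal(size=(n_fam, 2))   # truly causal, no confounding

# Sibling control model: regress the outcome on the individual's observed
# exposure and the family mean of the observed exposure.
fam_mean = np.repeat(x_obs.mean(axis=1), 2)
X = np.column_stack([np.ones(2 * n_fam), x_obs.ravel(), fam_mean])
b = np.linalg.lstsq(X, y.ravel(), rcond=None)[0]
print(f"individual-exposure estimate: {b[1]:.2f} (deflated; true beta = {beta})")
print(f"family-mean estimate:         {b[2]:.2f} (spurious 'family effect'; truly 0)")
```

With these settings the individual coefficient is pulled well below 0.5 while the family-mean coefficient becomes positive, which is exactly the pattern that invites a false conclusion of familial confounding.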


Subjects
Bias, Confounding Factors (Epidemiology), Monte Carlo Method, Siblings, Humans, Twins/statistics & numerical data, Reproducibility of Results, Risk Factors, Twin Studies as Topic, Female, Causality
2.
Bioorg Chem; 151: 107666, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39067420

ABSTRACT

A set of non-acidic 4-methyl-4-phenyl-benzenesulfonate-based aldose reductase 2 (ALR2) inhibitors was designed and virtually screened, followed by chemical synthesis. The synthesized compounds 2, 4a,b, 7a-c, 9a-c, 10a-c, 11b,c and 14a-c inhibited ALR2 enzymatic activity in the submicromolar range (99.29-417 nM); among them, the derivatives 2, 9b, 10a and 14b inhibited ALR2 with IC50 values of 160.40, 165.20, 99.29 and 120.6 nM, respectively. Moreover, kinetic analyses using Lineweaver-Burk plots revealed that the most active candidate, 10a, potently inhibited ALR2 via a non-competitive mechanism. In vivo studies showed that 10 mg/kg of compound 10a significantly lowered blood glucose levels in alloxan-induced diabetic mice by 46.10%. Moreover, compound 10a showed no toxicity up to a dose of 50 mg/kg and had no adverse effects on liver and kidney functions. It significantly increased levels of GSH and SOD while decreasing MDA levels, thereby mitigating oxidative stress associated with diabetes and potentially attenuating diabetic complications. Furthermore, the binding mode of compound 10a was confirmed through MD simulation. Notably, compounds 2 and 14b showed moderate antimicrobial activity against the two fungi Aspergillus fumigatus and Aspergillus niger. Finally, we report the thiazole derivative 10a as a new promising non-acidic aldose reductase inhibitor that may be beneficial in treating diabetic complications.


Subjects
Aldehyde Reductase, Drug Design, Enzyme Inhibitors, Aldehyde Reductase/antagonists & inhibitors, Aldehyde Reductase/metabolism, Animals, Enzyme Inhibitors/pharmacology, Enzyme Inhibitors/chemical synthesis, Enzyme Inhibitors/chemistry, Mice, Structure-Activity Relationship, Molecular Structure, Diabetes Mellitus, Experimental/drug therapy, Diabetes Mellitus, Experimental/chemically induced, Dose-Response Relationship, Drug, Molecular Docking Simulation, Male, Humans, Benzenesulfonates/pharmacology, Benzenesulfonates/chemistry, Benzenesulfonates/chemical synthesis, Hypoglycemic Agents/pharmacology, Hypoglycemic Agents/chemical synthesis, Hypoglycemic Agents/chemistry
3.
Biom J; 66(1): e2200107, 2024 Jan.
Article in English | MEDLINE | ID: mdl-36932050

ABSTRACT

Developing new imputation methodology has become a very active field. Unfortunately, there is no consensus on how to perform simulation studies to evaluate the properties of imputation methods. In part, this may be due to differing aims between fields and studies. For example, when evaluating imputation techniques aimed at prediction, different aims may be formulated than when statistical inference is of interest. The lack of consensus may also stem from different personal preferences or scientific backgrounds. All in all, the lack of common ground in evaluating imputation methodology may lead to suboptimal use in practice. In this paper, we propose a move toward a standardized evaluation of imputation methodology. To demonstrate the need for standardization, we highlight a set of possible pitfalls that bring forth a chain of potential problems in the objective assessment of the performance of imputation routines. Additionally, we suggest a course of action for simulating and evaluating missing data problems. Our suggested course of action is by no means a complete cookbook; rather, it is meant to incite critical thinking and a move toward objective and fair evaluations of imputation methodology. We invite the readers of this paper to contribute to the suggested course of action.
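As a companion to this abstract, a hedged sketch of the kind of simulation loop such evaluations use, judging an imputation routine by the validity of downstream inference (here, confidence interval coverage for a mean) rather than by how closely imputed values match the deleted ones; the missingness rate, estimand, and sample sizes are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(7)
n, n_sim, true_mean = 200, 1000, 0.0
cover = 0
for _ in range(n_sim):
    x = rng.normal(true_mean, 1.0, n)
    miss = rng.random(n) < 0.3           # MCAR missingness
    x_imp = x.copy()
    x_imp[miss] = x[~miss].mean()        # single mean imputation
    se = x_imp.std(ddof=1) / np.sqrt(n)  # naive SE ignores imputation uncertainty
    cover += abs(x_imp.mean() - true_mean) < 1.96 * se
print(f"95% CI coverage under mean imputation: {cover / n_sim:.2f}")  # well below 0.95
```

The deliberately poor result (coverage around 0.8 rather than 0.95) illustrates one of the pitfalls such standardized evaluations are meant to expose.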


Subjects
Computer Simulation
4.
Biom J; 66(1): e2200212, 2024 Jan.
Article in English | MEDLINE | ID: mdl-36810737

ABSTRACT

Method comparisons are essential to provide recommendations and guidance for applied researchers, who often have to choose from a plethora of available approaches. While many comparisons exist in the literature, these are often not neutral but favor a novel method. Apart from the choice of design and proper reporting of the findings, there are different approaches concerning the underlying data for such method comparison studies. Most manuscripts on statistical methodology rely on simulation studies and provide a single real-world data set as an example to motivate and illustrate the methodology investigated. In the context of supervised learning, in contrast, methods are often evaluated using so-called benchmarking data sets, that is, real-world data that serve as a gold standard in the community. Simulation studies, on the other hand, are much less common in this context. The aim of this paper is to investigate differences and similarities between these approaches, to discuss their advantages and disadvantages, and ultimately to develop new approaches to method evaluation that pick the best of both worlds. To this end, we borrow ideas from different contexts such as mixed methods research and Clinical Scenario Evaluation.


Subjects
Benchmarking, Computer Simulation
5.
Biom J; 66(1): e2200102, 2024 Jan.
Article in English | MEDLINE | ID: mdl-36642800

ABSTRACT

When comparing the performance of two or more competing tests, simulation studies commonly focus on statistical power. However, if the sizes of the tests being compared differ from one another or from the nominal size, comparing tests based on power alone may be misleading. By analogy with diagnostic accuracy studies, we introduce relative positive and negative likelihood ratios to factor both power and size into the comparison of multiple tests. We derive sample size formulas for a comparative simulation study. As an example, we compared the performance of six statistical tests for small-study effects in meta-analyses of randomized controlled trials: Begg's rank correlation, Egger's regression, Schwarzer's method for sparse data, the trim-and-fill method, the arcsine-Thompson test, and Lin and Chu's combined test. We illustrate that comparing power alone, or power adjusted or penalized for size, can be misleading, and how the proposed likelihood ratio approach enables accurate comparison of the trade-off between power and size between competing tests.
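One plausible reading of the diagnostic-accuracy analogy (an assumption about the construction, not the authors' exact estimands): treat rejection as a positive test result, so power plays the role of sensitivity and size the role of the false-positive rate:

```python
# Assumed definitions by analogy: LR+ = power / size, LR- = (1 - power) / (1 - size).
# Competing tests are then compared via the ratios of these quantities.
def likelihood_ratios(power: float, size: float) -> tuple[float, float]:
    return power / size, (1 - power) / (1 - size)

# Test A: slightly higher power, achieved by being anti-conservative (size 8%).
lr_a = likelihood_ratios(power=0.85, size=0.08)
# Test B: lower power, but at the nominal 5% size.
lr_b = likelihood_ratios(power=0.80, size=0.05)
print(f"relative LR+ (A vs B): {lr_a[0] / lr_b[0]:.2f}")  # < 1: A's power edge is bought with size
print(f"relative LR- (A vs B): {lr_a[1] / lr_b[1]:.2f}")
```

This makes the abstract's point concrete: on raw power alone, test A looks better (0.85 vs 0.80), but its relative LR+ is below 1 once its inflated size is factored in.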


Subjects
Publication Bias, Computer Simulation, Sample Size
6.
Biom J; 66(1): e2200095, 2024 Jan.
Article in English | MEDLINE | ID: mdl-36642811

ABSTRACT

Statistical simulation studies are becoming increasingly popular to demonstrate the performance or superiority of new computational procedures and algorithms. Despite this status quo, previous surveys of the literature have shown that the reporting of statistical simulation studies often lacks relevant information and structure. The latter applies in particular to Bayesian simulation studies, and in this paper the Bayesian simulation study framework (BASIS) is presented as a step towards improving the situation. The BASIS framework provides a structured skeleton for planning, coding, executing, analyzing, and reporting Bayesian simulation studies in biometrical research and computational statistics. It encompasses various features of previous proposals and recommendations in the methodological literature and aims to promote neutral comparison studies in statistical research. Computational aspects covered in the BASIS include algorithmic choices, Markov chain Monte Carlo convergence diagnostics, sensitivity analyses, and Monte Carlo standard error calculations for Bayesian simulation studies. Although the BASIS framework focuses primarily on methodological research, it also provides useful guidance for researchers who rely on the results of Bayesian simulation studies or analyses, as current state-of-the-art guidelines for Bayesian analyses are incorporated into the BASIS.
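One of the computational aspects listed above has a simple generic form; a minimal sketch of a Monte Carlo standard error (MCSE) calculation, here for an estimated bias, assuming independent simulation replicates (the replicate values below are toy numbers, not from any real study):

```python
import numpy as np

def bias_and_mcse(estimates: np.ndarray, truth: float) -> tuple[float, float]:
    errors = estimates - truth
    # The MCSE of the bias is the standard error of the mean error.
    return errors.mean(), errors.std(ddof=1) / np.sqrt(len(errors))

rng = np.random.default_rng(0)
est = rng.normal(loc=0.52, scale=0.10, size=5000)  # toy replicate estimates
bias, mcse = bias_and_mcse(est, truth=0.50)
print(f"bias = {bias:.4f}, MCSE = {mcse:.4f}")     # ~0.0200 +/- 0.0014
```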


Subjects
Algorithms, Bayes Theorem, Computer Simulation, Markov Chains, Monte Carlo Method
7.
Brief Bioinform; 22(3), 2021 May 20.
Article in English | MEDLINE | ID: mdl-34020546

ABSTRACT

A gene regulatory network is a complicated set of interactions between genetic materials that dictates how cells develop in living organisms and react to their surrounding environment. Robust comprehension of these interactions would help explain how cells function as well as predict their reactions to external factors. This knowledge can benefit both developmental biology and clinical research such as drug development or epidemiology research. Recently, the rapid advance of single-cell sequencing technologies, which has pushed the limit of transcriptomic profiling to the individual-cell level, has opened up an entirely new area for regulatory network research. To exploit this abundant new source of data and take advantage of data at single-cell resolution, a number of computational methods have been proposed to uncover the interactions hidden by the averaging process in standard bulk sequencing. In this article, we review 15 such network inference methods developed for single-cell data. We discuss their underlying assumptions, inference techniques, usability, and pros and cons. In an extensive analysis using simulation, we also assess the methods' performance, sensitivity to dropout, and time complexity. The main objective of this survey is to assist not only life scientists in selecting suitable methods for their data and analysis purposes but also computational scientists in developing new methods, by highlighting outstanding challenges in the field that remain to be addressed in future development.


Subjects
Computational Biology/methods, Gene Expression Profiling/methods, Gene Regulatory Networks, Sequence Analysis, RNA/methods, Single-Cell Analysis/methods, Algorithms, Humans, Models, Genetic, Reproducibility of Results, Software
8.
BMC Med Res Methodol; 23(1): 300, 2023 Dec 16.
Article in English | MEDLINE | ID: mdl-38104108

ABSTRACT

INTRODUCTION: Non-compliance is a common challenge for researchers and may reduce the power of an intention-to-treat analysis. Whilst a per protocol approach attempts to deal with this issue, it can result in biased estimates. Several methods to resolve this issue have been identified in previous reviews, but there is limited evidence supporting their use. This review aimed to identify simulation studies which compare such methods, assess the extent to which certain methods have been investigated and determine their performance under various scenarios. METHODS: A systematic search of several electronic databases including MEDLINE and Scopus was carried out from inception to 30th November 2022. Included papers were published in a peer-reviewed journal, readily available in the English language and focused on comparing relevant methods in a superiority randomised controlled trial under a simulation study. Articles were screened using these criteria and a predetermined extraction form was used to identify relevant information. A quality assessment appraised the risk of bias in individual studies. Extracted data were synthesised using tables, figures and a narrative summary. Both screening and data extraction were performed by two independent reviewers with disagreements resolved by consensus. RESULTS: Of 2325 papers identified, 267 full texts were screened and 17 studies were finally included. Twelve methods were identified across the papers. Instrumental variable methods were commonly considered, but many authors found them to be biased in some settings. Non-compliance was generally assumed to be all-or-nothing and to occur only in the intervention group, although some methods considered it as time-varying. Simulation studies commonly varied the level and type of non-compliance and factors such as effect size and strength of confounding. The quality of the papers was generally good, although some lacked detail and justification, and their conclusions were therefore deemed less reliable. CONCLUSIONS: It is common for papers to consider instrumental variable methods, but more studies are needed that consider G-methods and compare a wide range of methods in realistic scenarios. It is difficult to draw conclusions about the best method to deal with non-compliance due to the limited body of evidence and the difficulty of combining results from independent simulation studies. PROSPERO REGISTRATION NUMBER: CRD42022370910.


Subjects
Bias, Humans, Randomized Controlled Trials as Topic
9.
Int J Mol Sci; 24(14), 2023 Jul 14.
Article in English | MEDLINE | ID: mdl-37511239

ABSTRACT

Cytochromes CYP1A1, CYP1A2, and CYP1B1, members of cytochrome P450 family 1, catalyze the metabolism of endogenous compounds, drugs, and non-drug xenobiotics, including substances involved in carcinogenesis, cancer chemoprevention, and therapy. In the present study, the interactions of three selected polymethoxy-trans-stilbenes, analogs of the bioactive polyphenol trans-resveratrol (3,5,4'-trihydroxy-trans-stilbene), with the binding sites of the CYP1 isozymes were investigated with molecular dynamics (MD) simulations. The most pronounced structural changes in the CYP1 binding sites were observed in two substrate recognition sites (SRS): SRS2 (helix F) and SRS3 (helix G). The MD simulations show that the number and position of water molecules differ between the apo CYP1 structures and the structures complexed with ligands. The presence of water in the binding sites results in the formation of water-protein, water-ligand, and bridging ligand-water-protein hydrogen bonds. Analysis of the opening of solvent and substrate channels during the MD simulations showed significant differences between the cytochromes with respect to the solvent channel and the substrate channels 2c, 2ac, and 2f. The results of this investigation lead to a deeper understanding of the molecular processes that occur in the CYP1 binding sites and may be useful for further molecular studies of CYP1 functions.


Subjects
Cytochrome P-450 CYP1A1, Cytochrome P-450 CYP1A2, Humans, Cytochrome P-450 CYP1A1/metabolism, Cytochrome P-450 CYP1A2/metabolism, Molecular Dynamics Simulation, Catalytic Domain, Ligands, Cytochrome P-450 CYP1B1/metabolism
10.
Behav Res Methods; 55(6): 3218-3240, 2023 Sep.
Article in English | MEDLINE | ID: mdl-36085545

ABSTRACT

Longitudinal processes often unfold concurrently, such that the growth patterns of two or more longitudinal outcomes are associated. Additionally, if the study under investigation is long, the growth curves may exhibit nonconstant change with respect to time. Multiple existing studies have developed multivariate growth models with nonlinear functional forms to explore joint development where two longitudinal records are correlated over time. However, the relationship between multiple longitudinal outcomes may also be unidirectional, so it is of interest to estimate regression coefficients of such unidirectional paths. One statistical tool for such analyses is the longitudinal mediation model. In this study, we develop two models to evaluate mediational processes in which the linear-linear piecewise functional form is utilized to capture the change patterns. We define the mediational process as either the baseline covariate or the change in the covariate influencing the change in the mediator, which, in turn, affects the change in the outcome. We present the proposed models through simulation studies and real-world data analyses. Our simulation studies demonstrate that the proposed mediational models can provide unbiased and accurate point estimates whose 95% confidence intervals attain the target coverage probabilities. The empirical analyses demonstrate that the proposed models can estimate covariates' direct and indirect effects on the change in the outcome. We also provide the corresponding code for the proposed models.
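For concreteness, a minimal sketch of the linear-linear piecewise (bilinear spline) functional form such models build on, with one slope before a knot (change point) and another after it; the parameter names and values are illustrative, not the authors' notation:

```python
import numpy as np

def piecewise_linear(t, intercept, slope1, slope2, knot):
    """Mean trajectory: slope1 before the knot, slope2 after it."""
    return intercept + slope1 * np.minimum(t, knot) + slope2 * np.maximum(t - knot, 0)

t = np.linspace(0, 10, 11)
print(piecewise_linear(t, intercept=50.0, slope1=2.0, slope2=0.5, knot=4.0))
```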


Subjects
Models, Statistical, Humans, Linear Models, Computer Simulation, Probability, Longitudinal Studies
11.
Behav Res Methods; 2023 Aug 14.
Article in English | MEDLINE | ID: mdl-37580631

ABSTRACT

Growth mixture modeling (GMM) is an analytical tool for identifying multiple unobserved sub-populations in longitudinal processes. In particular, it describes change patterns within each latent sub-population and investigates between-individual differences in within-individual change for each sub-group. A key research interest in using GMMs is examining how covariates influence the heterogeneity in change patterns. Liu & Perera (2022b) extended mixture-of-experts (MoE) models, which primarily focus on time-invariant covariates, to allow covariates to account for both within-group and between-group differences and to investigate the heterogeneity in nonlinear trajectories. The present study further extends Liu & Perera (2022b) by examining the effects of time-varying covariates (TVCs) on trajectory heterogeneity. Specifically, we propose methods to decompose a TVC into an initial trait (the baseline value of the TVC) and a set of temporal states (interval-specific slopes or changes of the TVC). The initial trait is allowed to account for within-group differences in the growth factors of trajectories (a baseline effect), while the temporal states are allowed to impact the observed values of a longitudinal process (temporal effects). We evaluate the proposed models using a simulation study and a real-world data analysis. The simulation study demonstrates that the proposed models are capable of separating trajectories into several clusters and generally produce unbiased and accurate estimates with target coverage probabilities. The proposed models reveal heterogeneity in the initial trait and temporal states of reading ability across latent classes of students' mathematics performance. Additionally, the baseline and temporal effects of reading ability on mathematics development are also heterogeneous across the clusters of students.
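The proposed TVC decomposition has a direct data-level analogue; a small illustrative sketch (the variable names and values are hypothetical):

```python
import numpy as np

tvc = np.array([[3.0, 3.5, 4.1, 4.6],    # one row per individual,
                [2.0, 2.8, 3.1, 3.9]])   # one column per measurement wave
initial_trait = tvc[:, 0]                 # baseline value of the TVC
temporal_states = np.diff(tvc, axis=1)    # interval-specific changes of the TVC
print(initial_trait)    # [3. 2.]
print(temporal_states)  # [[0.5 0.6 0.5] [0.8 0.3 0.8]]
```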

12.
Am J Epidemiol; 191(1): 173-181, 2022 Jan 1.
Article in English | MEDLINE | ID: mdl-34642734

ABSTRACT

Use of computed tomography (CT) scanning has increased substantially since its introduction in the 1990s. Several authors have reported increased risk of leukemia and brain tumors associated with radiation exposure from CT scans. However, reverse causation is a concern, particularly for brain cancer; in other words, the CT scan may have been taken because of a preexisting cancer and therefore may not have been a cause. We assessed the possibility of reverse causation via a simulation study focused on brain tumors, using a simplified version of the data structure of recent CT studies. Five-year-lagged and unlagged analyses implied an observed excess risk per scan up to 70% lower than the true excess risk per scan, particularly when more than 10% of persons with latent cancer had increased numbers of scans or the extra scanning rate after development of latent cancer exceeded 2 scans/year; less extreme values of these parameters imply little risk attenuation. Without a lag, and when more than 20% of persons with latent cancer had increased scans (an arguably implausible scenario), the observed excess risk per scan exceeded the true excess risk per scan by up to 35%-40%. This study suggests that with a realistic lag, reverse causation results in downwardly biased risk, a result of induced classical measurement error, and is therefore unlikely to produce a spurious positive association between cancer and radiation dose from CT scans.


Subjects
Brain Neoplasms/etiology, Causality, Neoplasms, Radiation-Induced/etiology, Tomography, X-Ray Computed/adverse effects, Computer Simulation, Epidemiologic Methods, Humans, Risk Assessment
13.
Eur J Epidemiol; 37(5): 477-494, 2022 May.
Article in English | MEDLINE | ID: mdl-35347538

ABSTRACT

BACKGROUND: Several studies have examined maternal health behavior during pregnancy and child outcomes. Negative control variables have been used to address unobserved confounding in such studies. This approach assumes that confounders affect the exposure and the negative control to the same degree. The current study introduces a novel latent variable approach that relaxes this assumption by accommodating repeated measures of maternal health behavior during pregnancy. METHODS: Monte Carlo simulations were used to examine the performance of the latent variable approach. A real-life example is also provided, using data from the Norwegian Mother, Father and Child Cohort Study (MoBa). RESULTS: Simulations: Regular regression analyses without a negative control variable worked poorly in the presence of unobserved confounding. Including a negative control variable improved results substantially. The latent variable approach provided unbiased results in several situations where the other analysis models worked poorly. Real-life data: Maternal alcohol use in the first trimester was associated with increased ADHD symptoms in the child in the standard regression model. This association was not present in the latent variable approach. CONCLUSION: The current study showed that a latent variable approach with a negative control provided unbiased estimates of causal associations between repeated measures of maternal health behavior during pregnancy and child outcomes, even when the effect of the confounder differed in magnitude between the negative control and the exposures. The real-life example showed that inferences from the latent variable approach were incompatible with those from the standard regression approach. Limitations of the approach are discussed.
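A toy version of the negative-control logic described above, assuming the confounder loads equally on the exposure and the negative control (precisely the assumption the paper's latent variable approach relaxes). The subtraction step below is one common way to use a negative control, not necessarily the authors' estimator:

```python
import numpy as np

rng = np.random.default_rng(5)
n, beta = 100_000, 0.20
u = rng.normal(size=n)                    # unobserved confounder
x = 0.6 * u + rng.normal(size=n)          # exposure
nc = 0.6 * u + rng.normal(size=n)         # negative control: no effect on y,
                                          # same confounder loading as x
y = beta * x + 0.5 * u + rng.normal(size=n)

def ols(*cols):
    X = np.column_stack([np.ones(n), *cols])
    return np.linalg.lstsq(X, y, rcond=None)[0]

naive = ols(x)[1]
b_x, b_nc = ols(x, nc)[1:3]
print(f"naive regression:           {naive:.3f} (biased; true effect = {beta})")
print(f"negative-control corrected: {b_x - b_nc:.3f}")
```

Under equal loadings the corrected estimate recovers the causal effect; when the loadings differ, this simple correction breaks down, which is the gap the latent variable approach targets.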


Subjects
Mothers, Prenatal Exposure Delayed Effects, Alcohol Drinking/adverse effects, Alcohol Drinking/epidemiology, Causality, Child, Female, Humans, Pregnancy, Prenatal Exposure Delayed Effects/epidemiology, Regression Analysis, Risk Factors
14.
Biom J; 2022 Sep 2.
Article in English | MEDLINE | ID: mdl-36053253

ABSTRACT

Many methodological comparison studies aim at identifying a single or a few "best performing" methods over a certain range of data sets. In this paper we take a different viewpoint by asking whether identifying the best performing method is the research question we should be striving for in the first place. We will argue that this research question implies assumptions which we do not consider warranted in methodological research, that a different research question would be more informative, and we will show how this research question can be fruitfully investigated.

15.
Molecules; 27(24), 2022 Dec 8.
Article in English | MEDLINE | ID: mdl-36557829

ABSTRACT

In the present work, a series of new 1-{5-[2,5-bis(2,2,2-trifluoroethoxy)phenyl]-1,3,4-oxadiazol-3-acetyl-2-aryl-2H/methyl derivatives was synthesized through a multistep reaction sequence. The compounds were prepared by condensing various aldehydes and acetophenones with the laboratory-synthesized acid hydrazide, which afforded the Schiff bases; cyclization of the Schiff bases yielded the 1,3,4-oxadiazole derivatives. The structures of the newly synthesized compounds were elucidated by spectral analysis, and their anti-cancer and anti-diabetic properties were then investigated. To examine the dynamic behavior of the candidates at the binding site of the protein, molecular docking experiments on the synthesized compounds were performed, followed by molecular dynamics simulation. ADMET (chemical absorption, distribution, metabolism, excretion, and toxicity) prediction revealed that most of the synthesized compounds follow Lipinski's rule of 5. The results were further correlated with biological studies. Using a cytotoxic assay, the newly synthesized 1,3,4-oxadiazoles were screened for their in vitro cytotoxic efficacy against the LN229 glioblastoma cell line. Based on the cytotoxic assay, compounds 5b, 5d, and 5m were taken forward to colony formation and TUNEL assays, which showed significant cell apoptosis through DNA damage in the cancer cells. In vivo studies using a genetically modified diabetic model, Drosophila melanogaster, indicated that compounds 5d and 5f have the best anti-diabetic activity among the synthesized compounds; these compounds significantly lowered glucose levels in the tested model.


Subjects
Antineoplastic Agents, Oxadiazoles, Animals, Molecular Docking Simulation, Molecular Structure, Oxadiazoles/chemistry, Drosophila melanogaster, Antineoplastic Agents/chemistry, Hypoglycemic Agents/pharmacology, Structure-Activity Relationship
16.
BMC Med Res Methodol; 21(1): 130, 2021 Jun 24.
Article in English | MEDLINE | ID: mdl-34162350

ABSTRACT

BACKGROUND: An increasing number of randomized controlled trials (RCTs) have measured the impact of interventions on work productivity loss. Productivity loss outcomes are inflated at the zero and maximum (max) loss values. Our study aimed to compare the performance of five commonly used methods for analyzing productivity loss outcomes in RCTs. METHODS: We conducted a simulation study to compare Ordinary Least Squares (OLS), Negative Binomial (NB), two-part models (with the non-zero part following a truncated NB or a gamma distribution), and a three-part model (with the middle part between the zero and max values following a Beta distribution). The numbers of observations per arm, Nobs, that we considered were 50, 100, and 200. Baseline productivity loss was included as a covariate. RESULTS: All models performed similarly well when baseline productivity loss was set at its mean value. When baseline productivity loss was set at other values and Nobs = 50 with ≤5 subjects having max loss, two-part models performed best if the proportion of zero loss was > 50% in at least one arm; otherwise, OLS performed best. When Nobs = 100 or 200, the three-part model performed best if the two arms had equal scale parameters for their productivity loss outcome distributions between the zero and max values. CONCLUSIONS: Our findings suggest that when the treatment effect at any given value of a single covariate is of interest, model selection depends on the sample size, the proportions of zero and max loss, and the scale parameter of the productivity loss outcome distribution between zero and max loss in each arm of the RCT.
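A hedged sketch of the outcome structure motivating the two- and three-part models: productivity loss piles up at zero and at the maximum, with a continuous part in between (a Beta distribution for the middle part, as in the three-part model above). The mixing probabilities and Beta parameters are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
n, max_loss = 200, 1.0
part = rng.choice(["zero", "middle", "max"], size=n, p=[0.55, 0.40, 0.05])
loss = np.where(part == "zero", 0.0,
       np.where(part == "max", max_loss,
                max_loss * rng.beta(2.0, 5.0, size=n)))  # continuous middle part
print(f"P(loss = 0):   {np.mean(loss == 0):.2f}")
print(f"P(loss = max): {np.mean(loss == max_loss):.2f}")
print(f"mean loss:     {loss.mean():.3f}")
```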


Subjects
Absenteeism, Efficiency, Computer Simulation, Humans, Randomized Controlled Trials as Topic, Sample Size
17.
Sensors (Basel); 21(3), 2021 Jan 25.
Article in English | MEDLINE | ID: mdl-33504025

ABSTRACT

This paper presents a methodology for assessing the co-channel interference that arises between multi-beam transmitting and receiving antennas used in fifth-generation (5G) systems. This evaluation is essential for minimizing the use of spectral resources, allowing the same frequency bands to be reused in angularly separated antenna beams of a 5G base station (gNodeB). In the developed methodology, a multi-ellipsoidal propagation model (MPM) maps the multipath propagation phenomenon and accounts for the directivity of the antenna beams. We use simulation tests to demonstrate the procedure for determining the interference level. For exemplary downlink and uplink scenarios, we show how the signal-to-interference ratio changes with the separation angle between the serving (useful) and interfering beams and with the distance between the gNodeB and the user equipment. This evaluation is the basis for determining the minimum separation angle that ensures an acceptable interference level. The analysis was carried out for the lower millimeter-wave band, which is planned for use in 5G micro-cell base stations.
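The propagation modeling itself (the authors' MPM) is not reproduced here; the sketch below only shows the final bookkeeping step, aggregating interfering beam powers and forming the signal-to-interference ratio, with illustrative power levels:

```python
import math

def sir_db(p_serving_dbm: float, p_interfering_dbm: list[float]) -> float:
    # Convert each interfering power from dBm to mW, sum, and compare in dB.
    interference_mw = sum(10 ** (p / 10) for p in p_interfering_dbm)
    return p_serving_dbm - 10 * math.log10(interference_mw)

print(f"{sir_db(-70.0, [-85.0, -90.0]):.1f} dB")  # ~13.8 dB
```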

18.
BMC Med Res Methodol; 20(1): 276, 2020 Nov 12.
Article in English | MEDLINE | ID: mdl-33183230

ABSTRACT

BACKGROUND: Growth Mixture Modeling (GMM) is commonly used to group individuals by their development over time, but convergence issues and impossible values are common. This can result in unreliable model estimates. Constraining variance parameters across classes or over time can solve these issues, but can also seriously bias estimates if the variances differ. We aimed to determine which variance parameters can best be constrained in Growth Mixture Modeling. METHODS: To identify the variance constraints that lead to the best performance for different sample sizes, we conducted a simulation study and then verified our results with the TRacking Adolescents' Individual Lives Survey (TRAILS) cohort. RESULTS: If variance parameters differed across classes and over time, fitting a model without constraints led to the best results. No constrained model consistently performed well. However, the model that constrained the random effect variance and residual variances across classes consistently performed very poorly. For a small sample size (N = 100), all models showed issues. In TRAILS, this same model showed substantially different results from the other models and performed poorly in terms of model fit. CONCLUSIONS: If possible, a Growth Mixture Model should be fit without any constraints on variance parameters. If this is not possible, we recommend trying different variance specifications and not relying solely on the default model, which constrains random effect variances and residual variances across classes. The variance structure must always be reported. Researchers should carefully follow the GRoLTS-Checklist when analyzing and reporting trajectory analyses.
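For intuition, a hedged sketch of the data-generating side of such a simulation: two latent trajectory classes whose random-effect and residual variances differ both across classes and over time, so a GMM that constrains these variances to be equal across classes is misspecified for the data (all values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(11)
n_per_class, times = 250, np.arange(5)
classes = {  # class: (intercept mean, slope, intercept SD, residual SD per wave)
    0: (10.0, 1.0, 1.0, np.array([0.5, 0.6, 0.7, 0.8, 0.9])),
    1: (14.0, -0.5, 2.0, np.array([1.5, 1.4, 1.2, 1.0, 0.8])),
}
panels = []
for mu_i, mu_s, sd_i, sd_res in classes.values():
    intercepts = rng.normal(mu_i, sd_i, size=(n_per_class, 1))
    panels.append(intercepts + mu_s * times
                  + rng.normal(0, sd_res, size=(n_per_class, 5)))
y = np.vstack(panels)                   # 500 individuals x 5 waves
print(y.shape, y.var(axis=0).round(2))  # observed wave variances mix both classes
```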


Subjects
Computer Simulation, Adolescent, Cohort Studies, Humans, Sample Size, Surveys and Questionnaires
19.
J Biopharm Stat; 30(1): 197-215, 2020.
Article in English | MEDLINE | ID: mdl-31246135

ABSTRACT

In this paper, we assess the effect of tuberculous pericarditis treatment (prednisolone) on CD4 count changes over time and draw inferences in the presence of missing data. We accounted for the missing data and performed sensitivity analyses to assess the robustness of inferences, moving from a model that assumes the data are missing at random to models that assume the data are not missing at random. Our sensitivity approaches fall within the shared-parameter model framework. We applied the approach of Creemers and colleagues to the CD4 count data and performed simulation studies to evaluate its performance. We also assessed the influence of potentially influential subjects on parameter estimates via the global influence approach. Our results revealed that inferences from the missing-at-random analysis model are robust to the not-missing-at-random models, and influential subjects did not overturn the study conclusions about the prednisolone effect and the missing data mechanism. Prednisolone was found to have no significant effect on CD4 count changes over time and did not interact with anti-retroviral therapy. The simulation studies produced unbiased estimates of the prednisolone effect with low mean square errors and coverage probabilities approximately equal to the nominal coverage probability.


Subjects
Multicenter Studies as Topic/statistics & numerical data, Randomized Controlled Trials as Topic/statistics & numerical data, Research Design/statistics & numerical data, CD4 Lymphocyte Count, Data Interpretation, Statistical, Glucocorticoids/therapeutic use, Humans, Longitudinal Studies, Models, Statistical, Pericarditis, Tuberculous/drug therapy, Pericarditis, Tuberculous/immunology, Time Factors, Treatment Outcome
20.
J Am Soc Nephrol; 30(9): 1756-1769, 2019 Sep.
Article in English | MEDLINE | ID: mdl-31292198

ABSTRACT

BACKGROUND: Randomized trials of CKD treatments traditionally use clinical events late in CKD progression as end points. This requires costly studies with large sample sizes and long follow-up. Surrogate end points like GFR slope may speed up the evaluation of new therapies by enabling smaller studies with shorter follow-up. METHODS: We used statistical simulations to identify trial situations where GFR slope provides increased statistical power compared with the clinical end point of doubling of serum creatinine or kidney failure. We simulated GFR trajectories based on data from 47 randomized treatment comparisons. We evaluated the sample size required for adequate statistical power based on GFR slopes calculated from baseline and from 3 months follow-up. RESULTS: In most scenarios where the treatment has no acute effect, analyses of GFR slope provided similar or improved statistical power compared with the clinical end point, often allowing investigators to shorten follow-up by at least half while simultaneously reducing sample size. When patients' GFRs are higher, the power advantages of GFR slope increase. However, acute treatment effects within several months of randomization can increase the risk of false conclusions about therapies based on GFR slope. Care is needed in study design and analysis to avoid such false conclusions. CONCLUSIONS: Use of GFR slope can substantially increase statistical power compared with the clinical end point, particularly when baseline GFR is high and there is no acute effect. The optimum GFR-based end point depends on multiple factors including the rate of GFR decline, type of treatment effect and study design.
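A simplified, hedged sketch of this kind of power calculation: linear GFR trajectories, a treatment effect on the slope, and a t-test on per-patient OLS slopes. All parameter values are illustrative; the authors' simulations were based on data from 47 real randomized treatment comparisons and richer trajectory models:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def trial_pvalue(n_arm=150, years=np.arange(0, 3.5, 0.5),
                 slope_ctrl=-4.0, effect=1.0, sd_slope=3.0, sd_meas=4.0):
    def arm_slopes(mean_slope):
        slopes = rng.normal(mean_slope, sd_slope, size=(n_arm, 1))
        gfr = 70 + slopes * years + rng.normal(0, sd_meas, size=(n_arm, len(years)))
        return np.polyfit(years, gfr.T, 1)[0]        # per-patient OLS slope
    return stats.ttest_ind(arm_slopes(slope_ctrl + effect),
                           arm_slopes(slope_ctrl)).pvalue

power = np.mean([trial_pvalue() < 0.05 for _ in range(500)])
print(f"estimated power for the GFR-slope end point: {power:.2f}")
```

Varying the follow-up length, baseline GFR, or an assumed acute effect in such a sketch reproduces the trade-offs the abstract describes.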


Subjects
Glomerular Filtration Rate, Models, Statistical, Renal Insufficiency, Chronic/physiopathology, Biomarkers, Computer Simulation, Disease Progression, Endpoint Determination, Humans, Kidney Failure, Chronic/physiopathology, Randomized Controlled Trials as Topic, Renal Insufficiency, Chronic/therapy, Time Factors