Results 1 - 20 of 362
1.
Am J Epidemiol ; 2024 Aug 19.
Article in English | MEDLINE | ID: mdl-39160637

ABSTRACT

The test-negative design (TND) is a popular method for evaluating vaccine effectiveness (VE). A "classical" TND study includes symptomatic individuals tested for the disease targeted by the vaccine to estimate VE against symptomatic infection. However, recent applications of the TND have attempted to estimate VE against infection by including all tested individuals, regardless of their symptoms. In this article, we use directed acyclic graphs and simulations to investigate potential biases in TND studies of COVID-19 VE arising from the use of this "alternative" approach, particularly when applied during periods of widespread testing. We show that the inclusion of asymptomatic individuals can potentially lead to collider stratification bias, uncontrolled confounding by health and healthcare-seeking behaviors (HSBs), and differential outcome misclassification. While our focus is on the COVID-19 setting, the issues discussed here may also be relevant in the context of other infectious diseases. This may be particularly true in scenarios with a high baseline prevalence of infection, a strong correlation between HSBs and vaccination, or different testing practices for vaccinated and unvaccinated individuals, or in settings where the vaccine under study attenuates symptoms of infection and diagnostic accuracy is modified by the presence of symptoms.
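
For orientation, the TND estimate of vaccine effectiveness is usually obtained from the odds ratio comparing vaccination odds in test-positive cases and test-negative controls; the sketch below is the standard textbook form, not a formula quoted from this article.

```latex
\widehat{\mathrm{VE}}_{\mathrm{TND}} \;=\; 1 - \widehat{\mathrm{OR}}
\;=\; 1 - \frac{a\,d}{b\,c},
\qquad
\begin{array}{l}
a = \text{vaccinated test-positives (cases)}\\
b = \text{unvaccinated cases}\\
c = \text{vaccinated test-negatives (controls)}\\
d = \text{unvaccinated controls}
\end{array}
```

In the classical design all four cells come from symptomatic care-seekers; the "alternative" design lets asymptomatic screened individuals enter these cells, which is where the collider, confounding, and misclassification mechanisms described above can operate.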

2.
BMC Plant Biol ; 24(1): 306, 2024 Apr 22.
Article in English | MEDLINE | ID: mdl-38644480

ABSTRACT

Linkage maps are essential for genetic mapping of phenotypic traits, gene map-based cloning, and marker-assisted selection in breeding applications. Construction of a high-quality saturated map requires high-quality genotypic data on a large number of molecular markers. Errors in genotyping cannot be completely avoided, no matter what platform is used. When the genotyping error rate reaches a threshold level, it seriously affects the accuracy of the constructed map and the reliability of consequent genetic studies. In this study, repeated genotyping of two recombinant inbred line (RIL) populations derived from crosses Yangxiaomai × Zhongyou 9507 and Jingshuang 16 × Bainong 64 was used to investigate the effect of genotyping errors on linkage map construction. Inconsistent data points between the two replications were regarded as genotyping errors, which were classified into three types. Genotyping errors were treated as missing values, thereby generating a non-erroneous data set. First, linkage maps were constructed using the two replicates as well as the non-erroneous data set. Second, error correction methods implemented in the software packages QTL IciMapping (EC) and Genotype-Corrector (GC) were applied to the two replicates. Linkage maps were then constructed from the corrected genotypes and compared with those from the non-erroneous data set. A simulation study considering different levels of genotyping error was performed to investigate the impact of errors and the accuracy of the error correction methods. Results indicated that map length and marker order differed among the two replicates and the non-erroneous data set in both RIL populations. For both actual and simulated populations, map length expanded as the error rate increased, and the correlation coefficient between linkage and physical maps became lower. Map quality can be improved by repeated genotyping and error correction algorithms. When it is impossible to genotype the whole mapping population repeatedly, repeated genotyping of 30% of the population is recommended. The EC method had a much lower false-positive rate than the GC method across error rates. This study systematically examined the impact of genotyping errors on linkage analysis, providing guidelines for improving the accuracy of linkage maps in the presence of genotyping errors.
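
A minimal sketch of the replicate-comparison step described above: calls that disagree between the two genotyping runs are counted as errors and set to missing. The marker coding and values are hypothetical; the study's actual pipeline used QTL IciMapping and Genotype-Corrector.

```python
import numpy as np

# Hypothetical genotype calls for the same RIL lines scored twice
# (rows = lines, columns = markers; 0/2 = parental homozygote classes, -1 = missing)
rep1 = np.array([[0, 2, 2, 0],
                 [2, 2, 0, 0],
                 [0, 0, 2, 2]])
rep2 = np.array([[0, 2, 0, 0],
                 [2, 2, 0, 2],
                 [0, 0, 2, 2]])

observed_in_both = (rep1 != -1) & (rep2 != -1)
inconsistent = observed_in_both & (rep1 != rep2)

error_rate = inconsistent.sum() / observed_in_both.sum()
print(f"apparent genotyping error rate: {error_rate:.1%}")

# Treat inconsistent calls as missing to build the 'non-erroneous' data set
consensus = np.where(inconsistent, -1, rep1)
print(consensus)
```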


Subjects
Chromosome Mapping , Genotype , Triticum , Triticum/genetics , Chromosome Mapping/methods , Quantitative Trait Loci , Genetic Linkage , Genotyping Techniques/methods , Oligonucleotide Array Sequence Analysis/methods
3.
Stat Med ; 2024 Sep 04.
Article in English | MEDLINE | ID: mdl-39233370

ABSTRACT

Many clinical trials involve partially clustered data, where some observations belong to a cluster and others can be considered independent. For example, neonatal trials may include infants from single or multiple births. Sample size and analysis methods for these trials have received limited attention. A simulation study was conducted to (1) assess whether existing power formulas based on generalized estimating equations (GEEs) provide an adequate approximation to the power achieved by mixed effects models, and (2) compare the performance of mixed models vs GEEs in estimating the effect of treatment on a continuous outcome. We considered clusters that exist prior to randomization with a maximum cluster size of 2, three methods of randomizing the clustered observations, and simulated datasets with uninformative cluster size and the sample size required to achieve 80% power according to GEE-based formulas with an independence or exchangeable working correlation structure. The empirical power of the mixed model approach was close to the nominal level when sample size was calculated using the exchangeable GEE formula, but was often too high when the sample size was based on the independence GEE formula. The independence GEE always converged and performed well in all scenarios. Performance of the exchangeable GEE and mixed model was also acceptable under cluster randomization, though under-coverage and inflated type I error rates could occur with other methods of randomization. Analysis of partially clustered trials using GEEs with an independence working correlation structure may be preferred to avoid the limitations of mixed models and exchangeable GEEs.
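
The analysis recommended in the abstract, a GEE with an independence working correlation and robust standard errors, can be sketched with statsmodels as below. The data-generating numbers (150 singletons, 50 twin pairs, effect size 0.3, the shared cluster effect) are illustrative assumptions, not values from the paper.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2024)

# Toy partially clustered trial: singleton infants and twin pairs, cluster-randomized
rows, cluster_id = [], 0
for size in [1] * 150 + [2] * 50:
    treat = rng.integers(0, 2)            # whole cluster gets the same arm
    u = rng.normal(scale=0.5)             # shared within-cluster effect
    for _ in range(size):
        rows.append({"cluster": cluster_id, "treat": treat,
                     "y": 0.3 * treat + u + rng.normal()})
    cluster_id += 1
df = pd.DataFrame(rows)

# GEE with an independence working correlation; robust (sandwich) SEs account for clustering
gee = sm.GEE.from_formula("y ~ treat", groups="cluster", data=df,
                          cov_struct=sm.cov_struct.Independence(),
                          family=sm.families.Gaussian())
print(gee.fit().summary())
```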

4.
BMC Med Res Methodol ; 24(1): 5, 2024 01 06.
Article in English | MEDLINE | ID: mdl-38184529

ABSTRACT

BACKGROUND: In recent decades, medical research fields studying rare conditions such as spinal cord injury (SCI) have made extensive efforts to collect large-scale data. However, most analysis methods rely on complete data. This is particularly troublesome when studying clinical data, as they are prone to missingness. Often, researchers mitigate this problem by removing patients with missing data from the analyses. Less commonly, imputation methods to infer likely values are applied. OBJECTIVE: Our objective was to study how the handling of missing data influences the results reported, taking the example of SCI registries. We aimed to raise awareness of the effects of missing data and provide guidelines to be applied in future research projects, in SCI research and beyond. METHODS: Using the Sygen clinical trial data (n = 797), we analyzed the impact of the type of variable in which data are missing, the pattern according to which data are missing, and the imputation strategy (e.g., mean imputation, last observation carried forward, multiple imputation). RESULTS: Our simulations show that mean imputation may lead to results strongly deviating from the underlying expected results. For repeated measures missing at late stages (≥ 6 months after injury in this simulation study), carrying the last observation forward seems the preferable option for imputation. This simulation study also shows that a one-size-fits-all imputation strategy falls short in SCI data sets. CONCLUSIONS: Data-tailored imputation strategies are required (e.g., characterisation of the missingness pattern, last observation carried forward for repeated measures evolving to a plateau over time). Therefore, systematically reporting the extent and kind of missing data, and the decisions made in handling it, will be essential to improve the interpretation, transparency, and reproducibility of the research presented.
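
To make the compared strategies concrete, here is a minimal pandas sketch of mean imputation versus last observation carried forward on a toy repeated-measures table; the patient IDs, time points, and scores are hypothetical and not taken from the Sygen trial.

```python
import numpy as np
import pandas as pd

# Hypothetical motor scores, one row per patient, columns = assessment time points
scores = pd.DataFrame(
    {"wk2": [20.0, 35.0, np.nan, 50.0],
     "m3":  [28.0, np.nan, 42.0, 60.0],
     "m6":  [np.nan, 48.0, 47.0, np.nan],
     "m12": [33.0, 50.0, np.nan, 68.0]},
    index=["p1", "p2", "p3", "p4"])

# Mean imputation: replace each missing value with that time point's mean across patients
mean_imputed = scores.fillna(scores.mean())

# Last observation carried forward: propagate the last observed score to later time points
# (leading missing values, e.g. p3 at wk2, remain missing)
locf = scores.ffill(axis=1)

print(mean_imputed.round(1), locf, sep="\n\n")
```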


Subjects
Biomedical Research , Spinal Cord Injuries , Humans , Reproducibility of Results , Spinal Cord Injuries/epidemiology , Spinal Cord Injuries/therapy , Computer Simulation , Rare Diseases
5.
BMC Med Res Methodol ; 24(1): 2, 2024 01 03.
Article in English | MEDLINE | ID: mdl-38172688

ABSTRACT

Mortality rates and the mortality rate ratio (MRR) of diseased versus non-diseased individuals are core metrics of disease impact used in chronic disease epidemiology. Estimation of mortality rates is often conducted through retrospective linkage of information from nationwide surveys such as the National Health Interview Survey (NHIS) and death registries. These surveys usually collect information on disease status during only one study visit. This infrequency leads to missing disease information (with right-censored survival times) for deceased individuals who were disease-free at study participation, and to possibly biased estimation of the MRR because of undetected disease onset after study participation. This occurrence is called "misclassification of disease status at death (MicDaD)" and it is a potentially common source of bias in epidemiologic studies. In this study, we conducted a simulation analysis with a high-incidence and a low-incidence setting to assess the extent of MicDaD bias in the estimated mortality. For the simulated populations, MRRs for diseased and non-diseased individuals with and without MicDaD were calculated and compared. The magnitude of MicDaD bias is driven by the incidence of the chronic disease under consideration; our analysis revealed a noticeable shift toward underestimation for high incidences when MicDaD is present. The impact of MicDaD was smaller for lower incidences (though associated with greater uncertainty in the estimation of the MRR in general). Further research could consider the amount of missing information and potential influencing factors such as disease duration and risk factors.
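
The quantity at stake can be written out as follows (standard definitions, not copied from the article): deaths divided by person-years in each group, and their ratio.

```latex
\mathrm{MR}_{\text{dis}} = \frac{D_{\text{dis}}}{PY_{\text{dis}}},\qquad
\mathrm{MR}_{\text{non}} = \frac{D_{\text{non}}}{PY_{\text{non}}},\qquad
\mathrm{MRR} = \frac{\mathrm{MR}_{\text{dis}}}{\mathrm{MR}_{\text{non}}}
```

MicDaD attributes the deaths (and person-time) of people whose disease began after the survey to the non-diseased group, inflating the denominator rate and pulling the estimated MRR toward the null, consistent with the underestimation reported above.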


Subjects
Retrospective Studies , Humans , Bias , Risk Factors , Registries , Chronic Disease
6.
BMC Med Res Methodol ; 24(1): 188, 2024 Aug 28.
Article in English | MEDLINE | ID: mdl-39198744

ABSTRACT

BACKGROUND AND OBJECTIVES: Comprehending the research dataset is crucial for obtaining reliable and valid outcomes. Health analysts must have a deep comprehension of the data being analyzed. This comprehension allows them to suggest practical solutions for handling missing data in a clinical data source. Accurate handling of missing values is critical for producing precise estimates and making informed decisions, especially in crucial areas like clinical research. With the increasing diversity and complexity of data, numerous scholars have developed a range of imputation techniques. To address this, we conducted a systematic review to introduce various imputation techniques based on tabular dataset characteristics, including the mechanism, pattern, and ratio of missingness, to identify the most appropriate imputation methods in the healthcare field. MATERIALS AND METHODS: We searched four information databases, namely PubMed, Web of Science, Scopus, and IEEE Xplore, for articles published up to September 20, 2023, that discussed imputation methods for addressing missing values in a clinically structured dataset. Our investigation of selected articles focused on four key aspects: the mechanism, pattern, and ratio of missingness, and the various imputation strategies. By synthesizing insights from these perspectives, we constructed an evidence map to recommend suitable imputation methods for handling missing values in a tabular dataset. RESULTS: Out of 2955 articles, 58 were included in the analysis. The findings from the development of the evidence map, based on the structure of the missing values and the types of imputation methods used in the extracted items from these studies, revealed that 45% of the studies employed conventional statistical methods, 31% utilized machine learning and deep learning methods, and 24% applied hybrid imputation techniques for handling missing values. CONCLUSION: Considering the structure and characteristics of missing values in a clinical dataset is essential for choosing the most appropriate data imputation technique, especially within conventional statistical methods. Accurately estimating missing values to reflect reality enhances the likelihood of obtaining high-quality and reusable data, contributing significantly to precise medical decision-making processes. This review provides a guideline for choosing the most appropriate imputation methods in the data preprocessing stage before performing analytical processes on structured clinical datasets.


Subjects
Biomedical Research , Humans , Data Interpretation, Statistical , Biomedical Research/methods , Biomedical Research/standards , Biomedical Research/statistics & numerical data , Datasets as Topic
7.
BMC Med Res Methodol ; 24(1): 49, 2024 Feb 27.
Article in English | MEDLINE | ID: mdl-38413862

ABSTRACT

BACKGROUND: Several approaches are commonly used to estimate the effect of diet on changes of various intermediate disease markers in prospective studies, including "change-score analysis", "concurrent change-change analysis" and "lagged change-change analysis". Although empirical evidence suggests that concurrent change-change analysis is most robust, consistent, and biologically plausible, in-depth dissection and comparison of these approaches from a causal inference perspective is lacking. We explicitly elucidate and compare the underlying causal model, causal estimand, and interpretation of these approaches, illustrate them intuitively with directed acyclic graphs (DAGs), and further clarify the strengths and limitations of the recommended concurrent change-change analysis through simulations. METHODS: Causal models and DAGs are used to clarify the causal estimand and interpretation of each approach theoretically. Monte Carlo simulation is used to explore the performance of the distinct approaches under different extents of time-invariant heterogeneity and the performance of concurrent change-change analysis when its causal identification assumptions are violated. RESULTS: Concurrent change-change analysis targets the contemporaneous effect of exposure on outcome (measured at the same survey wave), which is more relevant and plausible in studying the associations of diet and intermediate biomarkers in prospective studies, while change-score analysis and lagged change-change analysis target the effect of exposure on outcome after a one-period timespan (typically several years). Concurrent change-change analysis always yields unbiased estimates even with severe unobserved time-invariant confounding, while the other two approaches are always biased even without time-invariant heterogeneity. However, the estimation bias of concurrent change-change analysis increases almost linearly as violations of its causal identification assumptions become more serious. CONCLUSIONS: Concurrent change-change analysis may be the preferable method for studying diet and intermediate biomarkers in prospective studies, as it targets the most plausible estimand and circumvents bias from unobserved individual heterogeneity. Importantly, its key identification assumptions should be carefully examined before applying this promising method.
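
A minimal simulation sketch of why the concurrent change-change (first-difference) model removes time-invariant confounding: an unobserved stable trait u affects both the exposure and the biomarker at every wave, yet differencing the two waves cancels it. All coefficients are illustrative assumptions, not values from the paper.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 5000
u = rng.normal(size=n)                 # unobserved time-invariant trait (e.g., health consciousness)
x1 = 0.8 * u + rng.normal(size=n)      # dietary exposure, wave 1
x2 = 0.8 * u + rng.normal(size=n)      # dietary exposure, wave 2
beta = 0.5                             # assumed contemporaneous effect on the biomarker
y1 = beta * x1 + u + rng.normal(size=n)
y2 = beta * x2 + u + rng.normal(size=n)

# Concurrent change-change analysis: regress the change in outcome on the change in exposure.
# First-differencing cancels u, so the slope recovers beta despite the unmeasured confounder.
dy, dx = y2 - y1, x2 - x1
print(sm.OLS(dy, sm.add_constant(dx)).fit().params)
```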


Subjects
Models, Theoretical , Humans , Prospective Studies , Causality , Bias , Biomarkers
8.
Mol Divers ; 2024 May 22.
Article in English | MEDLINE | ID: mdl-38775995

ABSTRACT

FtsZ, a crucial GTPase in bacterial cell division that is remarkably conserved among Gram-positive and Gram-negative bacteria, has emerged as a promising antibacterial drug target to combat antibacterial resistance. There have been several coordinated efforts to develop inhibitors against FtsZ, which can also serve as potential candidates for future antibiotics. In the present study, a natural product-like library (≈50,000 compounds) was employed to conduct high-throughput virtual screening (HTVS) against the Staphylococcus aureus FtsZ protein (PDB ID: 6KVP). Additionally, molecular docking was carried out in two modes, SP and XP docking, using the Schrödinger suite. The glide scores of ligands obtained by XP docking were further summarized and compared with the control ligands (ZI1, the co-crystallized ligand, and PC190723, a compound undergoing clinical trials). Using the Prime-MM-GBSA approach, binding free energy (BFE) calculations were performed on the top XP-scored ligands (≈598 compounds). These hits were also evaluated for ADMET parameters using the QikProp algorithm, SwissADME, and in silico carcinogenicity testing using Carcinopred-El. Based on the results, the ligand 4-FtsZ complex was selected for a 300 ns molecular dynamics simulation (MDS) analysis to gain insights into its binding modes within the catalytic pocket of the FtsZ protein. The analysis revealed that the amide linkage sandwiched between the triazole and 1-oxa-8-azaspirodecan-8-ium moieties (Val203), as well as the aminoethyl group present at the 1st position of the triazole moiety (Leu209, Leu200, Asp210, and Ala202), were responsible for the FtsZ inhibitory activity, owing to their crucial interactions with key amino acid residues. Further, the complex also displayed good protein-ligand stability, ultimately predicting ligand 4 as a potent lead compound for the inhibition of FtsZ. Thus, our in silico findings will serve as a framework for in-depth in vitro and in vivo investigations, encouraging the development of FtsZ inhibitors as a new generation of antibacterial agents.

9.
Knee Surg Sports Traumatol Arthrosc ; 32(5): 1332-1343, 2024 May.
Article in English | MEDLINE | ID: mdl-38520187

ABSTRACT

PURPOSE: This study aimed to elucidate the characteristics of varus knee deformities in the Japanese population, the prevalence of various around-knee osteotomy procedures, and the influence of femoral and tibial bowing. METHODS: Varus knee deformity was defined as a weight-bearing line ratio of <50%. A total of 1010 varus knees were selected, based on exclusion criteria, from 1814 varus knees with weight-bearing full-length radiographs obtained at two facilities. Various parameters were measured, and around-knee osteotomy simulations based on the deformity centre were conducted using digital planning tools. Bowing of the femoral and tibial shafts was measured, with bowing defined as follows: ≤ -0.6° indicating lateral bowing and ≥ 0.6° indicating medial bowing. Statistical analysis was performed to investigate age-related correlations and their impact on surgical techniques. RESULTS: The study revealed that the proximal tibia was the centre of deformity in Japanese varus knees (42.8%), and high tibial osteotomy was frequently indicated (81.6%). Age demonstrated a mild correlation with femoral shaft bowing (r = -0.29), leading to an increase in the mechanical lateral distal femoral angle and a decrease in the hip-knee-ankle angle and weight-bearing line ratio (r = -0.29, 0.221, 0.219). Tibial shaft bowing was unaffected by age (r = -0.022). CONCLUSION: A significant proportion of Japanese individuals with varus knees exhibit a deformity centre located in the proximal tibia, making them suitable candidates for high tibial osteotomy. No age-related alterations were discerned in tibial morphology, indicating that the occurrence of constitutional varus knees is attributable to tibial deformities in the Japanese patient cohort. LEVEL OF EVIDENCE: Level IV.


Subjects
Knee Joint , Osteotomy , Tibia , Adult , Aged , Female , Humans , Male , Middle Aged , East Asian People , Femur/surgery , Femur/abnormalities , Femur/diagnostic imaging , Japan , Knee Joint/surgery , Knee Joint/diagnostic imaging , Knee Joint/abnormalities , Osteotomy/methods , Radiography , Tibia/surgery , Tibia/abnormalities , Tibia/diagnostic imaging , Weight-Bearing , Aged, 80 and over
10.
Multivariate Behav Res ; 59(3): 461-481, 2024.
Article in English | MEDLINE | ID: mdl-38247019

ABSTRACT

Network analysis has gained popularity as an approach to investigate psychological constructs. However, there are currently no guidelines for applied researchers when encountering missing values. In this simulation study, we compared the performance of a two-step EM algorithm with separated steps for missing handling and regularization, a combined direct EM algorithm, and pairwise deletion. We investigated conditions with varying network sizes, numbers of observations, missing data mechanisms, and percentages of missing values. These approaches are evaluated with regard to recovering population networks in terms of loss in the precision matrix, edge set identification and network statistics. The simulation showed adequate performance only in conditions with large samples (n≥500) or small networks (p = 10). Comparing the missing data approaches, the direct EM appears to be more sensitive and superior in nearly all chosen conditions. The two-step EM yields better results when the ratio of n/p is very large - being less sensitive but more specific. Pairwise deletion failed to converge across numerous conditions and yielded inferior results overall. Overall, direct EM is recommended in most cases, as it is able to mitigate the impact of missing data quite well, while modifications to two-step EM could improve its performance.
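
For concreteness, the pairwise-deletion baseline compared in the study can be sketched as follows: estimate a covariance matrix from pairwise-complete observations and feed it to a regularized (graphical lasso) network estimator. The data, missingness rate, and penalty value are arbitrary illustrations, and the EM-based alternatives the study prefers are not shown.

```python
import numpy as np
import pandas as pd
from sklearn.covariance import graphical_lasso

rng = np.random.default_rng(0)
n, p = 500, 10
cov = np.full((p, p), 0.3) + np.eye(p) * 0.7      # compound-symmetry population covariance
X = pd.DataFrame(rng.multivariate_normal(np.zeros(p), cov, size=n))
X = X.mask(rng.random(X.shape) < 0.10)            # 10% of values missing completely at random

emp_cov = X.cov().to_numpy()                      # pandas uses pairwise-complete observations
_, precision = graphical_lasso(emp_cov, alpha=0.1)
print(np.round(precision, 2))                     # regularized precision matrix (network edges)
```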


Subjects
Algorithms , Computer Simulation , Humans , Computer Simulation/statistics & numerical data , Data Interpretation, Statistical , Models, Statistical
11.
Multivariate Behav Res ; 59(2): 187-205, 2024.
Article in English | MEDLINE | ID: mdl-37524119

ABSTRACT

Propensity score analyses (PSA) of continuous treatments often operationalize the treatment as a multi-indicator composite, and its composite reliability is unreported. Latent variables or factor scores accounting for this unreliability are seldom used as alternatives to composites. This study examines the effects of the unreliability of indicators of a latent treatment in PSA using the generalized propensity score (GPS). A Monte Carlo simulation study was conducted varying composite reliability, continuous treatment representation, variability of factor loadings, sample size, and number of treatment indicators to assess whether Average Treatment Effect (ATE) estimates differed in their relative bias, Root Mean Squared Error, and coverage rates. Results indicate that low composite reliability leads to underestimation of the ATE of latent continuous treatments, while the number of treatment indicators and variability of factor loadings show little effect on ATE estimates, after controlling for overall composite reliability. The results also show that, in correctly specified GPS models, the effects of low composite reliability can be somewhat ameliorated by using factor scores that were estimated including covariates. An illustrative example is provided using survey data to estimate the effect of teacher adoption of a workbook related to a virtual learning environment in the classroom.
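
The direction of the main finding matches the classical measurement-error attenuation result, sketched below for a single error-prone composite; this is offered as intuition under simplifying assumptions, not as the paper's GPS derivation.

```latex
X = T + e,\qquad
\rho_{XX'} = \frac{\operatorname{Var}(T)}{\operatorname{Var}(T) + \operatorname{Var}(e)},\qquad
\operatorname{E}\!\left[\hat{\beta}_X\right] \approx \rho_{XX'}\,\beta_T
```

With composite reliability below 1, the estimated effect of the observed composite X is shrunk toward zero relative to the effect of the latent treatment T, which is why low reliability leads to underestimated ATEs.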


Subjects
Propensity Score , Reproducibility of Results , Computer Simulation , Bias , Monte Carlo Method
12.
Biom J ; 66(4): e2200334, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38747086

ABSTRACT

Many data sets exhibit a natural group structure due to contextual similarities or high correlations of variables, such as lipid markers that are interrelated based on biochemical principles. Knowledge of such groupings can be used through bi-level selection methods to identify relevant feature groups and highlight their predictive members. One of the best-known approaches of this kind combines the classical Least Absolute Shrinkage and Selection Operator (LASSO) with the Group LASSO, resulting in the Sparse Group LASSO (SGL). We propose the Sparse Group Penalty (SGP) framework, which allows for a flexible combination of different SGL-style shrinkage conditions. Analogous to the SGL, we investigated the combination of the Smoothly Clipped Absolute Deviation (SCAD), the Minimax Concave Penalty (MCP) and the Exponential Penalty (EP) with their group versions, resulting in the Sparse Group SCAD, the Sparse Group MCP, and the novel Sparse Group EP (SGE). Those shrinkage operators provide refined control of the effect of group formation on the selection process through a tuning parameter. In simulation studies, SGPs were compared with other bi-level selection methods (Group Bridge, composite MCP, and Group Exponential LASSO) for variable and group selection, evaluated with the Matthews correlation coefficient. We demonstrated the advantages of the new SGE in identifying parsimonious models, but also identified scenarios that highlight the limitations of the approach. The performance of the techniques was further investigated in a real-world use case for the selection of regulated lipids in a randomized clinical trial.
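
For reference, the Sparse Group LASSO penalty that the SGP framework generalizes has the standard form below (with group sizes p_g and mixing parameter α); the SGP variants replace the two convex components with SCAD, MCP, or EP analogues.

```latex
P_{\mathrm{SGL}}(\beta) \;=\; \lambda\left[(1-\alpha)\sum_{g=1}^{G}\sqrt{p_g}\,\bigl\lVert \beta_{(g)} \bigr\rVert_2 \;+\; \alpha\,\lVert \beta \rVert_1\right],
\qquad \alpha \in [0,1]
```

Setting α = 1 recovers the ordinary LASSO and α = 0 the Group LASSO; intermediate values yield bi-level selection of groups and of members within selected groups.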


Subjects
Biometry , Biometry/methods , Humans
13.
Biom J ; 66(1): e2200095, 2024 Jan.
Article in English | MEDLINE | ID: mdl-36642811

ABSTRACT

Statistical simulation studies are becoming increasingly popular to demonstrate the performance or superiority of new computational procedures and algorithms. Despite this status quo, previous surveys of the literature have shown that the reporting of statistical simulation studies often lacks relevant information and structure. The latter applies in particular to Bayesian simulation studies, and in this paper the Bayesian simulation study framework (BASIS) is presented as a step towards improving the situation. The BASIS framework provides a structured skeleton for planning, coding, executing, analyzing, and reporting Bayesian simulation studies in biometrical research and computational statistics. It encompasses various features of previous proposals and recommendations in the methodological literature and aims to promote neutral comparison studies in statistical research. Computational aspects covered in the BASIS include algorithmic choices, Markov-chain-Monte-Carlo convergence diagnostics, sensitivity analyses, and Monte Carlo standard error calculations for Bayesian simulation studies. Although the BASIS framework focuses primarily on methodological research, it also provides useful guidance for researchers who rely on the results of Bayesian simulation studies or analyses, as current state-of-the-art guidelines for Bayesian analyses are incorporated into the BASIS.
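
One of the computational aspects listed, Monte Carlo standard errors, amounts to quantifying simulation noise in a performance measure estimated as a mean across n_sim replications; the formula below is the usual one and is shown for orientation rather than quoted from the BASIS paper.

```latex
\widehat{\mathrm{MCSE}}\!\left(\bar{\hat{\theta}}\right)
= \sqrt{\frac{1}{n_{\mathrm{sim}}\,(n_{\mathrm{sim}}-1)}\sum_{i=1}^{n_{\mathrm{sim}}}\left(\hat{\theta}_i - \bar{\hat{\theta}}\right)^{2}}
```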


Subjects
Algorithms , Bayes Theorem , Computer Simulation , Markov Chains , Monte Carlo Method
14.
Biom J ; 66(6): e202300271, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39132909

ABSTRACT

Many clinical trials assess time-to-event endpoints. To describe the difference between groups in terms of time to event, we often employ hazard ratios. However, the hazard ratio is only informative in the case of proportional hazards (PHs) over time. There exist many other effect measures that do not require PHs. One of them is the average hazard ratio (AHR). Its core idea is to utilize a time-dependent weighting function that accounts for time variation. Though promoted in methodological research papers, the AHR is rarely used in practice. To facilitate its application, we present approaches for sample size calculation for an AHR test. We assess the reliability of the sample size calculation by extensive simulation studies covering various survival and censoring distributions with proportional as well as nonproportional hazards (N-PHs). The findings suggest that a simulation-based sample size calculation approach can be useful for designing clinical trials with N-PHs. Using the AHR can result in increased statistical power to detect differences between groups with more efficient sample sizes.


Subjects
Proportional Hazards Models , Sample Size , Humans , Clinical Trials as Topic , Biometry/methods
15.
J Environ Manage ; 356: 120692, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38547828

ABSTRACT

Accurate characterization of soil contaminant concentrations is often crucial for assessing risks to human and ecological health. However, fine-scale assessments of large tracts of land can be cost prohibitive due to the number of samples needed. One solution to this problem is to extrapolate sampling results from one area to another unsampled area. In the absence of a validated extrapolation methodology, regulatory agencies have employed policy-based techniques for large sites, but the likelihood of decision errors resulting from these extrapolations is largely unexplored. This study describes the results of a simulation study aimed at guiding environmental sampling for sites where extrapolation concepts are of interest. The objective of this study is to provide practical recommendations to regulatory agencies for extrapolating sampling results on large tracts of land while minimizing errors that are detrimental to human health. A variety of site investigation scenarios representative of environmental conditions and sampling schemes were tested using adaptive sampling when collecting discrete samples or applying incremental sampling methodology (ISM). These simulations address extrapolation uncertainty in cases where a Pilot Study might result in either false noncompliance or false compliance conclusions. A wide range of plausible scenarios were used that reflect the variety of heterogeneity seen at large sites. This simulation study demonstrates that ISM can be reliably applied in a Pilot Study for purposes of extrapolating the outcome to a large area site because it decreases the likelihood of false non-compliance errors while also providing reliable estimates of true compliance across unsampled areas. The results demonstrate how errors depend on the magnitude of the 95% upper confidence limit for the mean concentration (95UCL) relative to the applicable action level, and that error rates are highest when the 95UCL is within 10%-40% of the action level. The false compliance rate can be reduced to less than 5% when 30% or more of the site is characterized with ISM. False compliance error rates using ISM are insensitive to the fraction of the decision units (DUs) that are characterized with three replicates (with a minimum of 10 percent), so long as 95UCLs are calculated for the DUs with one replicate using the average coefficient of variation from the three replicate DUs.
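
As a minimal illustration of the decision rule discussed above, the sketch below computes a one-sided Student's-t 95% upper confidence limit on the mean from three hypothetical ISM replicate results for one decision unit and compares it with an assumed action level; the Student's-t UCL is one common choice, and all numbers are invented.

```python
import numpy as np
from scipy import stats

replicates = np.array([42.0, 55.0, 48.0])   # hypothetical ISM replicate concentrations, mg/kg
action_level = 60.0                         # assumed regulatory action level, mg/kg

n = replicates.size
mean = replicates.mean()
se = replicates.std(ddof=1) / np.sqrt(n)

# One-sided 95% upper confidence limit on the mean concentration
ucl95 = mean + stats.t.ppf(0.95, df=n - 1) * se
print(f"95UCL = {ucl95:.1f} mg/kg vs action level {action_level}; "
      f"DU judged compliant: {ucl95 < action_level}")
```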


Subjects
Uncertainty , Humans , Pilot Projects
16.
Linacre Q ; 91(3): 315-328, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39104463

ABSTRACT

Fertility awareness-based methods (FABMs), also known as natural family planning (NFP), enable couples to identify the days of the menstrual cycle when intercourse may result in pregnancy ("fertile days"), and to avoid intercourse on fertile days if they wish to avoid pregnancy. Thus, these methods are fully dependent on user behavior for effectiveness to avoid pregnancy. For couples and clinicians considering the use of an FABM, one important metric to consider is the highest expected effectiveness (lowest possible pregnancy rate) during the correct use of the method to avoid pregnancy. To assess this, most studies of FABMs have reported a method-related pregnancy rate (a cumulative proportion), which is calculated based on all cycles (or months) in the study. In contrast, the correct use to avoid pregnancy rate (also a cumulative proportion) has the denominator of cycles with the correct use of the FABM to avoid pregnancy. The relationship between these measures has not been evaluated quantitatively. We conducted a series of simulations demonstrating that the method-related pregnancy rate is artificially decreased in direct proportion to the proportion of cycles with intermediate use (any use other than correct use to avoid or targeted use to conceive), which also increases the total pregnancy rate. Thus, as the total pregnancy rate rises (related to intermediate use), the method-related pregnancy rate falls artificially while the correct use pregnancy rate remains constant. For practical application, we propose the core elements needed to assess correct use cycles in FABM studies. Summary: Fertility awareness-based methods (FABMs) can be used by couples to avoid pregnancy, by avoiding intercourse on fertile days. Users want to know what the highest effectiveness (lowest pregnancy rate) would be if they use an FABM correctly and consistently to avoid pregnancy. In this simulation study, we compare two different measures: (1) the method-related pregnancy rate; and (2) the correct use pregnancy rate. We show that the method-related pregnancy rate is biased too low if some users in the study are not using the method consistently to avoid pregnancy, while the correct use pregnancy rate obtains an accurate estimate. Short Summary: In FABM studies, the method-related pregnancy rate is biased too low, but the correct use pregnancy rate is unbiased.
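
A worked toy example of the dilution described above (numbers invented): suppose 1000 observed cycles, of which 700 are correct-use cycles with 7 pregnancies attributed to the method and 300 are intermediate-use cycles.

```latex
\text{method-related rate} = \frac{7}{1000} = 0.7\%,
\qquad
\text{correct-use rate} = \frac{7}{700} = 1.0\%,
\qquad
0.7\% = 1.0\% \times \frac{700}{1000}
```

The method-related rate is scaled down by exactly the share of cycles that are not correct-use cycles, which is the "direct proportion" effect the simulations demonstrate, while the correct-use rate is unaffected by how many intermediate-use cycles are in the study.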

17.
Emerg Infect Dis ; 29(11): 2292-2297, 2023 11.
Article in English | MEDLINE | ID: mdl-37877559

ABSTRACT

Earlier global detection of novel SARS-CoV-2 variants gives governments more time to respond. However, few countries can implement timely national surveillance, resulting in gaps in monitoring. The United Kingdom implemented large-scale community and hospital surveillance, but experience suggests it might be faster to detect new variants through testing England arrivals for surveillance. We developed simulations of emergence and importation of novel variants with a range of infection hospitalization rates to the United Kingdom. We compared time taken to detect the variant though testing arrivals at England borders, hospital admissions, and the general community. We found that sampling 10%-50% of arrivals at England borders could confer a speed advantage of 3.5-6 weeks over existing community surveillance and 1.5-5 weeks (depending on infection hospitalization rates) over hospital testing. Directing limited global capacity for surveillance to highly connected ports could speed up global detection of novel SARS-CoV-2 variants.
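
A stripped-down sketch of the kind of comparison described: a variant grows exponentially as a share of infections, and the expected day of first sequence-confirmed detection is compared between a larger arrivals-testing stream and a smaller community stream. The growth rate, prevalence, test volumes, and 120-day horizon are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
days = np.arange(120)
infection_prev = 0.01                                  # assumed share of people currently infected
variant_share = np.minimum(1.0, 1e-5 * np.exp(0.10 * days))
variant_prev = infection_prev * variant_share          # probability a sampled person yields the variant

def mean_first_detection_day(daily_samples, n_sims=2000):
    first_days = []
    for _ in range(n_sims):
        hits = rng.binomial(daily_samples, variant_prev)
        if (hits > 0).any():
            first_days.append(days[np.argmax(hits > 0)])
        else:
            first_days.append(days[-1])                # not detected within the horizon
    return np.mean(first_days)

print("arrivals testing, 5000 samples/day :", mean_first_detection_day(5000))
print("community testing, 500 samples/day :", mean_first_detection_day(500))
```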


Subjects
COVID-19 , Humans , COVID-19/diagnosis , SARS-CoV-2/genetics , England/epidemiology , United Kingdom/epidemiology
18.
Stat Med ; 42(3): 331-352, 2023 02 10.
Article in English | MEDLINE | ID: mdl-36546512

ABSTRACT

This review condenses the knowledge on variable selection methods implemented in R and appropriate for datasets with grouped features. The focus is on regularized regressions identified through a systematic review of the literature, following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. A total of 14 methods are discussed, most of which use penalty terms to perform group variable selection. Depending on how the methods account for the group structure, they can be classified into knowledge-driven and data-driven approaches. The first category encompasses group-level and bi-level selection methods, while two-step approaches and collinearity-tolerant methods constitute the second. The identified methods are briefly explained and their performance compared in a simulation study. This comparison demonstrated that group-level selection methods, such as the group minimax concave penalty, are superior to other methods in selecting relevant variable groups but are inferior in identifying important individual variables in scenarios where not all variables in the groups are predictive. This can be better achieved by bi-level selection methods such as group bridge. Two-step and collinearity-tolerant approaches such as elastic net and ordered homogeneity pursuit least absolute shrinkage and selection operator are inferior to knowledge-driven methods but provide results without requiring prior knowledge. Possible applications in proteomics are considered, leading to suggestions on which method to use depending on existing prior knowledge and the research question.


Subjects
Computer Simulation , Humans
19.
BMC Med Res Methodol ; 23(1): 255, 2023 10 31.
Article in English | MEDLINE | ID: mdl-37907863

ABSTRACT

BACKGROUND: Looking for treatment-by-subset interaction on a right-censored outcome based on observational data using propensity-score (PS) modeling is of interest. However, there are still issues regarding its implementation, notably when the subsets are very imbalanced in terms of prognostic features and treatment prevalence. METHODS: We conducted a simulation study to compare two main PS estimation strategies, performed either once on the whole sample ("across subset") or in each subset separately ("within subsets"). Several PS models and estimands are also investigated. We then illustrated those approaches on the motivating example, namely, evaluating the benefits of facial nerve resection in patients with parotid cancer in contact with the nerve, according to pretreatment facial palsy. RESULTS: Our simulation study demonstrated that both strategies provide close results in terms of bias and variance of the estimated treatment effect, with a slight advantage for the "across subsets" strategy in very small samples, provided that interaction terms between the subset variable and other covariates influencing the choice of treatment are incorporated. PS matching without replacement resulted in biased estimates and should be avoided in the case of very imbalanced subsets. CONCLUSIONS: When assessing heterogeneity in the treatment effect in small samples, the "across subsets" strategy of PS estimation is preferred. Then, either a PS matching with replacement or a weighting method must be used to estimate the average treatment effect in the treated or in the overlap population. In contrast, PS matching without replacement should be avoided in this setting.
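
A minimal sketch of the recommended "across subsets" strategy with interaction terms, followed by subset-specific ATT weighting; the simulated variables, coefficients, and single covariate are placeholders rather than anything from the parotid-cancer application.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({"subset": rng.binomial(1, 0.2, n),     # e.g., pretreatment facial palsy yes/no
                   "x": rng.normal(size=n)})              # a prognostic covariate
lin = -1.0 + 1.5 * df.subset + 0.8 * df.x + 0.5 * df.subset * df.x
df["treat"] = rng.binomial(1, 1 / (1 + np.exp(-lin)))
df["y"] = 1.0 * df.treat + df.x + rng.normal(size=n)      # true treatment effect = 1

# "Across subsets": one PS model on the whole sample, with subset-by-covariate interactions
ps_model = smf.logit("treat ~ x * subset", data=df).fit(disp=0)
df["ps"] = ps_model.predict(df)

# ATT weights (treated = 1, controls = ps/(1-ps)), then a weighted outcome model per subset
df["w"] = np.where(df.treat == 1, 1.0, df.ps / (1 - df.ps))
for s in (0, 1):
    sub = df[df.subset == s]
    fit = smf.wls("y ~ treat", data=sub, weights=sub.w).fit()
    print(f"subset {s}: ATT estimate = {fit.params['treat']:.2f}")
```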


Subjects
Propensity Score , Humans , Monte Carlo Method , Computer Simulation , Bias
20.
BMC Med Res Methodol ; 23(1): 19, 2023 01 17.
Article in English | MEDLINE | ID: mdl-36650428

ABSTRACT

BACKGROUND: Advantages of meta-analysis depend on the assumptions underlying the statistical procedures used being met. One of the main assumptions that is usually taken for granted is the normality of the distribution underlying the population of true effects in a random-effects model, even though the available evidence suggests that this assumption is often not met. This paper examines how 21 frequentist and 24 Bayesian methods, including several novel procedures, for computing a point estimate of the heterogeneity parameter (τ²) perform when the distribution of random effects departs from normality, compared to normal scenarios, in meta-analysis of standardized mean differences. METHODS: A Monte Carlo simulation was carried out using the R software, generating data for meta-analyses using the standardized mean difference. The simulation factors were the number and average sample size of primary studies, the amount of heterogeneity, as well as the shape of the random-effects distribution. The point estimators were compared in terms of absolute bias and variance, although results regarding mean squared error were also discussed. RESULTS: Although not all the estimators were affected to the same extent, there was a general tendency to obtain lower and more variable τ² estimates as the random-effects distribution departed from normality. However, the ranking of the estimators in terms of their absolute bias and variance did not change: those estimators that obtained lower bias also showed greater variance. Finally, a large number and sample size of primary studies acted as a bias-protective factor against a lack of normality for several procedures, whereas only a high number of studies was a variance-protective factor for most of the estimators analyzed. CONCLUSIONS: Although the estimation and inference of the combined effect have proven to be sufficiently robust, our work highlights the role that deviation from normality may play in meta-analytic conclusions, based on the simulation results and the numerical examples included in this work. To encourage caution in the interpretation of results obtained from random-effects models, the tau2() R function is made available for obtaining the range of τ² values computed from the 45 estimators analyzed in this work, as well as for assessing how the pooled effect and its confidence and prediction intervals vary according to the estimator chosen.
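
For orientation, one of the most widely used frequentist point estimators of the heterogeneity parameter is the DerSimonian-Laird moment estimator, reproduced below with inverse-variance weights; the abstract does not list the 45 estimators, so this is shown only as a representative example.

```latex
Q = \sum_{i=1}^{k} w_i\,(y_i - \bar{y}_w)^2,\qquad
\bar{y}_w = \frac{\sum_i w_i y_i}{\sum_i w_i},\qquad
w_i = \frac{1}{\hat{\sigma}_i^{\,2}},\qquad
\hat{\tau}^2_{\mathrm{DL}} = \max\!\left(0,\;
\frac{Q - (k-1)}{\sum_i w_i - \sum_i w_i^{2}/\sum_i w_i}\right)
```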


Subjects
Software , Humans , Bayes Theorem , Monte Carlo Method , Computer Simulation , Bias