Results 1-20 of 1,888

1.
Brief Bioinform; 24(1), 2023 Jan 19.
Article in English | MEDLINE | ID: mdl-36585786

ABSTRACT

Quantifying an individual's risk for common diseases is an important goal of precision health. The polygenic risk score (PRS), which aggregates multiple risk alleles of candidate diseases, has emerged as a standard approach for identifying high-risk individuals. Although several studies have benchmarked PRS calculation tools and assessed their potential to guide future clinical applications, some issues remain to be investigated, such as the lack of (i) simulated data with various genetic effects, (ii) evaluation of machine learning models and (iii) evaluation on multi-ancestry studies. In this study, we systematically validated and compared 13 statistical methods, 5 machine learning models and 2 ensemble models using simulated data with additive and genetic interaction models, 22 common diseases with internal training sets, 4 common diseases with external summary statistics and 3 common diseases for trans-ancestry studies in UK Biobank. The statistical methods performed better on simulated data from additive models, whereas the machine learning models had an edge on data that include genetic interactions. Ensemble models, which integrate various statistical methods, were generally the best choice. LDpred2 outperformed the other standalone tools, whereas PRS-CS, lassosum and DBSLMM showed comparable performance. We also found that disease heritability strongly affected the predictive performance of all methods; both the number and the effect sizes of risk SNPs are important; and sample size strongly influences the performance of all methods. For the trans-ancestry studies, we found that the performance of most methods became worse when the training and testing sets were drawn from different populations.
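As a concrete illustration of the quantity these tools estimate, below is a minimal sketch of a polygenic risk score computed as a weighted sum of risk-allele dosages. The genotypes, effect weights and the top-5% cutoff are simulated placeholders, not output from LDpred2, PRS-CS or any of the benchmarked tools.

```python
# Minimal polygenic risk score (PRS) sketch: PRS_i = sum_j beta_j * dosage_ij.
# Genotypes and per-SNP weights are simulated placeholders, not output of any
# of the benchmarked tools.
import numpy as np

rng = np.random.default_rng(0)
n_individuals, n_snps = 1000, 500

maf = rng.uniform(0.05, 0.5, n_snps)                          # minor allele frequencies
dosages = rng.binomial(2, maf, size=(n_individuals, n_snps))  # 0/1/2 risk-allele counts
beta = rng.normal(0, 0.05, n_snps)                            # assumed per-allele effect sizes

prs = dosages @ beta                                          # raw score per individual
prs_z = (prs - prs.mean()) / prs.std()                        # standardize for ranking

high_risk = prs_z > np.quantile(prs_z, 0.95)                  # e.g. flag the top 5%
print(f"{high_risk.sum()} individuals flagged as high risk")
```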


Subject(s)
Machine Learning, Multifactorial Inheritance, Humans, Risk Factors, Genomics, Genetic Predisposition to Disease, Genome-Wide Association Study/methods
2.
Stroke; 55(3): 779-784, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38235584

ABSTRACT

Rigorous evidence generation with randomized controlled trials has lagged for aneurysmal subarachnoid hemorrhage (SAH) compared with other forms of acute stroke. Besides its lower incidence compared with other stroke subtypes, the presentation and outcome of patients with SAH also differ. This must be considered and adjusted for in designing pivotal randomized controlled trials of patients with SAH. Here, we show the effect of the unique expected distribution of the SAH severity at presentation (World Federation of Neurological Surgeons grade) on the outcome most used in pivotal stroke randomized controlled trials (modified Rankin Scale) and, consequently, on the sample size. Furthermore, we discuss the advantages and disadvantages of different options to analyze the outcome and control the expected distribution of the World Federation of Neurological Surgeons grades in addition to showing their effects on the sample size. Finally, we offer methods that investigators can adapt to more precisely understand the effect of common modified Rankin Scale analysis methods and trial eligibility pertaining to the World Federation of Neurological Surgeons grade in designing their large-scale SAH randomized controlled trials.
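To make the link between the WFNS case mix, the mRS outcome and sample size concrete, here is a hedged sketch that folds an assumed WFNS grade distribution into a conventional two-proportion sample size calculation for a dichotomized mRS endpoint. The grade mix, grade-specific favorable-outcome rates and treatment effect are illustrative assumptions, not values from the article, which discusses richer ways of analyzing the full mRS.

```python
# Illustrative only: fold an assumed WFNS grade mix into a standard two-proportion
# sample size calculation for a dichotomized mRS endpoint (favorable vs unfavorable).
# All probabilities below are made-up placeholders, not values from the article.
from scipy.stats import norm

wfns_mix = {1: 0.35, 2: 0.25, 3: 0.10, 4: 0.15, 5: 0.15}       # assumed proportion per grade
p_good_control = {1: 0.75, 2: 0.65, 3: 0.50, 4: 0.35, 5: 0.20} # assumed P(mRS 0-2) by grade
abs_benefit = 0.10                                             # assumed absolute treatment effect

# Marginal favorable-outcome probabilities averaged over the expected case mix
p0 = sum(wfns_mix[g] * p_good_control[g] for g in wfns_mix)
p1 = min(p0 + abs_benefit, 1.0)

alpha, power = 0.05, 0.80
za, zb = norm.ppf(1 - alpha / 2), norm.ppf(power)
n_per_arm = (za + zb) ** 2 * (p0 * (1 - p0) + p1 * (1 - p1)) / (p1 - p0) ** 2
print(f"Marginal control rate {p0:.2f}; approx. {n_per_arm:.0f} patients per arm")
```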


Subject(s)
Stroke, Subarachnoid Hemorrhage, Humans, Subarachnoid Hemorrhage/therapy, Subarachnoid Hemorrhage/surgery, Treatment Outcome, Neurosurgical Procedures, Neurosurgeons, Stroke/surgery
3.
Stroke; 55(8): 1962-1972, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38920051

ABSTRACT

BACKGROUND: A recent review of randomization methods used in large multicenter clinical trials within the National Institutes of Health Stroke Trials Network identified preservation of treatment allocation randomness, achievement of the desired group size balance between treatment groups, achievement of baseline covariate balance, and ease of implementation in practice as critical properties required for optimal randomization designs. Common-scale minimal sufficient balance (CS-MSB) adaptive randomization effectively controls for covariate imbalance between treatment groups while preserving allocation randomness but does not balance group sizes. This study extends the CS-MSB adaptive randomization method to achieve both group size and covariate balance while preserving allocation randomness in hyperacute stroke trials. METHODS: A full factorial in silico simulation study evaluated the performance of the proposed new CSSize-MSB adaptive randomization method in achieving group size balance, covariate balance, and allocation randomness compared with the original CS-MSB method. Data from 4 existing hyperacute stroke trials were used to investigate the performance of CSSize-MSB for a range of sample sizes and covariate numbers and types. A discrete-event simulation model created with AnyLogic was used to dynamically visualize the decision logic of the CSSize-MSB randomization process for communication with clinicians. RESULTS: The proposed new CSSize-MSB algorithm uniformly outperformed the CS-MSB algorithm in controlling for group size imbalance while maintaining comparable levels of covariate balance and allocation randomness in hyperacute stroke trials. This improvement was consistent across a distribution of simulated trials with varying levels of imbalance but was increasingly pronounced for trials with extreme cases of imbalance. The results were consistent across a range of trial data sets of different sizes and covariate numbers and types. CONCLUSIONS: The proposed adaptive CSSize-MSB algorithm successfully controls for group size imbalance in hyperacute stroke trials under various settings, and its logic can be readily explained to clinicians using dynamic visualization.
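The CSSize-MSB algorithm itself is not given in the abstract; as rough orientation, the sketch below implements a generic minimization-style allocator (in the spirit of Pocock-Simon) that biases assignment toward the arm reducing a combined covariate-plus-group-size imbalance score. It only illustrates the general idea of balancing both quantities while keeping allocation partly random; it is not the authors' method.

```python
# Generic covariate-plus-size minimization sketch (Pocock-Simon flavour).
# This is NOT the CSSize-MSB algorithm from the article, only an illustration of
# biasing allocation toward the arm that reduces a combined imbalance score.
import random
from collections import defaultdict

class MinimizationAllocator:
    def __init__(self, factors, w_size=1.0, p_biased=0.8, seed=1):
        self.factors = factors            # e.g. ["site", "severity"]
        self.w_size = w_size              # weight given to group size imbalance
        self.p_biased = p_biased          # probability of following the preferred arm
        self.rng = random.Random(seed)
        self.counts = {a: {f: defaultdict(int) for f in factors} for a in "AB"}
        self.sizes = {"A": 0, "B": 0}

    def _score(self, arm, patient):
        """Combined imbalance score if this patient were assigned to `arm`."""
        score = self.w_size * abs(
            self.sizes["A"] + (arm == "A") - self.sizes["B"] - (arm == "B"))
        for f in self.factors:
            a = self.counts["A"][f][patient[f]] + (arm == "A")
            b = self.counts["B"][f][patient[f]] + (arm == "B")
            score += abs(a - b)
        return score

    def allocate(self, patient):
        sa, sb = self._score("A", patient), self._score("B", patient)
        if sa == sb:
            arm = self.rng.choice("AB")
        else:
            preferred = "A" if sa < sb else "B"
            other = "B" if preferred == "A" else "A"
            arm = preferred if self.rng.random() < self.p_biased else other
        self.sizes[arm] += 1
        for f in self.factors:
            self.counts[arm][f][patient[f]] += 1
        return arm

alloc = MinimizationAllocator(["site", "severity"])
print([alloc.allocate({"site": i % 3, "severity": i % 2}) for i in range(12)])
```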


Subject(s)
Stroke, Humans, Sample Size, Randomized Controlled Trials as Topic/methods, Computer Simulation, Random Allocation, Research Design
4.
Neuroimage; 292: 120604, 2024 Apr 15.
Article in English | MEDLINE | ID: mdl-38604537

ABSTRACT

Despite its widespread use, resting-state functional magnetic resonance imaging (rsfMRI) has been criticized for low test-retest reliability. To improve reliability, researchers have recommended using extended scanning durations, increased sample size, and advanced brain connectivity techniques. However, longer scanning runs and larger sample sizes may come with practical challenges and burdens, especially in rare populations. Here we tested if an advanced brain connectivity technique, dynamic causal modeling (DCM), can improve reliability of fMRI effective connectivity (EC) metrics to acceptable levels without extremely long run durations or extremely large samples. Specifically, we employed DCM for EC analysis on rsfMRI data from the Human Connectome Project. To avoid bias, we assessed four distinct DCMs and gradually increased sample sizes in a randomized manner across ten permutations. We employed pseudo true positive and pseudo false positive rates to assess the efficacy of shorter run durations (3.6, 7.2, 10.8, 14.4 min) in replicating the outcomes of the longest scanning duration (28.8 min) when the sample size was fixed at the largest (n = 160 subjects). Similarly, we assessed the efficacy of smaller sample sizes (n = 10, 20, …, 150 subjects) in replicating the outcomes of the largest sample (n = 160 subjects) when the scanning duration was fixed at the longest (28.8 min). Our results revealed that the pseudo false positive rate was below 0.05 for all the analyses. After the scanning duration reached 10.8 min, which yielded a pseudo true positive rate of 92%, further extensions in run time showed no improvements in pseudo true positive rate. Expanding the sample size led to enhanced pseudo true positive rate outcomes, with a plateau at n = 70 subjects for the targeted top one-half of the largest ECs in the reference sample, regardless of whether the longest run duration (28.8 min) or the viable run duration (10.8 min) was employed. Encouragingly, smaller sample sizes exhibited pseudo true positive rates of approximately 80% for n = 20, and 90% for n = 40 subjects. These data suggest that advanced DCM analysis may be a viable option to attain reliable metrics of EC when larger sample sizes or run times are not feasible.
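To make the evaluation metric concrete, here is a small sketch of how pseudo true and false positive rates can be computed by comparing the connections declared present in a reduced-data analysis against those from the reference analysis treated as ground truth. The 0.95 posterior-probability threshold and the placeholder matrices are assumptions for illustration.

```python
# Sketch of pseudo TPR / FPR: compare connections declared "present" in a
# reduced analysis against a reference analysis treated as ground truth.
# The 0.95 posterior-probability threshold is an assumed declaration rule.
import numpy as np

def declared(posterior_prob, threshold=0.95):
    """Boolean matrix of connections declared present."""
    return posterior_prob >= threshold

def pseudo_rates(reference_prob, reduced_prob, threshold=0.95):
    ref, red = declared(reference_prob, threshold), declared(reduced_prob, threshold)
    ptpr = (ref & red).sum() / max(ref.sum(), 1)        # recovered reference effects
    pfpr = (~ref & red).sum() / max((~ref).sum(), 1)    # spurious extra effects
    return ptpr, pfpr

rng = np.random.default_rng(0)
reference = rng.uniform(size=(8, 8))                    # placeholder posterior probabilities
reduced = np.clip(reference + rng.normal(0, 0.1, (8, 8)), 0, 1)
print(pseudo_rates(reference, reduced))
```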


Subject(s)
Brain, Connectome, Magnetic Resonance Imaging, Humans, Magnetic Resonance Imaging/methods, Magnetic Resonance Imaging/standards, Sample Size, Connectome/methods, Connectome/standards, Reproducibility of Results, Brain/diagnostic imaging, Brain/physiology, Adult, Female, Male, Rest/physiology, Time Factors
5.
Am J Epidemiol; 2024 Jun 24.
Article in English | MEDLINE | ID: mdl-38918039

ABSTRACT

There is a dearth of safety data on maternal outcomes after perinatal medication exposure. Data-mining for unexpected adverse event occurrence in existing datasets is a potentially useful approach. One method, the Poisson tree-based scan statistic (TBSS), assumes that the expected outcome counts, based on incidence of outcomes in the control group, are estimated without error. This assumption may be difficult to satisfy with a small control group. Our simulation study evaluated the effect of imprecise incidence proportions from the control group on TBSS' ability to identify maternal outcomes in pregnancy research. We simulated base case analyses with "true" expected incidence proportions and compared these to imprecise incidence proportions derived from sparse control samples. We varied parameters impacting Type I error and statistical power (exposure group size, outcome's incidence proportion, and effect size). We found that imprecise incidence proportions generated by a small control group resulted in inaccurate alerting, inflation of Type I error, and removal of very rare outcomes for TBSS analysis due to "zero" background counts. Ideally, the control size should be at least several times larger than the exposure size to limit the number of false positive alerts and retain statistical power for true alerts.
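A hedged, highly simplified illustration of the core phenomenon (a single outcome and a plain Poisson comparison rather than the tree-based scan statistic): when the expected count is derived from a small control group, alerting under the null becomes inflated. All rates and group sizes are invented.

```python
# Simplified illustration (single outcome, plain Poisson test, NOT the tree-based
# scan statistic): Type I error inflation when the expected count comes from a
# small control group.  All rates and sizes are illustrative assumptions.
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(42)
p_true = 0.01            # true incidence of the outcome (no exposure effect)
n_exposed = 5000
n_sims = 5000

for n_control in (500, 5000, 50000):
    alerts = 0
    for _ in range(n_sims):
        p_hat = rng.binomial(n_control, p_true) / n_control   # noisy background estimate
        expected = n_exposed * p_hat
        observed = rng.binomial(n_exposed, p_true)
        # One-sided test of "more events than expected"; alert if p < 0.05.
        if expected > 0 and poisson.sf(observed - 1, expected) < 0.05:
            alerts += 1
    print(f"control n={n_control:6d}: alert rate under the null = {alerts / n_sims:.3f}")
```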

6.
Oncologist; 2024 Jun 27.
Article in English | MEDLINE | ID: mdl-38934301

ABSTRACT

BACKGROUND: Clinical studies are often limited by resources available, which results in constraints on sample size. We use simulated data to illustrate study implications when the sample size is too small. METHODS AND RESULTS: Using 2 theoretical populations each with N = 1000, we randomly sample 10 from each population and conduct a statistical comparison, to help make a conclusion about whether the 2 populations are different. This exercise is repeated for a total of 4 studies: 2 concluded that the 2 populations are statistically significantly different, while 2 showed no statistically significant difference. CONCLUSIONS: Our simulated examples demonstrate that sample sizes play important roles in clinical research. The results and conclusions, in terms of estimates of means, medians, Pearson correlations, chi-square test, and P values, are unreliable with small samples.
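A minimal re-creation of the kind of exercise described: repeatedly drawing n = 10 per group from two fixed populations and watching the two-sample test verdict flip between replications. The population parameters are arbitrary placeholders.

```python
# Minimal version of the exercise: with n = 10 per group, the verdict of a
# two-sample test flips from one random draw to the next.  Population
# parameters are arbitrary placeholders.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(7)
pop_a = rng.normal(50, 10, 1000)     # theoretical population A (N = 1000)
pop_b = rng.normal(55, 10, 1000)     # theoretical population B (N = 1000)

for study in range(1, 5):
    sample_a = rng.choice(pop_a, 10, replace=False)
    sample_b = rng.choice(pop_b, 10, replace=False)
    p = ttest_ind(sample_a, sample_b).pvalue
    print(f"study {study}: p = {p:.3f} -> "
          f"{'significant' if p < 0.05 else 'not significant'}")
```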

7.
Biostatistics; 24(4): 1000-1016, 2023 Oct 18.
Article in English | MEDLINE | ID: mdl-35993875

ABSTRACT

Basket trials are increasingly used for the simultaneous evaluation of a new treatment in various patient subgroups under one overarching protocol. We propose a Bayesian approach to sample size determination in basket trials that permit borrowing of information between commensurate subsets. Specifically, we consider a randomized basket trial design where patients are randomly assigned to the new treatment or control within each trial subset ("subtrial" for short). Closed-form sample size formulae are derived to ensure that each subtrial has a specified chance of correctly deciding whether the new treatment is superior to or not better than the control by some clinically relevant difference. Given prespecified levels of pairwise (in)commensurability, the subtrial sample sizes are solved simultaneously. The proposed Bayesian approach resembles the frequentist formulation of the problem in yielding comparable sample sizes for circumstances of no borrowing. When borrowing is enabled between commensurate subtrials, a considerably smaller trial sample size is required compared to the widely implemented approach of no borrowing. We illustrate the use of our sample size formulae with two examples based on real basket trials. A comprehensive simulation study further shows that the proposed methodology can maintain the true positive and false positive rates at desired levels.
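The abstract notes that, with no borrowing, the Bayesian formulae yield sample sizes comparable to the frequentist formulation. As a reference point, the sketch below sizes each subtrial independently with the standard frequentist two-arm formula for a difference in means; the effect sizes, standard deviation, alpha and power are illustrative inputs, not values from the article.

```python
# Frequentist "no borrowing" reference point: each subtrial is sized on its own
# with the standard two-arm formula for a difference in means.  Effect sizes,
# SD, alpha and power are illustrative inputs, not values from the article.
from math import ceil
from scipy.stats import norm

def n_per_arm(delta, sigma, alpha=0.05, power=0.80):
    za, zb = norm.ppf(1 - alpha / 2), norm.ppf(power)
    return ceil(2 * ((za + zb) * sigma / delta) ** 2)

subtrials = {"basket 1": 0.5, "basket 2": 0.4, "basket 3": 0.6}  # assumed effects (sigma = 1)
for name, delta in subtrials.items():
    print(name, n_per_arm(delta, sigma=1.0))
```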


Subject(s)
Research Design, Humans, Sample Size, Bayes Theorem, Computer Simulation
8.
Brief Bioinform; 23(1), 2022 Jan 17.
Article in English | MEDLINE | ID: mdl-34472591

ABSTRACT

Missing values are common in high-throughput mass spectrometry data. Two strategies are available to address missing values: (i) eliminate or impute the missing values and apply statistical methods that require complete data and (ii) use statistical methods that specifically account for missing values without imputation (imputation-free methods). This study reviews the effect of sample size and percentage of missing values on statistical inference for multiple methods under these two strategies. With increasing missingness, the ability of imputation and imputation-free methods to identify differentially and non-differentially regulated compounds in a two-group comparison study declined. Random forest and k-nearest neighbor imputation combined with a Wilcoxon test performed well in statistical testing for up to 50% missingness with little bias in estimating the effect size. Quantile regression imputation accompanied with a Wilcoxon test also had good statistical testing outcomes but substantially distorted the difference in means between groups. None of the imputation-free methods performed consistently better for statistical testing than imputation methods.
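A hedged sketch of one of the better-performing combinations described, k-nearest-neighbor imputation followed by a Wilcoxon rank-sum (Mann-Whitney) test per compound. The data are simulated and the missingness is injected completely at random, which is a simplification of real mass-spectrometry missingness patterns.

```python
# Sketch of one strategy the review describes: kNN imputation followed by a
# Wilcoxon rank-sum test per compound.  Data and the MCAR missingness mechanism
# are simulated simplifications.
import numpy as np
from sklearn.impute import KNNImputer
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(3)
n_per_group, n_compounds = 20, 200
group_a = rng.lognormal(2.0, 0.5, (n_per_group, n_compounds))
group_b = rng.lognormal(2.0, 0.5, (n_per_group, n_compounds))
group_b[:, :20] *= 1.8                      # first 20 compounds truly regulated

data = np.vstack([group_a, group_b])
mask = rng.uniform(size=data.shape) < 0.30  # 30% values missing completely at random
data[mask] = np.nan

imputed = KNNImputer(n_neighbors=5).fit_transform(data)
pvals = np.array([
    mannwhitneyu(imputed[:n_per_group, j], imputed[n_per_group:, j]).pvalue
    for j in range(n_compounds)
])
print("compounds flagged at p < 0.05:", int((pvals < 0.05).sum()))
```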


Subject(s)
Research Design, Bias, Cluster Analysis, Mass Spectrometry/methods
9.
Brief Bioinform; 23(1), 2022 Jan 17.
Article in English | MEDLINE | ID: mdl-34545927

ABSTRACT

Quantitative trait locus (QTL) analyses of multiomic molecular traits, such as gene transcription (eQTL), DNA methylation (mQTL) and histone modification (haQTL), have been widely used to infer the functional effects of genome variants. However, QTL discovery is largely restricted by limited study sample sizes, which demand a higher minor allele frequency threshold and therefore leave many molecular trait-variant associations missing. This problem is especially prominent in single-cell molecular QTL studies because of sample availability and cost, so a method that recovers these missing associations would substantially enhance discoveries in current small-sample molecular QTL studies. In this study, we present an efficient computational framework called xQTLImp to impute missing molecular QTL associations. In local-region imputation, xQTLImp uses a multivariate Gaussian model to impute missing associations by leveraging the known association statistics of nearby variants and the surrounding linkage disequilibrium (LD). In genome-wide imputation, novel procedures are implemented to improve efficiency, including dynamically constructing a reusable LD buffer, adopting multiple heuristic strategies and parallel computing. Experiments on various multiomic bulk and single-cell sequencing-based QTL datasets have demonstrated the high imputation accuracy and novel QTL discovery ability of xQTLImp. Finally, a C++ software package is freely available at https://github.com/stormlovetao/QTLIMP.
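The local-region step is described as a multivariate Gaussian model over known association statistics and local LD. The sketch below shows the standard conditional-Gaussian formula for imputing an untyped variant's z-score from typed variants, which conveys the general idea; the ridge term and toy LD matrix are our assumptions, and this is not the xQTLImp code.

```python
# Standard conditional-Gaussian z-score imputation from local LD (the general
# idea behind summary-statistic imputation; not the xQTLImp implementation).
# Under z ~ N(0, R):  E[z_untyped | z_typed] = R_ut R_tt^{-1} z_typed.
# The ridge term is an assumption added for numerical stability.
import numpy as np

def impute_z(z_typed, R_tt, R_ut, ridge=0.1):
    """Impute z-scores of untyped variants from typed ones and LD blocks."""
    R_tt_reg = R_tt + ridge * np.eye(R_tt.shape[0])
    weights = R_ut @ np.linalg.inv(R_tt_reg)
    z_imputed = weights @ z_typed
    info = np.einsum("ij,ij->i", weights, R_ut)   # rough imputation-quality measure
    return z_imputed, info

# Toy LD matrix for 4 variants; variants 0-2 typed, variant 3 untyped.
R = np.array([[1.0, 0.8, 0.3, 0.7],
              [0.8, 1.0, 0.4, 0.6],
              [0.3, 0.4, 1.0, 0.2],
              [0.7, 0.6, 0.2, 1.0]])
z_typed = np.array([3.1, 2.7, 0.4])
z_hat, info = impute_z(z_typed, R[:3, :3], R[3:, :3])
print(z_hat, info)
```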


Subject(s)
Genome-Wide Association Study, Quantitative Trait Loci, Genome-Wide Association Study/methods, Genotype, Linkage Disequilibrium, Phenotype, Polymorphism, Single Nucleotide, Sample Size
10.
Syst Biol; 72(5): 1136-1153, 2023 Nov 1.
Article in English | MEDLINE | ID: mdl-37458991

ABSTRACT

Divergence time estimation is crucial to provide temporal signals for dating biologically important events from species divergence to viral transmissions in space and time. With the advent of high-throughput sequencing, recent Bayesian phylogenetic studies have analyzed hundreds to thousands of sequences. Such large-scale analyses challenge divergence time reconstruction by requiring inference on highly correlated internal node heights that often become computationally infeasible. To overcome this limitation, we explore a ratio transformation that maps the original $N-1$ internal node heights into a space of one height parameter and $N-2$ ratio parameters. To make the analyses scalable, we develop a collection of linear-time algorithms to compute the gradient and Jacobian-associated terms of the log-likelihood with respect to these ratios. We then apply Hamiltonian Monte Carlo sampling with the ratio transform in a Bayesian framework to learn the divergence times in 4 pathogenic viruses (West Nile virus, rabies virus, Lassa virus, and Ebola virus) and the coralline red algae. Our method both resolves a mixing issue in the West Nile virus example and improves inference efficiency by at least 5-fold for the Lassa and rabies virus examples as well as for the algae example. Our method now also makes it computationally feasible to incorporate mixed-effects molecular clock models for the Ebola virus example, confirms the findings from the original study, and reveals clearer multimodal distributions of the divergence times of some clades of interest.


Subject(s)
Algorithms, Phylogeny, Bayes Theorem, Time Factors, Monte Carlo Method
11.
Biometrics; 80(1), 2024 Jan 29.
Article in English | MEDLINE | ID: mdl-38386359

ABSTRACT

In clinical studies of chronic diseases, the effectiveness of an intervention is often assessed using "high cost" outcomes that require long-term patient follow-up and/or are invasive to obtain. While much progress has been made in the development of statistical methods to identify surrogate markers, that is, measurements that could replace such costly outcomes, they are generally not applicable to studies with a small sample size. These methods either rely on nonparametric smoothing which requires a relatively large sample size or rely on strict model assumptions that are unlikely to hold in practice and empirically difficult to verify with a small sample size. In this paper, we develop a novel rank-based nonparametric approach to evaluate a surrogate marker in a small sample size setting. The method developed in this paper is motivated by a small study of children with nonalcoholic fatty liver disease (NAFLD), a diagnosis for a range of liver conditions in individuals without significant history of alcohol intake. Specifically, we examine whether change in alanine aminotransferase (ALT; measured in blood) is a surrogate marker for change in NAFLD activity score (obtained by biopsy) in a trial, which compared Vitamin E ($n=50$) versus placebo ($n=46$) among children with NAFLD.


Subject(s)
Non-alcoholic Fatty Liver Disease, Child, Humans, Non-alcoholic Fatty Liver Disease/diagnosis, Biomarkers, Biopsy, Sample Size
12.
Stat Med; 2024 Jul 9.
Article in English | MEDLINE | ID: mdl-38980954

ABSTRACT

In clinical settings with no commonly accepted standard-of-care, multiple treatment regimens are potentially useful, but some treatments may not be appropriate for some patients. A personalized randomized controlled trial (PRACTical) design has been proposed for this setting. For a network of treatments, each patient is randomized only among treatments which are appropriate for them. The aim is to produce treatment rankings that can inform clinical decisions about treatment choices for individual patients. Here we propose methods for determining sample size in a PRACTical design, since standard power-based methods are not applicable. We derive a sample size by evaluating information gained from trials of varying sizes. For a binary outcome, we quantify how many adverse outcomes would be prevented by choosing the top-ranked treatment for each patient based on trial results rather than choosing a random treatment from the appropriate personalized randomization list. In simulations, we evaluate three performance measures: mean reduction in adverse outcomes using sample information, proportion of simulated patients for whom the top-ranked treatment performed as well or almost as well as the best appropriate treatment, and proportion of simulated trials in which the top-ranked treatment performed better than a randomly chosen treatment. We apply the methods to a trial evaluating eight different combination antibiotic regimens for neonatal sepsis (NeoSep1), in which a PRACTical design addresses varying patterns of antibiotic choice based on disease characteristics and resistance. Our proposed approach produces results that are more relevant to complex decision making by clinicians and policy makers.
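A hedged simulation sketch of one of the performance measures described: the reduction in adverse-outcome risk from giving each simulated patient the top-ranked treatment on their personalized eligibility list (ranked by event rates observed in a simulated trial) rather than a random treatment from that list. The event probabilities, trial size and eligibility patterns are invented for illustration and are unrelated to NeoSep1.

```python
# Sketch of the "adverse outcomes prevented" idea: rank treatments by the event
# rate observed in a simulated trial, then compare choosing each patient's
# top-ranked eligible treatment with choosing a random eligible treatment.
# True event probabilities and eligibility patterns are invented placeholders.
import numpy as np

rng = np.random.default_rng(11)
true_p = np.array([0.30, 0.26, 0.24, 0.22, 0.20, 0.18, 0.16, 0.14])  # 8 regimens
n_per_arm = 60

# Simulated trial: observed adverse-event rate per regimen
observed = rng.binomial(n_per_arm, true_p) / n_per_arm
ranking = np.argsort(observed)                 # best (lowest observed rate) first

n_patients = 10_000
prevented = 0.0
for _ in range(n_patients):
    eligible = rng.choice(len(true_p), size=rng.integers(2, 6), replace=False)
    top = min(eligible, key=lambda t: np.where(ranking == t)[0][0])  # best-ranked eligible
    random_choice_p = true_p[eligible].mean()   # expected risk under random selection
    prevented += random_choice_p - true_p[top]
print(f"mean reduction in adverse-outcome risk per patient: {prevented / n_patients:.3f}")
```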

13.
Stat Med; 43(16): 3062-3072, 2024 Jul 20.
Article in English | MEDLINE | ID: mdl-38803150

ABSTRACT

This article is concerned with sample size determination methodology for prediction models. We propose to combine the individual calculations via learning-type curves. We suggest two distinct ways of doing so: a deterministic skeleton of a learning curve and a Gaussian process centered upon its deterministic counterpart. We employ several learning algorithms for modeling the primary endpoint and distinct measures for trial efficacy. We find that the performance may vary with the sample size, but borrowing information across sample sizes universally improves the performance of such calculations. The Gaussian process-based learning curve appears more robust and statistically efficient, while computational efficiency is comparable. We suggest that anchoring against historical evidence when extrapolating sample sizes should be adopted when such data are available. The methods are illustrated on binary and survival endpoints.
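A hedged sketch of the deterministic-skeleton idea: fit an inverse power-law learning curve to performance estimates at a few pilot sample sizes and extrapolate to the sample size reaching a target. The functional form, pilot values and target are common conventions assumed here, not the article's exact model, and the Gaussian-process variant is not shown.

```python
# Deterministic learning-curve sketch: fit an inverse power law
# AUC(n) ~ a - b * n**(-c) to pilot estimates and extrapolate to the n that
# reaches a target AUC.  Functional form and pilot values are assumptions; the
# article's Gaussian-process variant is not shown.
import numpy as np
from scipy.optimize import curve_fit

def learning_curve(n, a, b, c):
    return a - b * n ** (-c)

pilot_n = np.array([50.0, 100.0, 200.0, 400.0, 800.0])
pilot_auc = np.array([0.62, 0.66, 0.70, 0.73, 0.75])   # illustrative pilot estimates

params, _ = curve_fit(learning_curve, pilot_n, pilot_auc,
                      p0=[0.8, 1.0, 0.5], maxfev=10000)

target = 0.76
candidate_n = np.arange(100, 20001, 100, dtype=float)
reach = candidate_n[learning_curve(candidate_n, *params) >= target]
print(f"fitted asymptote a = {params[0]:.3f}")
print("smallest n reaching target:", int(reach[0]) if reach.size else "not reached by 20000")
```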


Subject(s)
Algorithms, Models, Statistical, Humans, Sample Size, Learning Curve, Normal Distribution, Computer Simulation, Survival Analysis
14.
Stat Med; 43(15): 2944-2956, 2024 Jul 10.
Article in English | MEDLINE | ID: mdl-38747112

ABSTRACT

Sample size formulas have been proposed for comparing two sensitivities (or specificities) in the presence of verification bias under a paired design. However, the existing formulas involve lengthy calculations of derivatives and are too complicated to implement. In this paper, we propose alternative sample size formulas for each of three existing tests: two Wald tests and one weighted McNemar's test. The proposed formulas are more intuitive and simpler to implement than their existing counterparts. Furthermore, by comparing the sample sizes calculated from the three tests, we show that they require similar sample sizes, even though the weighted McNemar's test uses only the data from discordant pairs whereas the two Wald tests also use the additional data from concordant pairs.
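For orientation, here is the classical unadjusted paired-proportion sample size based on McNemar's test, which uses only the discordant-pair probabilities. This is not the verification-bias-adjusted or weighted formula proposed in the article, and the inputs are illustrative.

```python
# Classical (unadjusted) paired-proportion sample size for McNemar's test,
# given discordant probabilities p10 and p01 (Connor-type formula).  For
# orientation only; NOT the verification-bias-adjusted formula of the article.
from math import ceil, sqrt
from scipy.stats import norm

def mcnemar_pairs(p10, p01, alpha=0.05, power=0.80):
    psi, d = p10 + p01, p10 - p01
    za, zb = norm.ppf(1 - alpha / 2), norm.ppf(power)
    return ceil((za * sqrt(psi) + zb * sqrt(psi - d ** 2)) ** 2 / d ** 2)

# e.g. test A positive / test B negative in 12% of pairs, the reverse in 5%
print(mcnemar_pairs(p10=0.12, p01=0.05))
```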


Subject(s)
Sensitivity and Specificity, Sample Size, Humans, Models, Statistical, Bias, Computer Simulation
15.
Stat Med; 43(18): 3383-3402, 2024 Aug 15.
Article in English | MEDLINE | ID: mdl-38845095

ABSTRACT

The US FDA's Project Optimus initiative that emphasizes dose optimization prior to marketing approval represents a pivotal shift in oncology drug development. It has a ripple effect for rethinking what changes may be made to conventional pivotal trial designs to incorporate a dose optimization component. Aligned with this initiative, we propose a novel seamless phase II/III design with dose optimization (SDDO framework). The proposed design starts with dose optimization in a randomized setting, leading to an interim analysis focused on optimal dose selection, trial continuation decisions, and sample size re-estimation (SSR). Based on the decision at interim analysis, patient enrollment continues for both the selected dose arm and control arm, and the significance of treatment effects will be determined at final analysis. The SDDO framework offers increased flexibility and cost-efficiency through sample size adjustment, while stringently controlling the Type I error. This proposed design also facilitates both accelerated approval (AA) and regular approval in a "one-trial" approach. Extensive simulation studies confirm that our design reliably identifies the optimal dosage and makes preferable decisions with a reduced sample size while retaining statistical power.


Subject(s)
Antineoplastic Agents, Clinical Trials, Phase II as Topic, Clinical Trials, Phase III as Topic, Drug Development, Humans, Clinical Trials, Phase II as Topic/methods, Antineoplastic Agents/administration & dosage, Antineoplastic Agents/therapeutic use, Drug Development/methods, Sample Size, Computer Simulation, Dose-Response Relationship, Drug, Research Design, United States, United States Food and Drug Administration, Drug Approval, Randomized Controlled Trials as Topic, Neoplasms/drug therapy
16.
Stat Med; 43(10): 1973-1992, 2024 May 10.
Article in English | MEDLINE | ID: mdl-38634314

ABSTRACT

The expected value of the standard power function of a test, computed with respect to a design prior distribution, is often used to evaluate the probability of success of an experiment. However, looking only at the expected value might be reductive. Instead, the whole probability distribution of the power function induced by the design prior can be exploited. In this article we consider one-sided testing for the scale parameter of exponential families and we derive general unifying expressions for cumulative distribution and density functions of the random power. Sample size determination criteria based on alternative summaries of these functions are discussed. The study sheds light on the relevance of the choice of the design prior in order to construct a successful experiment.
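A hedged Monte Carlo sketch of the object under study: draw the effect from a design prior, evaluate the fixed-sample one-sided power at each draw, and summarize the resulting distribution (whose mean is the usual probability of success). The normal-means setting and the prior are illustrative choices, not the article's exponential-family derivations.

```python
# Monte Carlo sketch of the distribution of the random power: draw the effect
# delta from a design prior and evaluate the fixed-sample one-sided power at
# each draw.  Normal-means setting and prior are illustrative choices only.
import numpy as np
from scipy.stats import norm

n_per_arm, sigma, alpha = 50, 1.0, 0.025
z_alpha = norm.ppf(1 - alpha)

def power(delta):
    return norm.cdf(delta * np.sqrt(n_per_arm / 2) / sigma - z_alpha)

rng = np.random.default_rng(5)
delta_draws = rng.normal(0.4, 0.15, 100_000)     # design prior on the effect
random_power = power(delta_draws)

print(f"expected power (probability of success): {random_power.mean():.3f}")
print(f"P(power > 0.8): {(random_power > 0.8).mean():.3f}")
print("5th / 50th / 95th percentiles:",
      np.round(np.percentile(random_power, [5, 50, 95]), 3))
```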


Subject(s)
Bayes Theorem, Humans, Probability, Sample Size
17.
Stat Med; 43(2): 358-378, 2024 Jan 30.
Article in English | MEDLINE | ID: mdl-38009329

ABSTRACT

Individually randomized group treatment (IRGT) trials, in which the clustering of outcomes is induced by group-based treatment delivery, are increasingly popular in public health research. IRGT trials frequently incorporate longitudinal measurements, and proper sample size calculations should account for correlation structures reflecting both the treatment-induced clustering and the repeated outcome measurements. Given the relatively sparse literature on designing longitudinal IRGT trials, we propose sample size procedures for continuous and binary outcomes based on the generalized estimating equations approach, employing block exchangeable correlation structures with different correlation parameters for the treatment arm and for the control arm, and surveying five marginal mean models with different assumptions about the time effect: no-time constant treatment effect, linear-time constant treatment effect, categorical-time constant treatment effect, linear time by treatment interaction, and categorical time by treatment interaction. Closed-form sample size formulas are derived for continuous outcomes, which depend on the eigenvalues of the correlation matrices; detailed numerical sample size procedures are proposed for binary outcomes. Through simulations, we demonstrate that the empirical power agrees well with the predicted power, for as few as eight groups formed in the treatment arm, when data are analyzed using matrix-adjusted estimating equations for the correlation parameters with a bias-corrected sandwich variance estimator.
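Since the closed-form continuous-outcome formulas are said to depend on the eigenvalues of the block exchangeable correlation matrices, the sketch below builds a simplified two-parameter block exchangeable matrix for one treatment-arm group and extracts its eigenvalues. The structure (one within-subject and one between-subject correlation) and the parameter values are simplifying assumptions, not the article's full specification.

```python
# Build a simplified block exchangeable correlation matrix for one group in the
# treatment arm of a longitudinal IRGT trial (m subjects, t repeated measures):
# rho_w between measurements on the same subject, rho_b between measurements on
# different subjects in the same group.  Structure and values are simplifying
# assumptions, not the article's full specification.
import numpy as np

def block_exchangeable(m, t, rho_w, rho_b):
    same_subject = np.kron(np.eye(m), np.full((t, t), rho_w))
    cross_subject = np.kron(1 - np.eye(m), np.full((t, t), rho_b))
    R = same_subject + cross_subject
    np.fill_diagonal(R, 1.0)
    return R

R = block_exchangeable(m=8, t=4, rho_w=0.5, rho_b=0.1)
eigenvalues = np.linalg.eigvalsh(R)   # the quantities the closed-form formulas depend on
print(np.round(np.unique(np.round(eigenvalues, 6)), 3))
```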


Subject(s)
Models, Statistical, Research Design, Humans, Sample Size, Bias, Cluster Analysis, Computer Simulation
18.
Stat Med; 43(11): 2203-2215, 2024 May 20.
Article in English | MEDLINE | ID: mdl-38545849

ABSTRACT

This study gives a systematic account of sample size adaptation designs (SSADs) and provides direct proof, from a different perspective, of the efficiency advantage of general SSADs over group sequential designs (GSDs). For this purpose, a class of sample size mapping functions to define SSADs is introduced. Under the two-stage adaptive clinical trial setting, theorems are developed to describe the properties of SSADs. Sufficient conditions are derived and used to prove analytically that SSADs based on the weighted combination test can be uniformly more efficient than GSDs over a range of likely values of the true treatment difference δ. As shown in various scenarios, given a GSD, a fully adaptive SSAD can be obtained that has statistical power similar to that of the GSD but a smaller average sample size for all δ in the range. The associated sample size savings can be substantial. A practical design example and suggestions on the steps to find efficient SSADs are also provided.
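The weighted combination test referred to is, in its standard two-stage inverse-normal form, Z = w1*Z1 + w2*Z2 with prespecified w1^2 + w2^2 = 1, which preserves the Type I error even when the stage-two sample size is chosen using stage-one data. A minimal sketch with illustrative weights and data follows; it is not the article's specific SSAD construction.

```python
# Minimal two-stage weighted inverse-normal combination test: with prespecified
# weights w1^2 + w2^2 = 1, Z = w1*Z1 + w2*Z2 is N(0,1) under the null even if
# the stage-2 sample size is re-chosen after stage 1, which is what lets a SSAD
# adapt without inflating Type I error.  Weights, rule and data are illustrative.
import numpy as np
from scipy.stats import norm

def stage_z(x_trt, x_ctl):
    """One-sided z statistic for a difference in means (variance assumed 1)."""
    n = len(x_trt)
    return (x_trt.mean() - x_ctl.mean()) / np.sqrt(2 / n)

rng = np.random.default_rng(8)
delta = 0.3
n1 = 50
z1 = stage_z(rng.normal(delta, 1, n1), rng.normal(0, 1, n1))

# Adaptive choice of second-stage size based on stage-1 data (illustrative rule)
n2 = 50 if z1 > 1.0 else 100
z2 = stage_z(rng.normal(delta, 1, n2), rng.normal(0, 1, n2))

w1 = np.sqrt(0.5)                 # prespecified weights, independent of the data
w2 = np.sqrt(1 - w1 ** 2)
z_comb = w1 * z1 + w2 * z2
print(f"combined Z = {z_comb:.2f}, reject at one-sided 2.5%: {z_comb > norm.ppf(0.975)}")
```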


Subject(s)
Research Design, Sample Size, Humans, Models, Statistical, Adaptive Clinical Trials as Topic/statistics & numerical data, Adaptive Clinical Trials as Topic/methods, Computer Simulation, Clinical Trials as Topic/methods
19.
Biol Lett; 20(5): 20240002, 2024 May.
Article in English | MEDLINE | ID: mdl-38689558

ABSTRACT

Group living may entail local resource competition (LRC) which can be reduced if the birth sex ratio (BSR) is biased towards members of the dispersing sex who leave the group and no longer compete locally with kin. In primates, the predicted relationship between dispersal and BSR is generally supported although data for female dispersal species are rare and primarily available from captivity. Here, we present BSR data for Phayre's leaf monkeys (Trachypithecus phayrei crepusculus) at the Phu Khieo Wildlife Sanctuary, Thailand (N = 104). In this population, nearly all natal females dispersed, while natal males stayed or formed new groups nearby. The slower reproductive rate in larger groups suggests that food can be a limiting resource. In accordance with LRC, significantly more females than males were born (BSR 0.404 males/all births) thus reducing future competition with kin. This bias was similar in 2-year-olds (no sex-differential mortality). It became stronger in adults, supporting our impression of particularly fierce competition among males. To better evaluate the importance of BSR, more studies should report sex ratios throughout the life span, and more data for female dispersal primates need to be collected, ideally for multiple groups of different sizes and for several years.


Subject(s)
Competitive Behavior, Sex Ratio, Animals, Female, Male, Thailand, Competitive Behavior/physiology, Animal Distribution, Reproduction/physiology
20.
Value Health; 2024 Jul 6.
Article in English | MEDLINE | ID: mdl-38977192

ABSTRACT

OBJECTIVE: Probabilistic sensitivity analysis (PSA) is conducted to account for the uncertainty in the costs and effects of the decision options under consideration. PSA involves obtaining a large sample of input parameter values (N) to estimate the expected cost and effect of each alternative in the presence of parameter uncertainty. When the analysis uses stochastic models (e.g., individual-level models), the model is further replicated P times for each sampled parameter set. We study how N and P should be determined. METHODS: We show that PSA can be structured such that P can be an arbitrary number (say, P = 1). To determine N, we derive a formula based on Chebyshev's inequality such that the error in estimating the incremental cost-effectiveness ratio (ICER) of alternatives (or, equivalently, the willingness-to-pay value at which the optimal decision option changes) is within a desired level of accuracy. We describe two methods to confirm, visually and quantitatively, that the N informed by this approach results in ICER estimates within the specified level of accuracy. RESULTS: When N is arbitrarily selected, the estimated ICERs can differ substantially from the true ICER (even as P increases), which could lead to misleading conclusions. Using a simple resource allocation model, we demonstrate that the proposed approach can minimize the potential for this error. CONCLUSIONS: The number of parameter samples in probabilistic cost-effectiveness analyses (CEAs) should not be arbitrarily selected. We describe three methods to ensure that enough parameter samples are used in probabilistic CEAs.
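A hedged sketch of the type of bound described: Chebyshev's inequality gives a conservative N guaranteeing that the Monte Carlo error of an estimated mean (for example, the mean incremental cost or effect entering the ICER) stays within a tolerance with high probability. The variances and tolerances are placeholders, and the article's exact ICER-based derivation is not reproduced.

```python
# Chebyshev-style sketch: choose the number of PSA parameter samples N so that
# the Monte Carlo error of an estimated mean stays within +/- eps with
# probability at least 1 - gamma:  N >= sigma^2 / (gamma * eps^2).
# Variances and tolerances are placeholders, not the article's ICER derivation.
from math import ceil

def chebyshev_n(sigma2, eps, gamma=0.05):
    """Conservative N from P(|mean_N - mu| >= eps) <= sigma2 / (N * eps^2) <= gamma."""
    return ceil(sigma2 / (gamma * eps ** 2))

# e.g. incremental cost with variance 4e6 ($2000 SD), tolerated error +/- $200
print("N for incremental cost :", chebyshev_n(sigma2=4e6, eps=200))
# e.g. incremental QALYs with variance 0.04 (SD 0.2), tolerated error +/- 0.01
print("N for incremental QALYs:", chebyshev_n(sigma2=0.04, eps=0.01))
```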
