Results 1 - 20 of 68
1.
BMC Med Res Methodol ; 24(1): 32, 2024 Feb 10.
Article in English | MEDLINE | ID: mdl-38341552

ABSTRACT

BACKGROUND: When studying the association between treatment and a clinical outcome, a parametric multivariable model of the conditional outcome expectation is often used to adjust for covariates. The treatment coefficient of the outcome model targets a conditional treatment effect. Model-based standardization is typically applied to average the model predictions over the target covariate distribution, and generate a covariate-adjusted estimate of the marginal treatment effect. METHODS: The standard approach to model-based standardization involves maximum-likelihood estimation and use of the non-parametric bootstrap. We introduce a novel, general-purpose, model-based standardization method based on multiple imputation that is easily applicable when the outcome model is a generalized linear model. We term our proposed approach multiple imputation marginalization (MIM). MIM consists of two main stages: the generation of synthetic datasets and their analysis. MIM accommodates a Bayesian statistical framework, which naturally allows for the principled propagation of uncertainty, integrates the analysis into a probabilistic framework, and allows for the incorporation of prior evidence. RESULTS: We conduct a simulation study to benchmark the finite-sample performance of MIM in conjunction with a parametric outcome model. The simulations provide proof-of-principle in scenarios with binary outcomes, continuous-valued covariates, a logistic outcome model and the marginal log odds ratio as the target effect measure. When parametric modeling assumptions hold, MIM yields unbiased estimation in the target covariate distribution, valid coverage rates, and precision and efficiency similar to those of the standard approach to model-based standardization.
CONCLUSION: We demonstrate that multiple imputation can be used to marginalize over a target covariate distribution, providing appropriate inference with a correctly specified parametric outcome model and offering statistical performance comparable to that of the standard approach to model-based standardization.


Subject(s)
Models, Statistical; Humans; Bayes Theorem; Linear Models; Computer Simulation; Logistic Models; Reference Standards
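The model-based standardization ("marginalization") step that MIM generalizes can be sketched with plain g-computation: predict each subject's risk under treatment and under control from a logistic outcome model, average both over the target covariate distribution, and contrast the averaged risks on the odds-ratio scale. The coefficients and covariate distribution below are hypothetical, and this sketch uses direct standardization rather than the paper's multiple-imputation machinery.

```python
import numpy as np

def marginal_log_or(beta0, beta_trt, beta_x, x_target):
    """Standardize a logistic outcome model over a target covariate
    distribution (g-computation) and return the marginal log odds ratio."""
    # Predicted risk for every covariate value, under treatment and control
    p1 = 1 / (1 + np.exp(-(beta0 + beta_trt + beta_x * x_target)))
    p0 = 1 / (1 + np.exp(-(beta0 + beta_x * x_target)))
    # Average the predicted risks over the target covariate distribution
    m1, m0 = p1.mean(), p0.mean()
    # Contrast the marginal risks on the log odds-ratio scale
    return np.log(m1 / (1 - m1)) - np.log(m0 / (1 - m0))

rng = np.random.default_rng(0)
x = rng.normal(size=10_000)  # hypothetical target covariate distribution
mlor = marginal_log_or(-1.0, 0.8, 1.2, x)
print(mlor)
```

Because the odds ratio is non-collapsible, the marginal log odds ratio comes out smaller in magnitude than the conditional coefficient (0.8 here), which is exactly why the paper stresses distinguishing the two estimands.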
2.
Clin Trials ; : 17407745241251812, 2024 May 21.
Article in English | MEDLINE | ID: mdl-38771021

ABSTRACT

BACKGROUND/AIMS: Multi-arm, multi-stage trials frequently include a standard-care arm to which all interventions are compared. This may increase costs and hinder comparisons among the experimental arms. Furthermore, the appropriate standard care may not be evident, particularly when there is large variation in standard practice. Thus, we aimed to develop an adaptive clinical trial that drops ineffective interventions following an interim analysis before selecting the best intervention at the final stage, without requiring a standard-care arm. METHODS: We used Bayesian methods to develop a multi-arm, two-stage adaptive trial and evaluated two different methods for ranking interventions, the probability that each intervention is optimal (Pbest) and the surface under the cumulative ranking curve (SUCRA), at both the interim and final analyses. The proposed trial design determines the maximum sample size for each intervention using the Average Length Criterion. The interim analysis takes place at approximately half the pre-specified maximum sample size and aims to drop interventions for futility if either Pbest or the SUCRA is below a pre-specified threshold. The final analysis compares all remaining interventions at the maximum sample size to conclude superiority based on either Pbest or the SUCRA. The two ranking methods were compared across 12 scenarios that vary the number of interventions and the assumed differences between the interventions. The thresholds for futility and superiority were chosen to control type 1 error, and the predictive power and expected sample size were then evaluated across scenarios. We then designed a trial comparing three interventions that aim to reduce anxiety for children undergoing laceration repair in the emergency department, known as the Anxiolysis for Laceration Repair in Children Trial (ALICE). RESULTS: As the number of interventions increases, the SUCRA results in higher predictive power than Pbest.
Using Pbest results in a lower expected sample size when there is an effective intervention. Using the Average Length Criterion, the ALICE trial has a maximum sample size of 100 patients per arm. This sample size results in an 86% and 85% predictive power using Pbest and the SUCRA, respectively. Thus, we chose Pbest as the ranking method for the ALICE trial. CONCLUSION: Bayesian ranking methods can be used in multi-arm, multi-stage trials with no clear control intervention. When more interventions are included, the SUCRA results in higher power than Pbest. Future work should consider whether other ranking methods may also be relevant for clinical trial design.
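Both ranking quantities compared in the abstract can be computed directly from posterior draws of each arm's effect: Pbest is the fraction of draws in which an arm ranks first, and SUCRA is equivalent to a rescaling of the arm's mean rank. The posterior draws below are simulated from hypothetical arm effects purely to illustrate the calculation.

```python
import numpy as np

def rank_interventions(draws):
    """draws: (n_samples, n_arms) posterior draws of each arm's effect,
    larger = better. Returns (Pbest, SUCRA) for each arm."""
    n, k = draws.shape
    # ranks[i, j] = rank of arm j in posterior draw i (1 = best)
    order = np.argsort(-draws, axis=1)
    ranks = np.empty_like(order)
    ranks[np.arange(n)[:, None], order] = np.arange(1, k + 1)
    # Pbest: probability each arm is ranked first
    p_best = (ranks == 1).mean(axis=0)
    # SUCRA equals (k - mean rank) / (k - 1): 1 = always best, 0 = always worst
    sucra = (k - ranks.mean(axis=0)) / (k - 1)
    return p_best, sucra

rng = np.random.default_rng(1)
# Hypothetical posterior: three arms with increasing mean effect
draws = rng.normal(loc=[0.0, 0.5, 1.0], scale=1.0, size=(20_000, 3))
p_best, sucra = rank_interventions(draws)
print(p_best, sucra)
```

With well-separated arms the two rankings agree; the abstract's point is that their power properties diverge as the number of arms grows.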

3.
Clin Trials ; : 17407745241247334, 2024 May 16.
Article in English | MEDLINE | ID: mdl-38752434

ABSTRACT

BACKGROUND: Clinical trials are increasingly using Bayesian methods for their design and analysis. Inference in Bayesian trials typically uses simulation-based approaches such as Markov Chain Monte Carlo methods. Markov Chain Monte Carlo has a high computational cost and can be complex to implement. The Integrated Nested Laplace Approximations algorithm provides approximate Bayesian inference without the need for computationally complex simulations, making it more efficient than Markov Chain Monte Carlo. The practical properties of Integrated Nested Laplace Approximations compared to Markov Chain Monte Carlo have not been considered for clinical trials. Using data from a published clinical trial, we aim to investigate whether Integrated Nested Laplace Approximations is a feasible and accurate alternative to Markov Chain Monte Carlo and provide practical guidance for trialists interested in Bayesian trial design. METHODS: Data from an international Bayesian multi-platform adaptive trial that compared therapeutic-dose anticoagulation with heparin to usual care in non-critically ill patients hospitalized for COVID-19 were used to fit Bayesian hierarchical generalized mixed models. Integrated Nested Laplace Approximations was compared to two Markov Chain Monte Carlo algorithms, implemented in the software JAGS and Stan, using packages available in the statistical software R. Seven outcomes were analysed: organ support-free days (an ordinal outcome), five binary outcomes related to survival and length of hospital stay, and a time-to-event outcome. The posterior distributions for the treatment and sex effects and the variances of the hierarchical effects of age, site and time period were obtained. We summarized these posteriors by calculating the means, standard deviations and 95% equitailed credible intervals and presenting the results graphically. The computation time for each algorithm was recorded.
RESULTS: The average overlap of the 95% credible intervals for the treatment and sex effects estimated using Integrated Nested Laplace Approximations was 96% and 97.6%, respectively, compared with Stan. The graphical posterior densities for these effects overlapped for all three algorithms. The posterior means for the variances of the hierarchical effects of age, site and time period estimated using Integrated Nested Laplace Approximations were within the 95% credible intervals estimated using Markov Chain Monte Carlo, but the average overlap of the credible intervals was lower: 77%, 85.6% and 91.3%, respectively, for Integrated Nested Laplace Approximations compared to Stan. Integrated Nested Laplace Approximations and Stan were easily implemented in clear, well-established packages in R, while JAGS required direct specification of the model. Integrated Nested Laplace Approximations was between 85 and 269 times faster than Stan and between 26 and 1852 times faster than JAGS. CONCLUSION: Integrated Nested Laplace Approximations could reduce the computational complexity of Bayesian analysis in clinical trials, as it is easy to implement in R, substantially faster than the Markov Chain Monte Carlo methods implemented in JAGS and Stan, and provides near-identical approximations to the posterior distributions for the treatment effect. Integrated Nested Laplace Approximations was less accurate when estimating the posterior distribution of the variance of hierarchical effects, particularly for the proportional odds model, and future work should determine whether the Integrated Nested Laplace Approximations algorithm can be adjusted to improve this estimation.
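The "overlap of the 95% credible intervals" used to compare the algorithms can be computed from the two intervals' endpoints. The abstract does not give the exact definition, so the sketch below uses one plausible convention (length of the intersection divided by the mean interval length), with made-up endpoints.

```python
def interval_overlap(a, b):
    """Fractional overlap of two intervals (lo, hi): length of their
    intersection divided by the mean of the two lengths. One plausible
    definition; the paper's exact metric may differ."""
    lo = max(a[0], b[0])
    hi = min(a[1], b[1])
    intersection = max(0.0, hi - lo)
    mean_length = ((a[1] - a[0]) + (b[1] - b[0])) / 2
    return intersection / mean_length

# e.g. two near-identical posteriors from different algorithms
ov = interval_overlap((-0.30, 0.10), (-0.29, 0.12))
print(ov)
```

Near-identical posteriors give overlaps close to 1, matching the ~96-98% values reported for the treatment and sex effects.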

4.
Pediatr Emerg Care ; 40(2): 88-97, 2024 Feb 01.
Article in English | MEDLINE | ID: mdl-37487548

ABSTRACT

OBJECTIVES: To systematically appraise the literature on the relative effectiveness of pharmacologic procedural distress management agents for children undergoing laceration repair. METHODS: Six databases were searched in August 2021, and the search was updated in January 2023. We included completed randomized or quasi-randomized trials involving (a) children younger than 15 years undergoing laceration repair in the emergency department; (b) randomization to at least one anxiolytic, sedative, and/or analgesic agent versus any comparator agent or placebo; and (c) efficacy of procedural distress management measured on any scale. Secondary outcomes were pain during the procedure, administration acceptance, sedation duration, additional sedation, length of stay, and stakeholder satisfaction. The Cochrane Collaboration's risk-of-bias tool was used to assess individual studies. Ranges and proportions summarized results where applicable. RESULTS: Among 21 trials (n = 1621 participants), the most commonly studied anxiolytic agents were midazolam, ketamine, and N2O. Oral midazolam, oral ketamine, and N2O were found to reduce procedural distress more effectively than their comparators in 4, 3, and 2 studies, respectively. Eight studies comparing routes, doses, or volumes of administration of the same agent yielded indeterminate results. Meta-analysis was not performed because of heterogeneity in comparators, routes, and outcome measures across studies. CONCLUSIONS: Based on procedural distress reduction, this study favors oral midazolam and oral ketamine. However, this finding should be interpreted with caution because of heterogeneous comparators across studies and minor conflicting results. An optimal agent for procedural distress management cannot be recommended based on the limited evidence.
Future research should seek to identify the minimal, essential measures of patient distress during pharmacologic anxiolysis and/or sedation in laceration repair to guide future trials and reviews.


Subject(s)
Ketamine; Lacerations; Child; Humans; Midazolam/therapeutic use; Ketamine/therapeutic use; Lacerations/surgery; Hypnotics and Sedatives/therapeutic use; Analgesics/therapeutic use
5.
Value Health ; 26(10): 1461-1473, 2023 10.
Article in English | MEDLINE | ID: mdl-37414276

ABSTRACT

OBJECTIVES: Although the ISPOR Value of Information (VOI) Task Force's reports outline VOI concepts and provide good-practice recommendations, there is no guidance for reporting VOI analyses. VOI analyses are usually performed alongside economic evaluations for which the Consolidated Health Economic Evaluation Reporting Standards (CHEERS) 2022 Statement provides reporting guidelines. Thus, we developed the CHEERS-VOI checklist to provide reporting guidance and checklist to support the transparent, reproducible, and high-quality reporting of VOI analyses. METHODS: A comprehensive literature review generated a list of 26 candidate reporting items. These candidate items underwent a Delphi procedure with Delphi participants through 3 survey rounds. Participants rated each item on a 9-point Likert scale to indicate its relevance when reporting the minimal, essential information about VOI methods and provided comments. The Delphi results were reviewed at 2-day consensus meetings and the checklist was finalized using anonymous voting. RESULTS: We had 30, 25, and 24 Delphi respondents in rounds 1, 2, and 3, respectively. After incorporating revisions recommended by the Delphi participants, all 26 candidate items proceeded to the 2-day consensus meetings. The final CHEERS-VOI checklist includes all CHEERS items, but 7 items require elaboration when reporting VOI. Further, 6 new items were added to report information relevant only to VOI (eg, VOI methods applied). CONCLUSIONS: The CHEERS-VOI checklist should be used when a VOI analysis is performed alongside economic evaluations. The CHEERS-VOI checklist will help decision makers, analysts and peer reviewers in the assessment and interpretation of VOI analyses and thereby increase transparency and rigor in decision making.


Subject(s)
Checklist; Research Report; Humans; Cost-Benefit Analysis; Reference Standards; Consensus
6.
Ann Emerg Med ; 82(2): 179-190, 2023 08.
Article in English | MEDLINE | ID: mdl-36870890

ABSTRACT

STUDY OBJECTIVE: To determine the optimal sedative dose of intranasal dexmedetomidine for children undergoing laceration repair. METHODS: This dose-ranging study employing the Bayesian Continual Reassessment Method enrolled children aged 0 to 10 years with a single laceration (<5 cm) requiring single-layer closure, who received topical anesthetic. Children were administered 1, 2, 3, or 4 mcg/kg intranasal dexmedetomidine. The primary outcome was the proportion with adequate sedation (Pediatric Sedation State Scale score of 2 or 3 for ≥90% of the time from sterile preparation to tying of the last suture). Secondary outcomes included the Observational Scale of Behavior Distress-Revised (range: 0 [no distress] to 23.5 [maximal distress]), postprocedure length of stay, and adverse events. RESULTS: We enrolled 55 children (35/55 [64%] males; median [interquartile range {IQR}] age 4 [2, 6] years). At 1, 2, 3, and 4 mcg/kg intranasal dexmedetomidine, respectively, the proportions of participants adequately sedated were 1/3 (33%), 2/9 (22%), 13/21 (62%), and 12/21 (57%); the posterior means (95% equitailed credible intervals) for the probability of adequate sedation were 0.38 (0.04, 0.82), 0.25 (0.05, 0.54), 0.61 (0.41, 0.80), and 0.57 (0.36, 0.76); the median (IQR) Observational Scale of Behavior Distress-Revised scores during suturing were 2.7 (0.3, 3), 0 (0, 3.8), 0.6 (0, 5), and 0 (0, 3.7); and the median (IQR) postprocedure lengths of stay were 67 (60, 78), 76 (60, 100), 89 (76, 109), and 113 (76, 150) minutes. There was 1 adverse event, a decrease in oxygen saturation at 4 mcg/kg, which resolved with head repositioning. CONCLUSION: Despite limitations, such as our limited sample size and subjectivity in Pediatric Sedation State Scale scoring, sedation efficacy for 3 and 4 mcg/kg was similar based on equitailed credible intervals, suggesting either dose could be considered optimal.


Subject(s)
Dexmedetomidine; Lacerations; Male; Humans; Child; Female; Dexmedetomidine/adverse effects; Lacerations/surgery; Bayes Theorem; Hypnotics and Sedatives; Administration, Intranasal
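The shape of the per-dose posteriors in the abstract can be approximated from the reported counts. The sketch below fits independent Beta(1, 1) posteriors to each dose, which is a deliberate simplification: the trial's Continual Reassessment Method borrows strength across doses through a shared dose-response model, so these results are close to, but not identical to, the published estimates.

```python
import numpy as np

# Adequate-sedation counts per dose (mcg/kg) reported in the abstract:
# dose -> (successes, participants)
data = {1: (1, 3), 2: (2, 9), 3: (13, 21), 4: (12, 21)}

rng = np.random.default_rng(2)
summary = {}
for dose, (s, n) in data.items():
    # Beta(1 + s, 1 + n - s) posterior under a uniform prior
    draws = rng.beta(1 + s, 1 + n - s, size=100_000)
    lo, hi = np.percentile(draws, [2.5, 97.5])
    summary[dose] = (draws.mean(), lo, hi)
    print(f"{dose} mcg/kg: mean {summary[dose][0]:.2f}, "
          f"95% CrI ({lo:.2f}, {hi:.2f})")
```

Even this simplified model reproduces the abstract's qualitative conclusion: the 3 and 4 mcg/kg posteriors overlap heavily, while the 1 and 2 mcg/kg doses remain both lower and far more uncertain.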
7.
J Psychiatry Neurosci ; 46(2): E247-E257, 2021 03 17.
Article in English | MEDLINE | ID: mdl-33729739

ABSTRACT

Background: Bipolar disorder is a highly heritable psychiatric condition for which specific genetic factors remain largely unknown. In the present study, we used combined whole-exome sequencing and linkage analysis to identify risk loci and dissect the contribution of common and rare variants in families with a high density of illness. Methods: Overall, 117 participants from 15 Australian extended families with bipolar disorder (72 with affective disorder, including 50 with bipolar disorder type I or II, 13 with schizoaffective disorder-manic type and 9 with recurrent unipolar disorder) underwent whole-exome sequencing. We performed genome-wide linkage analysis using MERLIN and conditional linkage analysis using LAMP. We assessed the contribution of potentially functional rare variants using a gene-based segregation test. Results: We identified a significant linkage peak on chromosome 10q11-q21 (maximal single nucleotide polymorphism = rs10761725; exponential logarithm of the odds [LODexp] = 3.03; empirical p = 0.046). The linkage interval spanned 36 protein-coding genes, including a gene associated with bipolar disorder, ankyrin 3 (ANK3). Conditional linkage analysis showed that common ANK3 risk variants previously identified in genome-wide association studies, or variants in linkage disequilibrium with those variants, did not explain the linkage signal (rs10994397 LOD = 0.63; rs9804190 LOD = 0.04). A family-based segregation test with 34 rare variants from 14 genes under the linkage interval suggested rare variant contributions of 3 brain-expressed genes: NRBF2 (p = 0.005), PCDH15 (p = 0.002) and ANK3 (p = 0.014). Limitations: We did not examine non-coding variants, which may explain the remaining linkage signal. Conclusion: Combining family-based linkage analysis with next-generation sequencing data is effective for identifying putative disease genes and specific risk variants in complex disorders.
We identified rare missense variants in ANK3, PCDH15 and NRBF2 that could confer disease risk, providing valuable targets for functional characterization.


Subject(s)
Alleles; Ankyrins/genetics; Bipolar Disorder/genetics; Chromosomes, Human, Pair 10/genetics; Exome/genetics; Genetic Linkage; Genetic Predisposition to Disease; Polymorphism, Single Nucleotide/genetics; Female; Genome-Wide Association Study; Humans; Male; Exome Sequencing
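The LOD ("logarithm of the odds") statistic at the heart of the linkage results compares the likelihood of the observed meioses at a candidate recombination fraction against free recombination (theta = 0.5). The toy two-point calculation below assumes fully informative meioses with hypothetical counts; MERLIN's multipoint computation over real pedigrees is far more involved.

```python
import math

def lod_score(recombinants, nonrecombinants, theta):
    """Two-point LOD score for recombination fraction theta versus the
    null of free recombination (theta = 0.5), assuming fully informative
    meioses. LOD > 3 is the classical threshold for significant linkage."""
    like = theta ** recombinants * (1 - theta) ** nonrecombinants
    null = 0.5 ** (recombinants + nonrecombinants)
    return math.log10(like / null)

# e.g. 1 recombinant among 12 informative meioses, evaluated at the
# maximum-likelihood estimate theta = 1/12
lod = lod_score(1, 11, 1 / 12)
print(round(lod, 2))
```

Under this toy model, 12 meioses with a single recombinant give a LOD just above 2, illustrating why pedigrees of substantial size (or multipoint methods) are needed to reach the LOD = 3 significance threshold reported in the study.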
8.
Stat Med ; 40(11): 2753-2758, 2021 05 20.
Article in English | MEDLINE | ID: mdl-33963582

ABSTRACT

In this commentary, we highlight the importance of: (1) carefully considering and clarifying whether a marginal or conditional treatment effect is of interest in a population-adjusted indirect treatment comparison; and (2) developing distinct methodologies for estimating the different measures of effect. The appropriateness of each methodology depends on the preferred target of inference.


Subject(s)
Computer Simulation; Humans
9.
Conserv Biol ; 35(6): 1833-1849, 2021 12.
Article in English | MEDLINE | ID: mdl-34289517

ABSTRACT

Recognizing the imperative to evaluate species recovery and conservation impact, in 2012 the International Union for Conservation of Nature (IUCN) called for development of a "Green List of Species" (now the IUCN Green Status of Species). A draft Green Status framework for assessing species' progress toward recovery, published in 2018, proposed 2 separate but interlinked components: a standardized method (i.e., measurement against benchmarks of species' viability, functionality, and preimpact distribution) to determine current species recovery status (herein species recovery score) and application of that method to estimate past and potential future impacts of conservation based on 4 metrics (conservation legacy, conservation dependence, conservation gain, and recovery potential). We tested the framework with 181 species representing diverse taxa, life histories, biomes, and IUCN Red List categories (extinction risk). Based on the observed distribution of species' recovery scores, we propose the following species recovery categories: fully recovered, slightly depleted, moderately depleted, largely depleted, critically depleted, extinct in the wild, and indeterminate. Fifty-nine percent of tested species were considered largely or critically depleted. Although there was a negative relationship between extinction risk and species recovery score, variation was considerable. Some species in lower risk categories were assessed as farther from recovery than those at higher risk. This emphasizes that species recovery is conceptually different from extinction risk and reinforces the utility of the IUCN Green Status of Species to more fully understand species conservation status. Although extinction risk did not predict conservation legacy, conservation dependence, or conservation gain, it was positively correlated with recovery potential. 
Only 1.7% of tested species were categorized as zero across all 4 of these conservation impact metrics, indicating that conservation has played, or will play, a role in improving or maintaining species status for the vast majority of these species. Based on our results, we devised an updated assessment framework that introduces the option of using a dynamic baseline to assess future impacts of conservation over the short term, to avoid misleading results that were generated in a small number of cases, and redefines the short term as 10 years to better align with conservation planning. These changes are reflected in the IUCN Green Status of Species Standard.


Subject(s)
Endangered Species; Extinction, Biological; Animals; Biodiversity; Conservation of Natural Resources; Ecosystem; Risk
10.
Eur J Epidemiol ; 36(11): 1111-1121, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34091766

ABSTRACT

Clinical trials require participation of numerous patients, enormous research resources and substantial public funding. Time-consuming trials lead to delayed implementation of beneficial interventions and to reduced benefit to patients. This manuscript discusses two methods for the allocation of research resources and reviews a framework for prioritisation and design of clinical trials. The traditional error-driven approach of clinical trial design controls for type I and II errors. However, controlling for those statistical errors has limited relevance to policy makers. Therefore, this error-driven approach can be inefficient, waste research resources and lead to research with limited impact on daily practice. The novel value-driven approach assesses the currently available evidence and focuses on designing clinical trials that directly inform policy and treatment decisions. Estimating the net value of collecting further information, prior to undertaking a trial, informs a decision maker whether a clinical or health policy decision can be made with current information or if collection of extra evidence is justified. Additionally, estimating the net value of new information guides study design, data collection choices, and sample size estimation. The value-driven approach ensures the efficient use of research resources, reduces unnecessary burden to trial participants, and accelerates implementation of beneficial healthcare interventions.


Subject(s)
Clinical Trials as Topic; Research Design; Data Collection; Health Policy; Humans; Research
11.
Clin Trials ; 18(1): 61-70, 2021 02.
Article in English | MEDLINE | ID: mdl-33231105

ABSTRACT

BACKGROUND/AIMS: Combinations of treatments that have already received regulatory approval can offer additional benefit over each of the treatments individually. However, trials of these combinations are a lower priority than those that develop novel therapies, which can restrict funding, timelines and patient availability. This article develops a novel trial design to facilitate the evaluation of new combination therapies. This trial design combines elements of phase II and phase III trials to reduce the burden of evaluating combination therapies, while also maintaining a feasible sample size. This design was developed for a randomised trial that compares the properties of three combination doses of ketamine and dexmedetomidine, given intranasally, to ketamine delivered intravenously for children undergoing a closed reduction for a fracture or dislocation. METHODS: This trial design uses response-adaptive randomisation to evaluate different dose combinations and increase the information collected for successful novel drug combinations. The design then uses Bayesian dose-response modelling to undertake a comparative effectiveness analysis for the most successful dose combination against a relevant comparator. We used simulation methods to determine the thresholds for adapting the trial and making conclusions. We also used simulations to evaluate the probability of selecting the dose combination with the highest true effectiveness, the operating characteristics of the design, and its Bayesian predictive power. RESULTS: With 410 participants, five interim updates of the randomisation ratio and a probability of effectiveness of 0.93, 0.88 and 0.83 for the three dose combinations, we have an 83% chance of randomising the largest number of patients to the drug with the highest probability of effectiveness.
Based on this adaptive randomisation procedure, the comparative effectiveness analysis has a type I error of less than 5% and a 93% chance of correctly concluding non-inferiority when the probability of effectiveness for the optimal combination therapy is 0.9. In this case, the trial has a greater than 77% chance of meeting its dual aims of dose-finding and comparative effectiveness. Finally, the Bayesian predictive power of the trial is over 90%. CONCLUSIONS: By simultaneously determining the optimal dose and collecting data on the relative effectiveness of an intervention, we can minimise the administrative burden and recruitment time of a trial. This will minimise the time required to get effective, safe combination therapies to patients. The proposed trial has high potential to meet the dual study objectives within a feasible overall sample size.


Subject(s)
Adaptive Clinical Trials as Topic; Bayes Theorem; Dose-Response Relationship, Drug; Randomized Controlled Trials as Topic; Child; Dexmedetomidine/administration & dosage; Humans; Ketamine/administration & dosage; Sample Size
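One common way to implement the response-adaptive randomisation described above is to update each dose combination's allocation probability in proportion to its posterior probability of being the most effective arm, with a floor so that no arm is starved of patients. This is an illustrative Beta-Bernoulli sketch with hypothetical counts, not the trial's exact interim rule.

```python
import numpy as np

def update_allocation(successes, failures, floor=0.10, n_draws=50_000, seed=0):
    """Response-adaptive randomisation sketch: allocation probability
    proportional to each arm's posterior probability of being best
    (independent Beta(1, 1) priors), clipped below at `floor`."""
    rng = np.random.default_rng(seed)
    s = np.asarray(successes)
    f = np.asarray(failures)
    # Posterior draws of each arm's response probability: Beta(1+s, 1+f)
    draws = rng.beta(1 + s, 1 + f, size=(n_draws, len(s)))
    # Probability each arm is the most effective
    p_best = np.bincount(np.argmax(draws, axis=1), minlength=len(s)) / n_draws
    # Floor keeps every arm open, then renormalise to a valid ratio
    alloc = np.maximum(p_best, floor)
    return alloc / alloc.sum()

# Hypothetical interim data for three dose combinations
alloc = update_allocation(successes=[30, 27, 24], failures=[5, 8, 11])
print(alloc)
```

Repeating this update at each of the five planned interim looks concentrates randomisation on the best-performing combination, which is how the design achieves the reported 83% chance of randomising the most patients to the most effective arm.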
12.
PLoS Genet ; 14(12): e1007535, 2018 12.
Article in English | MEDLINE | ID: mdl-30586385

ABSTRACT

The contactin-associated protein-like 2 (CNTNAP2) gene is a member of the neurexin superfamily. CNTNAP2 was first implicated in the cortical dysplasia-focal epilepsy (CDFE) syndrome, a recessive disease characterized by intellectual disability, epilepsy, language impairments and autistic features. Associated SNPs and heterozygous deletions in CNTNAP2 were subsequently reported in autism, schizophrenia and other psychiatric or neurological disorders. We aimed to comprehensively examine evidence for the role of CNTNAP2 in susceptibility to psychiatric disorders through the analysis of multiple classes of genetic variation in large genomic datasets. In this study we: i) used summary statistics from the Psychiatric Genomics Consortium (PGC) GWAS for seven psychiatric disorders; ii) examined all reported CNTNAP2 structural variants in patients and controls; iii) performed cross-disorder analysis of functional or previously associated SNPs; and iv) conducted burden tests for pathogenic rare variants using sequencing data (4,483 ASD and 6,135 schizophrenia cases, and 13,042 controls). The distribution of CNVs across CNTNAP2 in psychiatric cases from previous reports was no different from that in controls from the Database of Genomic Variants. Gene-based association testing did not implicate common variants in autism, schizophrenia or other psychiatric phenotypes. The association of the proposed functional SNPs rs7794745 and rs2710102, reported to influence brain connectivity, was not replicated; nor did predicted functional SNPs yield significant results in meta-analysis across psychiatric disorders at either the SNP level or the gene level. The burden of disruptive rare variants in CNTNAP2 was not higher in autism or schizophrenia compared to controls. Finally, in a CNV microarray study of an extended bipolar disorder family with 5 affected relatives, we previously identified a 131 kb deletion in CNTNAP2 intron 1, removing a FOXP2 transcription factor binding site.
Quantitative-PCR validation and segregation analysis of this CNV revealed imperfect segregation with BD. This large comprehensive study indicates that CNTNAP2 may not be a robust risk gene for psychiatric phenotypes.


Subject(s)
Membrane Proteins/genetics; Mental Disorders/genetics; Nerve Tissue Proteins/genetics; Autism Spectrum Disorder/genetics; Bipolar Disorder/genetics; Craniofacial Abnormalities/genetics; DNA Copy Number Variations; Databases, Nucleic Acid; Epilepsies, Partial/genetics; Female; Genetic Predisposition to Disease; Genome-Wide Association Study; Humans; Introns; Male; Malformations of Cortical Development/genetics; Pedigree; Polymorphism, Single Nucleotide; Risk Factors; Schizophrenia/genetics; Sequence Deletion
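The rare-variant burden comparison in point iv) reduces, in its simplest form, to a 2x2 table of carriers versus non-carriers in cases and controls, testable with a one-sided Fisher's exact test. The carrier counts below are made up for illustration; only the cohort sizes (4,483 ASD cases, 13,042 controls) come from the abstract, and the study's actual burden test may have used a different statistic.

```python
from math import comb

def fisher_one_sided(a, b, c, d):
    """One-sided (greater) Fisher's exact test for the 2x2 table
    [[a, b], [c, d]]: probability of observing a count >= a in the
    top-left cell under the hypergeometric null of no association."""
    row1, col1, n = a + b, a + c, a + b + c + d
    denom = comb(n, col1)
    return sum(comb(row1, k) * comb(n - row1, col1 - k)
               for k in range(a, min(row1, col1) + 1)) / denom

# Hypothetical carrier counts: 18/4483 cases vs 52/13042 controls carry
# a disruptive rare variant in the gene
p = fisher_one_sided(18, 4483 - 18, 52, 13042 - 52)
print(f"one-sided p = {p:.3f}")
```

With near-identical carrier frequencies in the two groups, the p-value is unremarkable, mirroring the study's null burden result for CNTNAP2.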
13.
Value Health ; 23(6): 734-742, 2020 06.
Article in English | MEDLINE | ID: mdl-32540231

ABSTRACT

Value of information (VOI) analyses can help policy makers make informed decisions about whether to conduct, and how to design, future studies. Historically, the computational expense of computing the expected value of sample information (EVSI) restricted the use of VOI to simple decision models and study designs. Recently, 4 EVSI approximation methods have made such analyses more feasible and accessible. Members of the Collaborative Network for Value of Information (ConVOI) compared the inputs, the analyst's expertise and skills, and the software required for these 4 recently developed EVSI approximation methods. Our report provides practical guidance and recommendations to help inform the choice among the 4 efficient EVSI estimation methods. More specifically, this report provides: (1) a step-by-step guide to the methods' use; (2) the expertise and skills required to implement the methods; and (3) method recommendations based on the features of decision-analytic problems.


Subject(s)
Decision Making , Decision Support Techniques , Research Design , Research/economics , Humans , Policy Making , Software
15.
Value Health ; 21(11): 1299-1304, 2018 11.
Article in English | MEDLINE | ID: mdl-30442277

ABSTRACT

OBJECTIVE: The expected value of sample information (EVSI) quantifies the economic benefit of reducing uncertainty in a health economic model by collecting additional information. This has the potential to improve the allocation of research budgets. Despite this, practical EVSI evaluations are limited, partly due to the computational cost of estimating this value using the gold-standard nested simulation methods. Recently, however, Heath et al. developed an estimation procedure that reduces the number of simulations required for this gold-standard calculation. Up to this point, this new method has been presented in purely technical terms. STUDY DESIGN: This study presents the practical application of this new method to aid its implementation. We use a worked example to illustrate the key steps of the EVSI estimation procedure before discussing its optimal implementation using a practical health economic model. METHODS: The worked example is based on a three-parameter linear health economic model. The more realistic model evaluates the cost-effectiveness of a new chemotherapy treatment that aims to reduce the number of side effects experienced by patients. We use a Markov model structure to evaluate the health economic profile of experiencing side effects. RESULTS: This EVSI estimation method offers accurate estimation within a feasible computation time (seconds rather than days), even for more complex model structures. The EVSI estimate is more accurate if a greater number of nested samples is used, even for a fixed computational cost. CONCLUSIONS: This new method reduces the computational cost of estimating the EVSI by nested simulation.
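The gold-standard nested simulation that this procedure accelerates can be sketched in a few lines. The toy model below is an illustrative assumption (a conjugate normal prior and a linear net-benefit function, not the paper's chemotherapy Markov model); conjugacy gives the inner posterior expectation in closed form, so the outer/inner structure of the calculation is preserved while running quickly.

```python
# Nested Monte Carlo EVSI for a toy conjugate model: a minimal sketch of
# the gold-standard calculation. The priors and net-benefit functions are
# illustrative assumptions, not taken from the paper's worked example.
import numpy as np

rng = np.random.default_rng(0)

# Prior on a single effectiveness parameter theta, with known sampling noise.
mu0, sd0 = 0.0, 1.0      # prior mean and sd of theta
sd_obs = 2.0             # sd of one observation in the proposed study
n_obs = 25               # proposed study sample size

def net_benefit(theta):
    """Net benefit of the two decisions as a function of theta."""
    return np.column_stack([np.zeros_like(theta),      # decision 0: status quo
                            500.0 * theta - 100.0])    # decision 1: new treatment

# Value of deciding now: max over decisions of the prior-expected net benefit.
theta_prior = rng.normal(mu0, sd0, 100_000)
value_now = net_benefit(theta_prior).mean(axis=0).max()

# Outer loop: simulate future datasets. Conjugacy gives the posterior in
# closed form, so the "inner loop" collapses to a posterior-mean update.
outer = 10_000
theta_true = rng.normal(mu0, sd0, outer)
xbar = rng.normal(theta_true, sd_obs / np.sqrt(n_obs))   # simulated sample means
prec_post = 1.0 / sd0**2 + n_obs / sd_obs**2
mu_post = (mu0 / sd0**2 + n_obs * xbar / sd_obs**2) / prec_post

# Net benefit is linear in theta here, so the posterior-expected net benefit
# depends only on the posterior mean.
value_with_data = net_benefit(mu_post).max(axis=1).mean()

evsi = value_with_data - value_now
print(f"EVSI estimate: {evsi:.1f}")
```

In a non-conjugate model the posterior mean update becomes a full inner simulation (e.g. MCMC) per outer dataset, which is exactly the cost the accelerated procedure targets.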


Subject(s)
Cost-Benefit Analysis , Models, Economic , Monte Carlo Method , Research/economics , Resource Allocation/economics , Budgets , Drug-Related Side Effects and Adverse Reactions/economics , Drug-Related Side Effects and Adverse Reactions/prevention & control , Humans , Uncertainty
16.
Conserv Biol ; 32(5): 1128-1138, 2018 10.
Article in English | MEDLINE | ID: mdl-29578251

ABSTRACT

Stopping declines in biodiversity is critically important, but it is only a first step toward achieving more ambitious conservation goals. The absence of an objective and practical definition of species recovery that is applicable across taxonomic groups leads to inconsistent targets in recovery plans and frustrates reporting and maximization of conservation impact. We devised a framework for comprehensively assessing species recovery and conservation success. We propose a definition of a fully recovered species that emphasizes viability, ecological functionality, and representation, and we use counterfactual approaches to quantify degree of recovery. This allowed us to calculate a set of 4 conservation metrics that demonstrate impacts of conservation efforts to date (conservation legacy); identify dependence of a species on conservation actions (conservation dependence); quantify expected gains resulting from conservation action in the medium term (conservation gain); and specify requirements to achieve maximum plausible recovery over the long term (recovery potential). These metrics can incentivize the establishment and achievement of ambitious conservation targets. We illustrate their use by applying the framework to a vertebrate, an invertebrate, a woody plant, and an herbaceous plant. Our approach is a preliminary framework for an International Union for Conservation of Nature (IUCN) Green List of Species, which was mandated by a resolution of IUCN members in 2012. Although there are several challenges in applying our proposed framework to a wide range of species, we believe its further development, implementation, and integration with the IUCN Red List of Threatened Species will help catalyze a positive and ambitious vision for conservation that will drive sustained conservation action.


Subject(s)
Conservation of Natural Resources , Endangered Species , Animals , Biodiversity , Data Collection , Vertebrates
18.
Stat Med ; 35(23): 4264-80, 2016 10 15.
Article in English | MEDLINE | ID: mdl-27189534

ABSTRACT

The Expected Value of Perfect Partial Information (EVPPI) is a decision-theoretic measure of the 'cost' of parametric uncertainty in decision making, used principally in health economic decision making. Despite this decision-theoretic grounding, the uptake of EVPPI calculations in practice has been slow. This is in part due to the prohibitive computational time required to estimate the EVPPI via Monte Carlo simulations. However, recent developments have demonstrated that the EVPPI can be estimated by non-parametric regression methods, which have significantly decreased the computation time required to approximate the EVPPI. Under certain circumstances, high-dimensional Gaussian Process (GP) regression is suggested, but this can still be prohibitively expensive. Applying fast computation methods developed in spatial statistics using Integrated Nested Laplace Approximations (INLA), and projecting from a high-dimensional into a low-dimensional input space, allows us to decrease the computation time for fitting these high-dimensional GPs, often substantially. We demonstrate that the EVPPI calculated using our method for GP regression is in line with the standard GP regression method, and that despite the apparent methodological complexity of this new method, R functions are available in the package BCEA to implement it simply and efficiently. © 2016 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
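The regression idea underlying these estimators can be sketched as follows. The two-decision toy model and the cubic polynomial regressor below are illustrative assumptions; the paper itself fits Gaussian processes (accelerated via INLA, implemented in the R package BCEA). The estimator has the same shape in either case: regress sampled net benefits on the parameter of interest, then compare the expected maximum of the fitted values with the maximum of the expectations.

```python
# Regression-based EVPPI estimation: a minimal sketch of the idea that the
# paper's GP/INLA method accelerates. The model and the polynomial stand-in
# for the nonparametric regression are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
n = 20_000

# Toy probabilistic sensitivity analysis: two parameters, two decisions.
phi = rng.normal(0.0, 1.0, n)        # parameter of (partial) interest
psi = rng.normal(0.0, 1.0, n)        # remaining uncertain parameter

# Sampled net benefit of each decision as a function of (phi, psi).
nb = np.column_stack([
    1000 + 300 * phi + 100 * psi,    # decision 0
    1050 + 100 * phi + 300 * psi,    # decision 1
])

# Baseline: value of deciding now, under full parameter uncertainty.
value_current = nb.mean(axis=0).max()

# Regress each decision's net benefit on phi; the fitted values estimate the
# conditional expectation E[NB_d | phi]. A cubic polynomial stands in here
# for the GP regression used in the paper.
fitted = np.column_stack([
    np.polyval(np.polyfit(phi, nb[:, d], deg=3), phi) for d in range(2)
])

# EVPPI: expected value of learning phi perfectly before deciding.
evppi = fitted.max(axis=1).mean() - value_current
print(f"EVPPI estimate: {evppi:.1f}")
```

The regression step is what removes the need for a nested Monte Carlo loop: the fitted values approximate the inner conditional expectation directly from the existing probabilistic sensitivity analysis samples.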


Subject(s)
Monte Carlo Method , Uncertainty , Cost-Benefit Analysis , Decision Making , Humans , Regression Analysis
20.
Am J Med Genet A ; 164A(3): 782-8, 2014 Mar.
Article in English | MEDLINE | ID: mdl-24357335

ABSTRACT

We present a patient with a behavioral disorder, epilepsy, and autism spectrum disorder who has a 520 kb chromosomal deletion at 15q26.1 encompassing three genes: ST8SIA2, C15orf32, and FAM174B. Alpha-2,8-sialyltransferase 2 (ST8SIA2) is expressed in the developing brain and appears to play an important role in neuronal migration, axon guidance, and synaptic plasticity. It has recently been implicated in a genome-wide association study as a potential factor underlying autism, and has also been implicated in the pathogenesis of bipolar disorder and schizophrenia. This case provides supportive evidence that ST8SIA2 haploinsufficiency may play a role in neurobehavioral phenotypes.


Subject(s)
Child Behavior Disorders/genetics , Child Development Disorders, Pervasive/genetics , Chromosome Deletion , Chromosomes, Human, Pair 15 , Epilepsy/genetics , Sialyltransferases/genetics , Child Behavior Disorders/diagnosis , Child Development Disorders, Pervasive/diagnosis , Child, Preschool , Comparative Genomic Hybridization , Epilepsy/diagnosis , Facies , Haplotypes , Humans , Linkage Disequilibrium , Male , Pedigree , Phenotype , Polymorphism, Single Nucleotide