ABSTRACT
Germline mutation is the mechanism by which genetic variation in a population is created. Inferences derived from mutation rate models are fundamental to many population genetics methods. Previous models have demonstrated that nucleotides flanking polymorphic sites (the local sequence context) explain variation in the probability that a site is polymorphic. However, these models face limitations as the size of the local sequence context window expands: a lack of robustness to data sparsity at typical sample sizes, a lack of regularization to generate parsimonious models, and a lack of quantified uncertainty in estimated rates to facilitate comparison between models. To address these limitations, we developed Baymer, a regularized Bayesian hierarchical tree model that captures the heterogeneous effect of sequence contexts on polymorphism probabilities. Baymer implements an adaptive Metropolis-within-Gibbs Markov chain Monte Carlo sampling scheme to estimate the posterior distributions of sequence-context-based probabilities that a site is polymorphic. We show that Baymer accurately infers polymorphism probabilities and well-calibrated posterior distributions, robustly handles data sparsity, appropriately regularizes to return parsimonious models, and scales computationally at least up to 9-mer context windows. We demonstrate the application of Baymer in three ways: first, identifying differences in polymorphism probabilities between continental populations in the 1000 Genomes Phase 3 dataset; second, in a sparse data setting, examining the use of polymorphism models as a proxy for de novo mutation probabilities as a function of variant age, sequence context window size, and demographic history; and third, comparing model concordance between different great ape species. We find a shared context-dependent mutation rate architecture underlying our models, enabling a transfer-learning-inspired strategy for modeling germline mutations.
In summary, Baymer is an accurate polymorphism probability estimation algorithm that automatically adapts to data sparsity at different sequence context levels, thereby making efficient use of the available data.
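The adaptive Metropolis-within-Gibbs scheme mentioned above can be illustrated with a minimal single-parameter sketch: a random-walk Metropolis sampler on logit(p) with a Beta prior, whose proposal scale adapts toward a target acceptance rate. This is a generic illustration of the sampler family, not Baymer's actual implementation; the function name and defaults are invented.

```python
import math, random

def sample_polymorphism_prob(k, n, a=1.0, b=1.0, iters=5000, seed=0):
    """Adaptive random-walk Metropolis on logit(p) for the posterior of a
    polymorphism probability p, given k polymorphic sites out of n and a
    Beta(a, b) prior. The proposal scale adapts toward ~44% acceptance."""
    rng = random.Random(seed)

    def softplus(z):  # numerically stable log(1 + exp(z))
        return z + math.log1p(math.exp(-z)) if z > 0 else math.log1p(math.exp(z))

    def log_post(z):
        # Beta-binomial posterior on the logit scale; the Jacobian of the
        # transform folds into the exponents (k + a) and (n - k + b).
        return -(k + a) * softplus(-z) - (n - k + b) * softplus(z)

    z, step, accepted, samples = 0.0, 1.0, 0, []
    for i in range(1, iters + 1):
        prop = z + rng.gauss(0.0, step)
        if math.log(rng.random()) < log_post(prop) - log_post(z):
            z, accepted = prop, accepted + 1
        if i % 50 == 0:  # adapt the proposal scale toward the target rate
            step *= math.exp(0.05 if accepted / i > 0.44 else -0.05)
        samples.append(1.0 / (1.0 + math.exp(-z)))
    return samples[iters // 2:]  # keep second half, after burn-in
```

With, say, k = 30 polymorphic sites out of n = 1000 and a uniform prior, the retained samples concentrate near the analytic posterior mean (k + a)/(n + a + b) ≈ 0.031.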
Subject(s)
Human Genome , Mutation Rate , Humans , Human Genome/genetics , Bayes Theorem , Mutation , Genetic Polymorphism , Markov Chains , Monte Carlo Method
ABSTRACT
The expansion of machine learning to high-stakes application domains such as medicine, finance, and criminal justice, where making informed decisions requires clear understanding of the model, has increased the interest in interpretable machine learning. The widely used Classification and Regression Trees (CART) have played a major role in health sciences, due to their simple and intuitive explanation of predictions. Ensemble methods like gradient boosting can improve the accuracy of decision trees, but at the expense of the interpretability of the generated model. Additive models, such as those produced by gradient boosting, and full interaction models, such as CART, have been investigated largely in isolation. We show that these models exist along a spectrum, revealing previously unseen connections between these approaches. This paper introduces a rigorous formalization for the additive tree, an empirically validated learning technique for creating a single decision tree, and shows that this method can produce models equivalent to CART or gradient boosted stumps at the extremes by varying a single parameter. Although the additive tree is designed primarily to provide both the model interpretability and predictive performance needed for high-stakes applications like medicine, it also can produce decision trees represented by hybrid models between CART and boosted stumps that can outperform either of these approaches.
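Gradient-boosted stumps, one endpoint of the spectrum described above, fit a depth-1 tree to the current residuals at each round. A minimal squared-loss sketch for a single numeric feature (illustrative only; this is not the paper's additive-tree algorithm):

```python
def fit_stump(x, r):
    """Best single-split (depth-1) regression stump minimizing squared error."""
    best = None
    for t in sorted(set(x)):
        left = [ri for xi, ri in zip(x, r) if xi <= t]
        right = [ri for xi, ri in zip(x, r) if xi > t]
        if not left or not right:
            continue
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        sse = (sum((ri - lm) ** 2 for ri in left)
               + sum((ri - rm) ** 2 for ri in right))
        if best is None or sse < best[0]:
            best = (sse, t, lm, rm)
    _, t, lm, rm = best
    return lambda xi: lm if xi <= t else rm

def boost_stumps(x, y, rounds=50, lr=0.1):
    """Gradient boosting with stumps: each round fits a stump to residuals."""
    f0 = sum(y) / len(y)          # constant base prediction
    stumps, pred = [], [f0] * len(y)
    for _ in range(rounds):
        r = [yi - pi for yi, pi in zip(y, pred)]   # squared-loss residuals
        s = fit_stump(x, r)
        stumps.append(s)
        pred = [pi + lr * s(xi) for pi, xi in zip(pred, x)]
    return lambda xi: f0 + lr * sum(s(xi) for s in stumps)
```

Setting rounds to 1 with a full learning rate recovers a single stump; growing one deep tree on the raw targets instead recovers CART, which is the sense in which the two approaches sit at opposite ends of one spectrum.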
Subject(s)
Algorithms , Decision Trees , Machine Learning , Factual Databases , Statistical Models , Programming Languages
ABSTRACT
Objective: Place-based blight remediation programs have gained popularity in recent years as a crime reduction approach. This study estimated the impact of a citywide vacant lot greening program in Philadelphia on changes in crime over multiple years, and whether the effects were moderated by nearby land uses. Methods: The vacant lot greening program was assessed using quasi-experimental and experimental designs. Entropy distance weighting was used in the quasi-experimental analysis to match control lots to greened lots on pre-existing crime trends. Fixed-effects difference-in-differences models were used to estimate the impact of the vacant lot greening program in quasi-experimental and experimental analyses. Results: Vacant lot greening was estimated to reduce total crime and multiple subcategories in both the quasi-experimental and experimental evaluations. Remediating vacant lots had a smaller effect on reducing crime when they were located near train stations and alcohol outlets. The crime reductions from vacant lot remediations were larger when they were located near areas of active businesses. There is some suggestive evidence that the effects of vacant lot greening are larger when located in neighborhoods with higher pre-intervention levels of social cohesion. Conclusions: The findings suggest that vacant lot greening provides a sustainable approach to reducing crime in disadvantaged neighborhoods, and the effects may vary by different surrounding land uses. To better understand the mechanisms through which place-based blight remediation interventions reduce crime, future research should measure human activities and neighborly socialization in and around places before and after remediation efforts are implemented.
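The difference-in-differences logic behind the evaluation reduces, in the canonical two-period case, to subtracting the control group's change from the treated group's change. A minimal sketch (a simplification of the fixed-effects models used in the study):

```python
def did_estimate(pre_treat, post_treat, pre_ctrl, post_ctrl):
    """Canonical 2x2 difference-in-differences: the treatment effect is the
    pre-to-post change in the treated group minus the change in controls,
    which nets out shared time trends under the parallel-trends assumption."""
    mean = lambda v: sum(v) / len(v)
    return (mean(post_treat) - mean(pre_treat)) - (mean(post_ctrl) - mean(pre_ctrl))
```

For example, if crime falls from 11 to 7 incidents around greened lots while falling from 11 to 10 around matched control lots, the estimated effect is -3 incidents.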
ABSTRACT
Hemorrhage is the most common mechanism of death in battlefield casualties with potentially survivable injuries. There is evidence that early blood product transfusion saves lives among combat casualties. When compared to component therapy, fresh whole blood transfusion improves outcomes in military settings. Cold-stored whole blood also improves outcomes in trauma patients. Whole blood has the advantage of providing red cells, plasma, and platelets together in a single unit, which simplifies and speeds the process of resuscitation, particularly in austere environments. The Joint Trauma System, the Defense Committee on Trauma, and the Armed Services Blood Program endorse the following: (1) whole blood should be used to treat hemorrhagic shock; (2) low-titer group O whole blood is the resuscitation product of choice for the treatment of hemorrhagic shock for all casualties at all roles of care; (3) whole blood should be available within 30 min of casualty wounding, on all medical evacuation platforms, and at all resuscitation and surgical team locations; (4) when whole blood is not available, component therapy should be available within 30 min of casualty wounding; (5) all prehospital medical providers should be trained and logistically supported to screen donors, collect fresh whole blood from designated donors, transfuse blood products, recognize and treat transfusion reactions, and complete the minimum documentation requirements; (6) all deploying military personnel should undergo walking blood bank prescreen laboratory testing for transfusion transmitted disease immediately prior to deployment. Those who are blood group O should undergo anti-A/anti-B antibody titer testing.
Subject(s)
Blood Transfusion/methods , Resuscitation/methods , Hemorrhagic Shock/therapy , Wounds and Injuries/therapy , Blood Preservation/methods , Emergency Medical Services/methods , Humans , Military Medicine , Military Personnel
ABSTRACT
Many health issues require adherence to recommended daily activities, such as taking medication to manage a chronic condition, walking a certain distance to promote weight loss, or measuring weights to assess fluid balance in heart failure. The cost of nonadherence can be high, with respect to both individual health outcomes and the healthcare system. Incentivizing adherence to daily activities can promote better health in patients and populations and potentially provide long-term cost savings. Multiple incentive structures are possible. We focus here on a daily lottery incentive in which payment occurs when both the participant's lottery number matches the number drawn and the participant adheres to the targeted daily behavior. Our objective is to model the lottery's effect on participants' probability to complete the targeted task, particularly over the short term. We combine two procedures for analyzing such binary time series: a parameter-driven regression model with an autocorrelated latent process and a comparative interrupted time series. We use the output of the regression model as the control generator for the comparative time series in order to create a quasi-experimental design.
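A parameter-driven model of this kind places an autocorrelated latent process inside a logistic link. A toy simulation of daily adherence with an AR(1) latent state and a step effect when the lottery begins (all names and parameter values are illustrative, not estimates from the study):

```python
import math, random

def simulate_adherence(days=365, intercept=-0.5, lottery_effect=0.8,
                       lottery_start=100, rho=0.7, sigma=0.3, seed=1):
    """Simulate a daily 0/1 adherence series driven by a latent AR(1)
    process plus a step increase once the lottery incentive begins."""
    rng = random.Random(seed)
    z, out = 0.0, []
    for t in range(days):
        z = rho * z + rng.gauss(0.0, sigma)           # autocorrelated latent state
        eta = intercept + z + (lottery_effect if t >= lottery_start else 0.0)
        p = 1.0 / (1.0 + math.exp(-eta))              # logistic link
        out.append(1 if rng.random() < p else 0)
    return out
```

Fitting such a model to the pre-intervention segment and extrapolating it forward is one way to generate the counterfactual control series that the comparative interrupted time series then uses.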
Subject(s)
Motivation , Patient Compliance , Probability , Computer Simulation , Humans , Interrupted Time Series Analysis , Regression Analysis
ABSTRACT
BACKGROUND & AIMS: Intestinal homeostasis and regeneration after injury are controlled by 2 different types of cells: slow-cycling, injury-resistant reserve intestinal stem cells (ISCs) and actively proliferative ISCs. Putative reserve ISCs have been identified using a variety of methods, including CreER insertions at Hopx or Bmi1 loci in mice and DNA label retention. Label-retaining cells (LRCs) include dormant stem cells in several tissues; in the intestine, LRCs appear to share some properties with reserve ISCs, which can be marked by reporter alleles. We investigated the relationships between these populations. METHODS: Studies were performed in Lgr5-EGFP-IRESCreERT2, Bmi1-CreERT2, Hopx-CreERT2, and TRE-H2BGFP::Hopx-CreERT2::lox-stop-lox-tdTomato mice. Intestinal epithelial cell populations were purified; we compared reporter allele-marked reserve ISCs and several LRC populations (marked by H2B-GFP retention) using histologic, flow cytometry, and functional and single-cell gene expression assays. RESULTS: LRCs were dynamic and their cellular composition changed with time. Short-term LRCs had properties of secretory progenitor cells undergoing commitment to the Paneth or enteroendocrine lineages, while retaining some stem cell activity. Long-term LRCs lost stem cell activity and were a homogenous population of terminally differentiated Paneth cells. Reserve ISCs marked with HopxCreER were primarily quiescent (in G0), with inactive Wnt signaling and robust stem cell activity. In contrast, most LRCs were in G1 arrest and expressed genes that are regulated by the Wnt pathway or are in the secretory lineage. CONCLUSIONS: LRCs are molecularly and functionally distinct from reporter-marked reserve ISCs. This information provides an important basis for future studies of relationships among ISC populations.
Subject(s)
Cell Differentiation , Intestines/cytology , Stem Cells/physiology , Animals , Flow Cytometry , Gene Expression , Mice
ABSTRACT
BACKGROUND: Canonical Wnt pathway signaling is necessary for maintaining the proliferative capacity of mammalian intestinal crypt base columnar stem cells (CBCs). Furthermore, dysregulation of the Wnt pathway is a major contributor to disease, including oncogenic transformation of the intestinal epithelium. Given the critical importance of this pathway, numerous tools have been used as proxy measures for Wnt pathway activity, yet the relationship between Wnt target gene expression and reporter allele activity within individual cells at the crypt base remains unclear. RESULTS: Here, we describe a novel Axin2-CreERT2-tdTomato allele that efficiently marks both Wnt(High) CBCs and radioresistant reserve intestinal stem cells. We analyze the molecular and functional identity of Axin2-CreERT2-tdTomato-marked cells using single cell gene expression profiling and tissue regeneration assays and find that Axin2 reporter activity does not necessarily correlate with expression of Wnt target genes and, furthermore, that Wnt target genes themselves vary in their expression patterns at the crypt base. CONCLUSIONS: Wnt target genes and reporter alleles can vary greatly in their cell-type specificity, demonstrating that these proxies cannot be used interchangeably. Furthermore, Axin2-CreERT2-tdTomato is a robust marker of both active and reserve intestinal stem cells and is thus useful for understanding the intestinal stem cell compartment. Developmental Dynamics 245:822-833, 2016. © 2016 Wiley Periodicals, Inc.
Subject(s)
Intestinal Mucosa/cytology , Intestinal Mucosa/metabolism , Stem Cells/cytology , Stem Cells/metabolism , Wnt Signaling Pathway/physiology , Animals , Axin Protein/genetics , Axin Protein/metabolism , Cell Differentiation/genetics , Cell Differentiation/physiology , Cell Proliferation/genetics , Cell Proliferation/physiology , Flow Cytometry , Fluorescent Antibody Technique , Immunochemistry , Intestinal Mucosa/physiology , Mice , Mice, Inbred C57BL , Stem Cells/physiology , Wnt Signaling Pathway/genetics
ABSTRACT
IMPORTANCE: Evolutionary medicine may provide insights into human physiology and pathophysiology, including tumor biology. OBJECTIVE: To identify mechanisms for cancer resistance in elephants and compare cellular response to DNA damage among elephants, healthy human controls, and cancer-prone patients with Li-Fraumeni syndrome (LFS). DESIGN, SETTING, AND PARTICIPANTS: A comprehensive survey of necropsy data was performed across 36 mammalian species to validate cancer resistance in large and long-lived organisms, including elephants (n = 644). The African and Asian elephant genomes were analyzed for potential mechanisms of cancer resistance. Peripheral blood lymphocytes from elephants, healthy human controls, and patients with LFS were tested in vitro in the laboratory for DNA damage response. The study included African and Asian elephants (n = 8), patients with LFS (n = 10), and age-matched human controls (n = 11). Human samples were collected at the University of Utah between June 2014 and July 2015. EXPOSURES: Ionizing radiation and doxorubicin. MAIN OUTCOMES AND MEASURES: Cancer mortality across species was calculated and compared by body size and life span. The elephant genome was investigated for alterations in cancer-related genes. DNA repair and apoptosis were compared in elephant vs human peripheral blood lymphocytes. RESULTS: Across mammals, cancer mortality did not increase with body size and/or maximum life span (eg, for rock hyrax, 1% [95% CI, 0%-5%]; African wild dog, 8% [95% CI, 0%-16%]; lion, 2% [95% CI, 0%-7%]). Despite their large body size and long life span, elephants remain cancer resistant, with an estimated cancer mortality of 4.81% (95% CI, 3.14%-6.49%), compared with humans, who have 11% to 25% cancer mortality. 
While humans have 1 copy (2 alleles) of TP53, African elephants have at least 20 copies (40 alleles), including 19 retrogenes (38 alleles) with evidence of transcriptional activity measured by reverse transcription polymerase chain reaction. In response to DNA damage, elephant lymphocytes underwent p53-mediated apoptosis at higher rates than human lymphocytes proportional to TP53 status (ionizing radiation exposure: patients with LFS, 2.71% [95% CI, 1.93%-3.48%] vs human controls, 7.17% [95% CI, 5.91%-8.44%] vs elephants, 14.64% [95% CI, 10.91%-18.37%]; P < .001; doxorubicin exposure: human controls, 8.10% [95% CI, 6.55%-9.66%] vs elephants, 24.77% [95% CI, 23.0%-26.53%]; P < .001). CONCLUSIONS AND RELEVANCE: Compared with other mammalian species, elephants appeared to have a lower-than-expected rate of cancer, potentially related to multiple copies of TP53. Compared with human cells, elephant cells demonstrated increased apoptotic response following DNA damage. These findings, if replicated, could represent an evolutionary-based approach for understanding mechanisms related to cancer suppression.
Subject(s)
Biological Evolution , DNA Damage , Disease Resistance/genetics , Elephants/genetics , Neoplasms/genetics , Animals , Apoptosis , Case-Control Studies , DNA Repair , Doxorubicin , Genes, p53 , Humans , Li-Fraumeni Syndrome/genetics , Lymphocytes , Mammals/genetics , Neoplasms/mortality , Ionizing Radiation
ABSTRACT
Management of the patient with moderate to severe brain injury in any environment can be time consuming and resource intensive. These challenges are magnified while forward deployed in austere or hostile environments. This Joint Trauma System Clinical Practice Guideline provides recommendations for the treatment and medical management of casualties with moderate to severe head injuries in an environment where personnel, resources, and follow-on care are limited. These guidelines have been developed by acknowledging commonly recognized recommendations for neurosurgical and neuro-critical care patients and augmenting those evaluations and interventions based on the experience of neurosurgeons, trauma surgeons, and intensivists who have delivered care during recent coalition conflicts.
ABSTRACT
BACKGROUND: Mortality reviews examine US military fatalities resulting from traumatic injuries during combat operations. These reviews are essential to the evolution of the military trauma system to improve individual, unit, and system-level trauma care delivery and inform trauma system protocols and guidelines. This study identifies specific prehospital and hospital interventions with the potential to provide survival benefits. METHODS: US Special Operations Command fatalities with battle injuries deemed potentially survivable (2001-2021) were extracted from previous mortality reviews. A military trauma review panel consisting of trauma surgeons, forensic pathologists, and prehospital and emergency medicine specialists conducted a methodical review to identify prehospital, hospital, and resuscitation interventions (e.g., laparotomy, blood transfusion) with the potential to have provided a survival benefit. RESULTS: Of 388 US Special Operations Command battle-injured fatalities, 100 were deemed potentially survivable. Of these (median age, 29 years; all male), 76.0% were injured in Afghanistan, and 75.0% died in the prehospital setting. Gunshot wounds accounted for 62.0% of injuries, followed by blast injury (37.0%) and blunt force injury (1.0%). Most had a Maximum Abbreviated Injury Scale severity classified as 4 (severe) (55.0%) or 5 (critical) (41.0%). The panel recommended 433 interventions (prehospital, 188; hospital, 315). The most recommended prehospital intervention was blood transfusion (95%), followed by finger/tube thoracostomy (47%). The most common hospital recommendations were thoracotomy and definitive vascular repair. Whole blood transfusion was assessed for each fatality: 74% would have required ≥10 U of blood, 20% would have required 5 to 10 U, 1% would have required 1 to 4 U, and 5% would not have required blood products to impact survival. Five may have benefited from a prehospital laparotomy.
CONCLUSION: This study systematically identified capabilities needed to provide a survival benefit and examined interventions needed to inform trauma system efforts along the continuum of care. The determination was that blood transfusion and massive transfusion shortly after traumatic injury would impact survival the most. LEVEL OF EVIDENCE: Prognostic and Epidemiological; Level V.
Subject(s)
Blood Transfusion , Humans , Male , Adult , United States/epidemiology , Blood Transfusion/standards , Blood Transfusion/statistics & numerical data , Blood Transfusion/methods , Consensus , Military Medicine/standards , Military Medicine/methods , Emergency Medical Services/standards , Wounds and Injuries/therapy , Wounds and Injuries/mortality , Military Personnel , Resuscitation/methods , Resuscitation/standards , Injury Severity Score , Wounds, Gunshot/therapy , Wounds, Gunshot/mortality , Wounds, Nonpenetrating/therapy , Wounds, Nonpenetrating/mortality , Wounds, Nonpenetrating/diagnosis , Blast Injuries/therapy , Blast Injuries/mortality , War-Related Injuries/therapy , War-Related Injuries/mortality
ABSTRACT
This report describes an algorithm developed to predict the pathogenicity of copy number variants (CNVs) in large sample cohorts. CNVs (genomic deletions and duplications) are found in healthy individuals and in individuals with genetic diagnoses, and differentiation of these two classes of CNVs can be challenging and usually requires extensive manual curation. We have developed PECONPI, an algorithm to assess the pathogenicity of CNVs based on gene content and CNV frequency. This software was applied to a large cohort of patients with genetically heterogeneous non-syndromic hearing loss to score and rank each CNV based on its relative pathogenicity. Of 636 individuals tested, we identified the likely underlying etiology of the hearing loss in 14 (2%) of the patients (1 with a homozygous deletion, 7 with a deletion of a known hearing loss gene and a point mutation on the trans allele and 6 with a deletion larger than 1 Mb). We also identified two probands with smaller deletions encompassing genes that may be functionally related to their hearing loss. The ability of PECONPI to determine the pathogenicity of CNVs was tested on a second genetically heterogeneous cohort with congenital heart defects (CHDs). It successfully identified a likely etiology in 6 of 355 individuals (2%). We believe this tool is useful for researchers with large genetically heterogeneous cohorts to help identify known pathogenic causes and novel disease genes.
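A toy version of frequency- and gene-content-based CNV scoring conveys the idea: rare CNVs overlapping known disease genes rank highest. The weighting below is invented for illustration and is not PECONPI's actual formula:

```python
def cnv_score(genes_in_cnv, known_disease_genes, population_freq):
    """Toy pathogenicity score: count of known disease genes hit,
    up-weighted by rarity. Higher score = more likely pathogenic."""
    gene_hits = len(set(genes_in_cnv) & set(known_disease_genes))
    rarity = 1.0 / (population_freq + 1e-4)   # rarer CNV -> larger weight
    return gene_hits * rarity

def rank_cnvs(cnvs, known_disease_genes):
    """Rank CNVs (dicts with 'id', 'genes', 'freq') by descending score."""
    return sorted(
        cnvs,
        key=lambda c: cnv_score(c["genes"], known_disease_genes, c["freq"]),
        reverse=True,
    )
```

Under this scheme a rare deletion hitting a known hearing loss gene such as GJB2 outranks both a common deletion of the same gene and a rare deletion of genes with no known disease link, mirroring the manual curation logic the tool automates.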
Subject(s)
Hearing Loss, Sensorineural/genetics , Software , DNA Copy Number Variations , Extracellular Matrix Proteins/genetics , Gene Deletion , Genomics/methods , Genotype , Heart Defects, Congenital/genetics , Humans , In Situ Hybridization, Fluorescence , Polymorphism, Single Nucleotide , Reproducibility of Results
ABSTRACT
Transcription factor activity is largely regulated through post-translational modification. Here, we report the first integrative model of transcription that includes both interactions between transcription factors and promoters, and between transcription factors and modifying enzymes. Simulations indicate that our method is robust against noise. We validated our tool on a well-studied stress response network in yeast and on a STAT1-mediated regulatory network in human B cells. Our work represents a significant step toward a comprehensive model of gene transcription.
Subject(s)
Gene Expression Regulation , Gene Regulatory Networks , Genetic Models , Software , Genetic Transcription , B-Lymphocytes/enzymology , B-Lymphocytes/metabolism , Computer Simulation , DNA-Binding Proteins/metabolism , Humans , Promoter Regions, Genetic , Protein Kinases/metabolism , STAT1 Transcription Factor/metabolism , Saccharomyces cerevisiae/enzymology , Saccharomyces cerevisiae/genetics , Saccharomyces cerevisiae Proteins/metabolism , Signal Transduction , Transcription Factors/metabolism
ABSTRACT
The highest lifetime risk for a motor vehicle crash is immediately after the point of licensure, with teen drivers most at risk. Comprehensive teen driver licensing policies that require completion of driver education and behind-the-wheel training along with Graduated Driver Licensing (GDL) are associated with lower young driver crash rates early in licensure. We hypothesize that lack of financial resources and travel time to driving schools reduce the likelihood for teens to complete driver training and gain a young driver's license before age 18. We utilize licensing data from the Ohio Bureau of Motor Vehicles on over 35,000 applicants between 15.5 and 25 years old, collected between 2017 and 2019. We link these data with a dataset of driving schools maintained by the Ohio Department of Public Safety and with Census tract-level socioeconomic data from the U.S. Census. Using logit models, we estimate the completion of driver training and license obtainment among young drivers in the Columbus, Ohio metro area. We find that young drivers in lower-income Census tracts have a lower likelihood to complete driver training and get licensed before age 18. As travel time to driving schools increases, teens in wealthier Census tracts are more likely to forgo driver training and licensure than teens in lower-income Census tracts. For jurisdictions aspiring to improve safe driving for young drivers, our findings help shape recommendations on policies to enhance access to driver training and licensure especially among teens living in lower-income Census tracts.
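The logit specification can be sketched as a small logistic regression fit by gradient ascent on the log-likelihood (a generic illustration; the study's covariates and estimates are not reproduced here):

```python
import math

def fit_logit(X, y, lr=0.5, iters=3000):
    """Fit logistic regression coefficients (w[0] is the intercept)
    by full-batch gradient ascent on the log-likelihood."""
    n, k = len(X), len(X[0])
    w = [0.0] * (k + 1)
    for _ in range(iters):
        grad = [0.0] * (k + 1)
        for xi, yi in zip(X, y):
            eta = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            p = 1.0 / (1.0 + math.exp(-eta))   # predicted probability
            err = yi - p                        # score-equation residual
            grad[0] += err
            for j in range(k):
                grad[j + 1] += err * xi[j]
        w = [wj + lr * gj / n for wj, gj in zip(w, grad)]
    return w
```

Here each row of X would carry predictors such as tract income and travel time to the nearest driving school, and y would indicate completion of driver training; a positive fitted coefficient means the predictor raises the completion probability.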
Subject(s)
Traffic Accidents , Automobile Driving , Adolescent , Humans , Young Adult , Adult , Traffic Accidents/prevention & control , Automobile Driving/education , Licensure , Schools , Policies
ABSTRACT
Importance: Variation in outcomes across hospitals adversely affects surgical patients. The use of high-quality hospitals varies by population, which may contribute to surgical disparities. Objective: To simulate the implications of data-driven hospital selection for social welfare among patients who underwent colorectal cancer surgery. Design, Setting, and Participants: This economic evaluation used the hospital inpatient file from the Florida Agency for Health Care Administration. Surgical outcomes of patients who were treated between January 1, 2016, and December 31, 2018 (training cohort), were used to estimate hospital performance. Costs and benefits of care at alternative hospitals were assessed in patients who were treated between January 1, 2019, and December 31, 2019 (testing cohort). The cohorts comprised patients 18 years or older who underwent elective colorectal resection for benign or malignant neoplasms. Data were analyzed from March to October 2022. Exposures: Using hierarchical logistic regression, we estimated the implications of hospital selection for in-hospital mortality risk in patients in the training cohort. These estimates were applied to patients in the testing cohort using Bayesian simulations to compare outcomes at each patient's highest-performing and chosen local hospitals. Analyses were stratified by race and ethnicity to evaluate the potential implications for equity. Main Outcomes and Measures: The primary outcome was the mean patient-level change in social welfare, a composite measure balancing the value of reduced mortality with associated costs of care at higher-performing hospitals. Results: A total of 21 098 patients (mean [SD] age, 67.3 [12.0] years; 10 782 males [51.1%]; 2232 Black [10.6%] and 18 866 White [89.4%] individuals) who were treated at 178 hospitals were included. A higher-quality local hospital was identified for 3057 of 5000 patients (61.1%) in the testing cohort.
Selecting the highest-performing hospital was associated with a 26.5% (95% CI, 24.5%-29.0%) relative reduction and 0.24% (95% CI, 0.23%-0.25%) absolute reduction in mortality risk. A mean amount of $1953 (95% CI, $1744-$2162) was gained in social welfare per patient treated. Simulated reassignment to a higher-quality local hospital was associated with a 23.5% (95% CI, 19.3%-32.9%) relative reduction and 0.26% (95% CI, 0.21%-0.30%) absolute reduction in mortality risk for Black patients, with $2427 (95% CI, $1697-$3158) gained in social welfare. Conclusions and Relevance: In this economic evaluation, using procedure-specific hospital performance as the primary factor in the selection of a local hospital for colorectal cancer surgery was associated with improved outcomes for both patients and society. Surgical outcomes data can be used to transform care and guide policy in colorectal cancer.
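The composite welfare measure balances the value of reduced mortality against the added cost of care at the higher-performing hospital; in skeleton form (the monetized value per life below is a placeholder, not the study's figure):

```python
def welfare_gain(mortality_chosen, mortality_best, cost_chosen, cost_best,
                 value_per_life=1_000_000.0):
    """Net social-welfare change from switching to the higher-performing
    hospital: monetized mortality-risk reduction minus the added cost.
    value_per_life is an illustrative placeholder value."""
    risk_reduction = mortality_chosen - mortality_best
    return risk_reduction * value_per_life - (cost_best - cost_chosen)
```

For example, an absolute mortality reduction of 0.24 percentage points (0.009 to 0.0066) valued at $1,000,000 per life, against $1,000 of additional cost, yields a net gain of $1,400 per patient; a negative result would mean the added cost outweighs the mortality benefit.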
Subject(s)
Colorectal Neoplasms , Digestive System Surgical Procedures , Aged , Humans , Male , Bayes Theorem , Black Population , Colorectal Neoplasms/surgery , Hospitals , White Population , Female , Middle Aged
ABSTRACT
BACKGROUND: Experiences over the last three decades of war have demonstrated a high incidence of traumatic brain injury (TBI) resulting in a persistent need for a neurosurgical capability within the deployed theater of operations. Despite this, no doctrinal requirement for a deployed neurosurgical capability exists. Through an iterative process, the Joint Trauma System Committee on Surgical Combat Casualty Care (CoSCCC) developed a position statement to inform medical and nonmedical military leaders about the risks of the lack of a specialized neurosurgical capability. METHODS: The need for a deployed neurosurgical capability position statement was identified during the spring 2021 CoSCCC meeting. A triservice working group of experienced forward-deployed caregivers developed a preliminary statement. An extensive iterative review process was then conducted to ensure that the intended messaging was clear to senior medical leaders and operational commanders. To provide additional context and a civilian perspective, statement commentaries were solicited from civilian clinical experts including a recently retired military trauma surgeon boarded in neurocritical care, a trauma surgeon instrumental in developing the Brain Injury Guidelines, a practicing neurosurgeon with world-renowned expertise in TBI, and the chair of the Committee on Trauma. RESULTS: After multiple revisions, the position statement was finalized and approved by the CoSCCC membership in February 2023. Challenges identified include (1) military neurosurgeon attrition, (2) the lack of a doctrinal neurosurgical capabilities requirement during deployed combat operations, and (3) the need for neurosurgical telemedicine capability and in-theater computed tomography scans to triage TBI casualties requiring neurosurgical care.
CONCLUSION: Challenges identified regarding neurosurgical capabilities within the deployed trauma system include military neurosurgeon attrition and the lack of a doctrinal requirement for neurosurgical capability during deployed combat operations. To mitigate risk to the force in a future peer-peer conflict, several evidence-based recommendations are made. The solicited civilian commentaries strengthen these recommendations by putting them into the context of civilian TBI management. This neurosurgical capabilities position statement is intended to be a forcing function and a communication tool to inform operational commanders and military medical leaders on the use of these teams on current and future battlefields. LEVEL OF EVIDENCE: Prognostic and Epidemiological; Level V.
Subject(s)
Traumatic Brain Injuries , Brain Injuries , Military Medicine , Military Personnel , Humans , Traumatic Brain Injuries/surgery
ABSTRACT
Purpose of Review: The US Navy has a long history of responding to disasters around the globe. US Navy ships have unique characteristics and capabilities that determine their capacity for a disaster response. This paper discusses common considerations and lessons learned from three distinct disaster missions. Recent Findings: The 2010 earthquake in Haiti had a robust response with multiple US Navy ship platforms. It was best assessed in three phases: an initial mass casualty response, a subacute response, and a humanitarian response. The 2017 response to Hurricane Maria had a significant focus on treating patients with acute needs secondary to chronic illnesses to decrease the burden on the local healthcare system. The COVID-19 response brought distinctive challenges as it was the first mission where hospital ships were utilized in an infectious disease deployment. Summary: The first ships to respond to a disaster will need to focus on triage and acute traumatic injury. After this first phase, the ship's medical assets will need to focus on providing care in a disrupted health care system which most often includes acute exacerbations of chronic disease. Surgeons must be ready to be flexible in their responsibilities, be competent with end-of-life care, and negotiate technical and cultural communication challenges.
ABSTRACT
Background: The Military Health System must develop and sustain experienced surgical trauma teams while facing decreased surgical volumes both during and between deployments. Military trauma resources may enhance local trauma systems by accepting civilian patients for care at military treatment facilities (MTFs). Some MTFs may be able to augment their regional trauma systems by developing trauma center (TC) capabilities. The aim of this study was to evaluate the geographical proximity of MTFs to the continental US (CONUS) population and relative to existing civilian adult TCs, and then to determine which MTFs might benefit most from TC development. Methods: Publicly available data were used to develop a list of CONUS adult civilian level 1 and level 2 TCs and also to generate a list of CONUS MTFs. Census data were used to estimate adult population densities across zip codes. Distances were calculated between zip codes and civilian TCs and MTFs. The affected population sizes and reductions in distance were tabulated for every zip code that was found to be closer to an MTF than an existing TC. Results: 562 civilian adult level 1 and level 2 TCs and 33 military medical centers and hospitals were identified. Compared with their closest civilian TCs, MTFs showed mean reductions in distance ranging from 0 to 30 miles, affecting populations ranging from 12 000 to over 900 000 adults. Seven MTFs were identified that would offer clinically significant reductions in distance to relatively large population centers. Discussion: Some MTFs may offer decreased transit times and improved care to large adult populations within their regional trauma systems by developing level 1 or level 2 TC capabilities. The results of this study provide recommendations to focus further study on seven MTFs to identify those that merit further development and integration with their local trauma systems. Level of evidence: IV.
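The Methods describe a nearest-facility comparison: for each zip code, compute the distance to the closest civilian TC and the closest MTF, then tabulate the affected population wherever the MTF is closer. A minimal sketch of that computation is below; the function names, data layout, and toy coordinates are illustrative assumptions, not the study's actual pipeline.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in statute miles."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 3958.8 * 2 * asin(sqrt(a))

def mtf_gains(zip_rows, tc_sites, mtf_sites):
    """For each zip code (id, lat, lon, adult population), compare the distance
    to the nearest civilian TC versus the nearest MTF; return the affected
    population and reduction in distance wherever the MTF is closer."""
    gains = []
    for zip_code, lat, lon, adults in zip_rows:
        d_tc = min(haversine_miles(lat, lon, t[0], t[1]) for t in tc_sites)
        d_mtf = min(haversine_miles(lat, lon, m[0], m[1]) for m in mtf_sites)
        if d_mtf < d_tc:
            gains.append((zip_code, adults, d_tc - d_mtf))
    return gains

# Hypothetical example: one zip-code centroid, one civilian TC, one MTF.
zips = [("92101", 32.72, -117.16, 50_000)]
tcs = [(33.00, -117.30)]
mtfs = [(32.71, -117.15)]
print(mtf_gains(zips, tcs, mtfs))
```

Summing `adults` and averaging the distance reduction over such rows yields the per-MTF figures reported in the Results.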
ABSTRACT
Objectives: The research question asked to what extent self-rated performance scores of individual surgeons correspond to assessed procedural performance abilities and to peer ratings of procedural performance during a mass casualty (MASCAL) event. Background: Self-assessment using performance rating scales is ubiquitous in surgical education as a proxy for direct measurement of competence. The validity and reliability of self-ratings as competency measures are susceptible to cognitive biases such as Dunning-Kruger effects, which describe how individuals over- or underestimate their own performance compared with assessments from independent sources. The ability of surgeons to accurately self-assess their procedural performance remains undetermined. Methods: A purposive sample of military surgeons (N = 13) who collectively cared for trauma patients during a MASCAL event participated in the study. Pre-event performance assessment scores for 32 trauma procedures were compared with post-event self and peer performance ratings using F tests (P < 0.05) and effect sizes (Cohen's d). Results: There were no significant differences between peer ratings and performance assessment scores. There were significant differences between self-ratings and both peer ratings (P < 0.001) and performance assessment scores (P < 0.001). Effect sizes were very large for the self-to-peer rating comparison (Cohen's d = 2.34) and the self-to-performance-assessment comparison (Cohen's d = 2.77). Conclusions: The outcomes demonstrate that self-ratings were significantly lower than the independently determined assessment scores for each surgeon, revealing a Dunning-Kruger effect for highly skilled individuals underestimating their abilities. These outcomes underscore the limitations of self-assessment for measuring competence.
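The effect sizes reported above can be reproduced with the pooled-standard-deviation form of Cohen's d, which is one common definition (the paper does not state which variant it used); the groups below are invented illustrations, not study data.

```python
from math import sqrt
from statistics import mean, stdev

def cohens_d(group_a, group_b):
    """Cohen's d for two independent groups, using the pooled sample SD.
    Values near 0.8 are conventionally 'large'; 2+ is very large."""
    na, nb = len(group_a), len(group_b)
    pooled = sqrt(((na - 1) * stdev(group_a) ** 2 +
                   (nb - 1) * stdev(group_b) ** 2) / (na + nb - 2))
    return (mean(group_a) - mean(group_b)) / pooled

# Hypothetical ratings: self-ratings systematically below peer ratings.
self_ratings = [1, 2, 3]
peer_ratings = [3, 4, 5]
print(cohens_d(self_ratings, peer_ratings))  # negative: self < peer
```

A negative d here corresponds to the study's finding that self-ratings sat below both peer ratings and the independent assessment scores.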
ABSTRACT
The causative bacterium of Lyme disease, Borrelia burgdorferi, expanded from an undetected human pathogen into the etiologic agent of the most common vector-borne disease in the United States over the last several decades. Systematic field collections of the tick vector reveal increases in the geographic range and prevalence of B. burgdorferi-infected ticks that coincided with increases in human Lyme disease incidence across New York State. We investigate the impact of environmental features on the population dynamics of B. burgdorferi. Analytical models developed using field collections of nearly 19,000 nymphal Ixodes scapularis and spatially and temporally explicit environmental features accurately explained the variation in the nymphal infection prevalence of B. burgdorferi across space and time. Importantly, the model identified environmental features reflecting landscape ecology, vertebrate hosts, climatic metrics, climate anomalies and surveillance efforts that can be used to predict the biogeographical patterns of B. burgdorferi-infected ticks into future years and in previously unsampled areas. Forecasting the distribution and prevalence of a pathogen at fine geographic scales offers a powerful strategy to mitigate a serious public health threat. Synthesis and applications. A decade of environmental and tick data was collected to create a model that accurately predicts the infection prevalence of Borrelia burgdorferi over space and time. This predictive model can be extrapolated to create a high-resolution risk map of the Lyme disease pathogen for future years that offers an inexpensive approach to improve both ecological management and public health strategies to mitigate disease risk.
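The prevalence-prediction idea above can be sketched as a binomial logistic model: each site contributes a count of infected nymphs out of nymphs tested, and prevalence is modeled as a logistic function of environmental features. This is a minimal from-scratch sketch under that assumption; the study's actual model, covariates, and fitting procedure are richer, and the feature values and counts below are invented.

```python
from math import exp

def sigmoid(z):
    return 1.0 / (1.0 + exp(-z))

def fit_prevalence_model(X, infected, tested, lr=0.1, epochs=2000):
    """Fit per-site infection prevalence as a logistic function of
    environmental features by gradient ascent on the binomial
    log-likelihood. Returns [intercept, w1, w2, ...]."""
    w = [0.0] * (len(X[0]) + 1)
    for _ in range(epochs):
        for xi, k, n in zip(X, infected, tested):
            p = sigmoid(w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi)))
            g = (k - n * p) / n  # residual fraction: observed - predicted
            w[0] += lr * g
            for j, xj in enumerate(xi):
                w[j + 1] += lr * g * xj
    return w

def predict(w, x):
    """Predicted infection prevalence at a site with features x."""
    return sigmoid(w[0] + sum(wj * xj for wj, xj in zip(w[1:], x)))

# Two hypothetical sites with one environmental feature each:
# 10/100 nymphs infected at feature=0.0, 40/100 at feature=1.0.
w = fit_prevalence_model([[0.0], [1.0]], [10, 40], [100, 100])
print(predict(w, [0.0]), predict(w, [1.0]))
```

Extrapolating `predict` over a grid of feature values for unsampled locations is the mechanism behind the risk-map idea described in the abstract.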