ABSTRACT
The Human Phenotype Ontology (HPO) is a widely used resource that comprehensively organizes and defines the phenotypic features of human disease, enabling computational inference and supporting genomic and phenotypic analyses through semantic similarity and machine learning algorithms. The HPO has widespread applications in clinical diagnostics and translational research, including genomic diagnostics, gene-disease discovery, and cohort analytics. In recent years, groups around the world have developed translations of the HPO from English into other languages, and the HPO browser has been internationalized, allowing users to view HPO term labels, and in many cases synonyms and definitions, in ten languages in addition to English. Since our last report, a total of 2,239 new HPO terms and 49,235 new HPO annotations have been developed, many in collaboration with external groups in the fields of psychiatry, arthrogryposis, immunology, and cardiology. The Medical Action Ontology (MAxO) is a new effort to model treatments and other measures taken for clinical management. Finally, the HPO consortium is contributing to efforts to integrate the HPO and the GA4GH Phenopacket Schema into electronic health records (EHRs), with the goal of a more standardized and computable integration of rare disease data in EHRs.
Subject(s)
Biological Ontologies , Humans , Phenotype , Genomics , Algorithms , Rare Diseases
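The semantic similarity computations mentioned above rely on the ontology's is-a hierarchy. Below is a minimal sketch of Resnik-style similarity between two HPO terms; the miniature DAG fragment and the annotation frequencies used to derive information content are illustrative assumptions, not actual HPO content.

```python
# Minimal sketch of Resnik-style semantic similarity between HPO terms.
# The tiny DAG fragment and annotation frequencies below are illustrative
# assumptions, not real HPO content.
import math

# child -> parents (toy fragment of the HPO is-a hierarchy)
PARENTS = {
    "HP:0001250": {"HP:0012638"},   # Seizure -> Abnormal nervous system physiology
    "HP:0002069": {"HP:0001250"},   # Bilateral tonic-clonic seizure -> Seizure
    "HP:0012638": {"HP:0000707"},   # -> Abnormality of the nervous system
    "HP:0000707": {"HP:0000001"},   # -> All
    "HP:0000001": set(),
}

# Illustrative annotation frequencies used to derive information content (IC).
FREQ = {"HP:0000001": 1.0, "HP:0000707": 0.4, "HP:0012638": 0.2,
        "HP:0001250": 0.05, "HP:0002069": 0.01}

def ancestors(term: str) -> set[str]:
    """All ancestors of a term, including the term itself."""
    seen, stack = set(), [term]
    while stack:
        t = stack.pop()
        if t not in seen:
            seen.add(t)
            stack.extend(PARENTS.get(t, ()))
    return seen

def resnik(t1: str, t2: str) -> float:
    """Resnik similarity: IC of the most informative common ancestor."""
    common = ancestors(t1) & ancestors(t2)
    return max(-math.log(FREQ[t]) for t in common)

print(resnik("HP:0002069", "HP:0001250"))  # IC of their shared ancestor, Seizure
```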
ABSTRACT
Mendelian disorders are prevalent in neonatal and pediatric intensive care units and are a leading cause of morbidity and mortality in these settings. Current diagnostic pipelines that integrate phenotypic and genotypic data are expert-dependent and time-intensive. Artificial intelligence (AI) tools may help address these challenges. Dx29 is an open-source AI tool designed for use by clinicians. It analyzes the patient's phenotype and genotype to generate a ranked differential diagnosis. We used Dx29 to retrospectively analyze 25 acutely ill infants who had been diagnosed with a Mendelian disorder, using a targeted panel of ~5000 genes. For each case, a trio (proband and both parents) file containing gene variant information was analyzed alongside the patient's phenotype, which was provided to Dx29 by three approaches: (1) AI extraction from medical records, (2) AI extraction with manual review/editing, and (3) manual entry. We then identified the rank of the correct diagnosis in Dx29's differential diagnosis. With these three approaches, Dx29 ranked the correct diagnosis in the top 10 in 92-96% of cases. These results suggest that non-expert use of Dx29's automated phenotyping and subsequent data analysis may compare favorably to standard workflows used by bioinformatics experts to analyze genomic data and diagnose Mendelian diseases.
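As a small illustration of the evaluation above, the following sketch computes top-k rates from the per-case rank of the correct diagnosis; the rank values are hypothetical stand-ins, not the study's data.

```python
# Sketch: computing top-k accuracy from the rank of the correct diagnosis
# in each case's differential diagnosis. The ranks below are made up.
def top_k_rate(ranks: list[int], k: int) -> float:
    """Fraction of cases where the correct diagnosis ranked within the top k."""
    return sum(r <= k for r in ranks) / len(ranks)

# Hypothetical ranks of the correct diagnosis for 25 cases (one per case).
ranks = [1, 1, 2, 1, 3, 1, 1, 5, 2, 1, 1, 8, 1, 2, 1, 4, 1, 1, 2, 30, 1, 6, 1, 3, 1]

for k in (1, 5, 10):
    print(f"top-{k}: {top_k_rate(ranks, k):.0%}")  # e.g. top-10: 96%
```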
ABSTRACT
Radiation therapy treatments are typically planned based on a single image set, assuming that the patient's anatomy and its position relative to the delivery system remain constant during the course of treatment. Similarly, the prescription dose assumes a constant biological dose-response over the treatment course. However, variations can and do occur on multiple time scales. For treatment sites with significant intra-fractional motion, geometric changes happen over seconds or minutes, while biological considerations change over days or weeks. At an intermediate timescale, geometric changes occur between daily treatment fractions. Adaptive radiation therapy is applied to account for changes in patient anatomy during the course of fractionated treatment delivery. While adaptation has traditionally been done off-line, with replanning based on new CT images, on-line treatment adaptation based on on-board imaging has gained momentum in recent years due to the integration of advanced imaging techniques with treatment delivery systems. Adaptation is particularly important in proton therapy, where small changes in patient anatomy can lead to significant dose perturbations due to the dose conformality and finite range of proton beams. This review summarizes the current state of the art in on-line adaptive proton therapy and identifies areas requiring further research.
Subject(s)
Proton Therapy , Humans , Radiotherapy Dosage , Radiotherapy Planning, Computer-Assisted/methods
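To make the range-sensitivity argument concrete, the sketch below computes the water-equivalent path length (WEPL) along a single beam ray and shows how a small anatomical change shifts it; the voxel stopping-power values are illustrative, not patient data.

```python
# Sketch: water-equivalent path length (WEPL) along a proton beam ray.
# A small anatomical change (e.g. tissue replaced by air) shifts the WEPL
# and hence the proton range. Voxel values are illustrative, not patient data.
import numpy as np

voxel_mm = 2.0
# Relative stopping powers along one ray at planning time...
rsp_planned = np.array([1.0] * 40 + [0.9] * 20 + [1.05] * 40)
# ...and on the day of treatment, where 10 voxels of tissue became air-like.
rsp_daily = rsp_planned.copy()
rsp_daily[40:50] = 0.05

wepl_planned = rsp_planned.sum() * voxel_mm
wepl_daily = rsp_daily.sum() * voxel_mm
print(f"WEPL shift: {wepl_planned - wepl_daily:.1f} mm water-equivalent")
# A shift of this size moves the Bragg peak correspondingly, motivating
# on-line plan adaptation.
```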
ABSTRACT
Human Phenotype Ontology (HPO) terms are increasingly used in diagnostic settings to aid in the characterization of patient phenotypes. The HPO annotation database is updated frequently and provides detailed phenotype knowledge on various human diseases, and many HPO terms are now mapped to candidate causal genes with binary relationships. To further improve the genetic diagnosis of rare diseases, we incorporated these HPO annotations, gene-disease databases and gene-gene databases into a probabilistic model to build a novel HPO-driven gene prioritization tool, Phen2Gene. Phen2Gene accesses a database built upon this information, called the HPO2Gene Knowledgebase (H2GKB), which provides weighted and ranked gene lists for every HPO term. Phen2Gene can then query the H2GKB with patient-specific lists of HPO terms or PhenoPacket descriptions supported by GA4GH (http://phenopackets.org/), calculate a prioritized gene list based on a probabilistic model, and output gene-disease relationships with high accuracy. Phen2Gene outperforms existing gene prioritization tools in speed and acts as a real-time phenotype-driven gene prioritization tool to aid the clinical diagnosis of rare undiagnosed diseases. In addition to a command line tool released under the MIT license (https://github.com/WGLab/Phen2Gene), we also developed a web server and web service (https://phen2gene.wglab.org/) for running the tool via web interface or RESTful API queries. Finally, we have curated a large amount of benchmarking data for phenotype-to-gene tools, involving 197 patients across 76 scientific articles and de-identified HPO term data from 85 patients at the Children's Hospital of Philadelphia.
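The following sketch illustrates the general idea of aggregating per-HPO-term weighted gene lists into a prioritized gene list; it is not Phen2Gene's actual probabilistic model, and the H2GKB-style entries and weights are hypothetical.

```python
# Sketch of HPO-driven gene prioritization by aggregating per-term weighted
# gene lists (in the spirit of the H2GKB lookup). This is NOT Phen2Gene's
# actual probabilistic model, and all weights below are hypothetical.
from collections import defaultdict

# Hypothetical H2GKB-style entries: HPO term -> {gene: weight}
h2gkb = {
    "HP:0001250": {"SCN1A": 0.9, "KCNQ2": 0.7, "MECP2": 0.3},
    "HP:0001263": {"MECP2": 0.8, "SCN1A": 0.4, "FMR1": 0.6},
}

def prioritize(patient_terms: list[str]) -> list[tuple[str, float]]:
    """Sum per-term gene weights and rank genes by total score."""
    scores: dict[str, float] = defaultdict(float)
    for term in patient_terms:
        for gene, w in h2gkb.get(term, {}).items():
            scores[gene] += w
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

print(prioritize(["HP:0001250", "HP:0001263"]))
# [('SCN1A', 1.3), ('MECP2', 1.1), ('KCNQ2', 0.7), ('FMR1', 0.6)]
```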
ABSTRACT
The purpose of this study was to investigate internal tumor volume density overwrite strategies to minimize intensity modulated proton therapy (IMPT) plan degradation for mobile lung tumors. Four planning paradigms were compared for nine lung cancer patients. Internal gross tumor volume (IGTV) and internal clinical target volume (ICTV) structures were defined encompassing their respective volumes in every 4DCT phase. Each paradigm uses a planning CT (pCT) created from the average intensity projection (AIP) of the 4DCT, overwriting the density within the IGTV to account for movement. The density overwrites were: (a) constant filling with 100 HU (C100), (b) constant filling with 50 HU (C50), (c) maximum intensity projection (MIP) across phases, and (d) water equivalent path length (WEPL) consideration from beam's-eye-view. Plans were created by optimizing dose-influence matrices calculated with fast GPU Monte Carlo (MC) simulations in each pCT. Plans were evaluated with MC on the 4DCTs using a model of the beam delivery time structure. Dose accumulation was performed using deformable image registration. The interplay effect was addressed by applying 10-times rescanning. Significantly less degradation of DVH metrics occurred when using the MIP and WEPL approaches. Target coverage ([Formula: see text] Gy(RBE)) was fulfilled in most cases with MIP and WEPL ([Formula: see text] Gy(RBE)), keeping dose heterogeneity low ([Formula: see text] Gy(RBE)). The mean lung dose was kept lowest by the WEPL strategy, as was the maximum dose to organs at risk (OARs). The impact on dose levels in the heart, spinal cord and esophagus was patient specific. Overall, the WEPL strategy gives the best performance and should be preferred when using a 3D static geometry for lung cancer IMPT treatment planning. Newly available fast MC methods make it possible to handle long simulations based on 4D data sets to perform studies with high accuracy and efficiency, even prior to individual treatment planning.
Subject(s)
Carcinoma, Non-Small-Cell Lung/radiotherapy , Lung Neoplasms/radiotherapy , Movement , Organs at Risk/radiation effects , Proton Therapy/methods , Radiotherapy Planning, Computer-Assisted/methods , Radiotherapy, Intensity-Modulated/methods , Carcinoma, Non-Small-Cell Lung/pathology , Humans , Lung Neoplasms/pathology , Monte Carlo Method , Radiotherapy Dosage , Retrospective Studies , Tumor Burden
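As a concrete illustration of the MIP overwrite strategy described above, the sketch below replaces AIP voxel values inside the IGTV with the per-voxel maximum across 4DCT phases; the image arrays are synthetic stand-ins for CT data.

```python
# Sketch of the MIP density-overwrite strategy: within the IGTV mask, replace
# the average-intensity-projection (AIP) HU values with the per-voxel maximum
# across all 4DCT phases. Arrays here are synthetic stand-ins for CT data.
import numpy as np

rng = np.random.default_rng(0)
# 10 breathing phases of a toy 64^3 CT volume, in Hounsfield units
phases = rng.integers(-800, 100, size=(10, 64, 64, 64)).astype(np.float32)

aip = phases.mean(axis=0)                   # average intensity projection pCT
mip = phases.max(axis=0)                    # maximum intensity projection

igtv_mask = np.zeros(aip.shape, dtype=bool)
igtv_mask[24:40, 24:40, 24:40] = True       # toy IGTV region

pct_mip = aip.copy()
pct_mip[igtv_mask] = mip[igtv_mask]         # overwrite density inside the IGTV
```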
ABSTRACT
PURPOSE: We describe a treatment plan optimization method for intensity modulated proton therapy (IMPT) that avoids high values of linear energy transfer (LET) in critical structures located within or near the target volume while limiting degradation of the best possible physical dose distribution. METHODS AND MATERIALS: To allow fast optimization based on dose and LET, a GPU-based Monte Carlo code was extended to provide dose-averaged LET in addition to dose for all pencil beams. After optimizing an initial IMPT plan based on physical dose, a prioritized optimization scheme is used to modify the LET distribution while constraining the physical dose objectives to values close to the initial plan. The LET optimization step is performed based on objective functions evaluated for the product of LET and physical dose (LET×D). To a first approximation, LET×D represents a measure of the additional biological dose that is caused by high LET. RESULTS: The method is effective for treatments in which serial critical structures with maximum dose constraints are located within or near the target. We report on 5 patients with intracranial tumors (high-grade meningiomas, base-of-skull chordomas, ependymomas) in whom the target volume overlaps with the brainstem and optic structures. In all cases, high LET×D in critical structures could be avoided while minimally compromising the physical dose planning objectives. CONCLUSION: LET-based reoptimization of IMPT plans represents a pragmatic approach to bridge the gap between purely physical dose-based and relative biological effectiveness (RBE)-based planning. The method makes IMPT treatments safer by mitigating a potentially increased risk of side effects resulting from elevated RBE of proton beams near the end of range.
Subject(s)
Brain Neoplasms/diagnostic imaging , Brain Neoplasms/radiotherapy , Linear Energy Transfer , Organs at Risk , Proton Therapy/methods , Radiotherapy Planning, Computer-Assisted/methods , Radiotherapy, Intensity-Modulated/methods , Brain Stem/diagnostic imaging , Chordoma/diagnostic imaging , Chordoma/radiotherapy , Ependymoma/diagnostic imaging , Ependymoma/radiotherapy , Humans , Meningeal Neoplasms/diagnostic imaging , Meningeal Neoplasms/radiotherapy , Meningioma/diagnostic imaging , Meningioma/radiotherapy , Monte Carlo Method , Optic Chiasm/diagnostic imaging , Optic Nerve/diagnostic imaging , Organs at Risk/diagnostic imaging , Quality Improvement , Radiotherapy Dosage , Relative Biological Effectiveness , Skull Base Neoplasms/diagnostic imaging , Skull Base Neoplasms/radiotherapy
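The sketch below illustrates how per-voxel dose-averaged LET and an LET×D penalty can be computed from pencil-beam dose and LET contributions; it is a simplified illustration under assumed array shapes, not the prioritized optimization scheme used in the study.

```python
# Sketch: per-voxel dose-averaged LET (LETd) from pencil-beam contributions,
# and a simple penalty on LET×D above a threshold in an OAR. This is a
# simplified illustration, not the paper's prioritized optimization.
import numpy as np

n_beamlets, n_voxels = 5, 100
rng = np.random.default_rng(1)
D = rng.random((n_beamlets, n_voxels))        # dose-influence matrix (Gy per unit weight)
L = rng.random((n_beamlets, n_voxels)) * 10   # LET of each beamlet in each voxel (keV/um)
w = np.ones(n_beamlets)                       # beamlet weights from the dose-based plan

dose = w @ D                                   # physical dose per voxel
letd = (w @ (L * D)) / np.maximum(dose, 1e-9)  # dose-averaged LET per voxel
let_x_d = letd * dose                          # LET×D, a proxy for extra biological dose

oar = np.arange(40, 60)                        # toy OAR voxel indices
threshold = 4.0                                # assumed LET×D tolerance
penalty = np.sum(np.maximum(let_x_d[oar] - threshold, 0.0) ** 2)
print(f"LET×D penalty in OAR: {penalty:.2f}")
```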
ABSTRACT
Monte Carlo (MC) simulation is commonly considered the most accurate dose calculation method for proton therapy. Aiming at achieving fast MC dose calculations for clinical applications, we previously developed a graphics processing unit (GPU)-based MC tool, gPMC. In this paper, we report our recent updates to gPMC in terms of its accuracy, portability, and functionality, as well as comprehensive tests of this tool. The new version, gPMC v2.0, was developed under the OpenCL environment to enable portability across different computational platforms. Physics models of nuclear interactions were refined to improve calculation accuracy. Scoring functions of gPMC were expanded to enable tallying particle fluence, dose deposited by different particle types, and dose-averaged linear energy transfer (LETd). A multiple-counter approach was employed to improve efficiency by reducing the frequency of memory writing conflicts during scoring. For dose calculation, accuracy improvements over gPMC v1.0 were observed in both water phantom cases and a patient case. For a prostate cancer case planned using high-energy proton beams, dose discrepancies in the beam entrance and target region seen in gPMC v1.0 with respect to the gold standard tool for proton Monte Carlo simulations (TOPAS) were substantially reduced, and the gamma test passing rate (1%/1 mm) was improved from 82.7% to 93.1%. The average relative difference in LETd between gPMC and TOPAS was 1.7%. The average relative differences in the dose deposited by primary, secondary, and other heavier particles were within 2.3%, 0.4%, and 0.2%, respectively. Depending on source proton energy and phantom complexity, it took 8-17 s on an AMD Radeon R9 290x GPU to simulate [Formula: see text] source protons, achieving less than [Formula: see text] average statistical uncertainty. As the beam size was reduced from 10 × 10 cm2 to 1 × 1 cm2, the time spent on scoring increased by only 4.8% with eight counters, in contrast to a 40% increase using only one counter. With the OpenCL environment, the portability of gPMC v2.0 was enhanced. It was successfully executed on different CPUs and GPUs, and its performance on different devices varied depending on processing power and hardware structure.
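The multiple-counter scoring idea can be illustrated conceptually: accumulating into one of several independent counter arrays reduces write contention, and the counters are summed at the end. The real gPMC implementation does this in OpenCL on the GPU; the Python sketch below only shows the bookkeeping.

```python
# Conceptual illustration of the multiple-counter scoring approach: simulated
# "threads" accumulate into one of K independent counter arrays (reducing
# contention on any single scoring array), and the counters are summed in a
# final reduction. This only models the bookkeeping, not GPU execution.
import numpy as np

n_voxels, n_counters = 1000, 8
counters = np.zeros((n_counters, n_voxels))

def score(thread_id: int, voxel: int, deposit: float) -> None:
    """Each simulated thread writes to counters[thread_id % K], so two
    threads rarely contend for the same counter array."""
    counters[thread_id % n_counters, voxel] += deposit

rng = np.random.default_rng(2)
for tid in range(10_000):                      # simulated particle histories
    score(tid, rng.integers(n_voxels), rng.random())

dose = counters.sum(axis=0)                    # final reduction over counters
```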