ABSTRACT
Despite increasing numbers of regulatory approvals, deep learning-based computational pathology systems often overlook the impact of demographic factors on performance, potentially leading to biases. This concern is all the more important as computational pathology has leveraged large public datasets that underrepresent certain demographic groups. Using publicly available data from The Cancer Genome Atlas and the EBRAINS brain tumor atlas, as well as internal patient data, we show that whole-slide image classification models display marked performance disparities across different demographic groups when used to subtype breast and lung carcinomas and to predict IDH1 mutations in gliomas. For example, when using common modeling approaches, we observed performance gaps (in area under the receiver operating characteristic curve) between white and Black patients of 3.0% for breast cancer subtyping, 10.9% for lung cancer subtyping and 16.0% for IDH1 mutation prediction in gliomas. We found that richer feature representations obtained from self-supervised vision foundation models reduce performance variations between groups. These representations provide improvements upon weaker models even when those weaker models are combined with state-of-the-art bias mitigation strategies and modeling choices. Nevertheless, self-supervised vision foundation models do not fully eliminate these discrepancies, highlighting the continuing need for bias mitigation efforts in computational pathology. Finally, we demonstrate that our results extend to other demographic factors beyond patient race. Given these findings, we encourage regulatory and policy agencies to integrate demographic-stratified evaluation into their assessment guidelines.
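The demographic-stratified evaluation described above reduces to computing an AUC per subgroup and comparing them. Here is a minimal sketch in plain Python; the `stratified_auc_gap` helper and the toy data are illustrative, not the authors' code:

```python
def auc(labels, scores):
    """Area under the ROC curve via pairwise comparison
    (equivalent to the Mann-Whitney U statistic); O(n^2), fine for a sketch."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def stratified_auc_gap(labels, scores, groups):
    """Per-group AUCs and the largest gap between any two groups."""
    per_group = {}
    for g in sorted(set(groups)):
        idx = [i for i, gi in enumerate(groups) if gi == g]
        per_group[g] = auc([labels[i] for i in idx], [scores[i] for i in idx])
    return per_group, max(per_group.values()) - min(per_group.values())

# Toy data: the model scores group "b" far less reliably than group "a".
per_group, gap = stratified_auc_gap(
    labels=[1, 0, 1, 0, 1, 0, 1, 0],
    scores=[0.9, 0.1, 0.8, 0.2, 0.6, 0.5, 0.4, 0.7],
    groups=["a", "a", "a", "a", "b", "b", "b", "b"],
)
```

On this toy data the gap is 0.75 (AUC 1.0 for group "a" vs 0.25 for group "b"); the performance gaps reported in the abstract are differences of exactly this kind.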
Subjects
Glioma, Lung Neoplasms, Humans, Bias, Black or African American, Black People, Demography, Diagnostic Errors, Glioma/diagnosis, Glioma/genetics, Whites
ABSTRACT
Background: The diffuse growth pattern of glioblastoma is one of the main challenges for accurate treatment. Computational tumor growth modeling has emerged as a promising tool to guide personalized therapy. Here, we performed clinical and biological validation of a novel growth model, aiming to close the gap between the experimental state and clinical implementation. Methods: One hundred and twenty-four patients from The Cancer Genome Atlas (TCGA) and 397 patients from the UCSF Glioma Dataset were assessed for significant correlations between clinical data, genetic pathway activation maps (generated with PARADIGM; TCGA only), and infiltration (Dw) as well as proliferation (ρ) parameters stemming from a Fisher-Kolmogorov growth model. To further evaluate clinical potential, we performed the same growth modeling on preoperative magnetic resonance imaging data from 30 patients of our institution and compared model-derived tumor volume and recurrence coverage with standard radiotherapy plans. Results: The parameter ratio Dw/ρ (P < .05 in TCGA) as well as the simulated tumor volume (P < .05 in TCGA/UCSF) were significantly inversely correlated with overall survival. Interestingly, we found a significant correlation between 11 proliferation pathways and the estimated proliferation parameter. Depending on the cutoff value for tumor cell density, we observed a significant improvement in recurrence coverage without significantly increased radiation volume utilizing model-derived target volumes instead of standard radiation plans. Conclusions: Identifying a significant correlation between computed growth parameters and clinical and biological data, we highlight the potential of tumor growth modeling for individualized therapy of glioblastoma. This might improve the accuracy of radiation planning in the near future.
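The Fisher-Kolmogorov model above couples infiltration (Dw) and proliferation (ρ) in a reaction-diffusion equation, ∂u/∂t = Dw ∇²u + ρ u(1 − u), for the normalized tumor cell density u. A minimal 1D finite-difference sketch follows; the actual model runs in 3D on patient anatomy, and the parameter values here are arbitrary:

```python
def fisher_kolmogorov_1d(u, dw, rho, dx, dt, steps):
    """Explicit finite-difference integration of du/dt = dw*u_xx + rho*u*(1-u).
    Zero-flux (Neumann) boundaries; stable if dt <= dx**2 / (2*dw)."""
    u = list(u)
    for _ in range(steps):
        nxt = u[:]
        for i in range(len(u)):
            left = u[i - 1] if i > 0 else u[i]
            right = u[i + 1] if i < len(u) - 1 else u[i]
            lap = (left - 2 * u[i] + right) / dx**2   # discrete Laplacian
            nxt[i] = u[i] + dt * (dw * lap + rho * u[i] * (1 - u[i]))
        u = nxt
    return u
```

Starting from a small seed density, diffusion widens the simulated lesion while logistic growth drives the core toward saturation; thresholding `u` at a chosen cell-density cutoff then yields a model-derived target volume of the kind compared against standard radiation plans above.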
ABSTRACT
Biophysical modeling, particularly involving partial differential equations (PDEs), offers significant potential for tailoring disease treatment protocols to individual patients. However, the inverse problem-solving aspect of these models presents a substantial challenge, either due to the high computational requirements of model-based approaches or the limited robustness of deep learning (DL) methods. We propose a novel framework that leverages the unique strengths of both approaches in a synergistic manner. Our method incorporates a DL ensemble for initial parameter estimation, facilitating efficient downstream evolutionary sampling initialized with this DL-based prior. We showcase the effectiveness of integrating a rapid deep-learning algorithm with a high-precision evolution strategy in estimating brain tumor cell concentrations from magnetic resonance images. The DL prior plays a pivotal role, significantly constraining the effective sampling-parameter space. This reduction results in a fivefold convergence acceleration and a Dice score of 95%.
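The two-stage idea, a DL estimate that initializes and constrains downstream evolutionary sampling, can be sketched with a toy evolution strategy. Everything here is illustrative: `misfit` stands in for the simulation-vs-MRI discrepancy, and the prior mean and spread would come from the DL ensemble rather than being hand-picked:

```python
import random

def evolution_strategy(loss, mean, sigma, iters=60, pop=20, seed=0):
    """Minimal elitist evolution strategy: sample candidates around the
    current best, keep the best-scoring one, shrink the step size slowly.
    `mean`/`sigma` encode the prior, e.g. a DL-based parameter estimate."""
    rng = random.Random(seed)
    best = list(mean)
    for _ in range(iters):
        cands = [[m + rng.gauss(0, sigma) for m in best] for _ in range(pop)]
        cands.append(best)  # elitism: the incumbent always competes
        best = min(cands, key=loss)
        sigma *= 0.95
    return best

# Toy stand-in for the simulation misfit, with its optimum at (0.8, 0.02),
# here playing the role of two tumor-model parameters.
def misfit(theta):
    return (theta[0] - 0.8) ** 2 + (theta[1] - 0.02) ** 2

# A tighter `mean`/`sigma` from a DL prior would shrink the search space
# further; here the prior is hand-picked for illustration.
best = evolution_strategy(misfit, mean=[0.5, 0.5], sigma=0.3)
```

The elitist step guarantees the loss never increases, so a well-placed DL prior directly translates into fewer iterations to reach a given misfit, which is the acceleration mechanism the abstract describes.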
ABSTRACT
In this work, we report the set-up and results of the Liver Tumor Segmentation Benchmark (LiTS), which was organized in conjunction with the IEEE International Symposium on Biomedical Imaging (ISBI) 2017 and the International Conferences on Medical Image Computing and Computer-Assisted Intervention (MICCAI) 2017 and 2018. The image dataset is diverse and contains primary and secondary tumors with varied sizes and appearances and various lesion-to-background levels (hyper-/hypo-dense), created in collaboration with seven hospitals and research institutions. Seventy-five submitted liver and liver tumor segmentation algorithms were trained on a set of 131 computed tomography (CT) volumes and were tested on 70 unseen test images acquired from different patients. We found that no single algorithm performed best for both liver and liver tumors across the three events. The best liver segmentation algorithm achieved a Dice score of 0.963, whereas, for tumor segmentation, the best algorithms achieved Dice scores of 0.674 (ISBI 2017), 0.702 (MICCAI 2017), and 0.739 (MICCAI 2018). Retrospectively, we performed additional analysis of liver tumor detection and revealed that not all top-performing segmentation algorithms worked well for tumor detection. The best liver tumor detection method achieved a lesion-wise recall of 0.458 (ISBI 2017), 0.515 (MICCAI 2017), and 0.554 (MICCAI 2018), indicating the need for further research. LiTS remains an active benchmark and resource for research, e.g., contributing the liver-related segmentation tasks to http://medicaldecathlon.com/. In addition, both the data and online evaluation are accessible via https://competitions.codalab.org/competitions/17094.
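The Dice score used to rank submissions measures overlap between a predicted and a reference mask. A minimal sketch on flat binary masks (illustrative, not the challenge's official evaluation code):

```python
def dice(pred, truth):
    """Dice similarity coefficient between two flat binary masks:
    2 * |intersection| / (|pred| + |truth|)."""
    inter = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    # Convention: two empty masks count as a perfect match.
    return 2 * inter / total if total else 1.0
```

Dice rewards voxel-wise overlap, which is why, as the retrospective analysis above shows, a high Dice score on large lesions can coexist with poor lesion-wise recall on small ones.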
Subjects
Benchmarking, Liver Neoplasms, Humans, Retrospective Studies, Liver Neoplasms/diagnostic imaging, Liver Neoplasms/pathology, Liver/diagnostic imaging, Liver/pathology, Algorithms, Image Processing, Computer-Assisted/methods
ABSTRACT
Current treatment planning for patients diagnosed with a brain tumor, such as glioma, could benefit significantly from access to the spatial distribution of tumor cell concentration. Existing diagnostic modalities, e.g. magnetic resonance imaging (MRI), depict areas of high cell density with sufficient contrast. In gliomas, however, they do not portray areas of low cell concentration, which can often serve as a source of tumor recurrence after treatment. To estimate tumor cell densities beyond the visible boundaries of the lesion, numerical simulations of tumor growth could complement imaging information by providing estimates of the full spatial distribution of tumor cells. Over recent years, a corpus of literature on medical image-based tumor modeling has been published. It includes different mathematical formalisms describing the forward tumor growth model. Alongside these, various parametric inference schemes have been developed to perform efficient tumor model personalization, i.e. solving the inverse problem. However, the common drawback of all existing approaches is the time complexity of the model personalization, which prohibits integration of the modeling into clinical settings. In this work, we introduce a deep learning-based methodology for inferring the patient-specific spatial distribution of brain tumors from T1Gd and FLAIR MRI scans. Coined Learn-Morph-Infer, the method achieves real-time performance on the order of minutes on widely available hardware, and the compute time is stable across tumor models of different complexity, such as reaction-diffusion and reaction-advection-diffusion models. We believe the proposed inverse-solution approach not only paves the way for clinical translation of brain tumor model personalization but can also be adapted to other scientific and engineering domains.
Subjects
Brain Neoplasms, Humans, Brain Neoplasms/diagnostic imaging
ABSTRACT
In oncology, the patient state is characterized by a whole spectrum of modalities, ranging from radiology, histology, and genomics to electronic health records. Current artificial intelligence (AI) models operate mainly in the realm of a single modality, neglecting the broader clinical context, which inevitably diminishes their potential. Integration of different data modalities provides opportunities to increase robustness and accuracy of diagnostic and prognostic models, bringing AI closer to clinical practice. AI models are also capable of discovering novel patterns within and across modalities suitable for explaining differences in patient outcomes or treatment resistance. The insights gleaned from such models can guide exploration studies and contribute to the discovery of novel biomarkers and therapeutic targets. To support these advances, here we present a synopsis of AI methods and strategies for multimodal data fusion and association discovery. We outline approaches for AI interpretability and directions for AI-driven exploration through multimodal data interconnections. We examine challenges in clinical adoption and discuss emerging solutions.
Subjects
Artificial Intelligence, Radiology, Electronic Health Records, Genomics, Humans, Medical Oncology
ABSTRACT
The rapidly emerging field of computational pathology has demonstrated promise in developing objective prognostic models from histology images. However, most prognostic models are either based on histology or genomics alone and do not address how these data sources can be integrated to develop joint image-omic prognostic models. Additionally, identifying explainable morphological and molecular descriptors from these models that govern such prognosis is of interest. We use multimodal deep learning to jointly examine pathology whole-slide images and molecular profile data from 14 cancer types. Our weakly supervised, multimodal deep-learning algorithm is able to fuse these heterogeneous modalities to predict outcomes and discover prognostic features that correlate with poor and favorable outcomes. We present all analyses for morphological and molecular correlates of patient prognosis across the 14 cancer types at both a disease and a patient level in an interactive open-access database to allow for further exploration, biomarker discovery, and feature assessment.
Subjects
Deep Learning, Neoplasms, Algorithms, Genomics/methods, Humans, Neoplasms/genetics, Neoplasms/pathology, Prognosis
ABSTRACT
Increased intracranial pressure is the source of most critical symptoms in patients with glioma, and often the main cause of death. Clinical interventions could benefit from non-invasive estimates of the pressure distribution in the patient's parenchyma provided by computational models. However, existing glioma models do not simulate the pressure distribution and they rely on a large number of model parameters, which complicates their calibration from available patient data. Here we present a novel model for glioma growth, pressure distribution and corresponding brain deformation. The distinct feature of our approach is that the pressure is directly derived from tumour dynamics and patient-specific anatomy, providing non-invasive insights into the patient's state. The model predictions allow estimation of critical conditions such as intracranial hypertension, brain midline shift or neurological and cognitive impairments. A diffuse-domain formalism is employed to allow for efficient numerical implementation of the model in the patient-specific brain anatomy. The model is tested on synthetic and clinical cases. To facilitate clinical deployment, a high-performance computing implementation of the model has been publicly released.
Subjects
Glioma, Intracranial Hypertension, Brain, Glioma/pathology, Head, Humans, Intracranial Hypertension/diagnosis, Intracranial Hypertension/etiology, Intracranial Pressure
ABSTRACT
Endomyocardial biopsy (EMB) screening represents the standard of care for detecting allograft rejections after heart transplant. Manual interpretation of EMBs is affected by substantial interobserver and intraobserver variability, which often leads to inappropriate treatment with immunosuppressive drugs, unnecessary follow-up biopsies and poor transplant outcomes. Here we present a deep learning-based artificial intelligence (AI) system for automated assessment of gigapixel whole-slide images obtained from EMBs, which simultaneously addresses detection, subtyping and grading of allograft rejection. To assess model performance, we curated a large dataset from the United States, as well as independent test cohorts from Turkey and Switzerland, which includes large-scale variability across populations, sample preparations and slide scanning instrumentation. The model detects allograft rejection with an area under the receiver operating characteristic curve (AUC) of 0.962; assesses the cellular and antibody-mediated rejection type with AUCs of 0.958 and 0.874, respectively; detects Quilty B lesions, benign mimics of rejection, with an AUC of 0.939; and differentiates between low-grade and high-grade rejections with an AUC of 0.833. In a human reader study, the AI system showed non-inferior performance to conventional assessment and reduced interobserver variability and assessment time. This robust evaluation of cardiac allograft rejection paves the way for clinical trials to establish the efficacy of AI-assisted EMB assessment and its potential for improving heart transplant outcomes.
Subjects
Deep Learning, Graft Rejection, Allografts, Artificial Intelligence, Biopsy, Graft Rejection/diagnosis, Humans, Myocardium/pathology
ABSTRACT
Modeling of brain tumor dynamics has the potential to advance therapeutic planning. Current modeling approaches resort to numerical solvers that simulate tumor progression according to a given differential equation. Even with highly efficient numerical solvers, a single forward simulation takes up to a few minutes of compute. At the same time, clinical applications of tumor modeling often imply solving an inverse problem, requiring up to tens of thousands of forward model evaluations when used for Bayesian model personalization via sampling. This results in a total inference time prohibitively expensive for clinical translation. While recent data-driven approaches have become capable of emulating physics simulations, they tend to fail to generalize over the variability of the boundary conditions imposed by patient-specific anatomy. In this paper, we propose a learnable surrogate for simulating tumor growth which maps the biophysical model parameters directly to simulation outputs, i.e. the local tumor cell densities, whilst respecting patient geometry. We test the neural solver in a Bayesian model personalization task for a cohort of glioma patients. Bayesian inference using the proposed surrogate yields estimates analogous to those obtained by solving the forward model with a regular numerical solver. The near real-time computation cost renders the proposed method suitable for clinical settings. The code is available at https://github.com/IvanEz/tumor-surrogate.
Subjects
Brain Neoplasms, Glioma, Bayes Theorem, Brain Neoplasms/diagnostic imaging, Calibration, Computer Simulation, Glioma/diagnostic imaging, Humans
ABSTRACT
Cancer of unknown primary (CUP) origin is an enigmatic group of diagnoses in which the primary anatomical site of tumour origin cannot be determined1,2. This poses a considerable challenge, as modern therapeutics are predominantly specific to the primary tumour3. Recent research has focused on using genomics and transcriptomics to identify the origin of a tumour4-9. However, genomic testing is not always performed and lacks clinical penetration in low-resource settings. Here, to overcome these challenges, we present a deep-learning-based algorithm, Tumour Origin Assessment via Deep Learning (TOAD), that can provide a differential diagnosis for the origin of the primary tumour using routinely acquired histology slides. We used whole-slide images of tumours with known primary origins to train a model that simultaneously identifies the tumour as primary or metastatic and predicts its site of origin. On our held-out test set of tumours with known primary origins, the model achieved a top-1 accuracy of 0.83 and a top-3 accuracy of 0.96, whereas on our external test set it achieved top-1 and top-3 accuracies of 0.80 and 0.93, respectively. We further curated a dataset of 317 cases of CUP for which a differential diagnosis was assigned. Our model predictions resulted in concordance for 61% of cases and a top-3 agreement of 82%. TOAD can be used as an assistive tool to assign a differential diagnosis to complicated cases of metastatic tumours and CUPs and could be used in conjunction with or in lieu of ancillary tests and extensive diagnostic work-ups to reduce the occurrence of CUP.
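The top-1 and top-3 accuracies reported above follow the usual definition: a case counts as correct if the true origin is among the k highest-scoring classes. A minimal sketch (the toy scores are illustrative, not TOAD's outputs):

```python
def top_k_accuracy(probs, labels, k):
    """Fraction of cases whose true class index is among the k
    highest-scoring classes; `probs` holds one score list per case."""
    hits = 0
    for p, y in zip(probs, labels):
        # Class indices sorted by descending score, truncated to k.
        topk = sorted(range(len(p)), key=lambda c: p[c], reverse=True)[:k]
        hits += y in topk
    return hits / len(labels)
```

For a differential-diagnosis tool, top-3 is the clinically relevant metric: the model proposes a shortlist that a pathologist then narrows down.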
Subjects
Artificial Intelligence, Computer Simulation, Neoplasms, Unknown Primary/pathology, Cohort Studies, Computer Simulation/standards, Female, Humans, Male, Neoplasm Metastasis/pathology, Neoplasms, Unknown Primary/diagnosis, Reproducibility of Results, Sensitivity and Specificity, Workflow
ABSTRACT
Despite great advances in brain tumor segmentation and clear clinical need, translation of state-of-the-art computational methods into clinical routine and scientific practice remains a major challenge. Several factors impede successful implementation, including data standardization and preprocessing. However, these steps are pivotal for the deployment of state-of-the-art image segmentation algorithms. To overcome these issues, we present BraTS Toolkit. BraTS Toolkit is a holistic approach to brain tumor segmentation and consists of three components: First, the BraTS Preprocessor facilitates data standardization and preprocessing for researchers and clinicians alike. It covers the entire image analysis workflow prior to tumor segmentation, from image conversion and registration to brain extraction. Second, the BraTS Segmentor enables orchestration of BraTS brain tumor segmentation algorithms for the generation of fully automated segmentations. Finally, the BraTS Fusionator can combine the resulting candidate segmentations into consensus segmentations using fusion methods such as majority voting and iterative SIMPLE fusion. The capabilities of our tools are illustrated with a practical example to enable easy translation to clinical and scientific practice.
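Majority voting, one of the fusion methods mentioned above, can be sketched voxel-wise on binary candidate masks (an illustrative sketch, not the toolkit's implementation):

```python
def majority_vote(masks):
    """Voxel-wise strict-majority fusion of binary candidate masks:
    a voxel is foreground if more than half of the candidates say so."""
    n = len(masks)
    return [int(2 * sum(vox) > n) for vox in zip(*masks)]
```

With a strict majority, a tie (e.g. with an even number of candidates) resolves to background; iterative SIMPLE fusion instead reweights candidates by their agreement with the evolving consensus.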
ABSTRACT
C-X-C chemokine receptor 4 (CXCR4) is a transmembrane chemokine receptor involved in growth, survival, and dissemination of cancer, including aggressive B-cell lymphoma. MRI is the standard imaging technology for central nervous system (CNS) involvement of B-cell lymphoma and provides high sensitivity but moderate specificity. Therefore, novel molecular and functional imaging strategies are urgently required. Methods: In this proof-of-concept study, 11 patients with lymphoma of the CNS (8 primary and 3 secondary involvement) were imaged with the CXCR4-directed PET tracer 68Ga-pentixafor. To evaluate the predictive value of this imaging modality, treatment response, as determined by MRI, was correlated with quantification of CXCR4 expression by 68Ga-pentixafor PET in vivo before initiation of treatment in 7 of 11 patients. Results: 68Ga-pentixafor PET showed excellent contrast with the surrounding brain parenchyma in all patients with active disease. Furthermore, initial CXCR4 uptake determined by PET correlated with subsequent treatment response as assessed by MRI. Conclusion: 68Ga-pentixafor PET represents a novel diagnostic tool for CNS lymphoma with potential implications for theranostic approaches as well as response and risk assessment.
Subjects
Central Nervous System Neoplasms/diagnostic imaging, Lymphoma, B-Cell/diagnostic imaging, Receptors, CXCR4/metabolism, Aged, Aged, 80 and over, Central Nervous System Neoplasms/therapy, Coordination Complexes, Female, Gallium Radioisotopes, Humans, Lymphoma, B-Cell/therapy, Male, Middle Aged, Peptides, Cyclic, Treatment Outcome
ABSTRACT
Diffusion tensor imaging (DTI), and fractional-anisotropy (FA) maps in particular, have shown promise in predicting areas of tumor recurrence in glioblastoma. However, analysis of peritumoral edema, where most recurrences occur, is impeded by free-water contamination. In this study, we evaluated the benefits of a novel, deep-learning-based approach for the free-water correction (FWC) of DTI data for prediction of later recurrence. We investigated 35 glioblastoma cases from our prospective glioma cohort. A preoperative MR image and the first MR scan showing tumor recurrence were semiautomatically segmented into areas of contrast-enhancing tumor, edema, or recurrence of the tumor. The 10th, 50th and 90th percentiles and mean of FA and mean-diffusivity (MD) values (both for the original and FWC-DTI data) were collected for areas with and without recurrence in the peritumoral edema. We found significant differences in the FWC-FA maps between areas of recurrence-free edema and areas with later tumor recurrence, where differences in noncorrected FA maps were less pronounced. Consequently, a generalized mixed-effect model had a significantly higher area under the curve when using FWC-FA maps (AUC = 0.9) compared to noncorrected maps (AUC = 0.77, p < 0.001). This may reflect tumor infiltration that is not visible in conventional imaging, and may therefore reveal important information for personalized treatment decisions.
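The per-region features above (10th, 50th, and 90th percentiles plus the mean of FA or MD values) are straightforward to compute. A small sketch in plain Python; `fa_features` is a hypothetical helper name, and the interpolation mirrors NumPy's default percentile behaviour:

```python
def percentile(values, q):
    """q-th percentile with linear interpolation between ranks
    (matches numpy.percentile's default method)."""
    v = sorted(values)
    pos = (len(v) - 1) * q / 100
    lo = int(pos)
    hi = min(lo + 1, len(v) - 1)
    return v[lo] + (v[hi] - v[lo]) * (pos - lo)

def fa_features(values):
    """10th/50th/90th percentiles and mean of a region's voxel values,
    e.g. FA or MD inside segmented peritumoral edema."""
    return {
        "p10": percentile(values, 10),
        "p50": percentile(values, 50),
        "p90": percentile(values, 90),
        "mean": sum(values) / len(values),
    }
```

Computed separately for edema voxels with and without later recurrence, such summary features are exactly the inputs a mixed-effect model like the one above can discriminate on.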
ABSTRACT
Glioblastoma (GBM) is a highly invasive brain tumor, whose cells infiltrate surrounding normal brain tissue beyond the lesion outlines visible in current medical scans. These infiltrative cells are treated mainly by radiotherapy. Existing radiotherapy plans for brain tumors derive from population studies and scarcely account for patient-specific conditions. Here, we provide a Bayesian machine learning framework for the rational design of improved, personalized radiotherapy plans using mathematical modeling and patient multimodal medical scans. Our method, for the first time, integrates complementary information from high-resolution MRI scans and highly specific FET-PET metabolic maps to infer tumor cell density in GBM patients. The Bayesian framework quantifies imaging and modeling uncertainties and predicts patient-specific tumor cell density with credible intervals. The proposed methodology relies only on data acquired at a single time point and, thus, is applicable to standard clinical settings. An initial clinical population study shows that the radiotherapy plans generated from the inferred tumor cell infiltration maps spare more healthy tissue, thereby reducing radiation toxicity, while yielding accuracy comparable to standard radiotherapy protocols. Moreover, the inferred regions of high tumor cell density coincide with the tumor's radioresistant areas, providing guidance for personalized dose escalation. The proposed integration of multimodal scans and mathematical modeling provides a robust, non-invasive tool to assist personalized radiotherapy design.
Subjects
Brain Neoplasms/radiotherapy, Glioblastoma/radiotherapy, Precision Medicine/methods, Radiotherapy Planning, Computer-Assisted/methods, Bayes Theorem, Brain/diagnostic imaging, Brain Neoplasms/diagnostic imaging, Glioblastoma/diagnostic imaging, Humans, Multimodal Imaging, Positron-Emission Tomography/methods, Tyrosine/analogs & derivatives, Tyrosine/therapeutic use
ABSTRACT
The identification of bone lesions is crucial in the diagnostic assessment of multiple myeloma (MM). 68Ga-Pentixafor PET/CT can capture the abnormal molecular expression of CXCR4 in addition to anatomical changes. However, whole-body detection of dozens of lesions on hybrid imaging is tedious and error-prone. It is even more difficult to identify lesions with large heterogeneity. This study employed deep learning methods to automatically combine characteristics of PET and CT for whole-body MM bone lesion detection in a 3D manner. Two convolutional neural networks (CNNs), V-Net and W-Net, were adopted to segment and detect the lesions. The feasibility of deep learning for lesion detection on 68Ga-Pentixafor PET/CT was first verified on digital phantoms generated using realistic PET simulation methods. Then the proposed methods were evaluated on real 68Ga-Pentixafor PET/CT scans of MM patients. The preliminary results showed that deep learning methods can leverage multimodal information for spatial feature representation, and W-Net obtained the best results for segmentation and lesion detection. It also outperformed traditional machine learning methods such as the random forest (RF) classifier, k-nearest neighbors (k-NN), and the support vector machine (SVM). This proof-of-concept study encourages further development of deep learning approaches for MM lesion detection in population studies.