Results 1 - 13 of 13
1.
Nat Mach Intell ; 6(3): 354-367, 2024.
Article in English | MEDLINE | ID: mdl-38523679

ABSTRACT

Foundation models in deep learning are characterized by a single large-scale model trained on vast amounts of data serving as the foundation for various downstream tasks. Foundation models are generally trained using self-supervised learning and excel in reducing the demand for training samples in downstream applications. This is especially important in medicine, where large labelled datasets are often scarce. Here, we developed a foundation model for cancer imaging biomarker discovery by training a convolutional encoder through self-supervised learning using a comprehensive dataset of 11,467 radiographic lesions. The foundation model was evaluated in distinct and clinically relevant applications of cancer imaging-based biomarkers. We found that it facilitated better and more efficient learning of imaging biomarkers and yielded task-specific models that significantly outperformed conventional supervised and other state-of-the-art pretrained implementations on downstream tasks, especially when training dataset sizes were very limited. Furthermore, the foundation model was more stable to input variations and showed strong associations with underlying biology. Our results demonstrate the tremendous potential of foundation models in discovering new imaging biomarkers that may extend to other clinical use cases and can accelerate the widespread translation of imaging biomarkers into clinical settings.
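The abstract does not name the specific self-supervised objective used to pretrain the convolutional encoder. As an illustration only, a contrastive InfoNCE-style loss, one common choice for this kind of pretraining, can be sketched in plain Python; all names below are hypothetical and this is not the authors' implementation:

```python
import math

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def contrastive_loss(anchor, positive, negatives, tau=0.1):
    # InfoNCE-style objective: the anchor embedding should be closer to the
    # positive (another view of the same lesion) than to any negative
    # (embeddings of other lesions). tau is the softmax temperature.
    sims = [cosine(anchor, positive)] + [cosine(anchor, n) for n in negatives]
    exps = [math.exp(s / tau) for s in sims]
    return -math.log(exps[0] / sum(exps))
```

A well-aligned positive with dissimilar negatives drives the loss toward zero, which is what rewards the encoder for mapping augmented views of the same lesion to nearby points.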

2.
Sci Data ; 11(1): 25, 2024 Jan 04.
Article in English | MEDLINE | ID: mdl-38177130

ABSTRACT

Public imaging datasets are critical for the development and evaluation of automated tools in cancer imaging. Unfortunately, many do not include annotations or image-derived features, complicating downstream analysis. Artificial intelligence-based annotation tools have been shown to achieve acceptable performance and can be used to automatically annotate large datasets. As part of the effort to enrich public data available within NCI Imaging Data Commons (IDC), here we introduce AI-generated annotations for two collections containing computed tomography images of the chest: NSCLC-Radiomics and a subset of the National Lung Screening Trial. Using publicly available AI algorithms, we derived volumetric annotations of thoracic organs-at-risk, their corresponding radiomics features, and slice-level annotations of anatomical landmarks and regions. The resulting annotations are publicly available within IDC, where the DICOM format is used to harmonize the data and achieve FAIR (Findable, Accessible, Interoperable, Reusable) data principles. The annotations are accompanied by cloud-enabled notebooks demonstrating their use. This study reinforces the need for large, publicly accessible curated datasets and demonstrates how AI can aid in cancer imaging.


Subjects
Carcinoma, Non-Small-Cell Lung ; Lung Neoplasms ; Humans ; Artificial Intelligence ; Carcinoma, Non-Small-Cell Lung/diagnostic imaging ; Lung/diagnostic imaging ; Lung Neoplasms/diagnostic imaging ; Tomography, X-Ray Computed
3.
Radiographics ; 43(12): e230180, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37999984

ABSTRACT

The remarkable advances of artificial intelligence (AI) technology are revolutionizing established approaches to the acquisition, interpretation, and analysis of biomedical imaging data. Development, validation, and continuous refinement of AI tools requires easy access to large high-quality annotated datasets, which are both representative and diverse. The National Cancer Institute (NCI) Imaging Data Commons (IDC) hosts large and diverse publicly available cancer image data collections. By harmonizing all data based on industry standards and colocalizing it with analysis and exploration resources, the IDC aims to facilitate the development, validation, and clinical translation of AI tools and address the well-documented challenges of establishing reproducible and transparent AI processing pipelines. Balanced use of established commercial products with open-source solutions, interconnected by standard interfaces, provides value and performance, while preserving sufficient agility to address the evolving needs of the research community. Emphasis on the development of tools, use cases to demonstrate the utility of uniform data representation, and cloud-based analysis aim to ease adoption and help define best practices. Integration with other data in the broader NCI Cancer Research Data Commons infrastructure opens opportunities for multiomics studies incorporating imaging data to further empower the research community to accelerate breakthroughs in cancer detection, diagnosis, and treatment. Published under a CC BY 4.0 license.


Subjects
Artificial Intelligence ; Neoplasms ; United States ; Humans ; National Cancer Institute (U.S.) ; Reproducibility of Results ; Diagnostic Imaging ; Multiomics ; Neoplasms/diagnostic imaging
4.
medRxiv ; 2023 Sep 12.
Article in English | MEDLINE | ID: mdl-37745558

ABSTRACT

Because humans age at different rates, a person's physical appearance may yield insights into their biological age and physiological health more reliably than their chronological age. In medicine, however, appearance is incorporated into medical judgments in a subjective and non-standardized fashion. In this study, we developed and validated FaceAge, a deep learning system to estimate biological age from easily obtainable and low-cost face photographs. FaceAge was trained on data from 58,851 healthy individuals, and clinical utility was evaluated on data from 6,196 patients with cancer diagnoses from two institutions in the United States and the Netherlands. To assess the prognostic relevance of FaceAge estimation, we performed Kaplan-Meier survival analysis. To test a relevant clinical application of FaceAge, we assessed the performance of FaceAge in end-of-life patients with metastatic cancer who received palliative treatment by incorporating FaceAge into clinical prediction models. We found that, on average, cancer patients look older than their chronological age, and looking older is correlated with worse overall survival. FaceAge demonstrated significant independent prognostic performance in a range of cancer types and stages. We found that FaceAge can improve physicians' survival predictions in incurable patients receiving palliative treatments, highlighting the clinical utility of the algorithm to support end-of-life decision-making. FaceAge was also significantly associated with molecular mechanisms of senescence through gene analysis, while chronological age was not. These findings may extend to diseases beyond cancer, motivating the use of deep learning algorithms to translate a patient's visual appearance into objective, quantitative, and clinically useful measures.
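The prognostic analysis described above relies on Kaplan-Meier estimation. As a minimal illustration of the underlying computation (not the authors' code), the product-limit estimator can be written in pure Python from follow-up times and event indicators:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival curve from follow-up times and event
    indicators (1 = death observed, 0 = censored).
    Returns a list of (event time, survival probability) points."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk = len(times)
    surv, curve = 1.0, []
    i = 0
    while i < len(order):
        t = times[order[i]]
        n = at_risk
        deaths = 0
        # Group all subjects tied at the same time point.
        while i < len(order) and times[order[i]] == t:
            deaths += events[order[i]]
            at_risk -= 1
            i += 1
        if deaths:
            surv *= 1.0 - deaths / n  # product-limit update
            curve.append((t, surv))
    return curve
```

Censored subjects reduce the number at risk without producing a step in the curve, which is exactly the property that lets the estimator use incomplete follow-up.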

5.
medRxiv ; 2023 Sep 05.
Article in English | MEDLINE | ID: mdl-37732237

ABSTRACT

Foundation models represent a recent paradigm shift in deep learning, where a single large-scale model trained on vast amounts of data can serve as the foundation for various downstream tasks. Foundation models are generally trained using self-supervised learning and excel in reducing the demand for training samples in downstream applications. This is especially important in medicine, where large labeled datasets are often scarce. Here, we developed a foundation model for imaging biomarker discovery by training a convolutional encoder through self-supervised learning using a comprehensive dataset of 11,467 radiographic lesions. The foundation model was evaluated in distinct and clinically relevant applications of imaging-based biomarkers. We found that they facilitated better and more efficient learning of imaging biomarkers and yielded task-specific models that significantly outperformed their conventional supervised counterparts on downstream tasks. The performance gain was most prominent when training dataset sizes were very limited. Furthermore, foundation models were more stable to input and inter-reader variations and showed stronger associations with underlying biology. Our results demonstrate the tremendous potential of foundation models in discovering novel imaging biomarkers that may extend to other clinical use cases and can accelerate the widespread translation of imaging biomarkers into clinical settings.

6.
Nat Commun ; 14(1): 2797, 2023 05 16.
Article in English | MEDLINE | ID: mdl-37193717

ABSTRACT

Prevention and management of chronic lung diseases (asthma, lung cancer, etc.) are of great importance. While tests are available for reliable diagnosis, accurate identification of those who will develop severe morbidity/mortality is currently limited. Here, we developed a deep learning model, CXR Lung-Risk, to predict the risk of lung disease mortality from a chest x-ray. The model was trained using 147,497 x-ray images of 40,643 individuals and tested in three independent cohorts comprising 15,976 individuals. We found that CXR Lung-Risk showed a graded association with lung disease mortality after adjustment for risk factors, including age, smoking, and radiologic findings (hazard ratios up to 11.86 [8.64-16.27]; p < 0.001). Adding CXR Lung-Risk to a multivariable model improved estimates of lung disease mortality in all cohorts. Our results demonstrate that deep learning can identify individuals at risk of lung disease mortality on easily obtainable x-rays, which may improve personalized prevention and treatment strategies.
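The hazard ratios with confidence intervals reported above come from a multivariable Cox model. Converting a fitted log-hazard coefficient and its standard error into such an interval is a one-line exponentiation; the sketch below uses illustrative inputs, not values fitted in the study:

```python
import math

def hazard_ratio(beta, se, z=1.96):
    """Convert a Cox model log-hazard coefficient (beta) and its
    standard error (se) into a hazard ratio with a ~95% Wald
    confidence interval. Inputs here are illustrative placeholders."""
    hr = math.exp(beta)
    lo = math.exp(beta - z * se)
    hi = math.exp(beta + z * se)
    return hr, lo, hi
```

Because the interval is built on the log scale and then exponentiated, it is asymmetric around the hazard ratio, which matches the shape of intervals such as 11.86 [8.64-16.27].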


Subjects
Deep Learning ; Lung Diseases ; Humans ; Radiography, Thoracic/methods ; Lung/diagnostic imaging ; Lung Diseases/diagnostic imaging ; Thorax
7.
Lancet Digit Health ; 5(6): e360-e369, 2023 06.
Article in English | MEDLINE | ID: mdl-37087370

ABSTRACT

BACKGROUND: Pretreatment identification of pathological extranodal extension (ENE) would guide therapy de-escalation strategies in human papillomavirus (HPV)-associated oropharyngeal carcinoma but is diagnostically challenging. ECOG-ACRIN Cancer Research Group E3311 was a multicentre trial wherein patients with HPV-associated oropharyngeal carcinoma were treated surgically and assigned to a pathological risk-based adjuvant strategy of observation, radiation, or concurrent chemoradiation. Despite protocol exclusion of patients with overt radiographic ENE, more than 30% had pathological ENE and required postoperative chemoradiation. We aimed to evaluate a CT-based deep learning algorithm for prediction of ENE in E3311, a diagnostically challenging cohort wherein algorithm use would be impactful in guiding decision-making. METHODS: For this retrospective evaluation of deep learning algorithm performance, we obtained pretreatment CTs and corresponding surgical pathology reports from the multicentre, randomised de-escalation trial E3311. All patients enrolled on E3311 required pretreatment and diagnostic head and neck imaging; patients with radiographically overt ENE were excluded per study protocol. The lymph node with the largest short-axis diameter and up to two additional nodes were segmented on each scan and annotated for ENE per pathology reports. Deep learning algorithm performance for ENE prediction was compared with that of four board-certified head and neck radiologists. The primary endpoint was the area under the curve (AUC) of the receiver operating characteristic. FINDINGS: From 178 collected scans, 313 nodes were annotated: 71 (23%) with any ENE and 39 (13%) with ENE larger than 1 mm. The deep learning algorithm AUC for ENE classification was 0·86 (95% CI 0·82-0·90), outperforming all readers (p<0·0001 for each). Among radiologists, there was high variability in specificity (43-86%) and sensitivity (45-96%) with poor inter-reader agreement (κ 0·32).
Matching the algorithm specificity to that of the reader with the highest AUC (R2, false positive rate 22%) improved sensitivity to 75% (+13%). Setting the algorithm false positive rate to 30% yielded 90% sensitivity. The algorithm showed improved performance compared with radiologists for ENE larger than 1 mm (p<0·0001) and in nodes with a short-axis diameter of 1 cm or larger. INTERPRETATION: The deep learning algorithm outperformed experts in predicting pathological ENE on a challenging cohort of patients with HPV-associated oropharyngeal carcinoma from a randomised clinical trial. Deep learning algorithms should be evaluated prospectively as a treatment selection tool. FUNDING: ECOG-ACRIN Cancer Research Group and the National Cancer Institute of the US National Institutes of Health.
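The primary endpoint, the AUC of the receiver operating characteristic, can be computed without explicit thresholding via its rank interpretation: it equals the probability that a randomly chosen ENE-positive node is scored above a randomly chosen ENE-negative node. A minimal pure-Python sketch (not the study's implementation):

```python
def roc_auc(pos_scores, neg_scores):
    # Pairwise form of the Mann-Whitney U statistic: count the fraction
    # of (positive, negative) pairs ranked correctly; ties count 0.5.
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))
```

This O(P x N) form is fine for a few hundred annotated nodes; sort-based implementations are preferred at scale.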


Subjects
Carcinoma ; Deep Learning ; Oropharyngeal Neoplasms ; Papillomavirus Infections ; Humans ; Human Papillomavirus Viruses ; Retrospective Studies ; Papillomavirus Infections/diagnostic imaging ; Papillomavirus Infections/complications ; Extranodal Extension ; Oropharyngeal Neoplasms/diagnostic imaging ; Oropharyngeal Neoplasms/pathology ; Algorithms ; Carcinoma/complications ; Tomography, X-Ray Computed
8.
Eur J Cancer ; 183: 142-151, 2023 04.
Article in English | MEDLINE | ID: mdl-36857819

ABSTRACT

INTRODUCTION: Immunotherapy-induced pneumonitis (IIP) is a serious side-effect that requires accurate diagnosis and management with high-dose corticosteroids. The differential diagnosis between IIP and other types of pneumonitis (OTP) remains challenging due to similar radiological patterns. This study aimed to develop a prediction model to differentiate IIP from OTP in patients with stage IV non-small cell lung cancer (NSCLC) who developed pneumonitis during immunotherapy. METHODS: Consecutive patients with metastatic NSCLC treated with immunotherapy in six centres in the Netherlands and Belgium from 2017 to 2020 were reviewed and cause-specific pneumonitis events were identified. Seven regions of interest (segmented lungs and spheroidal/cubical regions surrounding the inflammation) were examined to extract the most predictive radiomic features from the chest computed tomography images obtained at pneumonitis manifestation. Models were internally tested regarding discrimination, calibration and decisional benefit. To evaluate the clinical application of the models, predicted labels were compared with the separate clinical and radiological judgements. RESULTS: A total of 556 patients were reviewed; 31 patients (5.6%) developed IIP and 41 patients (7.4%) developed OTP. The line of immunotherapy was the only predictive factor in the clinical model (2nd versus 1st: odds ratio = 0.08, 95% confidence interval: 0.01-0.77). The best radiomic model was achieved using a 75-mm spheroidal region of interest, which showed an optimism-corrected area under the receiver operating characteristic curve of 0.83 (95% confidence interval: 0.77-0.95) with negative and positive predictive values of 80% and 79%, respectively. Good calibration and net benefits were achieved for the radiomic model across the entire range of probabilities. A correct diagnosis was provided by the radiomic model in 10 out of 12 cases with non-conclusive radiological judgements.
CONCLUSION: Radiomic biomarkers applied to computed tomography imaging may support clinicians in making the differential diagnosis of pneumonitis in patients with NSCLC receiving immunotherapy, especially when the radiologic assessment is non-conclusive.
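The negative and positive predictive values reported above, together with sensitivity and specificity, all derive from a 2x2 confusion table. A small helper (illustrative only, not tied to the study's data) makes the definitions explicit:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard 2x2 diagnostic metrics from confusion-table counts:
    tp/fp = true/false positives, tn/fn = true/false negatives."""
    return {
        "sensitivity": tp / (tp + fn),  # recall among true positives
        "specificity": tn / (tn + fp),  # recall among true negatives
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }
```

Unlike sensitivity and specificity, PPV and NPV depend on the class balance in the cohort, which is why they are the clinically relevant pair when, as here, pneumonitis causes are rare.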


Subjects
Carcinoma, Non-Small-Cell Lung ; Lung Neoplasms ; Pneumonia ; Humans ; Carcinoma, Non-Small-Cell Lung/diagnostic imaging ; Carcinoma, Non-Small-Cell Lung/drug therapy ; Lung Neoplasms/diagnostic imaging ; Lung Neoplasms/drug therapy ; Immune Checkpoint Inhibitors/adverse effects ; Diagnosis, Differential ; Tomography, X-Ray Computed/methods ; Pneumonia/chemically induced ; Pneumonia/diagnostic imaging
9.
Front Oncol ; 13: 1305511, 2023.
Article in English | MEDLINE | ID: mdl-38239639

ABSTRACT

Introduction: Artificial intelligence (AI)-based technologies embody countless solutions in radiation oncology, yet translation of AI-assisted software tools to actual clinical environments remains unrealized. We present the Deep Learning On-Demand Assistant (DL-ODA), a fully automated, end-to-end clinical platform that enables AI interventions for any disease site, featuring an automated model-training pipeline, auto-segmentations, and QA reporting. Materials and methods: We developed, tested, and prospectively deployed the DL-ODA system at a large university-affiliated hospital center. Medical professionals activate the DL-ODA via two pathways: (1) On-Demand, used for immediate AI decision support for a patient-specific treatment plan, and (2) Ambient, in which QA is provided for all daily radiotherapy (RT) plans by comparing DL segmentations with manual delineations and calculating the dosimetric impact. To demonstrate the implementation of a new anatomy segmentation, we used the model-training pipeline to generate a breast segmentation model based on a large clinical dataset. Additionally, the contour QA functionality of existing models was assessed using a retrospective cohort of 3,399 lung and 885 spine RT cases. Ambient QA was performed for various disease sites, including spine RT and heart for dosimetric sparing. Results: Successful training of the breast model was completed in less than a day and resulted in clinically viable whole-breast contours. For the retrospective analysis, we evaluated manual-versus-AI similarity for the ten most common structures. The DL-ODA detected high similarities in heart, lung, liver, and kidney delineations but lower ones for the esophagus, trachea, stomach, and small bowel, due largely to incomplete manual contouring. The deployed Ambient QAs for the heart and spine sites have prospectively processed over 2,500 and 230 cases over 9 and 5 months, respectively, automatically alerting the RT personnel.
Discussion: The DL-ODA's capabilities in providing universal AI interventions were demonstrated for On-Demand contour QA, DL segmentations, and automated model training, confirming successful integration of the system into a large academic radiotherapy department. The novelty of deploying the DL-ODA as a multi-modal, fully automated, end-to-end AI clinical implementation solution marks a significant step towards a generalizable framework that leverages AI to improve the efficiency and reliability of RT systems.
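Contour QA of the kind described typically compares DL and manual delineations with an overlap measure such as the Dice similarity coefficient. The abstract does not name the metric, so the following is an assumed illustration, with masks represented as sets of voxel coordinates:

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary segmentation
    masks, represented as sets of voxel coordinate tuples.
    1.0 = identical masks, 0.0 = no overlap."""
    a, b = set(mask_a), set(mask_b)
    if not a and not b:
        return 1.0  # two empty masks agree trivially
    return 2.0 * len(a & b) / (len(a) + len(b))
```

A QA pipeline would flag plans whose manual-versus-AI Dice falls below a structure-specific threshold, which is consistent with the alerting behaviour described above.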

10.
Cancers (Basel) ; 14(5)2022 Mar 02.
Article in English | MEDLINE | ID: mdl-35267597

ABSTRACT

Problem. Image biomarker analysis, also known as radiomics, is a tool for tissue characterization and treatment prognosis that relies on routinely acquired clinical images and delineations. Due to the uncertainty in image acquisition, processing, and segmentation (delineation) protocols, radiomics often lacks reproducibility. Radiomics harmonization techniques have been proposed as a solution to reduce these sources of uncertainty and/or their influence on the prognostic model performance. A relevant question is how to estimate the protocol-induced uncertainty of a specific image biomarker, what the effect is on the model performance, and how to optimize the model given the uncertainty. Methods. Two non-small cell lung cancer (NSCLC) cohorts, composed of 421 and 240 patients, respectively, were used for training and testing. Per patient, a Monte Carlo algorithm was used to generate three hundred synthetic contours with a surface dice tolerance measure of less than 1.18 mm with respect to the original GTV. These contours were subsequently used to derive 104 radiomic features, which were ranked on their relative sensitivity to contour perturbation, expressed in the parameter η. The top four (low η) and the bottom four (high η) features were selected for two models based on the Cox proportional hazards model. To investigate the influence of segmentation uncertainty on the prognostic model, we trained and tested the setup in 5,000 augmented realizations (using a Monte Carlo sampling method); the log-rank test was used to assess the stratification performance and stability under segmentation uncertainty. Results. Although both the low and high η setups showed significant testing-set log-rank p-values (p = 0.01) on the original GTV delineations (without segmentation uncertainty introduced), in the model with a high uncertainty-to-effect ratio only around 30% of the augmented realizations resulted in model performance with p < 0.05 in the test set.
In contrast, the low η setup performed with a log-rank p < 0.05 in 90% of the augmented realizations. Moreover, the high η setup was uncertain in its classifications for 50% of the subjects in the testing set (at an 80% agreement rate), whereas the low η setup was uncertain in only 10% of the cases. Discussion. Estimating image biomarker model performance based only on the original GTV segmentation, without considering segmentation uncertainty, may be deceiving. The model might show significant stratification performance yet be unstable under delineation variations, which are inherent to manual segmentation. Simulating segmentation uncertainty using the method described allows for more stable image biomarker estimation, selection, and model development. The segmentation uncertainty estimation method described here is universal and can be extended to estimate other protocol uncertainties (such as image acquisition and pre-processing).
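The abstract does not define η precisely. One plausible proxy for a feature's relative sensitivity to contour perturbation is the coefficient of variation of the feature value across Monte Carlo-perturbed contours. The sketch below works under that assumption; feature_fn and perturb are hypothetical callables, not the paper's implementation:

```python
import math
import random

def feature_sensitivity(feature_fn, contour, perturb, n=300, seed=0):
    """Relative sensitivity of a radiomic feature to contour
    perturbation: coefficient of variation (std / |mean|) of the
    feature over n Monte Carlo-perturbed copies of the contour.
    A proxy for the paper's η; the exact definition is not given
    in the abstract."""
    rng = random.Random(seed)
    vals = [feature_fn(perturb(contour, rng)) for _ in range(n)]
    mean = sum(vals) / n
    var = sum((v - mean) ** 2 for v in vals) / n
    return math.sqrt(var) / abs(mean)
```

Features with low sensitivity under this measure would be the "low η" candidates preferred for stable model building.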

11.
Hum Brain Mapp ; 42(17): 5563-5580, 2021 12 01.
Article in English | MEDLINE | ID: mdl-34598307

ABSTRACT

Ultra-high-field magnetic resonance imaging (MRI) enables sub-millimetre resolution imaging of the human brain, allowing the study of functional circuits of cortical layers at the meso-scale. An essential step in many functional and structural neuroimaging studies is segmentation, the operation of partitioning the MR images into anatomical structures. Despite recent efforts in brain imaging analysis, the literature lacks accurate and fast methods for segmenting 7-tesla (7T) brain MRI. We here present CEREBRUM-7T, an optimised end-to-end convolutional neural network, which allows fully automatic segmentation of a whole 7T T1w MRI brain volume at once, without partitioning the volume, pre-processing it, or aligning it to an atlas. The trained model is able to produce accurate multi-structure segmentation masks on six different classes plus background in only a few seconds. The experimental part, a combination of objective numerical evaluations and subjective analysis, confirms that the proposed solution outperforms the training labels it was trained on and is suitable for neuroimaging studies, such as layer functional MRI studies. Taking advantage of a fine-tuning operation on a reduced set of volumes, we also show how it is possible to effectively apply CEREBRUM-7T to data from different sites. Furthermore, we release the code, 7T data, and other materials, including the training labels and the Turing test.


Subjects
Brain/anatomy & histology ; Brain/diagnostic imaging ; Image Processing, Computer-Assisted/methods ; Magnetic Resonance Imaging/methods ; Neural Networks, Computer ; Neuroimaging/methods ; Humans
12.
Cancer Res ; 81(16): 4188-4193, 2021 08 15.
Article in English | MEDLINE | ID: mdl-34185678

ABSTRACT

The National Cancer Institute (NCI) Cancer Research Data Commons (CRDC) aims to establish a national cloud-based data science infrastructure. Imaging Data Commons (IDC) is a new component of CRDC supported by the Cancer Moonshot. The goal of IDC is to enable a broad spectrum of cancer researchers, with and without imaging expertise, to easily access and explore the value of deidentified imaging data and to support integrated analyses with nonimaging data. We achieve this goal by colocating versatile imaging collections with cloud-based computing resources and data exploration, visualization, and analysis tools. The IDC pilot was released in October 2020 and is being continuously populated with radiology and histopathology collections. IDC provides access to curated imaging collections, accompanied by documentation, a user forum, and a growing number of analysis use cases that aim to demonstrate the value of a data commons framework applied to cancer imaging research. SIGNIFICANCE: This study introduces NCI Imaging Data Commons, a new repository of the NCI Cancer Research Data Commons, which will support cancer imaging research on the cloud.


Subjects
Diagnostic Imaging/methods ; National Cancer Institute (U.S.) ; Neoplasms/diagnostic imaging ; Neoplasms/genetics ; Biomedical Research/trends ; Cloud Computing ; Computational Biology/methods ; Computer Graphics ; Computer Security ; Data Interpretation, Statistical ; Databases, Factual ; Diagnostic Imaging/standards ; Humans ; Image Processing, Computer-Assisted ; Pilot Projects ; Programming Languages ; Radiology/methods ; Radiology/standards ; Reproducibility of Results ; Software ; United States ; User-Computer Interface
13.
Med Image Anal ; 62: 101688, 2020 05.
Article in English | MEDLINE | ID: mdl-32272345

ABSTRACT

Many functional and structural neuroimaging studies call for accurate morphometric segmentation of different brain structures starting from the image intensity values of MRI scans. Current automatic (multi-)atlas-based segmentation strategies often lack accuracy on difficult-to-segment brain structures and, since these methods rely on atlas-to-scan alignment, they may take long processing times. Alternatively, recent methods deploying solutions based on Convolutional Neural Networks (CNNs) are enabling the direct analysis of out-of-the-scanner data. However, current CNN-based solutions partition the test volume into 2D or 3D patches, which are processed independently. This entails a loss of global contextual information, thereby negatively impacting the segmentation accuracy. In this work, we design and test an optimised end-to-end CNN architecture that makes the exploitation of global spatial information computationally tractable, allowing a whole MRI volume to be processed at once. We adopt a weakly supervised learning strategy by exploiting a large dataset composed of 947 out-of-the-scanner MR images (3-Tesla T1-weighted 1 mm isotropic MP-RAGE 3D sequences). The resulting model is able to produce accurate multi-structure segmentation results in only a few seconds. Different quantitative measures demonstrate an improved accuracy of our solution when compared to state-of-the-art techniques. Moreover, through a randomised survey involving expert neuroscientists, we show that subjective judgements favour our solution with respect to widely adopted atlas-based software.


Subjects
Brain ; Cerebrum ; Image Processing, Computer-Assisted ; Magnetic Resonance Imaging ; Brain/diagnostic imaging ; Humans ; Neural Networks, Computer