Results 1 - 10 of 10
1.
J Biomed Inform ; 125: 103978, 2022 01.
Article in English | MEDLINE | ID: mdl-34922021

ABSTRACT

Alzheimer's disease is a common neurodegenerative brain disease that affects the elderly population worldwide. Its early automatic detection is vital for early intervention and treatment. A common solution is to predict future cognitive scores from the baseline brain structural magnetic resonance image (MRI), which can directly infer the potential severity of the disease. Recently, several studies have modelled disease progression by predicting the future brain MRI, which provides visual information about brain changes over time. Nevertheless, no studies have explored the correlation between these two solutions, and it is unknown whether the predicted MRI can assist the prediction of cognitive scores. Here, instead of making independent predictions, we aim to predict disease progression from multiple views, i.e., predicting subject-specific changes of cognitive score and MRI volume concurrently. To achieve this, we propose an end-to-end integrated framework in which a regression model and a generative adversarial network are integrated and jointly optimized. Three integration strategies are explored to unify the two models. Moreover, considering that some brain regions, such as the hippocampus and middle temporal gyrus, can change significantly during disease progression, a region-of-interest (ROI) mask and an ROI loss are introduced into the integrated framework to leverage this anatomical prior knowledge. Experimental results on the longitudinal Alzheimer's Disease Neuroimaging Initiative dataset demonstrate that the integrated framework outperforms the independent regression model for cognitive score prediction, and its performance is further improved by the ROI loss for both cognitive score and MRI prediction.
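The ROI-weighted idea in the abstract above can be sketched as a joint loss over the two views; the MSE-plus-weighted-L1 form and the weight value are assumptions for illustration (the paper's actual framework also includes an adversarial GAN term, omitted here):

```python
import numpy as np

def roi_weighted_l1(pred, target, roi_mask, roi_weight=5.0):
    """L1 reconstruction loss that up-weights voxels inside the ROI mask
    (e.g. hippocampus), reflecting the anatomical prior described above."""
    weights = np.where(roi_mask, roi_weight, 1.0)
    return float(np.mean(weights * np.abs(pred - target)))

def joint_loss(score_pred, score_true, mri_pred, mri_true, roi_mask,
               lambda_score=1.0, lambda_mri=1.0):
    """Joint multi-view objective: cognitive-score regression (MSE) plus
    ROI-weighted MRI reconstruction, optimized together."""
    score_loss = float(np.mean((score_pred - score_true) ** 2))
    mri_loss = roi_weighted_l1(mri_pred, mri_true, roi_mask)
    return lambda_score * score_loss + lambda_mri * mri_loss
```

The ROI weight makes errors inside the hippocampal mask cost several times more than errors elsewhere.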


Subject(s)
Alzheimer Disease, Aged, Alzheimer Disease/diagnostic imaging, Brain/diagnostic imaging, Disease Progression, Humans, Magnetic Resonance Imaging, Neuroimaging
2.
Neuroimage ; 238: 118252, 2021 09.
Article in English | MEDLINE | ID: mdl-34116155

ABSTRACT

Resting-state functional connectivity (RSFC) can be used to map large-scale human brain networks during rest. There is considerable interest in distinguishing the individual-shared and individual-specific components of RSFC for better identification of individuals and prediction of behavior. We therefore propose a multi-task learning based sparse convex alternating structure optimization (MTL-sCASO) method to decompose RSFC into individual-specific connectivity and individual-shared connectivity. We used synthetic data to validate the efficacy of the MTL-sCASO method. In addition, we verified that individual-specific connectivity achieves higher identification rates than the Pearson correlation (PC) method, and that the individual-specific components observed in 886 individuals from the Human Connectome Project (HCP), examined in two sessions over two consecutive days, might serve as individual fingerprints. Individual-specific connectivity has low inter-subject similarity (-0.005±0.023), while individual-shared connectivity has high inter-subject similarity (0.822±0.061). We also determined the anatomical locations (regions or subsystems) related to individual attributes and common features. We found that individual-specific connectivity exhibits low degree centrality in the sensorimotor processing system but high degree centrality in the control system. Importantly, the individual-specific connectivity estimated by the MTL-sCASO method accurately predicts behavioral scores in the cognitive dimension (improved by 9.4% compared to the PC method). The decomposition of RSFC into individual-specific and individual-shared components provides a new approach for tracing individual traits and for group analysis using functional brain networks.
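As a minimal, hypothetical illustration of the shared/specific split described above (the actual MTL-sCASO method solves a multi-task sparse optimization; subtracting the group mean is only the simplest conceivable baseline):

```python
import numpy as np

def decompose_rsfc(fc_matrix):
    """Naive split of RSFC into an individual-shared component (the group
    mean across subjects) and individual-specific residuals.
    fc_matrix: (n_subjects, n_edges) array of connectivity values."""
    shared = fc_matrix.mean(axis=0)      # one shared pattern for the group
    specific = fc_matrix - shared        # per-subject deviations
    return shared, specific
```

The specific residuals average to zero across subjects by construction, so any remaining structure in them is candidate "fingerprint" material.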


Subject(s)
Brain/diagnostic imaging, Connectome, Machine Learning, Nerve Net/diagnostic imaging, Adult, Humans, Magnetic Resonance Imaging
3.
Comput Methods Programs Biomed ; 244: 107939, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38008678

ABSTRACT

BACKGROUND AND OBJECTIVE: Recently, deep learning (DL) algorithms have shown promise in predicting outcomes such as distant metastasis-free survival (DMFS) and overall survival (OS) from pre-treatment imaging in head and neck cancer. The Gross Tumor Volume of the primary tumor (GTVp) segmentation is often used as an additional input channel to DL algorithms to improve model performance. However, the binary GTVp segmentation mask directs the network's focus only, and uniformly, to the defined tumor region. DL models trained for tumor segmentation have also been used to generate predicted tumor probability maps (TPMs), in which each pixel value corresponds to the degree of certainty that the pixel belongs to the tumor. The aim of this study was to explore the effect of using the TPM as an extra input channel of CT- and PET-based DL prediction models for oropharyngeal cancer (OPC) patients in terms of local control (LC), regional control (RC), DMFS and OS. METHODS: We included 399 OPC patients from our institute who were treated with definitive (chemo)radiation. For each patient, the CT and PET scans and GTVp contours used for radiotherapy treatment planning were collected. We first trained a previously developed 2.5D DL framework for tumor probability prediction by 5-fold cross-validation using 131 patients. Then, a 3D ResNet18 was trained for outcome prediction using the 3D TPM as one of the possible inputs. The endpoints were LC, RC, DMFS, and OS. We performed 3-fold cross-validation on 168 patients for each endpoint using different combinations of image modalities as input. The final prediction in the test set (100 patients) was obtained by averaging the predictions of the 3-fold models. The C-index was used to evaluate the discriminative performance of the models. RESULTS: The models trained with the TPM in place of the GTVp contours achieved the highest C-indexes for LC (0.74) and RC (0.60) prediction. For OS, using the TPM or the GTVp as an additional image modality resulted in comparable C-indexes (0.72 and 0.74). CONCLUSIONS: Adding predicted TPMs instead of GTVp contours as an additional input channel to DL-based outcome prediction models improved performance for LC and RC.
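Feeding the TPM as an extra input channel amounts to stacking it with the imaging modalities; a sketch under the assumption that CT, PET and TPM share one voxel grid and that TPM values lie in [0, 1]:

```python
import numpy as np

def build_model_input(ct, pet, tpm):
    """Stack CT, PET and the tumor probability map (TPM) as channels of a
    single network input, replacing the binary GTVp mask channel. All
    three arrays must share the same voxel grid."""
    assert ct.shape == pet.shape == tpm.shape
    assert tpm.min() >= 0.0 and tpm.max() <= 1.0  # probabilities, not a mask
    return np.stack([ct, pet, tpm], axis=0)  # shape: (channels, D, H, W)
```

Unlike a 0/1 mask, the soft TPM channel lets the network see graded tumor certainty at the lesion border.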


Subject(s)
Deep Learning, Head and Neck Neoplasms, Oropharyngeal Neoplasms, Humans, Positron Emission Tomography Computed Tomography/methods, Oropharyngeal Neoplasms/diagnostic imaging, Prognosis
4.
Radiother Oncol ; 197: 110368, 2024 08.
Article in English | MEDLINE | ID: mdl-38834153

ABSTRACT

BACKGROUND AND PURPOSE: To optimize TransRP, our previously proposed model integrating a CNN (convolutional neural network) and a ViT (Vision Transformer) for recurrence-free survival prediction in oropharyngeal cancer, and to extend it to the prediction of multiple clinical outcomes, including locoregional control (LRC), distant metastasis-free survival (DMFS) and overall survival (OS). MATERIALS AND METHODS: Data were collected from 400 patients (300 for training and 100 for testing) diagnosed with oropharyngeal squamous cell carcinoma (OPSCC) who underwent (chemo)radiotherapy at University Medical Center Groningen. Each patient's data comprised pre-treatment PET/CT scans, clinical parameters, and the clinical outcome endpoints LRC, DMFS and OS. The prediction performance of TransRP was compared with that of CNNs using image data only. Additionally, three distinct methods (m1-m3) of incorporating clinical predictors into TransRP training and one method (m4) that uses the TransRP prediction as a parameter in a clinical Cox model were compared. RESULTS: TransRP achieved higher test C-index values (0.61, 0.84 and 0.70 for LRC, DMFS and OS, respectively) than the CNNs. Furthermore, incorporating the TransRP prediction into a clinical Cox model (m4) yielded a higher C-index of 0.77 for OS. Compared with a routine clinical risk stratification model for OS, our model, which uses clinical variables, radiomics and the TransRP prediction as predictors, achieved larger separations of the survival curves between low-, intermediate- and high-risk groups. CONCLUSION: TransRP outperformed the CNN models for all endpoints. Combining clinical data with the TransRP prediction in a Cox model achieved better OS prediction.
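The C-index reported above can be computed as follows; this is a plain implementation of Harrell's concordance index for right-censored survival data, not code from the paper:

```python
def concordance_index(times, events, risk_scores):
    """Harrell's C-index: among comparable pairs (the earlier time is an
    observed event, event flag 1), the fraction where the higher risk
    score belongs to the subject with the shorter survival time.
    Tied risk scores count as 0.5."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if times[i] < times[j] and events[i] == 1:
                comparable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1.0
                elif risk_scores[i] == risk_scores[j]:
                    concordant += 0.5
    if comparable == 0:
        raise ValueError("no comparable pairs")
    return concordant / comparable
```

A value of 0.5 means random ranking and 1.0 means perfect risk ordering, which is the scale on which the 0.61-0.84 results above should be read.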


Subject(s)
Oropharyngeal Neoplasms, Positron Emission Tomography Computed Tomography, Humans, Oropharyngeal Neoplasms/mortality, Oropharyngeal Neoplasms/diagnostic imaging, Oropharyngeal Neoplasms/pathology, Oropharyngeal Neoplasms/radiotherapy, Oropharyngeal Neoplasms/therapy, Positron Emission Tomography Computed Tomography/methods, Male, Female, Middle Aged, Aged, Neural Networks, Computer, Adult
5.
IEEE J Biomed Health Inform ; 27(10): 4866-4877, 2023 10.
Article in English | MEDLINE | ID: mdl-37581964

ABSTRACT

Precise delineation of hippocampal subfields is crucial for the identification and management of various neurological and psychiatric disorders. However, segmenting these subfields automatically in routine 3T MRI is challenging due to their complex morphology and small size, as well as the limited signal contrast and resolution of 3T images. This research proposes Syn_SegNet, an end-to-end, multitask joint deep neural network that leverages ultrahigh-field 7T MRI synthesis to improve hippocampal subfield segmentation in 3T MRI. Our approach involves two key components. First, we employ a modified Pix2PixGAN as the synthesis model, incorporating self-attention modules, image- and feature-matching losses, and an ROI loss to generate high-quality 7T-like MRI around the hippocampal region. Second, we utilize a variant of 3D U-Net with multiscale deep supervision as the segmentation subnetwork, incorporating an anatomically weighted cross-entropy loss that capitalizes on prior anatomical knowledge. We evaluate our method on hippocampal subfield segmentation with seven anatomical structures in paired 3T and 7T MRI. The experimental findings demonstrate that Syn_SegNet's segmentation performance benefits from integrating synthetic 7T data in an online manner and is superior to that of competing methods. Furthermore, we assess the generalizability of the proposed approach using a publicly accessible 3T MRI dataset. The developed method could be an efficient tool for segmenting hippocampal subfields in routine clinical 3T MRI.


Subject(s)
Hippocampus, Mental Disorders, Humans, Hippocampus/diagnostic imaging, Neural Networks, Computer, Magnetic Resonance Imaging/methods, Image Processing, Computer-Assisted/methods
6.
Phys Imaging Radiat Oncol ; 28: 100502, 2023 Oct.
Article in English | MEDLINE | ID: mdl-38026084

ABSTRACT

Background and purpose: To compare the prediction performance of computed tomography (CT) image features extracted by radiomics, self-supervised learning and end-to-end deep learning for local control (LC), regional control (RC), locoregional control (LRC), distant metastasis-free survival (DMFS), tumor-specific survival (TSS), overall survival (OS) and disease-free survival (DFS) of oropharyngeal squamous cell carcinoma (OPSCC) patients after (chemo)radiotherapy. Methods and materials: The OPC-Radiomics dataset was used for model development and independent internal testing, and the UMCG-OPC set for external testing. Image features were extracted from the Gross Tumor Volume contours of the primary tumor (GTVt) regions in CT scans using radiomics or a self-supervised learning-based method (autoencoder). Clinical and combined (radiomics, autoencoder or end-to-end) models were built using multivariable Cox proportional-hazards analysis with clinical features only or with both clinical and image features for LC, RC, LRC, DMFS, TSS, OS and DFS prediction, respectively. Results: In the internal test set, the combined autoencoder models performed better than the clinical models and the combined radiomics models for LC, RC, LRC, DMFS, TSS and DFS prediction (largest improvements in C-index: 0.91 vs. 0.76 for RC and 0.74 vs. 0.60 for DMFS). In the external test set, the combined radiomics models performed better than the clinical and combined autoencoder models for all endpoints (largest improvement for LC: 0.82 vs. 0.71). Furthermore, the combined models performed better in risk stratification than the clinical models and showed good calibration for most endpoints. Conclusions: Image features extracted using self-supervised learning showed the best internal prediction performance, while radiomics features had better external generalizability.
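For a purely linear model, the optimal autoencoder coincides with PCA, so a stand-in for the self-supervised feature-extraction step above might look like this (the study itself trains a deep convolutional autoencoder on GTVt CT regions; this is only an illustrative substitute):

```python
import numpy as np

def autoencoder_features(X, n_latent):
    """Extract compact image features with a linear autoencoder. The
    optimal linear encoder is given by the top principal components, so
    we take them from an SVD of the centered feature matrix.
    X: (n_patients, n_features); returns latent codes and reconstruction."""
    mean = X.mean(axis=0)
    X_centered = X - mean
    _, _, Vt = np.linalg.svd(X_centered, full_matrices=False)
    components = Vt[:n_latent]               # encoder weights
    Z = X_centered @ components.T            # latent features per patient
    X_hat = Z @ components + mean            # decoder / reconstruction
    return Z, X_hat
```

The latent codes Z would then enter the Cox proportional-hazards model alongside the clinical features.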

7.
Med Phys ; 50(10): 6190-6200, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37219816

ABSTRACT

BACKGROUND: Personalized treatment is increasingly required for oropharyngeal squamous cell carcinoma (OPSCC) patients due to emerging new cancer subtypes and treatment options. Outcome prediction models can help identify low- or high-risk patients who may be suitable for de-escalated or intensified treatment. PURPOSE: To develop a deep learning (DL)-based model for predicting multiple, associated efficacy endpoints in OPSCC patients from computed tomography (CT). METHODS: Two patient cohorts were used in this study: a development cohort of 524 OPSCC patients (70% for training and 30% for independent testing) and an external test cohort of 396 patients. Pre-treatment CT scans with the gross primary tumor volume contours (GTVt) and clinical parameters were available to predict endpoints, including 2-year local control (LC), regional control (RC), locoregional control (LRC), distant metastasis-free survival (DMFS), disease-specific survival (DSS), overall survival (OS), and disease-free survival (DFS). We propose DL outcome prediction models with a multi-label learning (MLL) strategy that integrates the associations of different endpoints based on clinical factors and CT scans. RESULTS: The MLL models outperformed the models developed for a single endpoint on all endpoints, with high AUCs (≥ 0.80) for 2-year RC, DMFS, DSS, OS, and DFS in the internal independent test set and for all endpoints except 2-year LRC in the external test set. Furthermore, with the developed models, patients could be stratified into high- and low-risk groups that differed significantly for all endpoints in the internal test set and for all endpoints except DMFS in the external test set. CONCLUSION: MLL models demonstrated better discriminative ability for all 2-year efficacy endpoints than single-outcome models in the internal test set and for all endpoints except LRC in the external test set.
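The multi-label strategy above can be sketched as one sigmoid/binary cross-entropy head per endpoint trained jointly on a shared representation; the plain averaging over endpoints (no per-endpoint weights) is an assumption:

```python
import numpy as np

def multi_label_bce(logits, targets):
    """Multi-label objective: an independent sigmoid + binary
    cross-entropy per endpoint (LC, RC, LRC, DMFS, DSS, OS, DFS),
    averaged, so one network is trained on all endpoints jointly.
    logits, targets: (n_patients, n_endpoints) arrays."""
    p = 1.0 / (1.0 + np.exp(-logits))  # sigmoid per endpoint
    eps = 1e-12
    bce = -(targets * np.log(p + eps) + (1 - targets) * np.log(1 - p + eps))
    return float(bce.mean())
```

Because the heads share the backbone, correlated endpoints (e.g. LRC and LC) regularize each other, which is the intuition behind the MLL gains reported above.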


Subject(s)
Carcinoma, Squamous Cell, Head and Neck Neoplasms, Oropharyngeal Neoplasms, Humans, Squamous Cell Carcinoma of Head and Neck, Carcinoma, Squamous Cell/diagnostic imaging, Carcinoma, Squamous Cell/therapy, Tomography, X-Ray Computed, Disease-Free Survival, Oropharyngeal Neoplasms/diagnostic imaging, Oropharyngeal Neoplasms/therapy, Retrospective Studies
8.
Comput Methods Programs Biomed ; 208: 106286, 2021 Sep.
Article in English | MEDLINE | ID: mdl-34311412

ABSTRACT

BACKGROUND AND OBJECTIVE: Previous studies have indicated that brain morphological measures change in patients with amnestic mild cognitive impairment (aMCI). However, most existing classification methods cannot take full advantage of these measures. In this study, we improve the traditional multitask learning framework by fully considering both the relevance among related tasks and supplementary information from other, unrelated tasks. METHODS: We propose a feature level-based group lasso (FL-GL) method in which a feature represents the average value of each ROI for each measure. First, we design a correlation matrix in which each row represents the relationship among the different measures for one ROI. This matrix is then used to guide feature selection within a group lasso framework. Next, we train a specific support vector machine (SVM) classifier with the selected features for each measure. Finally, a weighted voting strategy combines these classifiers for a final prediction distinguishing aMCI from normal controls (NC). RESULTS: We used the leave-one-out cross-validation strategy to verify our method on two datasets, the Xuan Wu Hospital dataset and the ADNI dataset. Compared with the traditional method, the classification accuracies were improved by 6.12% and 4.92% with the FL-GL method on the two datasets. CONCLUSIONS: The results of an ablation study indicated that the feature-level group sparsity term was the core of our method. Thus, considering correlations at the feature level can improve the traditional multitask learning framework, and our FL-GL method obtained better classification performance between patients with aMCI and NCs.
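The final weighted-voting step of the pipeline above can be sketched as follows; using each per-measure SVM's validation accuracy as its vote weight is an assumption for illustration:

```python
import numpy as np

def weighted_vote(predictions, weights):
    """Combine per-measure classifier outputs (+1 = aMCI, -1 = NC) by a
    weighted vote. predictions and weights: one entry per morphological
    measure (e.g. thickness, volume, surface area)."""
    score = float(np.dot(weights, predictions))
    return 1 if score > 0 else -1
```

A measure whose classifier is more reliable thus pulls the final decision harder than a weak one, rather than all measures voting equally.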


Subject(s)
Alzheimer Disease, Cognitive Dysfunction, Brain, Cognitive Dysfunction/diagnosis, Humans, Image Interpretation, Computer-Assisted, Magnetic Resonance Imaging
9.
IEEE J Biomed Health Inform ; 25(3): 711-719, 2021 03.
Article in English | MEDLINE | ID: mdl-32750952

ABSTRACT

Alzheimer's disease (AD) is a chronic neurodegenerative disease, and predicting its long-term progression is important. Structural magnetic resonance imaging (sMRI) can be used to characterize the cortical atrophy that is closely coupled with clinical symptoms in AD and its prodromal stages. Many existing methods have focused on predicting cognitive scores at future time points using a set of morphological features derived from sMRI, yet 3D sMRI provides far richer information than cognitive scores alone. However, very few works consider predicting an individual's brain MRI at future time points. In this article, we propose a disease progression prediction framework comprising a 3D multi-information generative adversarial network (mi-GAN), which predicts what a whole brain will look like after a given interval, and a 3D DenseNet-based multi-class classification network, optimized with a focal loss, which determines the clinical stage of the estimated brain. The mi-GAN can generate high-quality individual 3D brain MRI images conditioned on the individual's 3D brain sMRI and multi-information at the baseline time point. Experiments are conducted on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset. Our mi-GAN shows state-of-the-art performance, with a structural similarity index (SSIM) of 0.943 between the real MRI images at the fourth year and the generated ones. With mi-GAN and the focal loss, the pMCI vs. sMCI classification accuracy achieves a 6.04% improvement over a conditional GAN with cross-entropy loss.
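The focal loss used for stage classification above has a standard form; the γ = 2 default below is the common choice in the literature, not necessarily the paper's setting:

```python
import numpy as np

def focal_loss(probs, labels, gamma=2.0):
    """Multi-class focal loss: scales cross-entropy by (1 - p_t)^gamma so
    well-classified examples contribute little and training focuses on
    hard cases such as pMCI vs. sMCI.
    probs: (n, n_classes) softmax outputs; labels: (n,) integer classes."""
    eps = 1e-12
    p_t = probs[np.arange(len(labels)), labels]  # prob of the true class
    return float(np.mean(-((1.0 - p_t) ** gamma) * np.log(p_t + eps)))
```

With γ = 0 the expression reduces to ordinary cross-entropy; larger γ suppresses easy examples more aggressively.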


Subject(s)
Alzheimer Disease, Neurodegenerative Diseases, Alzheimer Disease/diagnostic imaging, Disease Progression, Humans, Magnetic Resonance Imaging, Neuroimaging
10.
Comput Med Imaging Graph ; 86: 101800, 2020 12.
Article in English | MEDLINE | ID: mdl-33130416

ABSTRACT

BACKGROUND AND OBJECTIVE: Hippocampal subfield (HS) segmentation is more accurate on high-resolution (HR) MRI images than on low-resolution (LR) MRI images. However, HR MRI data collection is more expensive and time-consuming. We therefore aim to generate HR MRI images from the corresponding LR MRI images for HS segmentation. METHODS AND RESULTS: To generate high-quality HR MRI images of the hippocampus region, we use a dual-discriminator adversarial learning model with a difficulty-aware attention mechanism in hippocampus regions (da-GAN). A local discriminator in da-GAN evaluates the visual quality of hippocampus-region voxels in the synthetic images, and the difficulty-aware attention mechanism based on this local discriminator better models the generation of hard-to-synthesize voxels in hippocampus regions. Additionally, we design a SemiDenseNet model with 3D dense CRF postprocessing and a U-Net-based model to perform HS segmentation. The experiments are conducted on the Kulaga-Yoskovitz dataset. Compared with a conditional generative adversarial network (c-GAN), the PSNR of the HR T2w images generated by our da-GAN improves by 0.406 and 0.347 in the left and right hippocampus regions, respectively. When using the two segmentation models to segment HS, the DSC values achieved on the generated HR T1w and T2w images are both higher than those on the LR T1w images. CONCLUSION: The experimental results show that the da-GAN model can generate higher-quality MRI images, especially in hippocampus regions, and that the generated images improve HS segmentation accuracy.
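PSNR, the image-quality metric reported above, can be computed as follows (a generic implementation, not the paper's code; the data range is taken from the reference image when not supplied):

```python
import numpy as np

def psnr(img, ref, data_range=None):
    """Peak signal-to-noise ratio in decibels between a generated image
    and its reference; higher means closer to the reference."""
    if data_range is None:
        data_range = ref.max() - ref.min()
    mse = np.mean((img.astype(np.float64) - ref.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return float(10.0 * np.log10((data_range ** 2) / mse))
```

Because PSNR is logarithmic in the MSE, the 0.406 and 0.347 improvements above correspond to a roughly 8-10% reduction in mean squared error in the hippocampus regions.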


Subject(s)
Image Processing, Computer-Assisted, Magnetic Resonance Imaging, Cerebral Cortex, Hippocampus/diagnostic imaging