Results 1 - 5 of 5
1.
Med Image Anal ; 95: 103156, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38603844

ABSTRACT

State-of-the-art multi-organ CT segmentation relies on deep learning models, which only generalize when trained on large samples of carefully curated data. However, it is challenging to train a single model that can segment all organs and types of tumors, since most large datasets are partially labeled or are acquired across multiple institutes that may differ in their acquisition protocols. A possible solution is federated learning (FL), which is often used to train models on multi-institutional datasets where the data is not shared across sites. However, FL predictions can be unreliable after the model is locally updated at each site due to 'catastrophic forgetting'. Here, we address this issue by using knowledge distillation (KD) so that local training is regularized with the knowledge of a global model and of pre-trained organ-specific segmentation models. We implement the models in a multi-head U-Net architecture that learns a shared embedding space for the different organ segmentation tasks, thereby obtaining multi-organ predictions without repeated per-organ inference. We evaluate the proposed method using 8 publicly available abdominal CT datasets of 7 different organs. Of those, 889 CT volumes were used for training, 233 for internal testing, and 30 for external testing. Experimental results verified that our proposed method substantially outperforms other state-of-the-art methods in terms of accuracy, inference time, and the number of parameters.
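The KD regularization described above can be sketched as a local training loss that combines hard-label cross-entropy with a distillation term toward the frozen global (teacher) model. This is an illustrative NumPy sketch under assumed conventions, not the paper's implementation; the function name, the mixing weight `alpha`, and the temperature `t` are hypothetical choices.

```python
import numpy as np

def softmax(x, t=1.0, axis=-1):
    # Temperature-scaled softmax over class logits.
    z = x / t
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def kd_regularized_loss(student_logits, teacher_logits, labels, alpha=0.5, t=2.0):
    """Local segmentation loss regularized by distillation from a global model.

    student_logits, teacher_logits: (n_voxels, n_classes) arrays.
    labels: (n_voxels,) integer organ labels available at this site.
    """
    p_student = softmax(student_logits)
    # Hard-label cross-entropy on the locally available annotations.
    ce = -np.mean(np.log(p_student[np.arange(len(labels)), labels] + 1e-12))
    # Soft-label KL divergence to the frozen global (teacher) predictions;
    # this term penalizes drifting away from global knowledge during local
    # updates, i.e. it counteracts catastrophic forgetting.
    p_t = softmax(teacher_logits, t)
    p_s = softmax(student_logits, t)
    kd = np.mean(np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)), axis=-1))
    return alpha * ce + (1 - alpha) * (t ** 2) * kd
```

When the student matches the teacher the KD term vanishes and only the supervised term remains; the further the local model drifts from the global model, the larger the penalty.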


Subject(s)
Deep Learning; Tomography, X-Ray Computed; Humans; Datasets as Topic; Databases, Factual
2.
Article in English | MEDLINE | ID: mdl-37379192

ABSTRACT

Recently, motor imagery (MI) electroencephalography (EEG) classification techniques using deep learning have shown improved performance over conventional techniques. However, improving the classification accuracy on unseen subjects is still challenging due to intersubject variability, scarcity of labeled unseen subject data, and low signal-to-noise ratio (SNR). In this context, we propose a novel two-way few-shot network able to efficiently learn how to learn representative features of unseen subject categories and classify them with limited MI EEG data. The pipeline includes an embedding module that learns feature representations from a set of signals, a temporal-attention module to emphasize important temporal features, an aggregation-attention module for key support signal discovery, and a relation module for final classification based on relation scores between a support set and a query signal. In addition to the unified learning of feature similarity and a few-shot classifier, our method can emphasize informative features in support data relevant to the query, which generalizes better on unseen subjects. Furthermore, we propose to fine-tune the model before testing by arbitrarily sampling a query signal from the provided support set to adapt to the distribution of the unseen subject. We evaluate our proposed method with three different embedding modules on cross-subject and cross-dataset classification tasks using brain-computer interface (BCI) competition IV 2a, 2b, and GIST datasets. Extensive experiments show that our model significantly improves over the baselines and outperforms existing few-shot approaches.
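The relation-module idea above, scoring a query signal against each class's support signals, can be illustrated with a toy sketch. The paper learns the relation function with a trainable network and attention modules; here, mean cosine similarity over per-class support embeddings is an assumed stand-in, and all names are hypothetical.

```python
import numpy as np

def relation_scores(support_emb, support_labels, query_emb):
    """Score a query embedding against each class in the support set.

    support_emb: (n_support, d) embeddings; support_labels: class per row;
    query_emb: (d,). Cosine similarity is an illustrative proxy for the
    learned relation network.
    """
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    classes = sorted(set(support_labels))
    # Mean relation score between the query and each class's support signals.
    return {c: np.mean([cos(e, query_emb)
                        for e, l in zip(support_emb, support_labels) if l == c])
            for c in classes}

def classify(support_emb, support_labels, query_emb):
    # Predict the class whose support set relates most strongly to the query.
    scores = relation_scores(support_emb, support_labels, query_emb)
    return max(scores, key=scores.get)
```

The fine-tuning trick described in the abstract would correspond to sampling one support embedding as a pseudo-query and updating the embedding module before evaluating real queries.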

3.
Med Image Comput Comput Assist Interv ; 14221: 521-531, 2023 Oct.
Article in English | MEDLINE | ID: mdl-38204983

ABSTRACT

One-shot federated learning (FL) has emerged as a promising solution in scenarios where multiple communication rounds are not practical. Notably, as feature distributions in medical data are less discriminative than those of natural images, robust global model training with FL is non-trivial and can lead to overfitting. To address this issue, we propose a novel one-shot FL framework leveraging Image Synthesis and Client model Adaptation (FedISCA) with knowledge distillation (KD). To prevent overfitting, we generate diverse synthetic images ranging from random noise to realistic images. This approach (i) alleviates data privacy concerns and (ii) facilitates robust global model training using KD with decentralized client models. To mitigate domain disparity in the early stages of synthesis, we design noise-adapted client models whose batch normalization statistics are updated on random noise (synthetic images) to enhance KD. Lastly, the global model is trained with both the original and noise-adapted client models via KD and synthetic images. This process is repeated until the global model converges. Extensive evaluation of this design on five small- and three large-scale medical image classification datasets reveals superior accuracy over prior methods. Code is available at https://github.com/myeongkyunkang/FedISCA.
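The noise-adaptation step, updating batch normalization statistics on synthetic inputs so the client model better matches the synthetic-image domain, can be illustrated with a minimal BN layer. This is a generic sketch of running-statistics adaptation, not FedISCA's code; the class name and momentum value are assumptions.

```python
import numpy as np

class BatchNorm1D:
    """Minimal batch norm with running statistics (illustrative only)."""

    def __init__(self, n_features, momentum=0.1):
        self.mean = np.zeros(n_features)
        self.var = np.ones(n_features)
        self.momentum = momentum

    def adapt(self, x):
        # Update running statistics from a batch of synthetic (noise) inputs,
        # shifting the normalization toward the synthetic-image domain
        # without touching any learned weights.
        self.mean = (1 - self.momentum) * self.mean + self.momentum * x.mean(axis=0)
        self.var = (1 - self.momentum) * self.var + self.momentum * x.var(axis=0)

    def __call__(self, x):
        # Normalize with the (possibly adapted) running statistics.
        return (x - self.mean) / np.sqrt(self.var + 1e-5)
```

Repeatedly calling `adapt` on synthetic batches drives the running mean and variance toward the synthetic distribution, so distillation through the adapted client model sees well-normalized activations.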

4.
Brain Tumor Res Treat ; 8(1): 36-42, 2020 Apr.
Article in English | MEDLINE | ID: mdl-32390352

ABSTRACT

BACKGROUND: To compare the diagnostic performance of two-dimensional (2D) and three-dimensional (3D) fractal dimension (FD) and lacunarity features from MRI for predicting the meningioma grade. METHODS: This retrospective study included 123 meningioma patients [90 World Health Organization (WHO) grade I, 33 WHO grade II/III] with preoperative MRI including post-contrast T1-weighted imaging. The 2D and 3D FD and lacunarity parameters from the contrast-enhancing portion of the tumor were calculated. Reproducibility was assessed with the intraclass correlation coefficient. Multivariable logistic regression analysis using 2D or 3D fractal features was performed to predict the meningioma grade. The diagnostic ability of the 2D and 3D fractal models was compared. RESULTS: The reproducibility between observers was excellent, with intraclass correlation coefficients of 0.97, 0.95, 0.98, and 0.96 for 2D FD, 2D lacunarity, 3D FD, and 3D lacunarity, respectively. WHO grade II/III meningiomas had a higher 2D and 3D FD (p=0.003 and p<0.001, respectively) and higher 2D and 3D lacunarity (p=0.002 and p=0.006, respectively) than WHO grade I meningiomas. The 2D fractal model showed an area under the curve (AUC), accuracy, sensitivity, and specificity of 0.690 [95% confidence interval (CI) 0.581-0.799], 72.4%, 75.8%, and 64.4%, respectively. The 3D fractal model showed an AUC, accuracy, sensitivity, and specificity of 0.813 (95% CI 0.733-0.878), 82.9%, 81.8%, and 70.0%, respectively. The 3D fractal model exhibited significantly better diagnostic performance than the 2D fractal model (p<0.001). CONCLUSION: 3D fractal analysis was superior to 2D fractal analysis in diagnostic performance for grading meningiomas.
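The fractal dimension used in this study is conventionally estimated by box counting: cover the segmented tumor mask with grids of shrinking box size s, count occupied boxes N(s), and fit the slope of log N(s) against log s. Below is a generic 2D sketch of that standard algorithm (assuming a square, power-of-two mask for simplicity), not the study's actual pipeline.

```python
import numpy as np

def box_count_fd(mask):
    """Estimate the 2D box-counting fractal dimension of a binary mask.

    Counts occupied boxes N(s) at dyadic box sizes s and fits
    log N(s) ~ -FD * log s.
    """
    n = mask.shape[0]  # assume a square mask with power-of-two side
    sizes, counts = [], []
    s = n
    while s >= 1:
        # Count boxes of side s containing at least one foreground pixel.
        c = 0
        for i in range(0, n, s):
            for j in range(0, n, s):
                if mask[i:i + s, j:j + s].any():
                    c += 1
        sizes.append(s)
        counts.append(c)
        s //= 2
    # FD is the negated slope of the log-log fit.
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope
```

As a sanity check, a completely filled mask yields FD close to 2 and a single-pixel-wide line yields FD close to 1; irregular tumor margins fall in between, with higher values indicating greater boundary complexity.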

5.
Eur Radiol ; 30(8): 4615-4622, 2020 Aug.
Article in English | MEDLINE | ID: mdl-32274524

ABSTRACT

OBJECTIVE: To assess whether 3-dimensional (3D) fractal dimension (FD) and lacunarity features from MRI can predict the meningioma grade. METHODS: This retrospective study included 131 patients with meningiomas (98 low-grade, 33 high-grade) who underwent preoperative MRI with post-contrast T1-weighted imaging. The 3D FD and lacunarity parameters from the enhancing portion of the tumor were extracted by box-counting algorithms. Inter-rater reliability was assessed with the intraclass correlation coefficient (ICC). Additionally, conventional imaging features such as location, heterogeneous enhancement, capsular enhancement, and necrosis were assessed. Independent clinical and imaging risk factors for meningioma grade were investigated using multivariable logistic regression. The discriminative value of the prediction model with and without fractal features was evaluated. The relationship of fractal parameters with the mitosis count and Ki-67 labeling index was also assessed. RESULTS: The inter-reader reliability was excellent, with ICCs of 0.99 for FD and 0.97 for lacunarity. High-grade meningiomas had higher FD (p < 0.001) and higher lacunarity (p = 0.007) than low-grade meningiomas. In the multivariable logistic regression, the diagnostic performance of the model with clinical and conventional imaging features increased with 3D fractal features for predicting the meningioma grade, with AUCs of 0.78 and 0.84, respectively. The 3D FD showed significant correlations with both mitosis count and Ki-67 labeling index, and lacunarity showed a significant correlation with the Ki-67 labeling index (all p values < 0.05). CONCLUSION: The 3D FD and lacunarity are higher in high-grade meningiomas, and fractal analysis may be a useful imaging biomarker for predicting the meningioma grade.
KEY POINTS:
• Fractal dimension (FD) and lacunarity are the two parameters used in fractal analysis to describe the complexity of a subject and may aid in predicting meningioma grade.
• High-grade meningiomas had a higher fractal dimension and higher lacunarity than low-grade meningiomas, suggesting higher complexity and higher rotational variance.
• The discriminative value of the predictive model using clinical and conventional imaging features improved when combined with 3D fractal features for predicting the meningioma grade.
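Lacunarity, the second fractal parameter above, quantifies how heterogeneously the foreground mass is distributed: it is commonly computed with a gliding-box algorithm as E[M²]/E[M]², where M is the mass inside each box position. The 2D sketch below illustrates that standard definition at a single box size; it is a generic illustration with assumed names, not the study's implementation.

```python
import numpy as np

def lacunarity(mask, box=4):
    """Gliding-box lacunarity of a 2D binary mask at one box size.

    Slides a box-by-box window over every position, records the foreground
    mass M in each window, and returns E[M^2] / E[M]^2. A value of 1 means
    perfectly uniform mass; larger values mean more gap-like heterogeneity.
    """
    n = mask.shape[0]
    masses = [mask[i:i + box, j:j + box].sum()
              for i in range(n - box + 1)
              for j in range(n - box + 1)]
    m = np.asarray(masses, dtype=float)
    return float(np.mean(m ** 2) / (np.mean(m) ** 2 + 1e-12))
```

A fully filled mask gives lacunarity 1.0, while a mask whose foreground is concentrated in one clump scores much higher, matching the abstract's reading that higher lacunarity reflects a more heterogeneous, gappy tumor texture.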


Subject(s)
Algorithms; Imaging, Three-Dimensional/methods; Magnetic Resonance Imaging/methods; Meningeal Neoplasms/diagnosis; Meningioma/diagnosis; Female; Fractals; Humans; Male; Middle Aged; Predictive Value of Tests; Reproducibility of Results; Retrospective Studies