Results 1 - 4 of 4
1.
Neuroimage Clin ; 38: 103411, 2023.
Article in English | MEDLINE | ID: mdl-37163913

ABSTRACT

The olfactory bulbs (OBs) play a key role in olfactory processing; their volume is important for diagnosis, prognosis, and treatment of patients with olfactory loss. Until now, measurements of OB volumes have been limited to quantification of manually segmented OBs, which is a cumbersome task and makes evaluation of OB volumes in large-scale clinical studies infeasible. Hence, the aim of this study was to evaluate the potential of our previously developed automatic OB segmentation method for application in clinical practice and to relate the results to clinical outcome measures. To evaluate the utilization potential of the automatic segmentation method, three datasets containing MR scans of patients with olfactory loss were included. Datasets 1 (N = 66) and 3 (N = 181) were collected at the Smell and Taste Center in Ede (NL) on a 3 T scanner; Dataset 2 (N = 42) was collected at the Smell and Taste Clinic in Dresden (DE) on a 1.5 T scanner. To define the reference standard, manual annotation of the OBs was performed in Datasets 1 and 2. OBs were segmented with a method that employs two consecutive convolutional neural networks (CNNs): the first localizes the OBs in an MRI scan, and the second subsequently segments them. In Datasets 1 and 2, the method accurately segmented the OBs, resulting in a Dice coefficient above 0.7 and an average symmetrical surface distance below 0.3 mm. Volumes determined from manual and automatic segmentations showed a strong correlation (Dataset 1: r = 0.79, p < 0.001; Dataset 2: r = 0.72, p = 0.004). In addition, the method was able to recognize the absence of an OB. In Dataset 3, OB volumes computed from automatic segmentations obtained with our method were related to clinical outcome measures, i.e., duration and etiology of olfactory loss, and olfactory ability. We found that OB volume was significantly related to patient age, duration and etiology of olfactory loss, and olfactory ability (F(5, 172) = 11.348, p < 0.001, R2 = 0.248).
In conclusion, the results demonstrate that automatic segmentation of the OBs and subsequent computation of their volumes in MRI scans can be performed accurately and can be applied in clinical and research population studies. Automatic evaluation may lead to more insight into the role of OB volume in diagnosis, prognosis, and treatment of olfactory loss.
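The abstract evaluates automatic OB segmentations against manual reference annotations using the Dice coefficient and derives volumes from the resulting masks. As an illustrative sketch only (not the authors' implementation; the toy masks and voxel size below are made up), these two computations on binary 3-D masks could look like:

```python
import numpy as np

def dice_coefficient(pred, ref):
    """Dice overlap between two binary segmentation masks (1.0 = identical)."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    total = pred.sum() + ref.sum()
    if total == 0:
        return 1.0  # both empty, e.g. a correctly recognized absent OB
    return 2.0 * np.logical_and(pred, ref).sum() / total

def volume_mm3(mask, voxel_size_mm):
    """Segmentation volume in mm^3: voxel count times voxel volume."""
    return mask.astype(bool).sum() * float(np.prod(voxel_size_mm))

# Toy 3-D masks standing in for automatic and manual OB segmentations.
auto = np.zeros((4, 4, 4), dtype=np.uint8)
auto[1:3, 1:3, 1:3] = 1      # 8 voxels
manual = np.zeros((4, 4, 4), dtype=np.uint8)
manual[1:3, 1:3, 1:4] = 1    # 12 voxels, overlapping all 8 of `auto`

print(dice_coefficient(auto, manual))        # 2*8 / (8+12) = 0.8
print(volume_mm3(auto, (0.5, 0.5, 0.5)))     # 8 * 0.125 = 1.0 mm^3
```

Volumes computed this way from the automatic masks are what the study correlates with the manually derived volumes and with the clinical outcome measures.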


Subject(s)
Neural Networks, Computer; Olfactory Bulb; Humans; Olfactory Bulb/diagnostic imaging; Smell; Magnetic Resonance Imaging/methods
2.
J Med Imaging (Bellingham) ; 9(5): 052406, 2022 Sep.
Article in English | MEDLINE | ID: mdl-35664539

ABSTRACT

Purpose: The coronary artery calcium (CAC) score, i.e., the amount of CAC quantified in CT, is a strong and independent predictor of coronary heart disease (CHD) events. However, CAC scoring suffers from limited interscan reproducibility, mainly because the clinical definition requires application of a fixed intensity-level threshold for segmentation of calcifications. This limitation is especially pronounced in non-electrocardiogram-synchronized computed tomography (CT), where lesions are more impacted by cardiac motion and partial volume effects. Therefore, we propose a CAC quantification method that does not require a threshold for segmentation of CAC. Approach: Our method utilizes a generative adversarial network (GAN) in which a CT with CAC is decomposed into an image without CAC and an image showing only CAC. The method, using a cycle-consistent GAN, was trained on 626 low-dose chest CTs and 514 radiotherapy treatment planning (RTP) CTs. Interscan reproducibility was compared to clinical calcium scoring in RTP CTs of 1662 patients, each having two scans. Results: The proposed method achieved a lower relative interscan difference in CAC mass: 47%, compared to 89% for manual clinical calcium scoring. The intraclass correlation coefficient of Agatston scores was 0.96 for the proposed method compared to 0.91 for automatic clinical calcium scoring. Conclusions: The increased interscan reproducibility achieved by our method may lead to more reliable CHD risk categorization and improved accuracy of CHD event prediction.
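The contrast the abstract draws can be sketched numerically. Below is an illustrative toy comparison (not the paper's method; the HU values, voxel volume, and `calib` factor are invented, and clinical Agatston scoring additionally uses per-lesion density weighting): a fixed 130 HU threshold misses a motion-blurred lesion whose intensity is smeared below the threshold, while subtracting a hypothetical GAN-predicted calcium-free image still captures it.

```python
import numpy as np

def mass_from_threshold(ct, voxel_vol_mm3, threshold_hu=130.0, calib=0.001):
    """Clinical-style mass score: keep only voxels at/above a fixed HU threshold."""
    cac = np.where(ct >= threshold_hu, ct, 0.0)
    return cac.sum() * voxel_vol_mm3 * calib

def mass_from_decomposition(ct, ct_no_cac, voxel_vol_mm3, calib=0.001):
    """Threshold-free mass: calcium image = original minus calcium-free image."""
    cac = np.clip(ct - ct_no_cac, 0.0, None)  # negative residue is not calcium
    return cac.sum() * voxel_vol_mm3 * calib

# Toy 2x3 HU patch: one bright lesion (400) and one blurred lesion (135, 120).
ct = np.array([[100.0, 120.0, 400.0],
               [ 90.0, 135.0,  50.0]])
# Hypothetical GAN output: same anatomy with the calcium removed.
ct_no_cac = np.array([[100.0, 100.0,  50.0],
                      [ 90.0, 100.0,  50.0]])

print(mass_from_threshold(ct, 1.0))                 # (400 + 135) * 0.001 = 0.535
print(mass_from_decomposition(ct, ct_no_cac, 1.0))  # (20 + 350 + 35) * 0.001 = 0.405
```

The thresholded score depends sharply on whether a blurred voxel lands above or below 130 HU between scans, which is the reproducibility problem the decomposition approach sidesteps.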

3.
J Med Imaging (Bellingham) ; 9(5): 052407, 2022 Sep.
Article in English | MEDLINE | ID: mdl-35692896

ABSTRACT

Purpose: Ensembles of convolutional neural networks (CNNs) often outperform a single CNN in medical image segmentation tasks, but inference is computationally more expensive, which makes ensembles unattractive for some applications. We compared the performance of differently constructed ensembles with the performance of CNNs derived from these ensembles using knowledge distillation, a technique for reducing the footprint of large models such as ensembles. Approach: We investigated two different types of ensembles: diverse ensembles of networks with three different architectures and two different loss functions, and uniform ensembles of networks with the same architecture but initialized with different random seeds. Additionally, for each ensemble, a single student network was trained to mimic the class probabilities predicted by the teacher model, the ensemble. We evaluated the performance of each network, the ensembles, and the corresponding distilled networks across three different publicly available datasets: chest computed tomography scans with four annotated organs of interest, brain magnetic resonance imaging (MRI) with six annotated brain structures, and cardiac cine-MRI with three annotated heart structures. Results: Both uniform and diverse ensembles obtained better results than any of the individual networks in the ensemble. Furthermore, applying knowledge distillation resulted in a single network that was smaller and faster without compromising performance compared with the ensemble it learned from. The distilled networks significantly outperformed the same network trained with reference segmentations instead of knowledge distillation. Conclusion: Knowledge distillation can compress segmentation ensembles of uniform or diverse composition into a single CNN while maintaining the performance of the ensemble.
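The core distillation signal described here — a student trained on the class probabilities averaged over the ensemble members — can be sketched with plain NumPy. This is an illustrative loss computation under assumed shapes (per-voxel class logits), not the paper's training code:

```python
import numpy as np

def softmax(logits, axis=-1):
    """Numerically stable softmax over the class axis."""
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def ensemble_soft_targets(member_logits):
    """Teacher signal: class probabilities averaged over ensemble members."""
    return np.mean([softmax(l) for l in member_logits], axis=0)

def distillation_loss(student_logits, soft_targets):
    """Cross-entropy of the student's prediction against the soft labels."""
    log_p = np.log(softmax(student_logits) + 1e-12)
    return float(-np.sum(soft_targets * log_p, axis=-1).mean())

# Toy 3-class logits for a single voxel from two ensemble members.
teacher_logits = [np.array([[2.0, 0.0, 0.0]]),
                  np.array([[0.0, 2.0, 0.0]])]
targets = ensemble_soft_targets(teacher_logits)  # a valid distribution per voxel
student_logits = np.array([[1.0, 1.0, 0.0]])
print(distillation_loss(student_logits, targets))
```

Unlike one-hot reference segmentations, the averaged soft targets retain the ensemble's uncertainty between classes, which is the extra information the distilled student benefits from.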

4.
IEEE Trans Med Imaging ; 39(12): 4011-4022, 2020 12.
Article in English | MEDLINE | ID: mdl-32746142

ABSTRACT

In this study, we propose a fast and accurate method to automatically localize anatomical landmarks in medical images. We employ a global-to-local localization approach using fully convolutional neural networks (FCNNs). First, a global FCNN localizes multiple landmarks through the analysis of image patches, performing regression and classification simultaneously. In regression, displacement vectors pointing from the center of image patches towards landmark locations are determined. In classification, the presence of landmarks of interest in the patch is established. Global landmark locations are obtained by averaging the predicted displacement vectors, where the contribution of each displacement vector is weighted by the posterior classification probability of the patch that it points from. Subsequently, for each landmark found by global localization, local analysis is performed: specialized FCNNs refine the global landmark locations by analyzing local sub-images in a similar manner, i.e., by performing regression and classification simultaneously and combining the results. Evaluation was performed through localization of 8 anatomical landmarks in coronary CT angiography (CCTA) scans, 2 landmarks in olfactory MR scans, and 19 landmarks in cephalometric X-rays. We demonstrate that the method performs similarly to a second observer and is able to localize landmarks in a diverse set of medical images differing in image modality, image dimensionality, and anatomical coverage.
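The vote-combination step described above — each patch casts a landmark estimate (its center plus its predicted displacement), weighted by its classification probability — can be sketched as follows. This is an illustrative 2-D toy (not the paper's code; the patch centers, displacements, and probabilities are made up):

```python
import numpy as np

def combine_patch_votes(patch_centers, displacements, probs):
    """Landmark location = probability-weighted average of per-patch votes,
    where each vote is the patch center plus its predicted displacement."""
    votes = patch_centers + displacements          # one estimate per patch
    weights = probs / probs.sum()                  # normalize posteriors
    return (weights[:, None] * votes).sum(axis=0)  # weighted average

# Three toy patches: two confident ones agree on (15, 15); the third has
# zero posterior probability and therefore contributes nothing.
centers = np.array([[10.0, 10.0], [20.0, 20.0], [30.0, 30.0]])
disps   = np.array([[ 5.0,  5.0], [-5.0, -5.0], [ 2.0,  2.0]])
probs   = np.array([0.9, 0.9, 0.0])

print(combine_patch_votes(centers, disps, probs))  # [15. 15.]
```

The same combination rule is applied again at the local stage, only over sub-images around each globally estimated location.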


Subject(s)
Algorithms; Deep Learning; Anatomic Landmarks/diagnostic imaging; Neural Networks, Computer; Reproducibility of Results