Results 1 - 6 of 6
1.
Med Phys ; 2024 Feb 09.
Article in English | MEDLINE | ID: mdl-38335175

ABSTRACT

BACKGROUND: Notwithstanding the encouraging results of previous studies reporting on the efficiency of deep learning (DL) in COVID-19 prognostication, clinical adoption of the developed methodology remains limited. To overcome this limitation, we set out to predict the prognosis of a large multi-institutional cohort of patients with COVID-19 using a DL-based model. PURPOSE: This study aimed to evaluate the performance of deep privacy-preserving federated learning (DPFL) in predicting COVID-19 outcomes using chest CT images. METHODS: After applying inclusion and exclusion criteria, 3055 patients from 19 centers, including 1599 alive and 1456 deceased, were enrolled in this study. Data from all centers were split (randomly, with stratification with respect to each center and class) into a training/validation set (70%/10%) and a hold-out test set (20%). For the DL model, feature extraction was performed on 2D slices, and averaging was performed at the final layer to construct a 3D model for each scan. A DenseNet model was used for feature extraction. The model was developed using both centralized and federated learning (FL) approaches. For FL, we employed DPFL approaches. A membership inference attack was also evaluated in the FL strategy. For model evaluation, different metrics were reported on the hold-out test sets. In addition, the models trained in the two scenarios, centralized and FL, were compared using the DeLong test for statistical differences. RESULTS: The centralized model achieved an accuracy of 0.76, while the DPFL model had an accuracy of 0.75. Both the centralized and DPFL models achieved a specificity of 0.77. The centralized model achieved a sensitivity of 0.74, while the DPFL model had a sensitivity of 0.73. Mean AUCs of 0.82 (95% CI: 0.79-0.85) and 0.81 (95% CI: 0.77-0.84) were achieved by the centralized and DPFL models, respectively.
The DeLong test did not show a statistically significant difference between the two models (p-value = 0.98). The AUC values for the inference attacks fluctuated between 0.49 and 0.51, with an average of 0.50 ± 0.003 and a 95% CI for the mean AUC of 0.500 to 0.501. CONCLUSION: The performance of the proposed model was comparable to that of the centralized model while operating on large and heterogeneous multi-institutional datasets. In addition, the model was resistant to inference attacks, preserving the privacy of shared data during the training process.
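The paper's DPFL training adds privacy mechanisms not reproduced here, but the core federated step, aggregating per-center model weights into a global model, can be sketched as sample-size-weighted averaging (FedAvg-style). The layer name "fc" and the client sizes below are illustrative assumptions, not the study's configuration.

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """Aggregate client model weights by sample-size-weighted averaging.

    client_weights: list of dicts mapping layer name -> np.ndarray
    client_sizes: number of training samples held by each client
    """
    total = sum(client_sizes)
    averaged = {}
    for name in client_weights[0]:
        averaged[name] = sum(
            w[name] * (n / total) for w, n in zip(client_weights, client_sizes)
        )
    return averaged

# Two hypothetical centers holding different amounts of data
w1 = {"fc": np.array([1.0, 2.0])}
w2 = {"fc": np.array([3.0, 4.0])}
global_w = federated_average([w1, w2], [300, 100])
print(global_w["fc"])  # weighted toward the larger center: [1.5 2.5]
```

In a privacy-preserving variant, each client would clip and noise its update before this aggregation step; only the averaged weights ever leave a center.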

2.
Phys Med ; 107: 102560, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36878133

ABSTRACT

PURPOSE: Breast cancer is one of the leading causes of cancer death in women. Early diagnosis is the most critical factor for disease screening, control, and reducing mortality. A robust diagnosis relies on the correct classification of breast lesions. While breast biopsy is the "gold standard" for assessing both the activity and grade of breast cancer, it is an invasive and time-consuming approach. METHOD: The primary objective of the current study was to develop a novel deep-learning architecture based on the InceptionV3 network to classify ultrasound breast lesions. The main improvements of the proposed architecture were converting the InceptionV3 modules to residual inception modules, increasing their number, and altering the hyperparameters. In addition, we used a combination of five datasets (three public datasets and two prepared from different imaging centers) for training and evaluating the model. RESULTS: The dataset was split into training (80%) and test (20%) groups. The model achieved 0.83, 0.77, 0.8, 0.81, 0.81, 0.18, and 0.77 for precision, recall, F1 score, accuracy, AUC, root mean squared error, and Cronbach's α in the test group, respectively. CONCLUSIONS: This study illustrates that the improved InceptionV3 can robustly classify breast tumors, potentially reducing the need for biopsy in many cases.
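The precision, recall, F1, and accuracy figures reported above all derive from the same confusion-matrix counts. A minimal sketch of those definitions (with made-up counts, not the study's data):

```python
def classification_metrics(tp, fp, fn, tn):
    """Compute precision, recall, F1, and accuracy from confusion-matrix counts."""
    precision = tp / (tp + fp)          # of predicted positives, how many are correct
    recall = tp / (tp + fn)             # of true positives, how many were found
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return precision, recall, f1, accuracy

# Illustrative counts only
p, r, f1, acc = classification_metrics(tp=77, fp=16, fn=23, tn=84)
```

Because F1 is the harmonic mean of precision and recall, it always lies between them, which is consistent with the reported 0.83 / 0.77 / 0.8 ordering.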


Subject(s)
Breast Neoplasms , Deep Learning , Female , Humans , Neural Networks, Computer , Machine Learning , Breast/diagnostic imaging , Breast/pathology , Breast Neoplasms/diagnostic imaging , Breast Neoplasms/pathology
3.
Sci Rep ; 12(1): 6717, 2022 04 25.
Article in English | MEDLINE | ID: mdl-35468984

ABSTRACT

We introduce the Double Attention Res-U-Net architecture to address medical image segmentation across different medical imaging systems. Accurate medical image segmentation faces several challenges, including the difficulty of modeling different objects of interest, the presence of noise, and signal dropout during measurement. Baseline image segmentation approaches are not sufficient for complex target segmentation across various medical image types. To overcome these issues, a novel U-Net-based model is proposed that consists of two consecutive networks with five and four encoding/decoding levels, respectively. In each network, there are four residual blocks between the encoder-decoder path and the skip connections, which help the networks tackle the vanishing gradient problem, followed by multi-scale attention gates that generate richer contextual information. To evaluate our architecture, we investigated three distinct datasets (the CVC-ClinicDB dataset, a multi-site MRI dataset, and a collected ultrasound dataset). The proposed algorithm achieved Dice and Jaccard coefficients of 95.79% and 91.62%, respectively, for CRL, and 93.84% and 89.08% for fetal foot segmentation. Moreover, the proposed model outperformed state-of-the-art U-Net-based models on the external CVC-ClinicDB and multi-site MRI datasets, with Dice and Jaccard coefficients of 83% and 75.31% for CVC-ClinicDB, and 92.07% and 87.14% for the multi-site MRI dataset, respectively.
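The Dice and Jaccard coefficients quoted throughout these abstracts measure overlap between a predicted binary mask and the ground truth. A minimal sketch of both, plus the identity J = D / (2 - D) that links them:

```python
import numpy as np

def dice_jaccard(pred, target):
    """Dice and Jaccard overlap between two binary segmentation masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    dice = 2 * inter / (pred.sum() + target.sum())
    jaccard = inter / np.logical_or(pred, target).sum()
    return dice, jaccard

# Tiny 2x2 example: one overlapping pixel, one extra predicted pixel
pred = np.array([[1, 1], [0, 0]])
target = np.array([[1, 0], [0, 0]])
d, j = dice_jaccard(pred, target)  # d = 2/3, j = 1/2
```

The identity means the two scores carry the same information; reporting both, as the abstracts do, is a readability convention rather than an independent measurement.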


Subject(s)
Image Processing, Computer-Assisted , Neural Networks, Computer , Algorithms , Attention , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods
4.
Insights Imaging ; 13(1): 69, 2022 Apr 08.
Article in English | MEDLINE | ID: mdl-35394221

ABSTRACT

BACKGROUND: Accurate cardiac volume and function assessment has valuable and significant diagnostic implications for patients suffering from ventricular dysfunction and cardiovascular disease. This study focused on providing physicians with more reliable and accurate cardiac measurements using a deep neural network. EchoRCNN is a semi-automated neural network for echocardiography sequence segmentation that combines a mask region-based convolutional neural network image segmentation structure with a reference-guided mask propagation video object segmentation network. RESULTS: The proposed method accurately segments the left and right ventricle regions in four-chamber view echocardiography series with dice similarity coefficients of 94.03% and 94.97%, respectively. Further post-processing of the segmented left and right ventricle regions resulted in mean absolute errors of 3.13% and 2.03% for the ejection fraction and fractional area change parameters, respectively. CONCLUSION: This study achieved excellent performance on left and right ventricle segmentation, leading to more accurate estimates of vital cardiac parameters such as the ejection fraction and fractional area change in the left and right ventricles, respectively. The results indicate that our method can support an accurate and reliable cardiac function diagnosis in clinical screening.
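The two clinical parameters derived from the segmentations above have standard definitions: ejection fraction (EF) from end-diastolic/end-systolic volumes for the left ventricle, and fractional area change (FAC) from end-diastolic/end-systolic areas for the right ventricle. A minimal sketch (the numeric inputs below are illustrative, not the study's measurements):

```python
def ejection_fraction(edv, esv):
    """LV ejection fraction (%) from end-diastolic and end-systolic volumes."""
    return 100.0 * (edv - esv) / edv

def fractional_area_change(eda, esa):
    """RV fractional area change (%) from end-diastolic and end-systolic areas."""
    return 100.0 * (eda - esa) / eda

ef = ejection_fraction(edv=120.0, esv=50.0)       # ~58.3% for these example volumes
fac = fractional_area_change(eda=24.0, esa=12.0)  # 50.0% for these example areas
```

In the paper's pipeline, the volumes/areas come from the per-frame segmentation masks; the reported mean absolute errors (3.13% and 2.03%) are on these derived percentages, not on the raw masks.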

5.
Ultrason Imaging ; 44(1): 25-38, 2022 01.
Article in English | MEDLINE | ID: mdl-34986724

ABSTRACT

U-Net-based algorithms, due to their complex computations, face limitations when used in clinical devices. In this paper, we address this problem with a novel U-Net-based architecture that we call fast and accurate U-Net for medical image segmentation tasks. The proposed fast and accurate U-Net model contains four tuned 2D-convolutional, 2D-transposed convolutional, and batch normalization layers as its main layers, with four blocks in the encoder-decoder path. The results of the proposed architecture were evaluated using a prepared dataset for head circumference and abdominal circumference segmentation tasks, and a public dataset (HC18 grand-challenge dataset) for fetal head circumference measurement. The proposed fast network significantly improved the processing time in comparison with U-Net, dilated U-Net, R2U-Net, attention U-Net, and MFP U-Net, taking 0.47 seconds to segment a fetal abdominal image. In addition, on the prepared dataset, the proposed accurate model achieved Dice and Jaccard coefficients of 97.62% and 95.43% for fetal head segmentation, and 95.07% and 91.99% for fetal abdominal segmentation. Moreover, we obtained Dice and Jaccard coefficients of 97.45% and 95.00% on the public HC18 grand-challenge dataset. Based on these results, we conclude that a fine-tuned, simple, well-structured model used in clinical devices can outperform complex models.
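Turning a head segmentation into a head-circumference (HC) measurement is typically done by fitting an ellipse to the mask boundary and reporting its perimeter. The abstract does not specify the formula used; a common choice, shown here as an assumption, is Ramanujan's first approximation for the ellipse perimeter from the fitted semi-axes:

```python
import math

def ellipse_circumference(a, b):
    """Approximate perimeter of an ellipse with semi-axes a and b
    (Ramanujan's first formula); a stand-in for the HC measurement step."""
    return math.pi * (3 * (a + b) - math.sqrt((3 * a + b) * (a + 3 * b)))

# Sanity check: for a circle (a == b == r) the formula is exact, giving 2*pi*r
hc = ellipse_circumference(a=5.0, b=4.0)
```

The semi-axes would come from an ellipse fit (e.g. least-squares) to the segmented head contour in physical units, so pixel spacing must be applied before this step.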


Subject(s)
Image Processing, Computer-Assisted , Neural Networks, Computer , Algorithms , Image Processing, Computer-Assisted/methods
6.
Phys Med ; 88: 127-137, 2021 Aug.
Article in English | MEDLINE | ID: mdl-34242884

ABSTRACT

PURPOSE: Fetal biometric measurements face a number of challenges, including the presence of speckle, limited soft-tissue contrast, and difficulties in the presence of low amniotic fluid. This work proposes a convolutional neural network for automatic segmentation and measurement of fetal biometric parameters, including biparietal diameter (BPD), head circumference (HC), abdominal circumference (AC), and femur length (FL), from ultrasound images that relies on attention gates incorporated into the multi-feature pyramid U-Net (MFP-Unet) network. METHODS: The proposed approach, referred to as Attention MFP-Unet, learns to automatically extract/detect salient regions to be treated as the object of interest via the attention gates. After determining the type of anatomical structure in the image using a convolutional neural network, Niblack's thresholding technique was applied as a pre-processing algorithm for head and abdomen identification, whereas a novel algorithm was used for femur extraction. A publicly available dataset (HC18 grand-challenge) and clinical data from 1334 subjects were utilized for training and evaluation of the Attention MFP-Unet algorithm. RESULTS: The Dice similarity coefficient (DSC), Hausdorff distance (HD), percentage of good contours, conformity coefficient, and average perpendicular distance (APD) were employed for quantitative evaluation of fetal anatomy segmentation. In addition, correlation analysis, good contours, and conformity were employed to evaluate the accuracy of the biometry predictions. Attention MFP-Unet achieved 0.98, 1.14 mm, 100%, 0.95, and 0.2 mm for DSC, HD, good contours, conformity, and APD, respectively. CONCLUSIONS: Quantitative evaluation demonstrated the superior performance of Attention MFP-Unet compared to state-of-the-art approaches commonly employed for automatic measurement of fetal biometric parameters.
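The Niblack pre-processing step named above computes a per-pixel threshold from the local mean and standard deviation, T = mean + k * std, over a square window. A straightforward (unoptimized) sketch, where the window size and k below are assumptions, as the abstract does not give the parameters used:

```python
import numpy as np

def niblack_threshold(image, window=15, k=-0.2):
    """Per-pixel Niblack threshold: local mean + k * local std over a square
    window. Returns a binary mask, True where the pixel exceeds its threshold."""
    pad = window // 2
    padded = np.pad(image.astype(float), pad, mode="reflect")
    out = np.zeros(image.shape, dtype=bool)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            patch = padded[i:i + window, j:j + window]
            t = patch.mean() + k * patch.std()
            out[i, j] = image[i, j] > t
    return out
```

Production implementations replace the double loop with integral images (or use `skimage.filters.threshold_niblack`); the loop version above keeps the definition visible.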


Subject(s)
Biometry , Neural Networks, Computer , Algorithms , Head/diagnostic imaging , Humans , Image Processing, Computer-Assisted , Ultrasonography