Results 1 - 9 of 9
1.
Front Artif Intell ; 7: 1396160, 2024.
Article in English | MEDLINE | ID: mdl-38694880

ABSTRACT

Diabetic retinopathy is a condition that affects the retina and causes vision loss due to blood vessel destruction. The retina is the layer of the eye responsible for visual processing and nerve signaling. Diabetic retinopathy causes vision loss, floaters, and sometimes blindness; however, it often shows no warning signs in the early stages. Deep learning-based techniques have emerged as viable options for automated illness classification as large-scale medical imaging datasets have become more widely available. To adapt to medical image analysis tasks, transfer learning makes use of pre-trained models to extract high-level features learned from natural images. In this research, an intelligent recommendation-based fine-tuned EfficientNetB0 model is proposed for the quick and precise diagnosis of diabetic retinopathy from fundus images, which will help ophthalmologists with early diagnosis and detection. The proposed EfficientNetB0 model is compared with three transfer learning-based models, namely, ResNet152, VGG16, and DenseNet169. The experimental work is carried out using publicly available datasets from Kaggle consisting of 3,200 fundus images. Among the transfer learning models, EfficientNetB0 performed best with an accuracy of 0.91, followed by DenseNet169 with an accuracy of 0.90. In comparison to other approaches, the proposed intelligent recommendation-based fine-tuned EfficientNetB0 approach delivers state-of-the-art performance on the accuracy, recall, precision, and F1-score criteria. The system aims to assist ophthalmologists in early detection, potentially alleviating the burden on healthcare units.
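The accuracy, recall, precision, and F1-score criteria used above all derive from the confusion-matrix counts. A minimal sketch in plain Python; the counts below are illustrative stand-ins, not the paper's results:

```python
# Hypothetical confusion-matrix counts for a binary DR / no-DR split;
# these numbers are illustrative only, not taken from the study.
tp, fp, fn, tn = 450, 40, 50, 460

accuracy = (tp + tn) / (tp + tn + fp + fn)
precision = tp / (tp + fp)
recall = tp / (tp + fn)          # also called sensitivity
f1 = 2 * precision * recall / (precision + recall)

print(round(accuracy, 3), round(precision, 3), round(recall, 3), round(f1, 3))
```

F1 is the harmonic mean of precision and recall, so it penalizes a model that trades one off heavily against the other.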

2.
BMC Med Imaging ; 24(1): 83, 2024 Apr 08.
Article in English | MEDLINE | ID: mdl-38589793

ABSTRACT

The research focuses on the segmentation and classification of leukocytes, a crucial task in medical image analysis for diagnosing various diseases. The leukocyte dataset comprises four classes of images: monocytes, lymphocytes, eosinophils, and neutrophils. Leukocyte segmentation is achieved through image processing techniques, including background subtraction, noise removal, and contouring. To isolate the leukocytes, background mask creation, erythrocyte mask creation, and leukocyte mask creation are performed on the blood cell images. Isolated leukocytes are then subjected to data augmentation, including brightness and contrast adjustment, flipping, and random shearing, to improve the generalizability of the CNN model. A deep Convolutional Neural Network (CNN) model is employed on the augmented dataset for effective feature extraction and classification. The deep CNN model consists of four convolutional blocks comprising eleven convolutional layers, eight batch normalization layers, eight Rectified Linear Unit (ReLU) layers, and four dropout layers to capture increasingly complex patterns. For this research, a publicly available dataset from Kaggle consisting of a total of 12,444 images of the four types of leukocytes was used to conduct the experiments. Results showcase the robustness of the proposed framework, achieving impressive performance metrics with an accuracy of 97.98% and a precision of 97.97%. These outcomes affirm the efficacy of the devised segmentation and classification approach in accurately identifying and categorizing leukocytes. The combination of an advanced CNN architecture and meticulous pre-processing steps establishes a foundation for future developments in the field of medical image analysis.
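The three-mask idea (background, erythrocyte, and leukocyte masks partitioning each image) can be sketched with simple intensity thresholding in NumPy. The toy array and both thresholds below are assumptions for illustration; the paper works on real stained blood-smear images:

```python
import numpy as np

# Toy single-channel "stain intensity" image; purely illustrative.
img = np.array([
    [0.1, 0.1, 0.2, 0.1],
    [0.1, 0.9, 0.8, 0.1],
    [0.2, 0.9, 0.9, 0.1],
    [0.1, 0.1, 0.1, 0.1],
])

background = img < 0.3          # background mask (assumed threshold)
leukocytes = img > 0.7          # leukocyte mask: strongly stained nuclei
erythrocytes = ~background & ~leukocytes  # the remainder, in this sketch

isolated = np.where(leukocytes, img, 0.0)  # keep only leukocyte pixels
print(int(leukocytes.sum()))
```

Because the two thresholds are disjoint, the three masks partition every pixel exactly once, which is what makes the isolated-leukocyte crop well defined.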


Subject(s)
Deep Learning; Humans; Data Curation; Leukocytes; Neural Networks, Computer; Blood Cells; Image Processing, Computer-Assisted/methods
3.
Sci Rep ; 14(1): 1345, 2024 01 16.
Article in English | MEDLINE | ID: mdl-38228639

ABSTRACT

A brain tumor is an uncontrolled, abnormal growth of brain cells, making it one of the deadliest diseases of the nervous system. Brain tumor segmentation for early diagnosis is a difficult task in the field of medical image analysis. Traditionally, brain tumors were segmented manually by radiologists, which requires considerable time and effort and, because of the human intervention involved, remains prone to error. It has been shown that deep learning models can outperform human experts in diagnosing brain tumors in MRI images. These algorithms employ a huge number of MRI scans to learn the difficult patterns of brain tumors and segment them automatically and accurately. Here, an encoder-decoder architecture with a deep convolutional neural network is proposed for semantic segmentation of brain tumors in MRI images. The proposed method focuses on image downsampling in the encoder part. For this, an intelligent LinkNet-34 semantic segmentation model with an EfficientNetB7 encoder is proposed. The performance of the LinkNet-34 model is compared with three other models, namely, FPN, U-Net, and PSPNet. Further, the performance of the EfficientNetB7 encoder used in the LinkNet-34 model is compared with three other encoders, namely, ResNet34, MobileNet_V2, and ResNet50. After that, the proposed model is optimized using three different optimizers: RMSProp, Adamax, and Adam. The LinkNet-34 model with the EfficientNetB7 encoder and the Adamax optimizer performed best, with a Jaccard index of 0.89 and a Dice coefficient of 0.915.
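The two reported metrics, the Jaccard index and the Dice coefficient, compare a predicted mask against the ground truth. A minimal NumPy sketch on toy binary masks (the masks are illustrative, not real MRI data):

```python
import numpy as np

# Toy binary masks standing in for a predicted and a ground-truth
# tumor segmentation; real masks come from MRI slices.
pred   = np.array([[1, 1, 0], [1, 0, 0], [0, 0, 0]])
target = np.array([[1, 1, 0], [0, 0, 0], [0, 1, 0]])

intersection = np.logical_and(pred, target).sum()
union = np.logical_or(pred, target).sum()

jaccard = intersection / union                       # |A ∩ B| / |A ∪ B|
dice = 2 * intersection / (pred.sum() + target.sum())  # 2|A ∩ B| / (|A| + |B|)

print(jaccard, dice)
```

The two metrics are monotonically related (Dice = 2J / (1 + J)), so a model that ranks higher on one also ranks higher on the other; they differ only in how harshly partial overlap is scored.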


Subject(s)
Brain Neoplasms; Semantics; Humans; Brain Neoplasms/diagnostic imaging; Algorithms; Intelligence; Neural Networks, Computer; Image Processing, Computer-Assisted
4.
Life (Basel) ; 13(10)2023 Oct 20.
Article in English | MEDLINE | ID: mdl-37895472

ABSTRACT

Bone marrow (BM) is an essential part of the hematopoietic system, which generates all of the body's blood cells and maintains the body's overall health and immune system. The classification of bone marrow cells is pivotal in both clinical and research settings because many hematological diseases, such as leukemia, myelodysplastic syndromes, and anemias, are diagnosed based on specific abnormalities in the number, type, or morphology of bone marrow cells. A robust deep-learning algorithm is therefore needed to classify bone marrow cells reliably. This study proposes a framework for categorizing bone marrow cells into seven classes. In the proposed framework, five transfer learning models (DenseNet121, EfficientNetB5, ResNet50, Xception, and MobileNetV2) are applied to the bone marrow dataset to classify the cells into seven classes. The best-performing DenseNet121 model was fine-tuned by adding one batch-normalization layer, one dropout layer, and two dense layers. The fine-tuned DenseNet121 model was optimized using several optimizers, such as AdaGrad, AdaDelta, Adamax, RMSprop, and SGD, along with different batch sizes of 16, 32, 64, and 128. The fine-tuned DenseNet121 model was integrated with an attention mechanism to improve its performance by allowing the model to focus on the most relevant features or regions of the image, which can be particularly beneficial in medical imaging, where certain regions might carry critical diagnostic information. The proposed fine-tuned and integrated DenseNet121 achieved the highest accuracy, with a training success rate of 99.97% and a testing success rate of 97.01%. Key hyperparameters, such as batch size, number of epochs, and choice of optimizer, were all considered when optimizing these pre-trained models to select the best one. This study will help medical research to effectively classify BM cells and detect diseases like leukemia early.
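The attention idea, letting the model weight the most relevant features more heavily, can be illustrated with a simple softmax re-weighting of a feature vector. The abstract does not specify the exact mechanism used, so this squeeze-and-excitation-style sketch is an assumption, with illustrative values throughout:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())  # shift for numerical stability
    return e / e.sum()

# Toy per-channel feature vector from a CNN; in a trained model the
# attention scores would come from a learned projection -- here we
# simply reuse the features themselves for illustration.
features = np.array([0.2, 1.5, 0.3, 0.9])

weights = softmax(features)      # attention weights, summing to 1
attended = features * weights    # re-weighted (attended) features

print(attended.argmax())
```

The effect is multiplicative gating: channels with high scores are amplified relative to the rest before the final classification layers see them.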

5.
Healthcare (Basel) ; 11(11)2023 May 26.
Article in English | MEDLINE | ID: mdl-37297701

ABSTRACT

Pneumonia has been directly responsible for a huge number of deaths across the globe. Pneumonia shares visual features with other respiratory diseases, such as tuberculosis, which can make it difficult to distinguish between them. Moreover, there is significant variability in the way chest X-ray images are acquired and processed, which can impact the quality and consistency of the images. This can make it challenging to develop robust algorithms that can accurately identify pneumonia in all types of images. Hence, there is a need to develop robust, data-driven algorithms that are trained on large, high-quality datasets and validated using a range of imaging techniques and expert radiological analysis. In this research, a deep-learning-based model is demonstrated for differentiating between normal and severe cases of pneumonia. The proposed system comprises eight pre-trained models, namely, ResNet50, ResNet152V2, DenseNet121, DenseNet201, Xception, VGG16, EfficientNet, and MobileNet. These eight pre-trained models were evaluated on two chest X-ray datasets of 5,856 and 112,120 images. The best accuracy was obtained with the MobileNet model, at 94.23% and 93.75% on the two datasets. Key hyperparameters, including batch size, number of epochs, and choice of optimizer, were all considered during the comparative interpretation of these models to determine the most appropriate one.
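Selecting the best backbone from such a comparison reduces to tracking each model's validation accuracy and taking the maximum. A plain-Python sketch; only MobileNet's 94.23% figure comes from the abstract, and the remaining accuracies are hypothetical placeholders:

```python
# Hypothetical validation accuracies for the eight backbones on one
# dataset; only the MobileNet value (94.23%) appears in the abstract.
results = {
    "ResNet50": 0.9101, "ResNet152V2": 0.9188,
    "DenseNet121": 0.9250, "DenseNet201": 0.9312,
    "Xception": 0.9287, "VGG16": 0.9034,
    "EfficientNet": 0.9356, "MobileNet": 0.9423,
}

# Pick the backbone with the highest validation accuracy.
best_model = max(results, key=results.get)
print(best_model, results[best_model])
```

In a full pipeline the same loop would also sweep batch size, epoch count, and optimizer per backbone before comparing the best configuration of each.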

6.
Diagnostics (Basel) ; 13(9)2023 May 08.
Article in English | MEDLINE | ID: mdl-37175042

ABSTRACT

The segmentation of lungs from medical images is a critical step in the diagnosis and treatment of lung diseases. Deep learning techniques have shown great promise in automating this task, eliminating the need for manual annotation by radiologists. In this research, a convolutional neural network architecture is proposed for lung segmentation using chest X-ray images. In the proposed model, a concatenate block is embedded to learn a series of filters, or features, used to extract meaningful information from the image. Moreover, a transpose layer is employed in the concatenate block to improve the spatial resolution of the feature maps generated by a prior convolutional layer. The proposed model is trained using k-fold cross-validation, a powerful and flexible tool for evaluating the performance of deep learning models. The model is evaluated on five different subsets of the data (k = 5) to obtain more reliable results. Its performance is analyzed with a batch size of 32, the Adam optimizer, and 40 epochs. The dataset used for segmentation is taken from the Kaggle repository. The performance parameters accuracy, IoU, and Dice coefficient are calculated, and the values obtained are 0.97, 0.93, and 0.96, respectively.

7.
Diagnostics (Basel) ; 13(7)2023 Apr 02.
Article in English | MEDLINE | ID: mdl-37046538

ABSTRACT

Brain tumor diagnosis at an early stage can improve the chances of successful treatment and lead to better patient outcomes. In the biomedical industry, non-invasive diagnostic procedures, such as magnetic resonance imaging (MRI), can be used to diagnose brain tumors. Deep learning, a type of artificial intelligence, can analyze MRI images in a matter of seconds, reducing the time it takes to reach a diagnosis and potentially improving patient outcomes. Furthermore, an ensemble model can help increase classification accuracy by combining the strengths of multiple models and compensating for their individual weaknesses. Therefore, in this research, a weighted average ensemble deep learning model is proposed for the classification of brain tumors. For the weighted ensemble classification model, three different feature spaces are taken from the transfer learning VGG19 model, a Convolutional Neural Network (CNN) model without augmentation, and a CNN model with augmentation. These three feature spaces are ensembled with the best combination of weights (weight1, weight2, and weight3) found using grid search. The dataset used for simulation is taken from The Cancer Genome Atlas (TCGA) lower-grade glioma collection, comprising 3,929 MRI images of 110 patients. The ensemble model helps reduce overfitting by combining multiple models that have learned different aspects of the data. The proposed ensemble model outperforms the three individual models for detecting brain tumors in terms of accuracy, precision, and F1-score. Therefore, the proposed model can act as a second-opinion tool for radiologists diagnosing tumors from MRI images of the brain.
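A weighted-average ensemble with a grid search over the three weights can be sketched in a few lines of NumPy. The predicted probabilities, labels, and 0.1 grid step below are illustrative assumptions, not the paper's data or search grid:

```python
import numpy as np
from itertools import product

# Toy predicted tumor probabilities from three models on four samples,
# with their true labels; values are illustrative only.
p1 = np.array([0.9, 0.2, 0.8, 0.4])
p2 = np.array([0.7, 0.4, 0.6, 0.3])
p3 = np.array([0.8, 0.1, 0.9, 0.6])
y  = np.array([1, 0, 1, 0])

best_acc, best_w = -1.0, None
# Grid search over weight triples (w1, w2, w3) that sum to 1, step 0.1.
for w1, w2 in product(np.arange(0, 1.01, 0.1), repeat=2):
    w3 = 1.0 - w1 - w2
    if w3 < -1e-9:          # skip combinations with negative w3
        continue
    ensemble = w1 * p1 + w2 * p2 + w3 * p3
    acc = ((ensemble > 0.5).astype(int) == y).mean()
    if acc > best_acc:
        best_acc, best_w = acc, (round(w1, 1), round(w2, 1), round(w3, 1))

print(best_acc, best_w)
```

Constraining the weights to sum to one keeps the ensemble output a valid probability, and the averaging itself is what smooths out the individual models' errors.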

8.
Diagnostics (Basel) ; 12(7)2022 Jul 05.
Article in English | MEDLINE | ID: mdl-35885533

ABSTRACT

Skin cancer is the most commonly diagnosed and reported malignancy worldwide. To reduce the death rate from cancer, it is essential to diagnose skin cancer while it is still at a benign stage. To save lives, an automated system that can detect skin cancer in its earliest stages is necessary. For the diagnosis of skin cancer, various researchers have applied deep learning and transfer learning models. However, the existing literature is limited in accuracy and relies on troublesome, time-consuming processes. As a result, it is critical to design an automatic system that can deliver a fast judgment and considerably reduce diagnostic mistakes. In this work, a deep learning-based model has been designed for the identification of skin cancer at benign and malignant stages using a transfer learning approach. For this, a pre-trained VGG16 model is improved by adding one flatten layer, two dense layers with the LeakyReLU activation function, and another dense layer with the sigmoid activation function to enhance the model's accuracy. The proposed model is evaluated on a dataset obtained from Kaggle. Data augmentation techniques are applied in order to enhance the randomness of the input dataset and improve model stability. The proposed model has been validated by considering several useful hyperparameters, such as batch sizes of 8, 16, 32, 64, and 128, and different numbers of epochs and optimizers. The proposed model performs best, with an overall accuracy of 89.09%, at a batch size of 128 with the Adam optimizer and 10 epochs, and outperforms state-of-the-art techniques. This model will help dermatologists in the early diagnosis of skin cancers.
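The added head (flatten, dense LeakyReLU layers, dense sigmoid output) can be illustrated as a NumPy forward pass. The layer sizes and random weights below are stand-ins, not the trained model; only the activation choices mirror the abstract:

```python
import numpy as np

def leaky_relu(x, alpha=0.3):   # 0.3 is Keras' default LeakyReLU slope
    return np.where(x > 0, x, alpha * x)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Forward pass of the classifier head on a toy flattened feature
# vector; the weights are random stand-ins, not trained parameters.
rng = np.random.default_rng(0)
features = rng.normal(size=8)   # stands in for VGG16's flattened output
W1, b1 = rng.normal(size=(8, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

hidden = leaky_relu(features @ W1 + b1)
prob = sigmoid(hidden @ W2 + b2)   # probability of "malignant"

print(float(prob))
```

LeakyReLU keeps a small gradient for negative pre-activations (avoiding "dead" units), and the single sigmoid output suits the binary benign/malignant decision.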

9.
Sensors (Basel) ; 22(3)2022 Jan 24.
Article in English | MEDLINE | ID: mdl-35161613

ABSTRACT

Dermoscopy images can be classified more accurately if skin lesions or nodules are segmented first. Because of their fuzzy borders, irregular boundaries, and inter- and intra-class variances, nodule segmentation is a difficult task. Several algorithms have been developed for the segmentation of skin lesions from dermoscopic images. However, their accuracy lags well behind the industry standard. In this paper, a modified U-Net architecture is proposed, in which the feature map dimensions are modified for accurate and automatic segmentation of dermoscopic images. In addition, adding more kernels to the feature maps allowed for more precise extraction of the nodule. We evaluated the effectiveness of the proposed model by considering several hyperparameters, such as the number of epochs, batch size, and choice of optimizer, testing it with augmentation techniques applied to increase the number of images available in the PH2 dataset. The best performance was achieved by the proposed model with the Adam optimizer, a batch size of 8, and 75 epochs.
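Changing feature-map dimensions in a U-Net-style decoder means restoring spatial resolution stage by stage. A simple nearest-neighbour upsampling sketch in NumPy that doubles a feature map's resolution; real U-Net decoders typically use learned transpose convolutions instead, so this is an illustrative stand-in:

```python
import numpy as np

# Nearest-neighbour upsampling that doubles a feature map's spatial
# resolution -- a simple stand-in for the learned transpose-convolution
# upsampling used in U-Net-style decoders.
def upsample2x(fmap):
    return fmap.repeat(2, axis=0).repeat(2, axis=1)

fmap = np.array([[1, 2],
                 [3, 4]])
up = upsample2x(fmap)
print(up.shape)
```

In the encoder the mirror operation (pooling or strided convolution) halves the resolution, so chaining the two keeps input and output masks the same size.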


Subject(s)
Melanoma; Skin Diseases; Skin Neoplasms; Algorithms; Dermoscopy; Humans; Image Processing, Computer-Assisted; Neural Networks, Computer