1.
Heliyon ; 10(5): e27509, 2024 Mar 15.
Article in English | MEDLINE | ID: mdl-38468955

ABSTRACT

Several deep-learning-assisted disease assessment schemes (DAS) have been proposed to enhance accurate detection of COVID-19, a critical medical emergency, through the analysis of clinical data. Lung imaging, particularly from CT scans, plays a pivotal role in identifying and assessing the severity of COVID-19 infections. Existing automated methods leveraging deep learning contribute significantly to reducing the diagnostic burden associated with this process. This research aims to develop a simple DAS for COVID-19 detection using pre-trained lightweight deep learning methods (LDMs) applied to lung CT slices. The use of LDMs contributes to a less complex yet highly accurate detection system. The key stages of the developed DAS include image collection and initial processing using Shannon's thresholding, deep-feature mining supported by LDMs, feature optimization utilizing the Brownian Butterfly Algorithm (BBA), and binary classification through three-fold cross-validation. The performance evaluation of the proposed scheme involves assessing individual, fused, and ensemble features. The investigation reveals that the developed DAS achieves a detection accuracy of 93.80% with individual features, 96% with fused features, and 99.10% with ensemble features. These outcomes affirm the effectiveness of the proposed scheme in significantly enhancing COVID-19 detection accuracy on the chosen lung CT database.
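
A minimal sketch of the deep-feature mining and three-fold cross-validated classification stages described above, assuming MobileNetV2 as a stand-in for the unspecified lightweight deep models and omitting the Brownian Butterfly Algorithm step:

```python
# Sketch: deep-feature mining from lung CT slices with a lightweight pretrained
# backbone, followed by 3-fold cross-validated binary classification.
# MobileNetV2 stands in for the paper's unspecified lightweight deep models (LDMs);
# the Brownian Butterfly Algorithm feature-optimization step is omitted here.
import numpy as np
from tensorflow.keras.applications import MobileNetV2
from tensorflow.keras.applications.mobilenet_v2 import preprocess_input
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Hypothetical data: N CT slices resized to 224x224 RGB, labels 0 = normal, 1 = COVID-19.
X_img = np.random.rand(60, 224, 224, 3).astype("float32")
y = np.random.randint(0, 2, size=60)

# Global-average-pooled bottleneck activations serve as the deep features.
backbone = MobileNetV2(weights="imagenet", include_top=False, pooling="avg")
features = backbone.predict(preprocess_input(X_img * 255.0), verbose=0)

# Three-fold cross-validated binary classification, as in the described DAS.
scores = cross_val_score(SVC(kernel="rbf"), features, y, cv=3)
print("3-fold accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))
```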

2.
Article in English | MEDLINE | ID: mdl-38384147

ABSTRACT

The death of brain cells occurs when blood flow to a particular area of the brain is abruptly cut off, resulting in a stroke. Early recognition of stroke symptoms is essential to prevent strokes and promote a healthy lifestyle. FAST tests (looking for abnormalities in the face, arms, and speech) have limitations in reliability and accuracy for diagnosing strokes. This research employs machine learning (ML) techniques to develop and assess multiple ML models to establish a robust stroke risk prediction framework. It uses a stacking-based ensemble method to select the best three ML models and combine their collective intelligence. An empirical evaluation on a publicly available stroke prediction dataset demonstrates the superior performance of the proposed stacking-based ensemble model, with only one misclassification. The experimental results reveal that the proposed stacking model surpasses other state-of-the-art research, achieving accuracy, precision, and F1-score of 99.99%, recall of 100%, and receiver operating characteristic (ROC), Matthews correlation coefficient (MCC), and Kappa scores of 1.0. Furthermore, Shapley Additive Explanations (SHAP) are employed to analyze the predictions of the black-box ML models. The findings highlight that age, BMI, and glucose level are the most significant risk factors for stroke prediction. These findings contribute to the development of more efficient techniques for stroke prediction, potentially saving many lives.
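
A small illustration of a stacking-based ensemble with a post-hoc SHAP explanation, using illustrative base learners and synthetic data rather than the paper's selected models and stroke dataset:

```python
# Sketch: stacking-based ensemble for tabular stroke-risk prediction with a
# post-hoc SHAP explanation. The three base learners here are illustrative;
# the paper's own model selection and dataset are not reproduced.
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the public stroke dataset (age, BMI, glucose, ...).
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=0)),
                ("gb", GradientBoostingClassifier(random_state=0)),
                ("lr", LogisticRegression(max_iter=1000))],
    final_estimator=LogisticRegression(max_iter=1000))
stack.fit(X_tr, y_tr)
print("Held-out accuracy:", stack.score(X_te, y_te))

# Model-agnostic SHAP values explain the black-box ensemble's predictions.
explainer = shap.KernelExplainer(stack.predict_proba, X_tr[:50])
shap_values = explainer.shap_values(X_te[:5])
print("SHAP values computed for 5 test samples.")
```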

3.
Math Biosci Eng ; 20(11): 19454-19467, 2023 Oct 20.
Article in English | MEDLINE | ID: mdl-38052609

ABSTRACT

Cancer occurrence rates are gradually rising in the population, which causes a heavy diagnostic burden globally. The rate of colorectal (bowel) cancer (CC) is gradually rising, and it is currently listed as the third most common cancer globally. Therefore, early screening and treatment with a recommended clinical protocol are necessary to treat this cancer. The aim of this paper is to develop a Deep-Learning Framework (DLF) to classify colon histology slides into normal/cancer classes using deep-learning-based features. The stages of the framework include the following: (i) Image collection, resizing, and pre-processing; (ii) Deep-Feature (DF) extraction with a chosen scheme; (iii) Binary classification with 5-fold cross-validation; and (iv) Verification of the clinical significance. This work classifies the considered image database using the following: (i) Individual DF, (ii) Fused DF, and (iii) Ensemble DF. The achieved results are separately verified using binary classifiers. The proposed work considered 4000 (2000 normal and 2000 cancer) histology slides for the examination. The result of this research confirms that the fused DF helps to achieve a detection accuracy of 99% with the K-Nearest Neighbor (KNN) classifier. In contrast, the individual and ensemble DF provide classification accuracies of 93.25% and 97.25%, respectively.
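
A brief sketch of the fused-deep-feature idea, assuming VGG16 and ResNet50 as example backbones whose pooled features are serially concatenated and classified with KNN under five-fold cross-validation:

```python
# Sketch: serial fusion of deep features from two pretrained backbones,
# classified with KNN under 5-fold cross-validation. VGG16 and ResNet50 are
# assumed backbones; the paper's exact feature schemes are not specified here.
import numpy as np
from tensorflow.keras.applications import VGG16, ResNet50
from tensorflow.keras.applications.vgg16 import preprocess_input as vgg_pre
from tensorflow.keras.applications.resnet50 import preprocess_input as res_pre
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical histology patches: labels 0 = normal, 1 = cancer.
X_img = (np.random.rand(40, 224, 224, 3) * 255).astype("float32")
y = np.random.randint(0, 2, size=40)

vgg = VGG16(weights="imagenet", include_top=False, pooling="avg")
res = ResNet50(weights="imagenet", include_top=False, pooling="avg")
f_vgg = vgg.predict(vgg_pre(X_img.copy()), verbose=0)   # 512-D per image
f_res = res.predict(res_pre(X_img.copy()), verbose=0)   # 2048-D per image

# Fused deep features = serial concatenation of the two feature vectors.
fused = np.concatenate([f_vgg, f_res], axis=1)
scores = cross_val_score(KNeighborsClassifier(n_neighbors=5), fused, y, cv=5)
print("5-fold KNN accuracy on fused features: %.3f" % scores.mean())
```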


Subject(s)
Deep Learning; Neoplasms; Humans; Algorithms; Image Processing, Computer-Assisted/methods; Colon; Neoplasms/diagnosis
4.
Biomolecules ; 13(7)2023 07 07.
Article in English | MEDLINE | ID: mdl-37509126

ABSTRACT

Humankind is witnessing a gradual increase in cancer incidence, emphasizing the importance of early diagnosis and treatment, and follow-up clinical protocols. Oral or mouth cancer, categorized under head and neck cancers, requires effective screening for timely detection. This study proposes a framework, OralNet, for oral cancer detection using histopathology images. The research encompasses four stages: (i) Image collection and preprocessing, gathering and preparing histopathology images for analysis; (ii) feature extraction using deep and handcrafted scheme, extracting relevant features from images using deep learning techniques and traditional methods; (iii) feature reduction artificial hummingbird algorithm (AHA) and concatenation: Reducing feature dimensionality using AHA and concatenating them serially and (iv) binary classification and performance validation with three-fold cross-validation: Classifying images as healthy or oral squamous cell carcinoma and evaluating the framework's performance using three-fold cross-validation. The current study examined whole slide biopsy images at 100× and 400× magnifications. To establish OralNet's validity, 3000 cropped and resized images were reviewed, comprising 1500 healthy and 1500 oral squamous cell carcinoma images. Experimental results using OralNet achieved an oral cancer detection accuracy exceeding 99.5%. These findings confirm the clinical significance of the proposed technique in detecting oral cancer presence in histology slides.
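
An illustrative sketch of the feature reduction and serial concatenation stage, with PCA standing in for the artificial hummingbird algorithm and random placeholder feature blocks:

```python
# Sketch: reducing deep and handcrafted feature blocks and concatenating them
# serially before a 3-fold cross-validated binary classifier. PCA stands in for
# the artificial hummingbird algorithm (AHA) reduction step, which is not
# reproduced here; the feature blocks are random placeholders.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
deep_feat = rng.normal(size=(120, 1024))   # placeholder deep features
hand_feat = rng.normal(size=(120, 59))     # placeholder handcrafted features
y = rng.integers(0, 2, size=120)           # 0 = healthy, 1 = oral squamous cell carcinoma

# Reduce each block independently, then concatenate serially.
deep_red = PCA(n_components=32, random_state=0).fit_transform(deep_feat)
hand_red = PCA(n_components=16, random_state=0).fit_transform(hand_feat)
combined = np.concatenate([deep_red, hand_red], axis=1)

scores = cross_val_score(SVC(), combined, y, cv=3)
print("3-fold accuracy with reduced + concatenated features: %.3f" % scores.mean())
```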


Subject(s)
Carcinoma, Squamous Cell; Head and Neck Neoplasms; Mouth Neoplasms; Humans; Carcinoma, Squamous Cell/diagnosis; Carcinoma, Squamous Cell/pathology; Squamous Cell Carcinoma of Head and Neck; Mouth Neoplasms/diagnosis; Mouth Neoplasms/pathology; Algorithms
5.
Diagnostics (Basel) ; 13(11)2023 May 23.
Article in English | MEDLINE | ID: mdl-37296683

ABSTRACT

Advances in science and technology have led to several improvements in computing facilities, including the implementation of automation in multi-specialty hospitals. This research aims to develop an efficient deep-learning-based brain-tumor (BT) detection scheme to detect tumors in FLAIR- and T2-modality magnetic-resonance-imaging (MRI) slices. MRI slices of the axial-plane brain are used to test and verify the scheme, and its reliability is also verified through clinically collected MRI slices. The proposed scheme involves the following stages: (i) pre-processing the raw MRI image, (ii) deep-feature extraction using pretrained schemes, (iii) watershed-algorithm-based BT segmentation and mining of the shape features, (iv) feature optimization using the elephant-herding algorithm (EHA), and (v) binary classification and verification using three-fold cross-validation. Using (a) individual features, (b) dual deep features, and (c) integrated features, the BT-classification task is accomplished in this study. Each experiment is conducted separately on the chosen BRATS and TCIA benchmark MRI slices. This research indicates that the integrated feature-based scheme helps to achieve a classification accuracy of 99.6667% when a support-vector-machine (SVM) classifier is considered. Further, the performance of this scheme is verified using noise-attacked MRI slices, and better classification results are achieved.
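
A generic marker-controlled watershed example on a synthetic slice, hinting at stage (iii); the scheme's actual pre-processing and marker choices are not reproduced here:

```python
# Sketch: marker-controlled watershed segmentation of a bright region in a
# grayscale slice, as a generic stand-in for the BT-segmentation stage; the
# paper's exact pre-processing and markers are not specified here.
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import threshold_otsu
from skimage.segmentation import watershed
from skimage.feature import peak_local_max

# Synthetic "MRI slice" with one bright blob standing in for a tumor region.
yy, xx = np.mgrid[:128, :128]
img = np.exp(-(((yy - 60) ** 2 + (xx - 70) ** 2) / (2 * 15.0 ** 2)))

mask = img > threshold_otsu(img)                      # rough foreground mask
distance = ndi.distance_transform_edt(mask)           # distance map inside mask
peaks = peak_local_max(distance, labels=mask.astype(int), num_peaks=1)
markers = np.zeros_like(img, dtype=int)
markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)

labels = watershed(-distance, markers, mask=mask)     # watershed on inverted distance
print("Segmented tumor pixels:", int((labels > 0).sum()))
```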

6.
Front Public Health ; 11: 1109236, 2023.
Article in English | MEDLINE | ID: mdl-36794074

ABSTRACT

Introduction: Cancer incidence rates in humans are gradually rising due to a variety of reasons, and timely detection and management are essential to decrease disease rates. The kidney is one of the vital organs in human physiology, and cancer in the kidney is a medical emergency that needs accurate diagnosis and well-organized management. Methods: The proposed work aims to develop a framework to classify renal computed tomography (CT) images into healthy/cancer classes using pre-trained deep-learning schemes. To improve the detection accuracy, this work suggests a threshold filter-based pre-processing scheme, which helps remove artefacts from the CT slices to achieve better detection. The various stages of this scheme involve: (i) image collection, resizing, and artefact removal; (ii) deep-feature extraction; (iii) feature reduction and fusion; and (iv) binary classification using five-fold cross-validation. Results and discussion: This experimental investigation is executed separately for (i) CT slices with the artefact and (ii) CT slices without the artefact. The experimental outcome of this study shows that the K-Nearest Neighbor (KNN) classifier is able to achieve 100% detection accuracy using the pre-processed CT slices. Therefore, this clinically significant scheme can be considered for examining clinical-grade renal CT images.
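
A plausible (assumed) threshold-filter pre-processing step that discards small bright artefacts by keeping only the largest foreground region of a slice:

```python
# Sketch: a simple threshold-filter pre-processing step that keeps only the
# largest bright region of a CT slice, discarding peripheral artefacts. This is
# a plausible stand-in for the described artefact-removal filter, not its exact method.
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops

def remove_artefacts(ct_slice):
    """Zero out everything except the largest connected foreground region."""
    mask = ct_slice > threshold_otsu(ct_slice)
    lab = label(mask)
    if lab.max() == 0:
        return ct_slice
    largest = max(regionprops(lab), key=lambda r: r.area).label
    return np.where(lab == largest, ct_slice, 0)

# Synthetic slice: a large "body" region plus a small bright artefact.
slice_ = np.zeros((96, 96))
slice_[20:80, 20:80] = 0.8      # body
slice_[2:6, 2:6] = 1.0          # artefact
cleaned = remove_artefacts(slice_)
print("Artefact removed:", cleaned[2:6, 2:6].max() == 0)
```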


Subject(s)
Neoplasms; Humans; Tomography, X-Ray Computed/methods; Diagnosis, Differential; Kidney/diagnostic imaging
7.
Life (Basel) ; 12(11)2022 Nov 11.
Article in English | MEDLINE | ID: mdl-36430983

ABSTRACT

Due to various reasons, the incidence rate of communicable diseases in humans is steadily rising, and timely detection and handling will reduce the speed of disease spread. Tuberculosis (TB) is a severe communicable illness caused by the bacterium Mycobacterium tuberculosis (M. tuberculosis), which predominantly affects the lungs and causes severe respiratory problems. Due to its significance, several clinical-level TB detection procedures have been suggested, including lung diagnosis with chest X-ray images. The proposed work aims to develop an automatic TB detection system to assist the pulmonologist in confirming the severity of the disease, decision-making, and treatment execution. The proposed system employs a pre-trained VGG19 with the following phases: (i) image pre-processing, (ii) mining of deep features, (iii) enhancing the X-ray images with chosen procedures and mining of the handcrafted features, (iv) feature optimization using the Seagull Algorithm and serial concatenation, and (v) binary classification and validation. The classification is executed with 10-fold cross-validation in this work, and the proposed work is investigated using MATLAB® software. The proposed research work was executed using the concatenated deep and handcrafted features, which provided a classification accuracy of 98.6190% with the SVM-Medium Gaussian (SVM-MG) classifier.
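
A short example of the handcrafted stream implied by phase (iii), assuming CLAHE enhancement and HOG descriptors; the Seagull Algorithm and VGG19 stages are not reproduced:

```python
# Sketch: enhancing an X-ray-like image with CLAHE and mining handcrafted HOG
# features from it, the kind of handcrafted stream that is later concatenated
# with VGG19 deep features in the described scheme. The enhancement choice
# (CLAHE) and HOG parameters are assumptions, not the paper's exact procedures.
import numpy as np
from skimage.exposure import equalize_adapthist
from skimage.feature import hog

# Synthetic grayscale "chest X-ray" in [0, 1].
xray = np.random.rand(128, 128)

enhanced = equalize_adapthist(xray, clip_limit=0.02)   # CLAHE enhancement
hand_features = hog(enhanced, orientations=9,
                    pixels_per_cell=(16, 16), cells_per_block=(2, 2))
print("Handcrafted feature length:", hand_features.shape[0])
# These handcrafted features would be serially concatenated with the deep
# features mined by the pre-trained VGG19 before 10-fold cross-validation.
```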

8.
Comput Intell Neurosci ; 2022: 9263379, 2022.
Article in English | MEDLINE | ID: mdl-36248926

ABSTRACT

Lung abnormality in humans is steadily increasing due to various causes, and early recognition and treatment are extensively suggested. Tuberculosis (TB) is one of these lung diseases, and due to its occurrence rate and harshness, the World Health Organization (WHO) lists TB among the top ten diseases leading to death. Clinical-level detection of TB is usually performed using biomedical imaging methods, and the chest X-ray is a commonly adopted imaging modality. This work aims to develop an automated procedure to detect TB from X-ray images using VGG-UNet-supported joint segmentation and classification. The various phases of the proposed scheme involve: (i) image collection and resizing, (ii) deep-feature mining, (iii) segmentation of the lung section, (iv) local-binary-pattern (LBP) generation and feature extraction, (v) optimal feature selection using the spotted hyena algorithm (SHA), (vi) serial feature concatenation, and (vii) classification and validation. This research considered 3000 test images (1500 healthy and 1500 TB class) for the assessment, and the proposed experiment is implemented using Matlab®. This work implements pretrained models to detect TB in X-rays with improved accuracy, and this research helped achieve a classification accuracy of >99% with a fine-tree classifier.
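
A compact illustration of phase (iv), local-binary-pattern feature generation, paired with a decision tree as a stand-in for the fine-tree classifier:

```python
# Sketch: local-binary-pattern (LBP) histogram features from lung-region crops,
# classified with a decision tree (a stand-in for the "fine tree" classifier).
# The VGG-UNet segmentation and spotted-hyena feature-selection stages are omitted.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

def lbp_histogram(img, P=8, R=1.0):
    """Uniform LBP codes summarized as a normalized histogram."""
    codes = local_binary_pattern(img, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=np.arange(P + 3), density=True)
    return hist

rng = np.random.default_rng(1)
images = (rng.random((50, 64, 64)) * 255).astype(np.uint8)   # placeholder lung crops
labels = rng.integers(0, 2, size=50)                          # 0 = healthy, 1 = TB

X = np.array([lbp_histogram(im) for im in images])
scores = cross_val_score(DecisionTreeClassifier(random_state=0), X, labels, cv=5)
print("5-fold accuracy with LBP histograms: %.3f" % scores.mean())
```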


Subject(s)
Hyaenidae; Tuberculosis; Algorithms; Animals; Humans; Lung/diagnostic imaging; Tuberculosis/diagnostic imaging
9.
Scanning ; 2022: 7733860, 2022.
Article in English | MEDLINE | ID: mdl-35800206

ABSTRACT

This research work aims to implement an automated segmentation process to extract the endoplasmic reticulum (ER) network in fluorescence microscopy images (FMI) using pretrained convolutional neural networks (CNN). The threshold level of the raw FMI is complex, and extraction of the ER network is a challenging task. Hence, an image conversion procedure is initially employed to reduce its complexity. This work employed pretrained CNN schemes, such as VGG-UNet and VGG-SegNet, to mine the ER network from the chosen FMI test images. The proposed ER segmentation pipeline consists of the following phases: (i) clinical image collection, 16-bit to 8-bit conversion, and resizing; (ii) implementation of pretrained VGG-UNet and VGG-SegNet; (iii) extraction of the binary form of the ER network; (iv) comparison of the mined ER with the ground truth; and (v) computation of image measures and validation. The considered FMI dataset consists of 223 test images, and image augmentation is implemented to increase the number of images. The result of this scheme is then compared against other CNN methods, such as U-Net, SegNet, and Res-UNet. The experimental outcome confirms a segmentation accuracy of >98% with VGG-UNet and VGG-SegNet. The results of this research confirm that the proposed pipeline can be considered for examining clinical-grade FMI.
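
Two small helpers illustrating the bookend steps of this pipeline, 16-bit to 8-bit conversion and a Dice-style comparison with ground truth; the VGG-UNet/VGG-SegNet inference itself is not shown:

```python
# Sketch: two bookend steps of the described pipeline, shown in isolation:
# converting a 16-bit fluorescence image to 8-bit, and scoring a predicted
# binary ER mask against ground truth with the Dice coefficient.
import numpy as np

def to_8bit(img16):
    """Linearly rescale a 16-bit image to the 8-bit range."""
    img = img16.astype(np.float64)
    img = (img - img.min()) / max(img.max() - img.min(), 1e-9)
    return (img * 255).astype(np.uint8)

def dice(pred, truth):
    """Dice similarity between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / max(pred.sum() + truth.sum(), 1)

fmi16 = (np.random.rand(64, 64) * 65535).astype(np.uint16)   # placeholder 16-bit FMI
fmi8 = to_8bit(fmi16)

truth = np.zeros((64, 64), dtype=np.uint8); truth[20:40, 20:40] = 1
pred = np.zeros_like(truth); pred[22:42, 22:42] = 1
print("8-bit range:", fmi8.min(), fmi8.max(), "| Dice:", round(dice(pred, truth), 3))
```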


Subject(s)
Image Processing, Computer-Assisted; Neural Networks, Computer; Endoplasmic Reticulum; Image Processing, Computer-Assisted/methods; Microscopy, Fluorescence
10.
Front Public Health ; 10: 819865, 2022.
Article in English | MEDLINE | ID: mdl-35400062

ABSTRACT

Understanding the reason for an infant's cry is one of the most difficult things for parents. There might be various reasons behind the baby's cry: it may be due to hunger, pain, sleep, or diaper-related problems. The key to identifying the reason behind the infant's cry lies in the varying patterns of the crying audio. The audio file comprises many features, which are highly important in classifying the results, and it is important to convert the audio signals into the required spectrograms. In this article, we try to find efficient solutions to the problem of predicting the reason behind an infant's cry. We used the Mel-frequency cepstral coefficients (MFCC) algorithm to generate the spectrograms and analyzed the varying feature vectors. We then came up with two approaches to obtain the experimental results. In the first approach, we used Convolutional Neural Network (CNN) variants such as VGG16 and YOLOv4 to classify the infant cry signals. In the second approach, a multistage heterogeneous stacking ensemble model was used for infant cry classification; its major advantage was the inclusion of various advanced boosting algorithms at various levels. The proposed multistage heterogeneous stacking ensemble model had the edge over the other neural network models, especially in terms of overall performance and computing power. Finally, after many comparisons, the proposed model delivered the best performance, with a mean classification accuracy of up to 93.7%.
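
A minimal example of MFCC feature extraction with librosa on a synthetic signal standing in for a cry recording:

```python
# Sketch: turning a cry-like audio signal into MFCC features of the sort used
# as model inputs in the described approaches. The synthetic tone below merely
# stands in for a real infant-cry recording.
import numpy as np
import librosa

sr = 22050
t = np.linspace(0, 2.0, int(2.0 * sr), endpoint=False)
cry = 0.5 * np.sin(2 * np.pi * 440 * t)                 # placeholder audio signal

# Mel-frequency cepstral coefficients: one 13-D vector per analysis frame.
mfcc = librosa.feature.mfcc(y=cry.astype(np.float32), sr=sr, n_mfcc=13)
print("MFCC matrix shape (coefficients x frames):", mfcc.shape)
# A mel-spectrogram image of the same clip could instead feed CNNs such as VGG16.
```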


Subject(s)
Crying; Neural Networks, Computer; Algorithms; Humans; Infant
11.
Sensors (Basel) ; 23(1)2022 Dec 27.
Article in English | MEDLINE | ID: mdl-36616876

ABSTRACT

Brain abnormality causes severe human problems, and thorough screening is necessary to identify the disease. In clinics, bio-image-supported brain abnormality screening is employed mainly because of its investigative accuracy compared with bio-signal (EEG)-based practice. This research aims to develop a reliable disease screening framework for the automatic identification of schizophrenia (SCZ) conditions from brain MRI slices. This scheme consists of the following phases: (i) MRI slice collection and pre-processing, (ii) implementation of VGG16 to extract deep features (DF), (iii) collection of handcrafted features (HF), (iv) mayfly-algorithm-supported optimal feature selection, (v) serial feature concatenation, and (vi) binary classifier execution and validation. The performance of the proposed scheme was independently tested with DF, HF, and concatenated features (DF+HF), and the achieved outcome of this study verifies that the schizophrenia screening accuracy with DF+HF is superior to that of the other methods. During this work, brain MRI images of 40 patients (20 control and 20 SCZ class) were considered for the investigation, and the following accuracies were achieved: DF provided >91%, HF obtained >85%, and DF+HF achieved >95%. Therefore, this framework is clinically significant, and in the future, it can be used to inspect actual patients' brain MRI slices.
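
A toy wrapper-style feature selection over the concatenated DF+HF vector, using random search as a crude stand-in for the mayfly algorithm and random placeholder data:

```python
# Sketch: a simple random-search wrapper over binary feature masks, standing in
# for the mayfly-algorithm-based optimal feature selection applied to the
# concatenated DF+HF vector; the data here are random placeholders.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(80, 40))          # placeholder concatenated DF+HF features
y = rng.integers(0, 2, size=80)        # 0 = control, 1 = SCZ

def fitness(mask):
    """Cross-validated accuracy of an SVM restricted to the masked features."""
    if mask.sum() == 0:
        return 0.0
    return cross_val_score(SVC(), X[:, mask], y, cv=3).mean()

best_mask, best_score = None, -1.0
for _ in range(30):                    # crude search; a metaheuristic would guide this
    mask = rng.random(X.shape[1]) < 0.5
    score = fitness(mask)
    if score > best_score:
        best_mask, best_score = mask, score

print("Selected %d/%d features, CV accuracy %.3f"
      % (best_mask.sum(), X.shape[1], best_score))
```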


Subject(s)
Brain Diseases; Ephemeroptera; Schizophrenia; Animals; Humans; Schizophrenia/diagnostic imaging; Magnetic Resonance Imaging/methods; Algorithms; Brain/diagnostic imaging
12.
Diagnostics (Basel) ; 11(12)2021 Nov 26.
Article in English | MEDLINE | ID: mdl-34943443

ABSTRACT

The pulmonary nodule is one of the lung diseases, and its early diagnosis and treatment are essential to cure the patient. This paper introduces a deep learning framework to support the automated detection of lung nodules in computed tomography (CT) images. The proposed framework employs VGG-SegNet-supported nodule mining and pre-trained DL-based classification to support automated lung nodule detection. The classification of lung CT images is implemented using the attained deep features, which are then serially concatenated with handcrafted features, such as the Grey-Level Co-occurrence Matrix (GLCM), Local Binary Pattern (LBP), and Pyramid Histogram of Oriented Gradients (PHOG), to enhance the disease detection accuracy. The images used for the experiments are collected from the LIDC-IDRI and Lung-PET-CT-Dx datasets. The experimental results show that the VGG19 architecture with concatenated deep and handcrafted features can achieve an accuracy of 97.83% with the SVM-RBF classifier.
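
A short sketch of one handcrafted feature family mentioned above, GLCM texture descriptors, computed from a placeholder nodule patch:

```python
# Sketch: mining GLCM texture descriptors from a nodule patch, one of the
# handcrafted feature families (GLCM, LBP, PHOG) that the scheme concatenates
# with deep features; the patch below is a random placeholder.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

patch = (np.random.rand(64, 64) * 255).astype(np.uint8)   # placeholder CT nodule patch

# Co-occurrence matrix over four orientations at distance 1, 256 grey levels.
glcm = graycomatrix(patch, distances=[1],
                    angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                    levels=256, symmetric=True, normed=True)
glcm_features = np.hstack([graycoprops(glcm, prop).ravel()
                           for prop in ("contrast", "homogeneity", "energy", "correlation")])
print("GLCM feature vector length:", glcm_features.shape[0])   # 4 props x 4 angles = 16
```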

13.
Behav Neurol ; 2021: 2560388, 2021.
Article in English | MEDLINE | ID: mdl-34966463

ABSTRACT

The excessive number of COVID-19 cases reported worldwide so far, supplemented by a high rate of false alarms in its diagnosis using the conventional polymerase chain reaction method, has led to an increased number of high-resolution computed tomography (CT) examinations. The manual inspection of the latter, besides being slow, is susceptible to human errors, especially because of an uncanny resemblance between the CT scans of COVID-19 and those of pneumonia, and therefore demands a proportional increase in the number of expert radiologists. Artificial intelligence-based computer-aided diagnosis of COVID-19 using CT scans has recently been proposed and has proven its effectiveness in terms of accuracy and computation time. In this work, a similar framework for the classification of COVID-19 using CT scans is proposed. The proposed method includes four core steps: (i) preparing a database of three different classes (COVID-19, pneumonia, and normal); (ii) modifying three pretrained deep learning models (VGG16, ResNet50, and ResNet101) for the classification of COVID-19-positive scans; (iii) proposing an activation function and improving the firefly algorithm for feature selection; and (iv) fusing the optimal selected features using a descending-order serial approach and classifying them using multiclass supervised learning algorithms. We demonstrate that, when performed on a publicly available dataset, this system attains an improved accuracy of 97.9% with a computational time of almost 34 s.
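
An assumed sketch of step (ii), replacing a pretrained backbone's head for three-class classification; ResNet50 and the head sizes are illustrative, and the activation-function and firefly-based selection steps are omitted:

```python
# Sketch: adapting a pretrained backbone to the three-class COVID-19 /
# pneumonia / normal problem by replacing its classification head. ResNet50 and
# the head sizes are assumptions; the paper's activation function, firefly-based
# feature selection, and feature fusion are not reproduced here.
from tensorflow.keras.applications import ResNet50
from tensorflow.keras import layers, models

base = ResNet50(weights="imagenet", include_top=False,
                pooling="avg", input_shape=(224, 224, 3))
base.trainable = False                      # keep ImageNet weights frozen initially

model = models.Sequential([
    base,
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.3),
    layers.Dense(3, activation="softmax"),  # COVID-19, pneumonia, normal
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # with a real CT dataset
```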


Subject(s)
COVID-19; Deep Learning; Artificial Intelligence; Computers; Humans; SARS-CoV-2; Tomography, X-Ray Computed
14.
Appl Intell (Dordr) ; 51(1): 571-585, 2021.
Article in English | MEDLINE | ID: mdl-34764547

ABSTRACT

Lung abnormality is one of the common diseases in humans of all age groups, and this disease may arise due to various reasons. Recently, the lung infection due to SARS-CoV-2 has affected a large human community globally, and due to its rapid spread, the World Health Organisation (WHO) declared it a pandemic. The COVID-19 disease has adverse effects on the respiratory system, and the infection severity can be detected using a chosen imaging modality. In the proposed research work, COVID-19 is detected using transfer learning from CT scan images decomposed to three levels using the stationary wavelet transform. A three-phase detection model is proposed to improve the detection accuracy, and the procedures are as follows: Phase 1, data augmentation using stationary wavelets; Phase 2, COVID-19 detection using a pre-trained CNN model; and Phase 3, abnormality localization in CT scan images. This work considered the well-known pre-trained architectures ResNet18, ResNet50, ResNet101, and SqueezeNet for the experimental evaluation. In this work, 70% of the images are used to train the network and 30% to validate it. The performance of the considered architectures is evaluated by computing the common performance measures. The result of the experimental evaluation confirms that the ResNet18 pre-trained transfer-learning-based model offered better classification accuracy (training = 99.82%, validation = 97.32%, and testing = 99.4%) on the considered image dataset compared with the alternatives.
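
A brief example of a three-level stationary wavelet decomposition with PyWavelets, the kind of transform Phase 1 uses for augmentation; the wavelet choice and input are assumptions:

```python
# Sketch: a three-level stationary (undecimated) wavelet decomposition of a CT
# slice with PyWavelets, illustrating the Phase 1 augmentation idea; the slice
# and the wavelet choice ('db1') are assumptions.
import numpy as np
import pywt

ct_slice = np.random.rand(256, 256)          # placeholder CT slice (side divisible by 2**3)

# swt2 returns one (approx, (horizontal, vertical, diagonal)) tuple per level,
# each subband keeping the original 256x256 size (no downsampling).
coeffs = pywt.swt2(ct_slice, wavelet="db1", level=3)
for lvl, (cA, (cH, cV, cD)) in enumerate(coeffs, start=1):
    print("Level %d subband shape:" % lvl, cA.shape)
# The approximation/detail images can be fed to the pre-trained CNNs as augmented views.
```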
