Results 1 - 20 of 132
1.
Proteins ; 2024 Mar 23.
Article in English | MEDLINE | ID: mdl-38520179

ABSTRACT

During the last three decades, antimicrobial peptides (AMPs) have emerged as a promising therapeutic alternative to antibiotics. Approaches for designing AMPs span from experimental trial-and-error methods to synthetic hybrid peptide libraries. To overcome the exceedingly expensive and time-consuming process of designing effective AMPs, many computational and machine-learning tools for AMP prediction have recently been developed. In general, featurization of peptide sequences relies on approaches based on (a) amino acid (AA) composition, (b) physicochemical properties, (c) sequence similarity, and (d) structural properties. In this work, we present an image-based deep neural network model to predict AMPs, using a feature encoding based on Drude polarizable force-field atom types, which captures peptide properties more efficiently than conventional feature vectors. The proposed model identifies short AMPs (≤30 AA) with promising accuracy and efficiency and can be used as a next-generation screening method for predicting new AMPs. The source code is publicly available at the Figshare server sAMP-VGG16.
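
As a rough illustration of the image-style encoding idea, the sketch below maps each residue of a short peptide to a row of numeric features and classifies the resulting 2-D array with a tiny CNN. The random 4-column feature table stands in for the Drude polarizable force-field atom-type encoding, and the network head is a placeholder, not the paper's architecture:

```python
# Sketch: encode a short peptide (<=30 AA) as a 2-D "image" of per-residue
# feature rows, then classify with a small CNN. The 4-column table below is
# an illustrative stand-in for the force-field atom-type encoding.
import numpy as np
import tensorflow as tf

FEATS = {aa: np.random.RandomState(ord(aa)).rand(4) for aa in "ACDEFGHIKLMNPQRSTVWY"}
MAX_LEN = 30

def encode(seq: str) -> np.ndarray:
    img = np.zeros((MAX_LEN, 4))           # zero-pad shorter peptides
    for i, aa in enumerate(seq[:MAX_LEN]):
        img[i] = FEATS[aa]
    return img[..., np.newaxis]            # add a channel axis

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, padding="same", activation="relu",
                           input_shape=(MAX_LEN, 4, 1)),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # AMP vs. non-AMP
])
model.compile(optimizer="adam", loss="binary_crossentropy")
print(model.predict(np.stack([encode("GIGKFLHSAKKFGKAFVGEIMNS")])))
```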

2.
Front Zool ; 21(1): 10, 2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38561769

ABSTRACT

BACKGROUND: Rapid identification and classification of bats are critical for practical applications. However, species identification of bats is typically a demanding and time-consuming manual task that depends on taxonomists and well-trained experts. Deep Convolutional Neural Networks (DCNNs) provide a practical approach for extracting visual features and classifying objects, with potential application to bat classification. RESULTS: In this study, we investigated the capability of deep learning models to classify 7 horseshoe bat taxa (CHIROPTERA: Rhinolophus) from Southern China. We constructed an image dataset of 879 front, oblique, and lateral targeted facial images of live individuals collected during surveys between 2012 and 2021. All images were taken using a standardized photographic protocol and setup aimed at enhancing the effectiveness of DCNN classification. The results demonstrate that our customized VGG16-CBAM model achieved up to 92.15% classification accuracy, outperforming other mainstream models. Furthermore, Grad-CAM visualization reveals that the model attends to taxonomically key regions in its decision-making, regions that bat taxonomists themselves rely on when classifying horseshoe bats, corroborating the validity of our methods. CONCLUSION: Our findings should inspire further research on image-based automatic classification of chiropteran species for early detection and potential application in taxonomy.
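
The Grad-CAM check described above can be sketched against a stock Keras VGG16 as below; the paper's VGG16-CBAM variant would substitute its own model and final convolutional layer, so `block5_conv3` and the random input here are assumptions:

```python
# Sketch of Grad-CAM over the last VGG16 conv block, the style of
# visualization used to inspect which facial regions drive a prediction.
import tensorflow as tf

model = tf.keras.applications.VGG16(weights="imagenet")   # downloads on first use
grad_model = tf.keras.Model(model.input,
                            [model.get_layer("block5_conv3").output, model.output])

def grad_cam(img_batch):
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(img_batch)
        idx = int(tf.argmax(preds[0]))                    # top predicted class
        top = preds[:, idx]
    grads = tape.gradient(top, conv_out)
    weights = tf.reduce_mean(grads, axis=(1, 2))          # GAP over space
    cam = tf.einsum("bhwc,bc->bhw", conv_out, weights)    # weighted sum of maps
    cam = tf.nn.relu(cam)
    return cam / (tf.reduce_max(cam) + 1e-8)              # normalized heatmap

heat = grad_cam(tf.random.uniform((1, 224, 224, 3)))
print(heat.shape)  # (1, 14, 14); upsample onto the input image to inspect
```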

3.
Respiration ; : 1-14, 2024 Jul 24.
Article in English | MEDLINE | ID: mdl-39047695

ABSTRACT

INTRODUCTION: Exacerbations of chronic obstructive pulmonary disease (COPD) have a significant impact on hospitalizations, morbidity, and mortality. This study aimed to develop a model for predicting acute exacerbation in COPD patients (AECOPD) based on deep-learning (DL) features. METHODS: We performed a retrospective study on 219 patients with COPD who underwent inspiratory and expiratory HRCT scans. Based on the acute respiratory events of the previous year, these patients were divided into non-AECOPD and AECOPD groups according to the presence of acute exacerbation events. Sixty-nine quantitative CT (QCT) parameters of emphysema and airway were calculated with NeuLungCARE software, and 2,000 DL features were extracted with the VGG-16 method. Logistic regression was employed to identify AECOPD patients, and an external validation cohort of 29 patients was used to assess the robustness of the results. RESULTS: Model 3-B achieved an area under the receiver operating characteristic curve (AUC) of 0.933 and 0.865 in the testing and external validation cohorts, respectively. Model 3-I obtained an AUC of 0.895 in the testing cohort and 0.774 in the external validation cohort. Model 7-B, which combined clinical characteristics, QCT parameters, and DL features, achieved the best performance, with an AUC of 0.979 in the testing cohort and robust predictability (AUC of 0.932) in the external validation cohort. Likewise, model 7-I achieved AUCs of 0.938 and 0.872 in the testing and external validation cohorts, respectively. CONCLUSIONS: DL features extracted from HRCT scans can effectively predict the acute exacerbation phenotype in COPD patients.
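
A toy version of the combined-feature logistic-regression setup, with synthetic arrays standing in for the 69 QCT parameters, the 2,000 VGG-16 features, and the cohort labels (the train/test split is arbitrary):

```python
# Minimal sketch of the "model 7" idea: concatenate QCT variables with CNN
# features and fit a logistic-regression AECOPD classifier, scored by AUC.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
qct = rng.normal(size=(219, 69))       # 69 QCT parameters per patient
deep = rng.normal(size=(219, 2000))    # 2,000 deep features per patient
y = rng.integers(0, 2, size=219)       # AECOPD vs. non-AECOPD labels

X = np.hstack([qct, deep])
clf = LogisticRegression(max_iter=1000).fit(X[:150], y[:150])
print("test AUC:", roc_auc_score(y[150:], clf.predict_proba(X[150:])[:, 1]))
```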

4.
BMC Med Imaging ; 24(1): 110, 2024 May 15.
Article in English | MEDLINE | ID: mdl-38750436

ABSTRACT

Brain tumor classification using MRI images is a crucial yet challenging task in medical imaging. Accurate diagnosis is vital for effective treatment planning but is often hindered by the complex nature of tumor morphology and variations in imaging. Traditional methodologies primarily rely on manual interpretation of MRI images, supplemented by conventional machine learning techniques. These approaches often lack the robustness and scalability needed for precise and automated tumor classification; their major limitations include a high degree of manual intervention, potential for human error, limited ability to handle large datasets, and lack of generalizability to diverse tumor types and imaging conditions. To address these challenges, we propose a federated learning-based deep learning model that leverages the power of Convolutional Neural Networks (CNNs) for automated and accurate brain tumor classification. The approach centers on a modified VGG16 architecture optimized for brain MRI images and highlights the significance of federated learning and transfer learning in the medical imaging domain. Federated learning enables decentralized model training across multiple clients without compromising data privacy, addressing the critical need for confidentiality in medical data handling. The architecture also benefits from transfer learning: a pre-trained CNN significantly enhances classification by leveraging knowledge gained from vast and diverse datasets. Our model is trained on a diverse dataset combining the figshare, SARTAJ, and Br35H datasets, employing the federated learning approach for decentralized, privacy-preserving training. The model demonstrates high precision (0.99 for glioma, 0.95 for meningioma, 1.00 for no tumor, and 0.98 for pituitary), recall, and F1-scores, outperforming existing methods. The overall accuracy stands at 98%, showcasing the model's efficacy in classifying various tumor types and highlighting the transformative potential of federated learning and transfer learning for brain tumor classification using MRI images.
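
The decentralized-training step can be illustrated with the basic federated-averaging (FedAvg) update, sketched below on toy weight tensors; in the actual system the clients would exchange the modified VGG16's weights:

```python
# Sketch of the FedAvg server step: each client fine-tunes a copy of the
# model locally, and the server averages their weights (weighted by client
# dataset size) without ever seeing the MRI data itself.
import numpy as np

def fed_avg(client_weight_lists, client_sizes):
    """Weighted average of per-client weight lists (FedAvg)."""
    total = sum(client_sizes)
    return [
        sum(w[i] * (n / total) for w, n in zip(client_weight_lists, client_sizes))
        for i in range(len(client_weight_lists[0]))
    ]

# e.g. three clients holding toy 2-tensor "models"
clients = [[np.ones((2, 2)) * k, np.ones(2) * k] for k in (1.0, 2.0, 3.0)]
merged = fed_avg(clients, client_sizes=[100, 200, 700])
print(merged[0])  # dominated by the largest client (k = 3)
```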


Subjects
Brain Neoplasms , Deep Learning , Magnetic Resonance Imaging , Humans , Brain Neoplasms/diagnostic imaging , Brain Neoplasms/classification , Magnetic Resonance Imaging/methods , Neural Networks, Computer , Machine Learning , Image Interpretation, Computer-Assisted/methods
5.
BMC Med Imaging ; 24(1): 156, 2024 Jun 24.
Article in English | MEDLINE | ID: mdl-38910241

ABSTRACT

Parkinson's disease (PD) is challenging for clinicians to diagnose accurately in the early stages. Quantitative measures of brain health can be obtained safely and non-invasively using medical imaging techniques like magnetic resonance imaging (MRI) and single photon emission computed tomography (SPECT). Accurate diagnosis of PD requires powerful machine learning and deep learning models as well as effective medical imaging tools for assessing neurological health. This study proposes four deep learning models and a hybrid model for the early detection of PD. Two standard datasets are chosen for the simulation study. To further improve performance, grey wolf optimization (GWO) is used to automatically fine-tune the models' hyperparameters. The GWO-VGG16, GWO-DenseNet, GWO-DenseNet + LSTM, GWO-InceptionV3, and GWO-VGG16 + InceptionV3 models are applied to the T1/T2-weighted and SPECT DaTscan datasets. All models performed well, reaching accuracy near or above 99%. The hybrid model (GWO-VGG16 + InceptionV3) achieved the highest accuracy of 99.94% and an AUC of 99.99% on the T1/T2-weighted dataset, and 100% accuracy with a 99.92% AUC on the SPECT DaTscan dataset.
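
A minimal sketch of grey wolf optimization on a 2-D toy objective; in the paper's setting the objective would be the validation loss over CNN hyperparameters, and the bounds, pack size, and iteration count below are illustrative:

```python
# Toy grey wolf optimization (GWO): wolves move toward the three best
# solutions (alpha, beta, delta), with the step scale "a" annealed from
# exploration toward exploitation.
import numpy as np

def gwo(obj, lo, hi, n_wolves=10, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(lo, hi, size=(n_wolves, len(lo)))
    for t in range(iters):
        order = np.argsort([obj(x) for x in X])
        alpha, beta, delta = X[order[:3]]            # three pack leaders
        a = 2 - 2 * t / iters                        # 2 -> 0 over iterations
        for i in range(n_wolves):
            moves = []
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(len(lo)), rng.random(len(lo))
                A, C = 2 * a * r1 - a, 2 * r2
                moves.append(leader - A * np.abs(C * leader - X[i]))
            X[i] = np.clip(np.mean(moves, axis=0), lo, hi)
    return min(X, key=obj)

# e.g. two "hyperparameters" with a known optimum at (-3, 0.5)
best = gwo(lambda x: (x[0] + 3) ** 2 + (x[1] - 0.5) ** 2,
           lo=np.array([-5.0, 0.0]), hi=np.array([0.0, 1.0]))
print(best)
```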


Subjects
Algorithms , Deep Learning , Magnetic Resonance Imaging , Parkinson Disease , Tomography, Emission-Computed, Single-Photon , Humans , Parkinson Disease/diagnostic imaging , Tomography, Emission-Computed, Single-Photon/methods , Magnetic Resonance Imaging/methods , Male , Female
6.
BMC Med Imaging ; 24(1): 176, 2024 Jul 19.
Article in English | MEDLINE | ID: mdl-39030496

ABSTRACT

Medical imaging stands as a critical component in diagnosing various diseases, where traditional methods often rely on manual interpretation and conventional machine learning techniques. These approaches, while effective, come with inherent limitations such as subjectivity in interpretation and constraints in handling complex image features. This research paper proposes an integrated deep learning approach utilizing pre-trained models (VGG16, ResNet50, and InceptionV3) combined within a unified framework to improve diagnostic accuracy in medical imaging. The method focuses on lung cancer detection using images resized and converted to a uniform format to optimize performance and ensure consistency across datasets. Our proposed model leverages the strengths of each pre-trained network, achieving a high degree of feature extraction and robustness by freezing the early convolutional layers and fine-tuning the deeper layers. Additionally, techniques like SMOTE and Gaussian blur are applied to address class imbalance, enhancing training on underrepresented classes. The model's performance was validated on the IQ-OTH/NCCD lung cancer dataset, collected at the Iraq-Oncology Teaching Hospital/National Center for Cancer Diseases over a period of three months in fall 2019. The proposed model achieved an accuracy of 98.18%, with notably high precision and recall across all classes. This improvement highlights the potential of integrated deep learning systems in medical diagnostics, providing a more accurate, reliable, and efficient means of disease detection.
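
The freeze/fine-tune split can be sketched on a single VGG16 backbone as below; the three-class head and learning rate are placeholders, and the paper combines three such backbones in one framework:

```python
# Sketch of the freeze/fine-tune split: early VGG16 blocks stay frozen as
# generic feature extractors, while only the deepest block is trainable.
import tensorflow as tf

base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                   input_shape=(224, 224, 3))
for layer in base.layers:
    layer.trainable = layer.name.startswith("block5")   # fine-tune block 5 only

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(3, activation="softmax"),     # e.g. 3 lung-CT classes
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()
```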


Assuntos
Aprendizado Profundo , Neoplasias Pulmonares , Humanos , Neoplasias Pulmonares/diagnóstico por imagem , Tomografia Computadorizada por Raios X/métodos , Redes Neurais de Computação
7.
Sensors (Basel) ; 24(13)2024 Jul 08.
Article in English | MEDLINE | ID: mdl-39001200

ABSTRACT

Acute lymphoblastic leukemia, commonly referred to as ALL, is a type of cancer that can affect both the blood and the bone marrow. Diagnosis is difficult, since it often calls for specialized tests such as blood tests, bone marrow aspiration, and biopsy, all of which are time-consuming and expensive. Early diagnosis of ALL is essential to start therapy in a timely and suitable manner. In recent medical diagnostics, substantial progress has been achieved through the integration of artificial intelligence (AI) and Internet of Things (IoT) devices. We introduce a new AI-based Internet of Medical Things (IoMT) framework designed to automatically identify leukemia from peripheral blood smear (PBS) images, presenting a novel deep learning-based fusion model to detect ALL. The system seamlessly delivers diagnostic reports to a centralized database, inclusive of patient-specific devices. After blood samples are collected at the hospital, the PBS images are transmitted to the cloud server through a WiFi-enabled microscopic device. In the cloud server, a fusion model capable of classifying ALL from PBS images is configured. The fusion model is trained on a dataset of 6512 original and segmented images from 89 individuals. Two input channels are used for feature extraction: the original images and the segmented images. VGG16 extracts features from the original images, whereas DenseNet-121 extracts features from the segmented images. The two output features are merged, and dense layers perform the leukemia classification. The proposed fusion model obtains an accuracy of 99.89%, a precision of 99.80%, and a recall of 99.72%, making it well suited to leukemia classification, and it outperforms several state-of-the-art Convolutional Neural Network (CNN) models. Consequently, this model has the potential to save lives and effort. For a more comprehensive simulation of the methodology, a web application (beta version) has been developed to determine the presence or absence of leukemia in individuals. The findings of this study hold significant potential for application in biomedical research, particularly in enhancing the accuracy of computer-aided leukemia detection.
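
A sketch of the two-channel fusion in the Keras functional API; VGG16 embeds the original PBS image and DenseNet-121 the segmented one before concatenation. The 224x224 RGB inputs and 256-unit dense head are assumptions, not the paper's exact configuration:

```python
# Sketch of a dual-backbone fusion model: pooled features from two
# pre-trained CNNs are concatenated before a dense classification head.
import tensorflow as tf
from tensorflow.keras import layers

def branch(backbone, name):
    inp = layers.Input((224, 224, 3), name=name)
    base = backbone(weights="imagenet", include_top=False, input_tensor=inp)
    return inp, layers.GlobalAveragePooling2D()(base.output)

orig_in, orig_feat = branch(tf.keras.applications.VGG16, "original")
seg_in, seg_feat = branch(tf.keras.applications.DenseNet121, "segmented")

x = layers.Concatenate()([orig_feat, seg_feat])
x = layers.Dense(256, activation="relu")(x)
out = layers.Dense(1, activation="sigmoid")(x)          # ALL vs. healthy
model = tf.keras.Model([orig_in, seg_in], out)
model.compile("adam", "binary_crossentropy", metrics=["accuracy"])
model.summary()
```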


Subjects
Deep Learning , Internet of Things , Humans , Precursor Cell Lymphoblastic Leukemia-Lymphoma/diagnosis , Artificial Intelligence , Leukemia/diagnosis , Leukemia/classification , Leukemia/pathology , Algorithms , Image Processing, Computer-Assisted/methods , Neural Networks, Computer
8.
Sensors (Basel) ; 24(11)2024 May 26.
Article in English | MEDLINE | ID: mdl-38894210

ABSTRACT

In hazardous environments like mining sites, mobile inspection robots play a crucial role in condition monitoring (CM) tasks, particularly by collecting various kinds of data, such as images. However, the sheer volume of collected image samples and existing noise pose challenges in processing and visualizing thermal anomalies. Recognizing these challenges, our study addresses the limitations of industrial big data analytics for mobile robot-generated image data. We present a novel, fully integrated approach involving a dimension reduction procedure. This includes a semantic segmentation technique utilizing the pre-trained VGG16 CNN architecture for feature selection, followed by random forest (RF) and extreme gradient boosting (XGBoost) classifiers for the prediction of the pixel class labels. We also explore unsupervised learning using the PCA-K-means method for dimension reduction and classification of unlabeled thermal defects based on anomaly severity. Our comprehensive methodology aims to efficiently handle image-based CM tasks in hazardous environments. To validate its practicality, we applied our approach in a real-world scenario, and the results confirm its robust performance in processing and visualizing thermal data collected by mobile inspection robots. This affirms the effectiveness of our methodology in enhancing the overall performance of CM processes.
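
The unsupervised PCA-K-means branch can be sketched as below on synthetic flattened thermal patches; the patch size, component count, and three severity clusters are assumptions:

```python
# Sketch of the unsupervised branch: flatten thermal-image patches,
# compress with PCA, then cluster with K-means so that clusters can be
# read as anomaly-severity groups.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline

patches = np.random.default_rng(1).random((500, 32 * 32))  # 500 flattened patches
pipe = make_pipeline(PCA(n_components=10), KMeans(n_clusters=3, n_init=10))
labels = pipe.fit_predict(patches)
print(np.bincount(labels))  # patch count per severity cluster
```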

9.
Int J Mol Sci ; 25(3)2024 Jan 26.
Article in English | MEDLINE | ID: mdl-38338828

ABSTRACT

Skin cancer is a severe and potentially lethal disease, and early detection is critical for successful treatment. Traditional procedures for diagnosing skin cancer are expensive, time-intensive, and require the expertise of a medical practitioner. In recent years, many researchers have developed artificial intelligence (AI) tools, including shallow and deep machine learning-based approaches, to diagnose skin cancer. However, AI-based skin cancer diagnosis faces challenges of complexity, low reproducibility, and poor explainability. To address these problems, we propose a novel Grid-Based Structural and Dimensional Explainable Deep Convolutional Neural Network for accurate and interpretable skin cancer classification. The model employs adaptive thresholding to extract the region of interest (ROI), using its dynamic capabilities to enhance the accuracy of identifying cancerous regions. The VGG-16 architecture extracts the hierarchical characteristics of skin lesion images, leveraging its recognized capabilities for deep feature extraction. A grid structure captures spatial relationships within lesions, while the dimensional features extract relevant information from the various image channels. An Adaptive Intelligent Coney Optimization (AICO) algorithm is employed for self-feature-selected optimization and hyperparameter fine-tuning, dynamically adapting the model architecture to optimize feature extraction and classification. The model was trained and tested on the ISIC dataset of 10,015 dermoscopy images and the MNIST dataset of 2357 images of malignant and benign oncological diseases. The experimental results demonstrate that the model achieved accuracy and CSI values of 0.96 and 0.97 for TP 80 on the ISIC dataset, which is 17.70% and 16.49% higher than lightweight CNN, 20.83% and 19.59% higher than DenseNet, 18.75% and 17.53% higher than CNN, 6.25% and 6.18% higher than EfficientNet-B0, 5.21% and 5.15% higher than ECNN, 2.08% and 2.06% higher than COA-CAN, and 5.21% and 5.15% higher than ARO-ECNN. Additionally, the AICO self-feature-selected ECNN model exhibited minimal FPR and FNR of 0.03 and 0.02, respectively. The model attained a loss of 0.09 for ISIC and 0.18 for the MNIST dataset, indicating that it outperforms existing techniques. The proposed model improves accuracy, interpretability, and robustness for skin cancer classification, ultimately aiding clinicians in early diagnosis and treatment.
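
A sketch of the adaptive-thresholding ROI step with OpenCV, using a synthetic grayscale array in place of a dermoscopy image; the block size and offset are illustrative:

```python
# Sketch: threshold a grayscale lesion image locally (adaptive threshold),
# then crop to the largest contour; the crop is what a CNN such as VGG-16
# would consume downstream.
import cv2
import numpy as np

img = (np.random.default_rng(2).random((256, 256)) * 255).astype(np.uint8)
mask = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                             cv2.THRESH_BINARY_INV, blockSize=31, C=5)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
roi = img[y:y + h, x:x + w]                 # region passed on for feature extraction
print(roi.shape)
```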


Subjects
Bass , Facial Neoplasms , Skin Neoplasms , Animals , Artificial Intelligence , Reproducibility of Results , Skin , Neural Networks, Computer , Skin Neoplasms/diagnosis
10.
Electromagn Biol Med ; 43(1-2): 1-18, 2024 Apr 02.
Article in English | MEDLINE | ID: mdl-38217513

ABSTRACT

Magnetic resonance imaging (MRI) is a powerful tool for tumor diagnosis in the human brain. Here, MRI images are used to detect brain tumors and classify regions as meningioma, glioma, pituitary, or normal. Numerous methods for brain tumor detection have been proposed previously, but none categorizes brain tumors accurately, and they consume considerable computation time. To address these problems, an Evolutionary Gravitational Neocognitron Neural Network optimized with the Marine Predators Algorithm is proposed in this article for MRI Brain Tumor Classification (EGNNN-VGG16-MPA-MRI-BTC). The brain MRI images are taken from the BraTS MRI image dataset and pre-processed using a Savitzky-Golay denoising approach. Features, such as grey-level and Haralick texture features, are extracted using the visual geometry group network (VGG16). The extracted features are given to the EGNNN classifier, which categorizes the brain tumor as glioma, meningioma, pituitary gland, or normal. The Batch Normalization (BN) layer of the EGNNN is removed and incorporated into the VGG16 layer, and the Marine Predators Algorithm (MPA) optimizes the weight parameters of the EGNNN. The simulation is implemented in MATLAB. The EGNNN-VGG16-MPA-MRI-BTC method attains 38.98%, 46.74%, and 23.27% higher accuracy, 24.24%, 37.82%, and 13.92% higher precision, and 26.94%, 47.04%, and 38.94% higher sensitivity compared with the existing AlexNet-SVM-MRI-BTC, RESNET-SGD-MRI-BTC, and MobileNet-V2-MRI-BTC models, respectively.


Subjects
Algorithms , Brain Neoplasms , Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Neural Networks, Computer , Magnetic Resonance Imaging/methods , Brain Neoplasms/diagnostic imaging , Humans , Image Processing, Computer-Assisted/methods , Gravitation , Biological Evolution
11.
Environ Monit Assess ; 196(4): 406, 2024 Apr 02.
Article in English | MEDLINE | ID: mdl-38561525

ABSTRACT

This work introduces a novel approach to remotely count and monitor potato plants in high-altitude regions of India using an unmanned aerial vehicle (UAV) and an artificial intelligence (AI)-based deep learning (DL) network. The proposed methodology uses a self-created AI model called PlantSegNet, based on the VGG-16 and U-Net architectures, to analyze aerial RGB images captured by a UAV. To evaluate the approach, a self-created dataset of aerial images from different planting blocks is used to train and test the PlantSegNet model. The experimental results demonstrate the effectiveness and validity of the proposed method in challenging environmental conditions: pixel accuracy of 98.65%, a loss of 0.004, an Intersection over Union (IoU) of 0.95, and an F1-score of 0.94. Comparison with existing models such as Mask R-CNN and U-Net demonstrates that PlantSegNet outperforms both on these performance parameters. The methodology provides a reliable solution for remote crop counting in challenging terrain, which can benefit farmers in the Himalayan regions of India. The methods and results presented here offer a promising foundation for advanced decision support systems for planning planting operations.
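
A compact sketch of a VGG-16-encoder U-Net in the spirit of PlantSegNet; the skip depths and decoder widths PlantSegNet actually uses are not stated here, so the choices below are illustrative:

```python
# Sketch: reuse two VGG16 feature depths as encoder/skip features and
# upsample back to a per-pixel plant/background mask.
import tensorflow as tf
from tensorflow.keras import layers

base = tf.keras.applications.VGG16(include_top=False, weights="imagenet",
                                   input_shape=(256, 256, 3))
skip = base.get_layer("block3_conv3").output     # 64x64 features
deep = base.get_layer("block5_conv3").output     # 16x16 features

x = layers.UpSampling2D(4)(deep)                 # 16 -> 64
x = layers.Concatenate()([x, skip])
x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
x = layers.UpSampling2D(4)(x)                    # 64 -> 256
mask = layers.Conv2D(1, 1, activation="sigmoid")(x)

model = tf.keras.Model(base.input, mask)
model.compile("adam", "binary_crossentropy", metrics=["accuracy"])
model.summary()
```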


Assuntos
Inteligência Artificial , Dispositivos Aéreos não Tripulados , Humanos , Monitoramento Ambiental , Fazendeiros , Índia
12.
J Prosthodont ; 33(7): 645-654, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38566564

ABSTRACT

PURPOSE: The study aimed to compare the performance of four pre-trained convolutional neural networks in recognizing seven distinct prosthodontic scenarios involving the maxilla, as a preliminary step toward developing an artificial intelligence (AI)-powered prosthesis design system. MATERIALS AND METHODS: Seven classes were considered for recognition: cleft palate, dentulous maxillectomy, edentulous maxillectomy, reconstructed maxillectomy, completely dentulous, partially edentulous, and completely edentulous. Utilizing transfer learning and fine-tuned hyperparameters, four AI models (VGG16, Inception-ResNet-V2, DenseNet-201, and Xception) were employed. The dataset, consisting of 3541 preprocessed intraoral occlusal images, was divided into training, validation, and test sets. Model performance metrics encompassed accuracy, precision, recall, F1 score, area under the receiver operating characteristic curve (AUC), and the confusion matrix. RESULTS: VGG16, Inception-ResNet-V2, DenseNet-201, and Xception demonstrated comparable performance, with maximum test accuracies of 0.92, 0.90, 0.94, and 0.95, respectively. Xception and DenseNet-201 slightly outperformed the other models, particularly Inception-ResNet-V2. Precision, recall, and F1 scores exceeded 90% for most classes in Xception and DenseNet-201, and the average AUC values for all models ranged between 0.98 and 1.00. CONCLUSIONS: While DenseNet-201 and Xception demonstrated superior performance, all models consistently achieved diagnostic accuracy exceeding 90%, highlighting their potential in dental image analysis. This AI application could help assign work based on difficulty levels and enable an automated diagnosis system at patient admission. It also facilitates prosthesis design by integrating the necessary prosthesis morphology, oral function, and treatment difficulty. Furthermore, it tackles dataset-size challenges in model optimization, providing valuable insights for future research.


Assuntos
Maxila , Redes Neurais de Computação , Prostodontia , Humanos , Maxila/diagnóstico por imagem , Prostodontia/métodos , Inteligência Artificial
13.
Sensors (Basel) ; 23(10)2023 May 12.
Article in English | MEDLINE | ID: mdl-37430604

ABSTRACT

Brain tumors, caused by the uncontrollable proliferation of brain cells inside the skull, are among the most severe types of cancer. Hence, a fast and accurate tumor detection method is critical for the patient's health. Many automated artificial intelligence (AI) methods have recently been developed to diagnose tumors, but these approaches yield poor performance; hence, there is a need for an efficient technique for precise diagnosis. This paper suggests a novel approach to brain tumor detection via an ensemble of deep and hand-crafted feature vectors (FV). The novel FV combines hand-crafted features based on the GLCM (gray-level co-occurrence matrix) with deep features based on VGG16. The ensemble FV contains more robust features than either vector independently, which improves the suggested method's discriminating capability. The proposed FV is then classified using support vector machine (SVM) and k-nearest neighbor (KNN) classifiers. The framework achieved its highest accuracy of 99% on the ensemble FV. The results indicate the reliability and efficacy of the proposed methodology; hence, radiologists can use it to detect brain tumors through MRI (magnetic resonance imaging). The results show the robustness of the proposed method, which can be deployed in a real environment to detect brain tumors from MRI images accurately. In addition, the performance of our model was validated via cross-tabulated data.
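
The hand-crafted half of the ensemble FV can be sketched with scikit-image's GLCM utilities; the distances, angles, and property set below are typical choices, not necessarily the paper's:

```python
# Sketch: GLCM texture statistics (scikit-image >= 0.19 naming), which
# would be concatenated with VGG16 deep features before the SVM/KNN step.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

img = (np.random.default_rng(3).random((128, 128)) * 255).astype(np.uint8)
glcm = graycomatrix(img, distances=[1], angles=[0, np.pi / 2],
                    levels=256, symmetric=True, normed=True)
handcrafted = np.hstack([graycoprops(glcm, p).ravel()
                         for p in ("contrast", "homogeneity", "energy", "correlation")])
# ensemble_fv = np.hstack([handcrafted, vgg16_features])  # then SVM / KNN
print(handcrafted.shape)   # (8,) = 4 properties x 2 angles
```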


Assuntos
Inteligência Artificial , Neoplasias Encefálicas , Humanos , Encéfalo , Neoplasias Encefálicas/diagnóstico por imagem , Reprodutibilidade dos Testes
14.
Sensors (Basel) ; 23(12)2023 Jun 19.
Article in English | MEDLINE | ID: mdl-37420891

ABSTRACT

Diabetic retinopathy (DR) is a common complication of long-term diabetes, affecting the human eye and potentially leading to permanent blindness. Early detection of DR is crucial for effective treatment, as symptoms often manifest in later stages. Manual grading of retinal images is time-consuming, prone to errors, and not patient-friendly. In this study, we propose two deep learning (DL) architectures for DR detection and classification: a hybrid network combining VGG16 and an XGBoost classifier, and the DenseNet 121 network. To evaluate the two DL models, we preprocessed a collection of retinal images obtained from the APTOS 2019 Blindness Detection Kaggle dataset. This dataset exhibits an imbalanced image class distribution, which we addressed through appropriate balancing techniques. The performance of the models was assessed in terms of accuracy. The hybrid network achieved an accuracy of 79.50%, while the DenseNet 121 model achieved 97.30%; a comparative analysis with existing methods on the same dataset confirmed the superior performance of the DenseNet 121 network. These findings demonstrate the potential of DL architectures for early detection and classification of DR, and the implementation of such automated methods can significantly improve the efficiency and accuracy of DR diagnosis, benefiting both healthcare providers and patients.
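
A sketch of the hybrid idea: a frozen VGG16 used as a feature extractor feeding an XGBoost classifier over the five DR grades. The sample counts and labels are synthetic stand-ins for the APTOS images:

```python
# Sketch: pooled VGG16 features (512-D) into an XGBoost multiclass model.
import numpy as np
import tensorflow as tf
from xgboost import XGBClassifier

extractor = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                        pooling="avg")        # 512-D output
imgs = np.random.default_rng(7).random((40, 224, 224, 3)).astype("float32")
feats = extractor.predict(tf.keras.applications.vgg16.preprocess_input(imgs * 255))

labels = np.random.default_rng(8).integers(0, 5, size=40)     # DR grades 0-4
clf = XGBClassifier(n_estimators=50).fit(feats[:30], labels[:30])
print(clf.score(feats[30:], labels[30:]))
```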


Assuntos
Aprendizado Profundo , Diabetes Mellitus , Retinopatia Diabética , Humanos , Retinopatia Diabética/diagnóstico por imagem , Redes Neurais de Computação , Cegueira , Pessoal de Saúde
15.
Sensors (Basel) ; 23(17)2023 Aug 31.
Article in English | MEDLINE | ID: mdl-37688036

ABSTRACT

Recent studies show that filters in convolutional neural networks (CNNs) have low color selectivity on datasets of natural scenes such as ImageNet. CNNs, bio-inspired by the visual cortex, are characterized by a hierarchical learning structure that appears to gradually transform the representation space. Inspired by the direct connection between the LGN and V4, which allows V4 to handle low-level information close to the trichromatic input in addition to processed information arriving from V2/V3, we propose adding a long skip connection (LSC) between the first and last blocks of the feature extraction stage, allowing deeper parts of the network to receive information from shallower layers. This type of connection improves classification accuracy by combining simple visual and complex abstract features to create more color-selective ones. We applied this strategy to classic CNN architectures and quantitatively and qualitatively analyzed the improvement in accuracy with a focus on color selectivity. The results show that skip connections generally improve accuracy, but the LSC improves it even more and enhances the color selectivity of the original CNN architectures. As a side result, we propose a new color representation procedure for organizing and filtering feature maps, making their visualization more manageable for qualitative color-selectivity analysis.
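
The LSC idea can be sketched on a toy three-block CNN: pool the first block's output down to the last block's resolution and concatenate, so deep layers also see near-input color information. The layer sizes are illustrative, not the paper's architectures:

```python
# Sketch of a long skip connection (LSC) from the first to the last block.
import tensorflow as tf
from tensorflow.keras import layers

inp = layers.Input((64, 64, 3))
b1 = layers.Conv2D(16, 3, padding="same", activation="relu")(inp)   # first block
x = layers.MaxPooling2D(2)(b1)
x = layers.Conv2D(32, 3, padding="same", activation="relu")(x)
x = layers.MaxPooling2D(2)(x)
x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)      # last block

lsc = layers.AveragePooling2D(4)(b1)            # 64x64 -> 16x16 to match
x = layers.Concatenate()([x, lsc])              # simple-visual + complex-abstract
out = layers.Dense(10, activation="softmax")(layers.GlobalAveragePooling2D()(x))
tf.keras.Model(inp, out).summary()
```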

16.
Sensors (Basel) ; 23(19)2023 Oct 02.
Article in English | MEDLINE | ID: mdl-37837049

ABSTRACT

Flat foot is a postural deformity in which the plantar part of the foot is either completely or partially in contact with the ground. In recent clinical practice, X-ray radiographs have been introduced to detect flat feet because they are more affordable for many clinics than specialized devices. This research develops an automated model that detects flat-foot cases and their severity levels from lateral foot X-ray images by measuring three different foot angles: the Arch Angle, Meary's Angle, and the Calcaneal Inclination Angle. Since these angles are formed by connecting a set of points on the image, template matching is used to locate a set of candidate points for each angle, and a classifier then selects the points with the highest predicted likelihood of being correct. Inspired by the literature, this research constructed and compared two models: a Convolutional Neural Network-based model and a Random Forest-based model. The models were trained on 8000 images and tested on 240 unseen cases. The highest overall accuracy rate, 93.13%, was achieved by the Random Forest model, with mean values across all foot types (normal foot, mild flat foot, and moderate flat foot) of 93.38% precision, 92.56% recall, 96.46% specificity, 95.42% accuracy, and a 92.90% F-score. The main conclusions drawn from this research are: (1) using transfer learning (VGG-16) as a feature extractor only, together with image augmentation, greatly increased the overall accuracy rate; and (2) relying on three different foot angles gives more accurate estimations than measuring a single foot angle.
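
The two geometric building blocks (template matching for candidate points, and an angle computed from three landmarks) can be sketched as below; the template crop and landmark coordinates are synthetic stand-ins for X-ray data:

```python
# Sketch: cv2.matchTemplate proposes a candidate landmark location, and
# three landmarks define a foot angle such as Meary's angle.
import cv2
import numpy as np

xray = (np.random.default_rng(4).random((400, 600)) * 255).astype(np.uint8)
template = xray[100:140, 200:240].copy()            # toy template crop
scores = cv2.matchTemplate(xray, template, cv2.TM_CCOEFF_NORMED)
_, best_score, _, best_loc = cv2.minMaxLoc(scores)  # top candidate point

def angle_at(b, a, c):
    """Angle ABC in degrees, from three landmark points."""
    v1, v2 = np.asarray(a) - b, np.asarray(c) - b
    cosang = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cosang, -1, 1)))

print(best_loc, angle_at((0, 0), (1, 0), (0, 1)))   # location, 90.0
```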


Assuntos
Calcâneo , Pé Chato , Humanos , Pé Chato/diagnóstico por imagem , Pé/diagnóstico por imagem , Radiografia
17.
Multimed Syst ; 29(2): 739-751, 2023.
Article in English | MEDLINE | ID: mdl-36310764

ABSTRACT

The pandemic caused by SARS-CoV-2, which originated in 2019, continues to wreak serious havoc on the global population's health, economy, and livelihood. A critical way to suppress and restrain this pandemic is early detection of COVID-19, which helps control the virus. Chest X-rays are one of the more straightforward ways to detect COVID-19 compared to standard methods like CT scans and RT-PCR diagnosis, which are complex, expensive, and time-consuming. Our survey of the literature shows that researchers are actively working on efficient deep learning models to produce unbiased detection of COVID-19 from chest X-ray images. In this work, we propose a novel convolutional neural network model based on supervised classification that simultaneously computes identification and verification losses. We adopt a transfer learning approach using models pre-trained on the ImageNet dataset, such as AlexNet and VGG16, as backbone models, and use data augmentation techniques to address class imbalance and boost the classifier's performance. Our proposed classifier architecture ensures unbiased and highly accurate results, outperforming existing deep learning models for COVID-19 detection from chest X-ray images and achieving state-of-the-art performance. It shows strong, robust performance and proves easily deployable and scalable, thereby increasing the efficiency of analyzing chest X-ray images for accurate coronavirus detection.

18.
Adv Eng Softw ; 175: 103317, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36311489

ABSTRACT

The Coronavirus (COVID-19) has become a critical and extreme epidemic because of its international dissemination. COVID-19 is among the world's most serious health, economic, and survival dangers; because it is infectious, this disease affects not only a single country but the entire planet. COVID-19 spreads at a much faster rate than usual influenza. Because of its high transmissibility and the difficulty of early diagnosis, COVID-19 is not easy to manage. The popularly used RT-PCR method for COVID-19 diagnosis may produce false negatives. COVID-19 can be detected non-invasively using medical imaging procedures such as chest CT and chest X-ray. Deep learning is the most effective machine learning approach for examining large quantities of chest computed tomography (CT) images and can significantly affect COVID-19 screening. The convolutional neural network (CNN) is one of the most popular deep learning techniques, gaining traction for its potential to transform several spheres of human life. This research aims to develop transfer-learning-enhanced CNN framework models for detecting COVID-19 from CT scan images. Though trained on minimal datasets, these techniques were demonstrated to be effective in detecting the presence of COVID-19. This work examines several deep transfer-learning-based CNN approaches for detecting COVID-19 in chest CT images. VGG16, VGG19, DenseNet121, InceptionV3, Xception, and ResNet50 are the foundation models used. Each model's performance was evaluated using a confusion matrix and various performance measures such as accuracy, recall, precision, F1-score, loss, and ROC. The VGG16 model performed much better than the other models in this study (98.00% accuracy). The promising experimental outcomes reveal the merits of the proposed model for detecting and monitoring COVID-19 patients, and could help practitioners and academics create a tool to help health professionals decide on the best course of therapy.
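
The evaluation step shared by the six backbones can be sketched with scikit-learn; the predictions below are simulated placeholders, not the paper's outputs:

```python
# Sketch: confusion matrix plus the scalar metrics reported above, computed
# on simulated binary (COVID vs. non-COVID) predictions.
import numpy as np
from sklearn.metrics import (accuracy_score, confusion_matrix, f1_score,
                             precision_score, recall_score, roc_auc_score)

rng = np.random.default_rng(9)
y_true = rng.integers(0, 2, size=200)
y_prob = np.clip(rng.normal(0.3 + 0.4 * y_true, 0.2), 0, 1)  # noisy scores
y_pred = (y_prob > 0.5).astype(int)

print(confusion_matrix(y_true, y_pred))
for name, fn in [("accuracy", accuracy_score), ("precision", precision_score),
                 ("recall", recall_score), ("f1", f1_score)]:
    print(name, round(fn(y_true, y_pred), 3))
print("ROC AUC", round(roc_auc_score(y_true, y_prob), 3))
```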

19.
Cluster Comput ; 26(2): 1389-1403, 2023.
Article in English | MEDLINE | ID: mdl-36034678

ABSTRACT

Coronavirus disease (COVID-19) is spreading rapidly worldwide. Recent studies show that radiological images contain accurate data for detecting the coronavirus. This paper proposes a pre-trained convolutional neural network (VGG16) with Capsule Neural Networks (CapsNet) to detect COVID-19 from unbalanced data sets. CapsNet is chosen for its ability to encode features such as perspective, orientation, and size. The Synthetic Minority Over-sampling Technique (SMOTE) was employed to ensure that new samples were generated close to the sample center, avoiding the production of outliers or changes in the data distribution. As results may change with the capsule network parameters (capsule dimensionality and routing number), Gaussian optimization was used to tune these parameters. Four experiments were conducted: (1) CapsNet with the unbalanced data sets, (2) CapsNet with data sets balanced by class weight, (3) CapsNet with data sets balanced by SMOTE, and (4) CapsNet hyperparameter optimization with data sets balanced by SMOTE. Performance improved to an accuracy of 96.58% and an F1-score of 97.08%, a competitive optimized model compared with related models.
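
The SMOTE balancing step can be sketched with imbalanced-learn; the feature matrix and 5:1 imbalance below are synthetic stand-ins:

```python
# Sketch: SMOTE synthesizes minority-class samples by interpolating between
# nearest neighbors, rather than duplicating existing samples.
import numpy as np
from collections import Counter
from imblearn.over_sampling import SMOTE

rng = np.random.default_rng(5)
X = rng.normal(size=(300, 128))                     # e.g. pooled CNN features
y = np.array([0] * 250 + [1] * 50)                  # 5:1 class imbalance

X_bal, y_bal = SMOTE(random_state=0).fit_resample(X, y)
print(Counter(y), "->", Counter(y_bal))             # both classes now 250
```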

20.
Sensors (Basel) ; 22(21)2022 Oct 26.
Article in English | MEDLINE | ID: mdl-36365886

ABSTRACT

The aim of this study was to assess the possibility of using deep convolutional neural networks (DCNNs) to develop an effective method for diagnosing osteoporosis from CT images of the spine. The research material comprised CT images of L1 spongy tissue from 100 patients (50 healthy and 50 diagnosed with osteoporosis). Six pre-trained DCNN architectures with different topological depths (VGG16, VGG19, MobileNetV2, Xception, ResNet50, and InceptionResNetV2) were used in the study. The best results were obtained for the VGG16 model, which has the lowest topological depth (ACC = 95%, TPR = 96%, and TNR = 94%). A specific challenge in the study was the relatively small (for deep learning) number of observations (400 images). This problem was addressed by using DCNN models pre-trained on a large dataset together with a data augmentation technique. The obtained results allow us to conclude that transfer learning yields satisfactory results when constructing deep models for diagnosing osteoporosis from small datasets of spine CT images.
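
The augmentation used to stretch the 400-image dataset can be sketched with Keras's ImageDataGenerator; the transform ranges below are plausible placeholders, not the paper's settings:

```python
# Sketch: yield randomly transformed copies of each CT slice at train time.
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

aug = ImageDataGenerator(rotation_range=10, width_shift_range=0.05,
                         height_shift_range=0.05, zoom_range=0.1,
                         horizontal_flip=True)
batch = np.random.default_rng(6).random((8, 224, 224, 3))   # stand-in CT slices
augmented = next(aug.flow(batch, batch_size=8, shuffle=False))
print(augmented.shape)  # (8, 224, 224, 3), randomly perturbed
```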


Subjects
Neural Networks, Computer , Osteoporosis , Humans , Osteoporosis/diagnostic imaging