1.
Cogn Neurodyn ; 18(2): 383-404, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38699621

ABSTRACT

Fibromyalgia is a soft tissue rheumatism with a significant qualitative and quantitative impact on sleep macro- and microarchitecture. The primary objective of this study is to automatically distinguish healthy individuals from those with fibromyalgia using sleep electroencephalography (EEG) signals. The study focused on the automatic detection and interpretation of EEG signals obtained from fibromyalgia patients. In this work, the sleep EEG signals were divided into 15-s segments, yielding a total of 5358 EEG segments (3411 healthy control and 1947 fibromyalgia) from 16 fibromyalgia and 16 normal subjects. Our developed model has a multilevel feature extraction architecture: we used a new feature extractor called GluPat, inspired by the chemical structure of glucose, together with a new pooling approach inspired by the D'Hondt seat-allocation method. Furthermore, our proposed method incorporated feature selection using iterative neighborhood component analysis and iterative Chi2 methods. These selection mechanisms enabled the identification of discriminative features for accurate classification. In the classification phase, we employed support vector machine and k-nearest neighbor algorithms to classify the EEG signals with leave-one-record-out (LORO) and tenfold cross-validation (CV) techniques. All results were calculated channel-wise, and iterative majority voting was used to obtain generalized results; the best results were determined using a greedy algorithm. The developed model achieved detection accuracies of 100% and 91.83% with tenfold and LORO CV strategies, respectively, using sleep stage (2 + 3) EEG signals. Our model is simple and has linear time complexity.
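The segmentation step described above (splitting each EEG record into non-overlapping 15-s windows) can be sketched as follows. This is an illustrative pure-Python sketch; the sampling rate and dummy signal are assumptions, not values from the paper.

```python
# Sketch: split a 1-D signal into non-overlapping 15-second segments,
# mirroring the segmentation step of the sleep EEG pipeline.

def segment_signal(signal, fs, seg_seconds=15):
    """Return a list of non-overlapping segments of seg_seconds length.

    Any trailing samples that do not fill a full segment are dropped.
    """
    seg_len = fs * seg_seconds
    n_full = len(signal) // seg_len
    return [signal[i * seg_len:(i + 1) * seg_len] for i in range(n_full)]

fs = 100                                # assumed sampling rate (Hz)
signal = list(range(fs * 46))           # 46 s of dummy samples
segments = segment_signal(signal, fs)
print(len(segments), len(segments[0]))  # 3 1500
```

With a 100 Hz sampling rate, each 15-s segment holds 1500 samples, and a 46-s record yields three full segments (the final 1 s is discarded).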

2.
J Digit Imaging ; 36(3): 973-987, 2023 06.
Article in English | MEDLINE | ID: mdl-36797543

ABSTRACT

Modern computer vision algorithms are based on convolutional neural networks (CNNs), and both end-to-end learning and transfer learning modes have been used with CNNs for image classification. Thus, automated brain tumor classification models have been proposed by deploying CNNs to help medical professionals. Our primary objective is to increase classification performance using CNNs. Therefore, a patch-based deep feature engineering model has been proposed in this work. Patch division techniques have been used to attain high classification performance, and variable-sized patches have achieved good results. In this work, we have used three patch sizes (32 × 32, 56 × 56, 112 × 112). Six feature vectors have been obtained using these patches and two layers of the pretrained ResNet50 (the global average pooling and fully connected layers). In the feature selection phase, three selectors (neighborhood component analysis (NCA), Chi2, and ReliefF) have been used, and 18 final feature vectors have been obtained. By deploying k-nearest neighbors (kNN), 18 results have been calculated. Iterative hard majority voting (IHMV) has been applied to compute the general classification accuracy of this framework. This model combines multiple patch sizes, feature extractors (the two ResNet50 layers), and selectors into a framework we have named PatchResNet. A public brain image dataset containing four classes (glioblastoma multiforme (GBM), meningioma, pituitary tumor, healthy) has been used to develop the proposed PatchResNet model. Our proposed PatchResNet attained 98.10% classification accuracy on this public brain tumor image dataset. The developed PatchResNet model obtained high classification accuracy and has the advantage of being a self-organized framework: the proposed method can select the best-performing prediction vectors automatically and achieve high image classification performance.
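The iterative hard majority voting (IHMV) step described above, which fuses the 18 prediction vectors, can be sketched as follows. This is a minimal pure-Python sketch under the common formulation of IHMV (rank vectors by accuracy, then vote over the top-k for increasing k and keep the best fusion); function names and the toy labels are illustrative assumptions, not the paper's code.

```python
from collections import Counter

def majority_vote(pred_vectors):
    """Hard majority vote across prediction vectors (lists of labels)."""
    return [Counter(col).most_common(1)[0][0] for col in zip(*pred_vectors)]

def accuracy(pred, truth):
    return sum(p == t for p, t in zip(pred, truth)) / len(truth)

def ihmv(pred_vectors, truth):
    """Iterative hard majority voting: sort vectors by accuracy, vote over
    the top-k vectors for k = 3..n, and keep the best-scoring fusion."""
    ranked = sorted(pred_vectors, key=lambda v: accuracy(v, truth), reverse=True)
    return max(
        (majority_vote(ranked[:k]) for k in range(3, len(ranked) + 1)),
        key=lambda v: accuracy(v, truth),
    )

truth = [0, 1, 0, 1]
preds = [[0, 1, 0, 1], [0, 1, 0, 0], [0, 1, 1, 1], [1, 1, 0, 1]]
print(ihmv(preds, truth))  # → [0, 1, 0, 1]
```

Here three imperfect vectors plus one perfect one fuse into a vector matching the ground truth, which is the self-organizing behavior the abstract attributes to the framework.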


Subjects
Brain Neoplasms, Neural Networks (Computer), Humans, Algorithms, Brain Neoplasms/diagnostic imaging, Magnetic Resonance Imaging, Brain
3.
Sci Rep ; 12(1): 17297, 2022 10 14.
Article in English | MEDLINE | ID: mdl-36241674

ABSTRACT

Pain intensity classification using facial images is a challenging problem in computer vision research. This work proposed a patch and transfer learning-based model to classify various pain intensities using facial images. The input facial images were segmented into dynamic-sized horizontal patches, or "shutter blinds". A lightweight deep network, DarkNet19, pre-trained on ImageNet1K, was used to generate deep features from the shutter blinds and from the undivided, resized input facial image. The most discriminative features were selected from these deep features using iterative neighborhood component analysis and then fed to a standard shallow fine k-nearest neighbor classifier for classification using tenfold cross-validation. The proposed shutter blinds-based model was trained and tested on datasets derived from two public databases (the University of Northern British Columbia-McMaster Shoulder Pain Expression Archive Database and the Denver Intensity of Spontaneous Facial Action Database), both of which comprised four pain intensity classes labeled by human experts using validated facial action coding system methodology. Our shutter blinds-based classification model attained more than 95% overall accuracy on both datasets. This excellent performance suggests that the automated pain intensity classification model can be deployed to assist doctors in the non-verbal detection of pain using facial images in various situations (e.g., non-communicative patients or during surgery). Such a system can facilitate timely detection and management of pain.
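The "shutter blinds" division described above (dynamic-sized horizontal strips) can be sketched as follows. This is an illustrative pure-Python sketch; the strip count and the even-as-possible height split are assumptions for demonstration, not the paper's exact parameters.

```python
def shutter_blinds(image_rows, n_blinds=5):
    """Split an image (a list of rows) into n horizontal strips
    ("shutter blinds"). Strip heights vary by at most one row when
    the image height is not evenly divisible."""
    h = len(image_rows)
    base, extra = divmod(h, n_blinds)
    blinds, start = [], 0
    for i in range(n_blinds):
        height = base + (1 if i < extra else 0)  # dynamic strip height
        blinds.append(image_rows[start:start + height])
        start += height
    return blinds

img = [[0] * 4 for _ in range(11)]      # dummy 11-row image
print([len(b) for b in shutter_blinds(img)])  # [3, 2, 2, 2, 2]
```

Each strip (plus the undivided image) would then be resized and passed through the pre-trained network to produce one deep feature vector per "blind".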


Subjects
Algorithms, Face, British Columbia, Databases, Factual, Humans, Pain/diagnosis
4.
Diagnostics (Basel) ; 12(8)2022 Aug 15.
Article in English | MEDLINE | ID: mdl-36010325

ABSTRACT

Diabetic retinopathy (DR) is a common complication of diabetes that can lead to progressive vision loss. Regular surveillance with fundal photography, early diagnosis, and prompt intervention are paramount to reducing the incidence of DR-induced vision loss. However, manual interpretation of fundal photographs is subject to human error. In this study, a new method based on horizontal and vertical patch division was proposed for the automated classification of DR images on fundal photographs. The novel aspects of this study are as follows: we proposed a new non-fixed-size patch division model to obtain high classification results, and we collected a new fundus image dataset. Two datasets were used to test the model: a newly collected three-class (normal, non-proliferative DR, and proliferative DR) dataset comprising 2355 DR images, and the established open-access five-class Asia Pacific Tele-Ophthalmology Society (APTOS) 2019 dataset comprising 3662 images. Two analysis scenarios, Case 1 and Case 2, with three classes (normal, non-proliferative DR, and proliferative DR) and five classes (normal, mild DR, moderate DR, severe DR, and proliferative DR), respectively, were derived from the APTOS 2019 dataset. These datasets and cases were used to demonstrate the general classification performance of our proposal. By applying transfer learning, the last fully connected and global average pooling layers of the DenseNet201 architecture were used to extract deep features from each input DR image and from each of the eight subdivided horizontal and vertical patches. The most discriminative features were then selected using neighborhood component analysis and fed to a standard shallow cubic support vector machine for classification. On our new DR dataset, the model obtained 94.06% and 91.55% accuracy for three-class classification with 80:20 hold-out validation and 10-fold cross-validation, respectively.
The proposed model is thus a new patch-based deep-feature engineering model, and it can be considered a cognitive model since it uses efficient methods in each phase. Similar excellent results were seen for three-class classification with the Case 1 dataset. In addition, the model attained 87.43% and 84.90% five-class classification accuracy using 80:20 hold-out validation and 10-fold cross-validation, respectively, on the Case 2 dataset, outperforming prior DR classification studies based on the five-class APTOS 2019 dataset by about 2% in classification accuracy. These findings demonstrate the accuracy and robustness of the proposed model for the classification of DR images.
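The horizontal-and-vertical patch division described above (eight patches per image: four horizontal strips plus four vertical strips) can be sketched as follows. This is a pure-Python illustration under the assumption of four strips in each direction; the function name and toy image are hypothetical.

```python
def hv_patches(img, n=4):
    """Divide a 2-D image (list of rows) into n horizontal strips and
    n vertical strips, returning 2*n patches (horizontal first)."""
    h, w = len(img), len(img[0])
    horiz = [img[i * h // n:(i + 1) * h // n] for i in range(n)]
    vert = [[row[j * w // n:(j + 1) * w // n] for row in img] for j in range(n)]
    return horiz + vert

img = [[r * 8 + c for c in range(8)] for r in range(8)]  # dummy 8x8 image
patches = hv_patches(img)
print(len(patches))           # 8
print(len(patches[0]), len(patches[0][0]))  # 2 8  (horizontal strip)
print(len(patches[4]), len(patches[4][0]))  # 8 2  (vertical strip)
```

Deep features would then be extracted from the whole image and from each of these eight patches, giving the multiple feature vectors that are subsequently reduced by neighborhood component analysis.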
