Results 1 - 20 of 38
1.
Comput Methods Programs Biomed ; 254: 108253, 2024 May 28.
Article in English | MEDLINE | ID: mdl-38861878

ABSTRACT

BACKGROUND AND OBJECTIVES: Optical coherence tomography (OCT) has ushered in a transformative era in ophthalmology, offering non-invasive, high-resolution imaging for ocular disease detection. OCT is frequently used in diagnosing fundamental ocular pathologies such as glaucoma and age-related macular degeneration (AMD), which has played an important role in the widespread adoption of the technology. Apart from glaucoma and AMD, we also investigate pertinent pathologies such as epiretinal membrane (ERM), macular hole (MH), macular dystrophy (MD), vitreomacular traction (VMT), diabetic maculopathy (DMP), cystoid macular edema (CME), central serous chorioretinopathy (CSC), diabetic macular edema (DME), diabetic retinopathy (DR), drusen, glaucomatous optic neuropathy (GON), neovascular AMD (nAMD), myopic macular degeneration (MMD), and choroidal neovascularization (CNV). This comprehensive review examines the role that OCT-derived images play in detecting, characterizing, and monitoring eye diseases. METHOD: The 2020 PRISMA guideline was used to structure a systematic review of research on various eye conditions using machine learning (ML) or deep learning (DL) techniques. A thorough search across the IEEE, PubMed, Web of Science, and Scopus databases yielded 1787 publications, of which 1136 remained after removing duplicates. Subsequent exclusion of conference papers, review papers, and non-open-access articles reduced the selection to 511 articles. Further scrutiny led to the exclusion of 435 more articles due to lower-quality indexing or irrelevance, resulting in 76 journal articles for the review. RESULTS: During our investigation, we found that a major challenge for ML-based decision support is the abundance of features and the determination of their significance. In contrast, DL-based decision support is characterized by a plug-and-play nature rather than relying on a trial-and-error approach. Furthermore, we observed that pre-trained networks are practical and especially useful when working on complex images such as OCT scans. Consequently, pre-trained deep networks were frequently utilized for classification tasks. Currently, medical decision support aims to reduce the workload of ophthalmologists and retina specialists during routine tasks. In the future, it might be possible to create continuous learning systems that predict ocular pathologies by identifying subtle changes in OCT images.

2.
Sensors (Basel) ; 23(16)2023 Aug 08.
Article in English | MEDLINE | ID: mdl-37631569

ABSTRACT

Anxiety, learning disabilities, and depression often accompany attention deficit hyperactivity disorder (ADHD), a disorder characterized by a pattern of hyperactivity, impulsivity, and inattention. For the early diagnosis of ADHD, electroencephalogram (EEG) signals are widely used. However, direct analysis of an EEG is highly challenging: it is time-consuming, and the signals are nonlinear and nonstationary in nature. Thus, in this paper, a novel approach (LSGP-USFNet) is developed based on the patterns obtained from Ulam's spiral and Sophie Germain's prime numbers. The EEG signals are initially filtered to remove noise and segmented with a non-overlapping sliding window of length 512 samples. Then, a time-frequency analysis approach, namely the continuous wavelet transform, is applied to each channel of the segmented EEG signal to interpret it in the time and frequency domains. The obtained time-frequency representation is saved as a time-frequency image, and a non-overlapping n × n sliding window is applied to this image for patch extraction. An n × n Ulam's spiral is localized on each patch, and the gray levels are acquired from the patch positions where Sophie Germain's primes are located in Ulam's spiral. All gray tones from all patches are concatenated to construct the features for the ADHD and normal classes. A gray tone selection algorithm, namely ReliefF, is employed on the representative features to acquire the final, most important gray tones. The support vector machine classifier is used with a 10-fold cross-validation strategy. Our proposed approach, LSGP-USFNet, was developed using a publicly available dataset and obtained an accuracy of 97.46% in detecting ADHD automatically. Our model is ready to be validated on a larger database and can also be used to detect other childhood neurological disorders.
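The core of the feature construction above (an n × n Ulam spiral with gray levels read off at Sophie Germain prime positions) can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code; the spiral orientation and the per-patch handling are assumptions.

```python
import numpy as np

def is_prime(n):
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def is_sophie_germain(p):
    # p is a Sophie Germain prime if both p and 2p + 1 are prime
    return is_prime(p) and is_prime(2 * p + 1)

def ulam_spiral(n):
    """n x n grid (n odd) filled 1..n^2 in an outward spiral from the centre."""
    grid = np.zeros((n, n), dtype=int)
    x = y = 0
    dx, dy = 0, -1
    for v in range(1, n * n + 1):
        grid[n // 2 - y, n // 2 + x] = v
        if x == y or (x < 0 and x == -y) or (x > 0 and x == 1 - y):
            dx, dy = -dy, dx  # turn at spiral corners
        x, y = x + dx, y + dy
    return grid

def spiral_features(patch):
    """Gray levels of a patch at the Sophie Germain prime cells of the spiral."""
    g = ulam_spiral(patch.shape[0])
    mask = np.vectorize(is_sophie_germain)(g)
    return patch[mask]
```

In the paper, each n × n patch of the time-frequency image would pass through something like `spiral_features`, and the concatenated gray tones then feed ReliefF and the SVM.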


Subjects
Attention Deficit Disorder with Hyperactivity; Child; Humans; Attention Deficit Disorder with Hyperactivity/diagnosis; Electroencephalography; Algorithms; Anxiety; Anxiety Disorders; Niacinamide
3.
Article in English | MEDLINE | ID: mdl-37633787

ABSTRACT

OBJECTIVES: This study uses artificial intelligence-based methods to determine the boundaries of pathologic conditions and infections related to the maxillary sinus in cone beam computed tomography (CBCT) images, to facilitate the work of dentists. METHODS: A new U-Net architecture based on the state-of-the-art Swin transformer, called Res-Swin-UNet, was developed to detect the sinus. The encoder part of the proposed network model consists of a pre-trained ResNet architecture, and the decoder part consists of Swin transformer blocks. Swin transformers achieve powerful global context modeling with self-attention mechanisms. Because the Swin transformer outputs sectorized features, a patch-expanding layer was used in this section instead of the traditional upsampling layer. In the last layer of the decoder, sinus segmentation was conducted through a classical convolution and a sigmoid function. In the experimental work, we used a dataset of 298 CBCT images. RESULTS: The Res-Swin-UNet model outperformed state-of-the-art models, with a 91.72% F1-score, 99% accuracy, and 84.71% IoU. CONCLUSIONS: The deep learning-based model proposed in the present study can assist dentists in automatically detecting the boundaries of pathologic conditions and infections within the maxillary sinus based on CBCT images.
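The reported F1-score, accuracy, and IoU are pixel-wise mask-overlap metrics. A minimal sketch of how they are computed from a predicted and a ground-truth binary mask (not the authors' evaluation code):

```python
import numpy as np

def seg_metrics(pred, gt):
    """Pixel-wise F1 (Dice), IoU and accuracy for binary segmentation masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = (pred & gt).sum()
    fp = (pred & ~gt).sum()
    fn = (~pred & gt).sum()
    tn = (~pred & ~gt).sum()
    f1 = 2 * tp / (2 * tp + fp + fn)
    iou = tp / (tp + fp + fn)
    acc = (tp + tn) / pred.size
    return f1, iou, acc
```

Note that F1 and IoU are monotonically related (F1 = 2·IoU / (1 + IoU)), which is why papers usually report both alongside accuracy rather than treating them as independent evidence.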

4.
Health Inf Sci Syst ; 11(1): 22, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37151916

ABSTRACT

Recognizing emotions accurately in real life is crucial for human-computer interaction (HCI) systems. Electroencephalogram (EEG) signals have been extensively employed to identify emotions, and researchers have used several EEG-based emotion identification datasets to validate their proposed models. In this paper, we employ a novel metaheuristic optimization approach for accurate emotion classification by applying it to select both the channels and rhythms of EEG data. We propose the particle swarm with visit table strategy (PS-VTS) metaheuristic technique to improve the effectiveness of EEG-based human emotion identification. First, the EEG signals are denoised using a low-pass filter, and then rhythm extraction is done using the discrete wavelet transform (DWT). The continuous wavelet transform (CWT) approach transforms each rhythm signal into a rhythm image. A pre-trained MobileNetV2 model is used for deep feature extraction, and a support vector machine (SVM) classifies the emotions. Two models are developed for optimizing the channel and rhythm sets. In Model 1, optimal channels are selected separately for each rhythm, and the global optimum is determined in the optimization process according to the best channel sets of the rhythms. In Model 2, the best rhythms are first determined for each channel, and then the optimal channel-rhythm set is selected. Our proposed model obtained accuracies of 99.2871% and 97.8571% for the classification of HA (high arousal) vs LA (low arousal) and HV (high valence) vs LV (low valence), respectively, on the DEAP dataset. Our model obtained the highest classification accuracy compared to previously reported methods.
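The rhythm-extraction step can be illustrated with a simple ideal band-pass split of one EEG channel into the classical bands. The paper uses a DWT for this step; the FFT masking below is a stand-in, and the band edges are the conventional ones rather than values taken from the paper.

```python
import numpy as np

BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}

def extract_rhythms(sig, fs):
    """Split one EEG channel into rhythm signals with an ideal FFT band-pass
    (a stand-in for the paper's DWT-based rhythm extraction)."""
    spec = np.fft.rfft(sig)
    freqs = np.fft.rfftfreq(len(sig), d=1 / fs)
    return {name: np.fft.irfft(np.where((freqs >= lo) & (freqs < hi), spec, 0),
                               n=len(sig))
            for name, (lo, hi) in BANDS.items()}
```

Each rhythm signal would then be rendered as a CWT image and passed to the pre-trained CNN; the PS-VTS search decides which channel/rhythm combinations are kept.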

5.
Oral Radiol ; 39(4): 614-628, 2023 10.
Article in English | MEDLINE | ID: mdl-36920598

ABSTRACT

OBJECTIVE: An impacted tooth is a common problem that can occur at any age, causing tooth decay, root resorption, and pain in the later stages. In recent years, major advances have been made in medical image segmentation using deep convolutional neural network-based models. In this study, we report the development of an artificial intelligence system for the automatic identification of impacted teeth in panoramic dental X-ray images. METHODS: Among existing networks for medical image segmentation, U-Net architectures are widely implemented. In this article, for dental X-ray image segmentation, the convolutional block structures are upgraded with inverted residual blocks, taking advantage of U-Net's capacity-intensive connections. At the same time, we propose a method for the skip connections in which bi-directional convolutional long short-term memory is used instead of a simple connection. The performance of the proposed artificial intelligence model was evaluated with accuracy, F1-score, intersection over union, and recall. RESULTS: The proposed method obtained experimental results of 99.82% accuracy, 91.59% F1-score, 84.48% intersection over union, and 90.71% recall. CONCLUSION: Our findings show that our artificial intelligence system could provide diagnostic support in future clinical practice.


Subjects
Delayed Emergence from Anesthesia; Tooth, Impacted; Humans; Artificial Intelligence; X-Rays; Neural Networks, Computer
6.
Diagnostics (Basel) ; 13(2)2023 Jan 04.
Article in English | MEDLINE | ID: mdl-36672992

ABSTRACT

Blood pressure is the pressure exerted by circulating blood against the walls of the blood vessels. If this value is above normal levels, it is known as high blood pressure (HBP) or hypertension (HPT). This health problem, often referred to as the "silent killer", reduces the quality of life and causes severe damage to many body parts, and its mortality rate is very high. Hence, rapid and effective diagnosis of this health problem is crucial. In this study, an automatic diagnosis of HPT is proposed using ballistocardiography (BCG) signals. The BCG signals were transformed to the time-frequency domain using the spectrogram method. While creating the spectrogram images, parameters such as window type, window length, overlapping rate, and fast Fourier transform size were adjusted. These images were then classified using the ConvMixer architecture, which is similar to the vision transformer (ViT) and multi-layer perceptron (MLP)-mixer structures that have attracted much attention, and its performance was compared with classical architectures such as ResNet18 and ResNet50. Our proposed model obtained accuracies of 98.14%, 98.79%, and 97.69% for the ResNet18, ResNet50, and ConvMixer architectures, respectively. In addition, the processing time of the ConvMixer architecture was relatively short compared to the other two architectures.
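A minimal version of the spectrogram step, exposing the window type, window length, overlap rate, and FFT size that the authors tuned. This is a sketch only; the actual parameter values used in the paper are not reproduced here.

```python
import numpy as np

def spectrogram(sig, fs, win_len=128, overlap=0.5, nfft=256):
    """Hann-windowed magnitude spectrogram; window length, overlap rate
    and FFT size are the tunable parameters mentioned in the abstract."""
    hop = int(win_len * (1 - overlap))
    win = np.hanning(win_len)
    frames = [np.abs(np.fft.rfft(sig[s:s + win_len] * win, n=nfft))
              for s in range(0, len(sig) - win_len + 1, hop)]
    freqs = np.fft.rfftfreq(nfft, d=1 / fs)
    return freqs, np.array(frames).T  # (freq_bins, time_frames)
```

The resulting matrix is rendered as an image and fed to ConvMixer or the ResNet baselines.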

7.
Multimed Tools Appl ; 82(8): 12351-12377, 2023.
Article in English | MEDLINE | ID: mdl-36105661

ABSTRACT

Multilevel image thresholding is a well-known technique for image segmentation. Recently, various metaheuristic methods have been proposed to determine the thresholds for multilevel image segmentation. These methods are mainly based on metaphors, have high complexity, and converge comparatively slowly. In this paper, a multilevel image thresholding approach is proposed that simplifies the thresholding problem by using a simple optimization technique instead of metaphor-based algorithms. More specifically, chaotic enhanced Rao (CER) algorithms are developed using eight chaotic maps, namely Logistic, Sine, Sinusoidal, Gauss, Circle, Chebyshev, Singer, and Tent. Moreover, in the developed CER algorithm, the number of thresholds is determined automatically rather than manually. The performance of the developed CER algorithms is evaluated with different statistical metrics, namely BDE, PRI, VOI, GCE, SSIM, FSIM, RMSE, PSNR, NK, AD, SC, MD, and NAE. The experimental work and related evaluations are carried out on the BSDS300 dataset. The experimental results demonstrate that the proposed CER algorithm outperforms the compared methods on the PRI, SSIM, FSIM, PSNR, RMSE, AD, and NAE metrics. In addition, the proposed method converges faster and more accurately.
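The flavor of the CER idea, a metaphor-free Rao-1 update whose uniform random coefficient is replaced by a logistic chaotic map, maximizing Otsu's between-class variance over candidate thresholds, can be sketched as follows. This is an illustrative reduction with a fixed threshold count and a fixed random seed; the paper's automatic threshold-count selection and the other seven chaotic maps are omitted.

```python
import numpy as np

rng = np.random.default_rng(7)

def between_class_variance(hist, thresholds):
    """Otsu's objective: weighted variance of class means about the global mean."""
    p = hist / hist.sum()
    levels = np.arange(len(hist))
    mu_total = float((p * levels).sum())
    edges = [0] + sorted(int(t) for t in thresholds) + [len(hist)]
    var = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        w = p[lo:hi].sum()
        if w > 0:
            mu = float((p[lo:hi] * levels[lo:hi]).sum()) / w
            var += w * (mu - mu_total) ** 2
    return float(var)

def chaotic_rao1(hist, k=2, pop=40, iters=200):
    """Rao-1 update X + c*(best - worst), with the uniform random
    coefficient replaced by a logistic chaotic map (the CER idea)."""
    lo, hi = 1, len(hist) - 1
    X = rng.uniform(lo, hi, size=(pop, k))
    c = rng.uniform(0.1, 0.9, size=(pop, k))  # chaotic state per particle
    fit = np.array([between_class_variance(hist, x) for x in X])
    for _ in range(iters):
        best, worst = X[fit.argmax()], X[fit.argmin()]
        c = 4.0 * c * (1.0 - c)               # logistic map step
        Xn = np.clip(X + c * (best - worst), lo, hi)
        fn = np.array([between_class_variance(hist, x) for x in Xn])
        improved = fn > fit
        X[improved], fit[improved] = Xn[improved], fn[improved]
    return np.sort(X[fit.argmax()]).astype(int), float(fit.max())
```

On a histogram with three well-separated gray-level clusters, the two returned thresholds should land in the empty gaps between clusters.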

8.
Health Inf Sci Syst ; 10(1): 31, 2022 Dec.
Article in English | MEDLINE | ID: mdl-36387749

ABSTRACT

Emotion identification is an essential task for human-computer interaction systems. Electroencephalogram (EEG) signals have been widely used in emotion recognition. So far, researchers have validated their models on several EEG-based emotion recognition datasets. In this work, we use the new ICBrainDB EEG dataset to classify angry, neutral, happy, and sad emotions. Signal processing-based wavelet transform (WT) and tunable Q-factor wavelet transform (TQWT) features, and image processing-based histogram of oriented gradients (HOG), local binary pattern (LBP), and convolutional neural network (CNN) features, are extracted from the EEG signals. The WT is used to extract the rhythms from each channel of the EEG signal. The instantaneous frequency and spectral entropy are computed from each EEG rhythm signal, and the average and standard deviation of the instantaneous frequency and spectral entropy of each rhythm form the final feature vector. In the second signal-based method, the spectral entropy in each channel of the EEG signal after performing the TQWT is used to create the feature vectors. For the image-based features, each EEG channel is transformed into a time-frequency plot using the synchrosqueezed wavelet transform, and the feature vectors are constructed using windowed HOG and LBP features. Each channel of the EEG data is also fed to a pretrained CNN to extract deep features. In the feature selection process, the ReliefF feature selector is employed. Various classification algorithms, namely k-nearest neighbor (KNN), support vector machines, and neural networks, are used for the automated classification of angry, neutral, happy, and sad emotions. Our model obtained an average accuracy of 90.7% using HOG features and a KNN classifier with a tenfold cross-validation strategy.
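Of the image-side descriptors, the local binary pattern is the simplest to show. A basic 3 × 3 LBP (no rotation invariance or uniform-pattern variant, which the paper may or may not use) looks like this:

```python
import numpy as np

def lbp_image(img):
    """Basic 3x3 LBP: threshold the 8 neighbours at the centre pixel
    and pack the bits (clockwise from top-left) into one byte."""
    img = np.asarray(img, dtype=int)
    c = img[1:-1, 1:-1]
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c)
    h, w = img.shape
    for bit, (dy, dx) in enumerate(offs):
        nb = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        code |= (nb >= c).astype(int) << bit
    return code

def lbp_histogram(img):
    """256-bin LBP histogram used as the texture feature vector."""
    return np.bincount(lbp_image(img).ravel(), minlength=256)
```

Applied to the synchrosqueezed time-frequency plots in windows, the per-window histograms are concatenated into the LBP feature vector that ReliefF then prunes.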

9.
Diagnostics (Basel) ; 12(10)2022 Oct 16.
Article in English | MEDLINE | ID: mdl-36292197

ABSTRACT

Emotion recognition is one of the most important issues in the human-computer interaction (HCI), neuroscience, and psychology fields. It is generally accepted that emotion recognition with neural data such as electroencephalography (EEG) signals, functional magnetic resonance imaging (fMRI), and near-infrared spectroscopy (NIRS) is more reliable and accurate than other emotion detection methods based on speech, mimics, body language, facial expressions, etc. In particular, EEG signals are bioelectrical signals that are frequently used because of the many advantages they offer for emotion recognition. This study proposes an improved approach for EEG-based emotion recognition on a newly published, publicly available dataset, VREED. Differential entropy (DE) features were extracted from four wavebands (theta 4-8 Hz, alpha 8-13 Hz, beta 13-30 Hz, and gamma 30-49 Hz) to classify two emotional states (positive/negative). Five classifiers, namely Support Vector Machine (SVM), k-Nearest Neighbor (kNN), Naïve Bayesian (NB), Decision Tree (DT), and Logistic Regression (LR), were employed with the DE features for the automated classification of the two emotional states. In this work, we obtained the best average accuracy of 76.22% ± 2.06 with the SVM classifier. Moreover, we observed that the highest average accuracy was produced with the gamma band, as previously reported in EEG-based emotion recognition studies.
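Differential entropy for a band-limited EEG segment is usually computed under a Gaussian assumption, where it reduces to a closed form in the band's variance. A sketch (the band splitting described above is assumed to have already happened):

```python
import numpy as np

def differential_entropy(band_signal):
    """DE of a (roughly Gaussian) band-limited signal:
    0.5 * ln(2 * pi * e * sigma^2)."""
    return 0.5 * np.log(2 * np.pi * np.e * np.var(band_signal))
```

A useful sanity check is the scaling law DE(a·x) = DE(x) + ln|a|, which follows directly from the variance term.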

10.
Biocybern Biomed Eng ; 42(3): 1066-1080, 2022.
Article in English | MEDLINE | ID: mdl-36092540

ABSTRACT

The polymerase chain reaction (PCR) test is not only time-intensive but also a contact method that puts healthcare personnel at risk. Thus, contactless and fast detection tests are more valuable. Cough sound is an important indicator of COVID-19, and in this paper, a novel explainable scheme is developed for cough sound-based COVID-19 detection. In the presented work, the cough sound is initially segmented into overlapping parts, and each segment of the input audio, which may contain other sounds, is labeled using the deep Yet Another Mobile Network (YAMNet) model. After labeling, the segments labeled as cough are cropped and concatenated to reconstruct the pure cough sounds. Then, four fractal dimension (FD) calculation methods are employed to acquire the FD coefficients of the cough sound with an overlapping sliding window, which forms a matrix. The constructed matrices are then used to form fractal dimension images. Finally, a pretrained vision transformer (ViT) model is used to classify the constructed images into COVID-19, healthy, and symptomatic classes. In this work, we demonstrate the performance of the ViT on cough sound-based COVID-19 detection, and a visual explanation of the inner workings of the ViT model is shown. Three publicly available cough sound datasets, namely COUGHVID, VIRUFY, and COSWARA, are used in this study. We obtained 98.45%, 98.15%, and 97.59% accuracy for the COUGHVID, VIRUFY, and COSWARA datasets, respectively. Our model obtained the highest performance compared to state-of-the-art methods and is ready to be tested in real-world applications.
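The abstract does not name its four fractal-dimension estimators; Katz's method is one commonly used on biomedical waveforms and is compact enough to sketch, applied over each overlapped sliding window. It is shown here as an illustration, not as one of the paper's confirmed choices.

```python
import numpy as np

def katz_fd(x):
    """Katz fractal dimension of a 1-D signal window."""
    x = np.asarray(x, dtype=float)
    n = len(x) - 1                  # number of steps
    L = np.abs(np.diff(x)).sum()    # total curve length
    d = np.abs(x - x[0]).max()      # max distance from the first point
    return np.log10(n) / (np.log10(n) + np.log10(d / L))

def fd_profile(sig, win=64, hop=32):
    """Katz FD over an overlapped sliding window (one row of the FD matrix)."""
    return np.array([katz_fd(sig[i:i + win])
                     for i in range(0, len(sig) - win + 1, hop)])
```

Stacking such profiles from several estimators yields the matrix that is rendered as a fractal-dimension image for the ViT.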

11.
Comput Biol Med ; 143: 105335, 2022 Apr.
Article in English | MEDLINE | ID: mdl-35219186

ABSTRACT

BACKGROUND: The world has been suffering from the COVID-19 pandemic since 2019, and more than 5 million people have died. Pneumonia caused by the COVID-19 virus can be diagnosed using chest X-ray and computed tomography (CT) scans. COVID-19 also causes clinical and subclinical cardiovascular injury that may be detected on electrocardiography (ECG), which is easily accessible. METHOD: For ECG-based COVID-19 detection, we developed a novel attention-based 3D convolutional neural network (CNN) model with residual connections (RC). The deep learning (DL) approach was developed using 12-lead ECG printouts obtained from 250 normal subjects, 250 patients with COVID-19, and 250 with abnormal heartbeat. For binary classification, the COVID-19 and normal classes were considered; for multiclass classification, all three classes were used. The ECGs were preprocessed into standard ECG lead segments that were channeled into 12-dimensional volumes as input to the network model. Our model comprised 19 layers: three 3D convolutional, three batch normalization, three rectified linear unit, two dropout, two addition (for the residual connections), one attention, and one fully connected layer. The RC were used to improve gradient flow through the network, and the attention layer connects the second residual connection to the fully connected layer through the batch normalization layer. RESULTS: A publicly available dataset was used in this work. We obtained average accuracies of 99.0% and 92.0% for binary and multiclass classification, respectively, using ten-fold cross-validation. Our proposed model is ready to be tested on a larger ECG database.

12.
Comput Biol Med ; 143: 105311, 2022 Apr.
Article in English | MEDLINE | ID: mdl-35158117

ABSTRACT

Autism spectrum disorder (ASD) is a collection of complicated neurological disorders that first appear in early childhood. Electroencephalogram (EEG) signals are widely used to record the electrical activity of the brain, but manual screening of them is tedious, time-consuming, and prone to human error. Hence, a novel automated method involving the Douglas-Peucker (DP) algorithm, a sparse coding-based feature mapping approach, and deep convolutional neural networks (CNNs) is employed to detect ASD from EEG recordings. Initially, the DP algorithm is applied to each channel to reduce the number of samples without degrading the EEG signal. Then, the EEG rhythms are extracted using the wavelet transform and coded using sparse representation, with the matching pursuit algorithm used for the sparse coding. The sparse-coded rhythms are segmented into 8-bit blocks and converted to decimal numbers, and an image is formed by concatenating the histograms of the decimated rhythm signals. Extreme learning machine (ELM)-based autoencoders (AE) are employed in a data augmentation step. After data augmentation, the ASD and healthy EEG signals are classified using pre-trained deep CNN models. Our proposed method yielded an accuracy of 98.88%, a sensitivity of 100%, a specificity of 96.4%, and an F1-score of 99.19% in detecting ASD automatically. Our model is ready to be tested with more EEG signals before clinical application.
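The Douglas-Peucker step admits a compact reference implementation. Here each EEG sample is treated as a 2-D point (index, amplitude); the tolerance `eps` is a free parameter, not a value from the paper.

```python
import numpy as np

def douglas_peucker(points, eps):
    """Recursively keep only points deviating more than eps from the
    chord between the first and last point."""
    points = np.asarray(points, dtype=float)
    if len(points) < 3:
        return points
    a, b = points[0], points[-1]
    ab = b - a
    norm = np.hypot(*ab)
    if norm == 0:
        d = np.hypot(points[:, 0] - a[0], points[:, 1] - a[1])
    else:
        # perpendicular distance of every point to the chord a-b
        d = np.abs(ab[0] * (points[:, 1] - a[1])
                   - ab[1] * (points[:, 0] - a[0])) / norm
    i = int(d.argmax())
    if d[i] <= eps:
        return np.vstack([a, b])
    left = douglas_peucker(points[:i + 1], eps)
    right = douglas_peucker(points[i:], eps)
    return np.vstack([left[:-1], right])
```

The algorithm keeps sharp transients (the points farthest from each chord) while discarding near-linear runs, which is why it can shrink an EEG channel without degrading the waveform.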

13.
Health Inf Sci Syst ; 10(1): 1, 2022 Dec.
Article in English | MEDLINE | ID: mdl-35096384

ABSTRACT

The reliable and rapid identification of COVID-19 has become crucial to prevent the rapid spread of the disease, ease lockdown restrictions, and reduce pressure on public health infrastructures. Recently, several methods and techniques have been proposed to detect the SARS-CoV-2 virus using different images and data. However, this is the first study to explore the possibility of using deep convolutional neural network (CNN) models to detect COVID-19 from electrocardiogram (ECG) trace images. In this work, COVID-19 and other cardiovascular diseases (CVDs) were detected using deep-learning techniques. A public dataset of 1937 ECG images from five distinct categories, namely normal, COVID-19, myocardial infarction (MI), abnormal heartbeat (AHB), and recovered myocardial infarction (RMI), was used in this study. Six deep CNN models (ResNet18, ResNet50, ResNet101, InceptionV3, DenseNet201, and MobileNetv2) were used to investigate three classification schemes: (i) two-class classification (normal vs COVID-19); (ii) three-class classification (normal, COVID-19, and other CVDs); and (iii) five-class classification (normal, COVID-19, MI, AHB, and RMI). For the two-class and three-class schemes, DenseNet201 outperforms the other networks with accuracies of 99.1% and 97.36%, respectively, while for the five-class scheme, InceptionV3 outperforms the others with an accuracy of 97.83%. Score-CAM visualization confirms that the networks are learning from the relevant areas of the trace images. Since the proposed method uses ECG trace images, which can be captured by smartphones and are readily available even in low-resource settings, this study can help enable faster computer-aided diagnosis of COVID-19 and other cardiac abnormalities.

14.
J Pers Med ; 12(1)2022 Jan 06.
Article in English | MEDLINE | ID: mdl-35055370

ABSTRACT

Parkinson's disease (PD), a slowly progressing neurodegenerative disorder, negatively affects people's daily lives. Early diagnosis is of great importance to minimize the effects of PD, and one of the most important early symptoms is monotony and distortion of speech. Artificial intelligence-based approaches can help specialists and physicians automatically detect these disorders. In this study, a new and powerful approach based on multi-level feature selection is proposed to detect PD from features extracted from the voice recordings of already-diagnosed cases. At the first level, feature selection is performed with the Chi-square and L1-norm SVM algorithms (CLS), and the features extracted by these algorithms are combined to increase the representation power of the samples. At the last level, the most distinctive features are selected from the combined feature set according to feature importance weights computed with the ReliefF algorithm. In the classification stage, popular classifiers such as KNN, SVM, and DT were evaluated, and the best performance was achieved with the KNN classifier. Moreover, the hyperparameters of the KNN classifier were tuned with Bayesian optimization, further improving the performance of the proposed approach. Evaluated with a 10-fold cross-validation technique on a dataset containing PD and normal classes, the proposed approach achieved a classification accuracy of 95.4%.
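The first-level Chi-square scoring can be sketched for one feature: discretize the feature, cross-tabulate it against the class labels, and compute the chi-square statistic of the contingency table. This is an illustrative version; the paper's exact binning and the L1-norm SVM branch are not shown.

```python
import numpy as np

def chi2_score(feature, labels, bins=5):
    """Chi-square statistic between a discretised feature and class labels;
    higher scores mean the feature's distribution differs more across classes."""
    edges = np.histogram_bin_edges(feature, bins)
    f = np.digitize(feature, edges[1:-1])  # bin index per sample
    obs = np.array([[np.sum((f == b) & (labels == c))
                     for c in np.unique(labels)]
                    for b in np.unique(f)], dtype=float)
    exp = obs.sum(1, keepdims=True) * obs.sum(0, keepdims=True) / obs.sum()
    return float((np.square(obs - exp) / exp).sum())
```

Ranking all voice features by this score and keeping the top-scoring ones is the Chi-square half of the CLS stage.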

15.
New Gener Comput ; 40(4): 1053-1075, 2022.
Article in English | MEDLINE | ID: mdl-35035024

ABSTRACT

The novel coronavirus disease that spread from Wuhan, China at the beginning of 2020, called COVID-19, has caused many deaths and cases in most countries and reached a global pandemic scale. In addition to test kits, chest X-ray imaging has frequently been used in the detection of COVID-19 cases. In the proposed method, a novel deep learning model named DeepCovNet was utilized to classify chest X-ray images into COVID-19, normal (healthy), and pneumonia classes. The convolutional autoencoder model, which has convolutional layers in its encoder and decoder blocks, was trained from scratch on the processed chest X-ray images for deep feature extraction. The distinctive features were selected from the deep feature set with a novel and robust algorithm named SDAR. In the classification stage, an SVM classifier with various kernel functions was used to evaluate the classification performance of the proposed method, and the hyperparameters of the SVM classifier were optimized with Bayesian optimization to increase classification accuracy. Specificity, sensitivity, precision, and F-score were used as performance metrics in addition to accuracy, which was the main criterion. The proposed method, with an accuracy of 99.75%, outperformed the other deep learning-based approaches.

16.
J Digit Imaging ; 34(2): 263-272, 2021 Apr.
Article in English | MEDLINE | ID: mdl-33674979

ABSTRACT

Coronavirus disease (COVID-19) is a pandemic that caused sudden, unexplained pneumonia cases and has had a devastating effect on global public health. Computerized tomography (CT) is one of the most effective tools for COVID-19 screening, since specific patterns such as bilateral, peripheral, and basal predominant ground-glass opacity, multifocal patchy consolidation, and a crazy-paving pattern with peripheral distribution can be observed in CT images and have been declared findings of COVID-19 infection. Expeditious and accurate segmentation of COVID-19 lesions, which spread into the lungs, from CT will provide vital information about the stage of the disease for patient monitoring and diagnosis. In this work, we propose a SegNet-based network using the attention gate (AG) mechanism for the automatic segmentation of COVID-19 regions in CT images. AGs can be easily integrated into standard convolutional neural network (CNN) architectures with minimal computational load while increasing model precision and predictive accuracy. In addition, the proposed network was evaluated with the Dice, Tversky, and focal Tversky loss functions to deal with the low sensitivity arising from small lesions. The experiments were carried out using a fivefold cross-validation technique on a COVID-19 CT segmentation database containing 473 CT images. The obtained sensitivity, specificity, and Dice scores were 92.73%, 99.51%, and 89.61%, respectively. The superiority of the proposed method is highlighted by comparison with the results reported in previous studies, and we believe it will be an auxiliary tool that accurately detects COVID-19 regions from CT images.
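The Tversky and focal Tversky losses mentioned above generalize the Dice loss by weighting false positives and false negatives separately, which is what helps with small, easily-missed lesions. A NumPy sketch on soft (0-1) predictions; note that the (alpha, beta, gamma) values, and even the direction of the gamma exponent, vary between papers, so the defaults here are illustrative.

```python
import numpy as np

def tversky_index(pred, gt, alpha=0.3, beta=0.7, eps=1e-7):
    """TI = TP / (TP + alpha*FP + beta*FN); alpha = beta = 0.5 recovers Dice."""
    tp = (pred * gt).sum()
    fp = (pred * (1 - gt)).sum()
    fn = ((1 - pred) * gt).sum()
    return (tp + eps) / (tp + alpha * fp + beta * fn + eps)

def focal_tversky_loss(pred, gt, alpha=0.3, beta=0.7, gamma=0.75):
    """gamma < 1 amplifies the loss for hard examples with low overlap."""
    return (1.0 - tversky_index(pred, gt, alpha, beta)) ** gamma
```

Setting beta above alpha penalizes false negatives more heavily, trading some precision for the sensitivity that small-lesion segmentation needs.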


Subjects
COVID-19; Humans; Neural Networks, Computer; SARS-CoV-2; Semantics; Tomography, X-Ray Computed
17.
Expert Syst Appl ; 164: 114054, 2021 Feb.
Article in English | MEDLINE | ID: mdl-33013005

ABSTRACT

COVID-19 is a novel virus that causes infection in both the upper respiratory tract and the lungs. The numbers of cases and deaths have increased on a daily basis on the scale of a global pandemic. Chest X-ray images have proven useful for monitoring various lung diseases and have recently been used to monitor COVID-19. In this paper, deep-learning-based approaches, namely deep feature extraction, fine-tuning of pretrained convolutional neural networks (CNN), and end-to-end training of a developed CNN model, are used to classify COVID-19 and normal (healthy) chest X-ray images. For deep feature extraction, pretrained deep CNN models (ResNet18, ResNet50, ResNet101, VGG16, and VGG19) were used, and the Support Vector Machines (SVM) classifier with various kernel functions, namely Linear, Quadratic, Cubic, and Gaussian, was used to classify the deep features. The same pretrained deep CNN models were also used for the fine-tuning procedure, and a new CNN model trained end-to-end is proposed in this study. A dataset containing 180 COVID-19 and 200 normal (healthy) chest X-ray images was used in the experimentation, with classification accuracy as the performance measure. The experimental work reveals that deep learning shows potential in the detection of COVID-19 based on chest X-ray images. The deep features extracted from the ResNet50 model with the SVM classifier and Linear kernel produced a 94.7% accuracy score, the highest among all results. The fine-tuned ResNet50 model achieved 92.6%, whilst end-to-end training of the developed CNN model produced 91.6%. Various local texture descriptors with SVM classification were also used for performance comparison; the results showed the deep approaches to be considerably more effective than the local texture descriptors in the detection of COVID-19 from chest X-ray images.

18.
Health Inf Sci Syst ; 8(1): 29, 2020 Dec.
Article in English | MEDLINE | ID: mdl-33014355

ABSTRACT

COVID-19 is a novel viral disease that spreads rapidly and is now seen all around the world, with case and death numbers increasing day by day. Several tests have been used to detect COVID-19; chest X-ray and chest computerized tomography (CT) are two important imaging tools for its detection and monitoring, and new detection methods are still being sought. In this paper, various multiresolution approaches to COVID-19 detection are investigated, using chest X-ray images as input. Although the recent trend in machine learning has shifted toward deep learning, we aim to show that traditional methods such as multiresolution approaches are still effective. To this end, the well-known Wavelet, Shearlet and Contourlet transforms are used to decompose the chest X-ray images, and entropy and normalized energy measures, which generally accompany multiresolution approaches in texture recognition applications, are employed for feature extraction from the decomposed images. The extreme learning machine (ELM) classifier is used in the classification stage of the proposed study. A dataset containing 361 COVID-19 chest X-ray images and 200 normal (healthy) chest X-ray images is used in the experimental work, and performance is evaluated with accuracy, sensitivity, specificity and precision. Since deep learning is mentioned, a comparison between the proposed multiresolution approaches and deep learning approaches is also carried out, considering both deep feature extraction and fine-tuning of pretrained convolutional neural networks (CNNs): a pretrained ResNet50 model is employed for deep feature extraction, with a Support Vector Machine (SVM) classifier for the deep features, and the ResNet50 model is also used for fine-tuning. The experimental work shows that the multiresolution approaches produce better performance than the deep learning approaches, with the Shearlet transform performing best of all at a 99.29% accuracy score.
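The pipeline described above (multiresolution decomposition followed by entropy and normalized-energy features) can be sketched in a few lines. This is a minimal NumPy-only illustration using a hand-rolled one-level 2-D Haar transform as the decomposition; the paper itself uses Wavelet, Shearlet and Contourlet transforms and feeds the features to an ELM classifier, neither of which is reproduced here.

```python
import numpy as np

def haar_decompose(img):
    """One-level 2-D Haar decomposition into LL, LH, HL, HH subbands."""
    a = img[0::2, :] + img[1::2, :]   # pairwise sums of adjacent rows (vertical low-pass)
    d = img[0::2, :] - img[1::2, :]   # pairwise differences (vertical high-pass)
    ll = (a[:, 0::2] + a[:, 1::2]) / 4.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 4.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 4.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 4.0
    return ll, lh, hl, hh

def subband_features(band, eps=1e-12):
    """Normalized energy and Shannon entropy of one subband."""
    energy = np.sum(band ** 2)
    norm_energy = energy / band.size
    p = band.ravel() ** 2 / (energy + eps)       # energy distribution over coefficients
    entropy = -np.sum(p * np.log2(p + eps))
    return norm_energy, entropy

# Feature vector for one toy "X-ray" image (random stand-in data)
img = np.random.default_rng(0).random((64, 64))
features = [v for band in haar_decompose(img) for v in subband_features(band)]
print(len(features))  # 4 subbands x 2 features = 8
```

In the study, feature vectors of this kind, computed per decomposition and per image, form the input to the ELM classifier.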

19.
Brain Inform ; 7(1): 9, 2020 Sep 17.
Article in English | MEDLINE | ID: mdl-32940803

ABSTRACT

In this paper, a novel approach based on two-step majority voting is proposed for efficient EEG-based emotion classification. Emotion recognition is important for human-machine interaction. Approaches based on facial features and body gestures have generally been proposed for emotion recognition, but EEG-based approaches have recently become more popular. In the proposed approach, the raw EEG signals are first low-pass filtered for noise removal, and band-pass filters are used to extract the rhythms. For each rhythm, the best-performing EEG channels are determined based on wavelet-based entropy features and fractal-dimension features, and the k-nearest neighbor (KNN) classifier is used for classification. The predictions of the best five EEG channels are combined by majority voting to obtain a prediction for each EEG rhythm; in the second majority-voting step, the predictions from all rhythms are combined into a final prediction. The DEAP dataset is used in the experiments, with classification accuracy, sensitivity and specificity as performance metrics. The experiments classify the emotions into two binary problems, high valence (HV) vs. low valence (LV) and high arousal (HA) vs. low arousal (LA), and yield 86.3% HV vs. LV and 85.0% HA vs. LA discrimination accuracy. The obtained results are also compared with existing methods, and the comparisons show that the proposed method has potential for EEG-based emotion classification.
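The two-step voting scheme is simple to state in code: vote across the five selected channels within each rhythm, then vote across the rhythm-level predictions. The sketch below uses hypothetical binary predictions (0 = low valence, 1 = high valence) in place of real KNN outputs; the rhythm names in the comments are illustrative.

```python
import numpy as np

def majority_vote(labels):
    """Return the most frequent label in a 1-D array of predictions."""
    values, counts = np.unique(labels, return_counts=True)
    return values[np.argmax(counts)]

# Hypothetical per-channel predictions: 4 rhythms x 5 best channels
channel_preds = np.array([
    [1, 1, 0, 1, 0],   # e.g. alpha rhythm
    [0, 1, 1, 1, 1],   # beta
    [0, 0, 1, 0, 0],   # gamma
    [1, 1, 1, 0, 1],   # theta
])

# Step 1: vote across the five channels within each rhythm
rhythm_preds = np.array([majority_vote(row) for row in channel_preds])
# Step 2: vote across the rhythm-level predictions
final_pred = majority_vote(rhythm_preds)
print(rhythm_preds, final_pred)  # [1 1 0 1] 1
```

With an odd number of voters at each step, ties cannot occur for binary labels, which is presumably one reason the best five channels (rather than four or six) are used.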

20.
IEEE Trans Neural Syst Rehabil Eng ; 28(9): 1966-1976, 2020 09.
Article in English | MEDLINE | ID: mdl-32746328

ABSTRACT

Mild cognitive impairment (MCI) can be an indicator of the early stage of Alzheimer's disease (AD). AD, the most common form of dementia, is a major public health problem worldwide, so efficient detection of MCI is essential to identify the risk of AD and dementia. Currently, electroencephalography (EEG) is the most popular tool for investigating the presence of MCI biomarkers. This study aims to develop a new framework that uses EEG data to automatically distinguish MCI patients from healthy control subjects. The proposed framework consists of noise removal (baseline drift and power-line interference), segmentation, data compression, feature extraction, classification, and performance evaluation. The study introduces Piecewise Aggregate Approximation (PAA) for compressing massive volumes of EEG data for reliable analysis. Permutation entropy (PE) and auto-regressive (AR) model features are investigated to explore whether changes in the EEG signals can effectively distinguish MCI patients from healthy control subjects. Finally, three models are developed from the obtained feature sets using three modern machine learning techniques: Extreme Learning Machine (ELM), Support Vector Machine (SVM) and K-Nearest Neighbours (KNN). The developed models are tested on a publicly available MCI EEG database, and their robustness is evaluated using 10-fold cross-validation. The results show that the proposed ELM-based method achieves the highest classification accuracy (98.78%) with a lower execution time (0.281 seconds) and outperforms existing methods. The experimental results suggest that the proposed framework could provide a robust biomarker for efficient detection of MCI patients.
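The PAA compression step the abstract highlights is the simplest part of the framework to illustrate: a signal is split into equal-length segments and each segment is replaced by its mean. A minimal sketch, assuming the signal length divides evenly by the number of segments (real implementations also handle the non-divisible case):

```python
import numpy as np

def paa(signal, n_segments):
    """Piecewise Aggregate Approximation: replace each equal-length
    segment of the signal with its mean value."""
    signal = np.asarray(signal, dtype=float)
    # Assumes len(signal) is divisible by n_segments for simplicity
    return signal.reshape(n_segments, -1).mean(axis=1)

x = np.array([1.0, 3.0, 2.0, 4.0, 6.0, 8.0, 7.0, 9.0])
print(paa(x, 4))  # [2. 3. 7. 8.]
```

Compressing an EEG segment this way reduces its length by the segment-size factor before PE and AR features are extracted, which is what makes the analysis of massive EEG volumes tractable.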


Subjects
Alzheimer Disease , Cognitive Dysfunction , Cognitive Dysfunction/diagnosis , Electroencephalography , Entropy , Humans , Machine Learning , Support Vector Machine