ABSTRACT
Electroencephalogram (EEG) signals contain information about the brain's state as they reflect its functioning. However, the manual interpretation of EEG signals is tedious and time-consuming. Therefore, automatic EEG interpretation models based on machine learning are needed. In this study, we proposed an innovative method to achieve high classification performance with explainable results. We introduce channel-based transformation, a channel pattern (ChannelPat) feature extractor, the tkNN algorithm, and Lobish (a symbolic language). Using channel-based transformation, EEG signals were encoded using the index of the channels. The proposed ChannelPat feature extractor encoded the transition between two channels and served as a histogram-based feature extractor. An iterative neighborhood component analysis (INCA) feature selector was employed to select the most informative features, and the selected features were fed into a new ensemble k-nearest neighbor (tkNN) classifier. To evaluate the classification capability of the proposed channel-based EEG language detection model, a new EEG language dataset comprising Arabic and Turkish was collected. Additionally, Lobish was introduced to obtain explainable outcomes from the model. The proposed channel-based feature engineering model was applied to the collected EEG language dataset, achieving a classification accuracy of 98.59%. Lobish extracted meaningful information from the cortex for language detection.
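The abstract describes the channel-based transformation and ChannelPat only at a high level. Below is a minimal sketch of one plausible reading, with all function names ours: each time sample is encoded by the index of its maximum-amplitude channel, and ChannelPat then histograms the ordered transitions between consecutive codes.

```python
def channel_encode(multichannel):
    """Encode each time sample by the index of the channel with the
    largest absolute amplitude (a sketch of the channel-based transform)."""
    return [max(range(len(multichannel)), key=lambda c: abs(multichannel[c][t]))
            for t in range(len(multichannel[0]))]

def channelpat(codes, n_channels):
    """Histogram of transitions between consecutive channel indices:
    one bin per ordered channel pair, i.e. n_channels**2 features."""
    hist = [0] * (n_channels * n_channels)
    for a, b in zip(codes, codes[1:]):
        hist[a * n_channels + b] += 1
    return hist

# Toy example: 3 channels, 5 samples.
eeg = [
    [0.1, 0.9, 0.2, 0.1, 0.0],   # channel 0
    [0.5, 0.1, 0.8, 0.2, 0.1],   # channel 1
    [0.2, 0.3, 0.1, 0.9, 0.7],   # channel 2
]
codes = channel_encode(eeg)       # -> [1, 0, 1, 2, 2]
features = channelpat(codes, 3)   # 9-bin transition histogram
```

The transition histogram is what makes the extractor "histogram-based": its length depends only on the channel count, not on the segment length.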
ABSTRACT
Fibromyalgia is a soft tissue rheumatism with significant qualitative and quantitative impact on sleep macro and micro architecture. The primary objective of this study is to automatically identify healthy individuals and those with fibromyalgia using sleep electroencephalography (EEG) signals. The study focused on the automatic detection and interpretation of EEG signals obtained from fibromyalgia patients. In this work, the sleep EEG signals are divided into 15-s segments, and a total of 5358 (3411 healthy control and 1947 fibromyalgia) EEG segments are obtained from 16 fibromyalgia and 16 normal subjects. Our developed model has an advanced multilevel feature extraction architecture; hence, we used a new feature extractor called GluPat, inspired by the chemical structure of glucose, with a new pooling approach inspired by the D'Hondt selection system. Furthermore, our proposed method incorporated feature selection techniques using iterative neighborhood component analysis and iterative Chi2 methods. These selection mechanisms enabled the identification of discriminative features for accurate classification. In the classification phase, we employed support vector machine and k-nearest neighbor algorithms to classify the EEG signals with leave-one-record-out (LORO) and tenfold cross-validation (CV) techniques. All results are calculated channel-wise, and iterative majority voting is used to obtain generalized results. The best results were determined using a greedy algorithm. The developed model achieved detection accuracies of 100% and 91.83% with the tenfold and LORO CV strategies, respectively, using sleep stage (2 + 3) EEG signals. Our model is simple and has linear time complexity.
ABSTRACT
INTRODUCTION: Computer vision models have been used to diagnose some disorders using computed tomography (CT) and magnetic resonance (MR) images. In this work, our objective is to detect large and small brain vessel occlusion in acute ischemic stroke using a deep feature engineering model. METHODS: We used our own dataset, which contains CT images of 324 patients in two classes: large and small brain vessel occlusion. We divided each collected image into horizontal and vertical patches. Then, the pretrained AlexNet was utilized to extract deep features. Here, the fc6 and fc7 (sixth and seventh fully connected) layers have been used to extract deep features from the created patches. The features generated from the patches have been concatenated/merged to generate the final feature vector. In order to select the best combination from the generated final feature vector, an iterative selector (iterative neighborhood component analysis-INCA) has been used, and this selector has chosen 43 features. These 43 features have been used for classification. In the last phase, we used a kNN classifier with tenfold cross-validation. RESULTS: By using 43 features and a kNN classifier, our AlexNet-based deep feature engineering model attained 100% classification accuracy. CONCLUSION: The obtained perfect classification performance clearly demonstrated that our proposal could separate large from small brain vessel occlusions in non-contrast CT images. In this respect, this model can assist neurology experts in assessing the chance of early recanalization.
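As a toy illustration of the patch-and-concatenate step (with a trivial mean/std feature standing in for the AlexNet fc6/fc7 deep features; function names are ours):

```python
import numpy as np

def patch_features(img, feat=lambda p: [p.mean(), p.std()]):
    """Split an image into horizontal (top/bottom) and vertical (left/right)
    patches, extract features per patch, and concatenate them into one
    final feature vector, mirroring the described pipeline."""
    h, w = img.shape
    patches = [img[: h // 2], img[h // 2 :],        # horizontal patches
               img[:, : w // 2], img[:, w // 2 :]]  # vertical patches
    return np.concatenate([feat(p) for p in patches])

img = np.arange(16.0).reshape(4, 4)
vec = patch_features(img)
print(vec.shape)  # (8,): 4 patches x 2 features each
```

In the real model each patch would instead pass through AlexNet, and fc6/fc7 activations would be concatenated before INCA selection.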
Subject(s)
Ischemic Stroke, X-Ray Computed Tomography, Humans, X-Ray Computed Tomography/methods, Ischemic Stroke/diagnostic imaging, Male, Deep Learning, Female, Brain/diagnostic imaging
ABSTRACT
BACKGROUND: Timely detection of neurodevelopmental and neurological conditions is crucial for early intervention. Specific Language Impairment (SLI) in children and Parkinson's disease (PD) manifest in speech disturbances that may be exploited for diagnostic screening using recorded speech signals. We were motivated to develop an accurate yet computationally lightweight model for speech-based detection of SLI and PD, employing novel feature engineering techniques to mimic the adaptable dynamic weight assignment network capability of deep learning architectures. MATERIALS AND METHODS: In this research, we have introduced an advanced feature engineering model incorporating a novel feature extraction function, the Factor Lattice Pattern (FLP), which is a quantum-inspired method and uses a superposition-like mechanism, making it dynamic in nature. The FLP encompasses eight distinct patterns, from which the most appropriate pattern was discerned based on the data structure. Through the implementation of the FLP, we automatically extracted signal-specific textural features. Additionally, we developed a new feature engineering model to assess the efficacy of the FLP. This model is self-organizing, producing nine potential results and subsequently choosing the optimal one. Our speech classification framework consists of (1) feature extraction using the proposed FLP and a statistical feature extractor; (2) feature selection employing iterative neighborhood component analysis and an intersection-based feature selector; (3) classification via support vector machine and k-nearest neighbors; and (4) outcome determination using combinational majority voting to select the most favorable results. RESULTS: To validate the classification capabilities of our proposed feature engineering model, designed to automatically detect PD and SLI, we employed three speech datasets of PD and SLI patients.
Our presented FLP-centric model achieved classification accuracies of more than 95% on the PD datasets and 99.79% on the SLI dataset. CONCLUSIONS: Our results indicate that the proposed model is an accurate alternative to deep learning models in classifying neurological conditions using speech signals.
Subject(s)
Parkinson Disease, Specific Language Disorder, Child, Humans, Speech, Parkinson Disease/diagnosis, Support Vector Machine
ABSTRACT
BACKGROUND AND AIM: Anxiety disorder is common; early diagnosis is crucial for management. Anxiety can induce physiological changes in the brain and heart. We aimed to develop an efficient and accurate handcrafted feature engineering model for automated anxiety detection using ECG signals. MATERIALS AND METHODS: We studied open-access electrocardiography (ECG) data of 19 subjects collected via wearable sensors while they were shown videos that might induce anxiety. Using the Hamilton Anxiety Rating Scale, subjects were categorized into normal, light anxiety, moderate anxiety, and severe anxiety groups. ECGs were divided into non-overlapping 4- (Case 1), 5- (Case 2), and 6-second (Case 3) segments for analysis. We proposed a self-organized dynamic pattern-based feature extraction function-probabilistic binary pattern (PBP)-in which patterns within the function were determined by the probabilities of the input signal-dependent values. This was combined with tunable q-factor wavelet transform to facilitate multileveled generation of feature vectors in both spatial and frequency domains. Neighborhood component analysis and Chi2 functions were used to select features and reduce data dimensionality. Shallow k-nearest neighbors and support vector machine classifiers were used to calculate four (=2 × 2) classifier-wise results per input signal. From the latter, novel self-organized combinational majority voting was applied to calculate an additional five voted results. The optimal final model outcome was chosen from among the nine (classifier-wise and voted) results using a greedy algorithm. RESULTS: Our model achieved classification accuracies of over 98.5% for all three cases. Ablation studies confirmed the incremental accuracy of PBP-based feature engineering over traditional local binary pattern feature extraction. CONCLUSIONS: The results demonstrated the feasibility and accuracy of our PBP-based feature engineering model for anxiety classification using ECG signals.
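The ablation baseline here, the traditional local binary pattern, has a standard one-dimensional form; a sketch (the neighborhood size below is illustrative, not taken from the paper):

```python
def lbp_1d(signal, half=2):
    """Classical one-dimensional local binary pattern: each center sample is
    compared with `half` neighbors on each side; bit = 1 when the neighbor
    is >= the center, giving codes in [0, 2**(2*half))."""
    codes = []
    for i in range(half, len(signal) - half):
        neighbors = signal[i - half:i] + signal[i + 1:i + half + 1]
        code = 0
        for bit, n in enumerate(neighbors):
            if n >= signal[i]:
                code |= 1 << bit
        codes.append(code)
    return codes

codes = lbp_1d([5, 1, 3, 2, 4, 6, 0, 2, 7], half=2)  # -> [9, 14, 4, 0, 15]
```

PBP replaces these fixed comparisons with patterns chosen from the probabilities of the input values, which is what the ablation study isolates.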
Subject(s)
Electrocardiography, Wavelet Analysis, Humans, Algorithms, Anxiety/diagnosis, Anxiety Disorders, Computer-Assisted Signal Processing
ABSTRACT
Neuropsychiatric disorders are one of the leading causes of disability. Mental health problems can occur due to various biological and environmental factors. The absence of definitive confirmatory diagnostic tests for psychiatric disorders complicates the diagnosis. It is critical to distinguish between bipolar disorder, depression, and schizophrenia since their symptoms and treatments differ. Because of brain-heart autonomic connections, electrocardiography (ECG) signals can be altered in behavioral disorders. In this work, a new hand-crafted feature engineering model has been proposed to automatically classify bipolar disorder, depression, and schizophrenia from ECG signals. The main objective of this model is to accurately detect psychiatric disorders using ECG beats with linear time complexity. Therefore, we collected a new ECG signal dataset containing 3,570 ECG beats in four categories: bipolar disorder, depression, schizophrenia, and control. Furthermore, a new ternary pattern-based signal classification model has been proposed to classify these four categories. Our proposal contains four essential phases: (i) multileveled feature extraction using multilevel discrete wavelet transform and the ternary pattern, (ii) selection of the best features using the iterative Chi2 selector, (iii) classification with an artificial neural network (ANN) to calculate lead-wise results, and (iv) calculation of the voted/general classification accuracy using the iterative majority voting (IMV) algorithm. Tenfold cross-validation, one of the most widely used validation techniques in the literature, gives robust classification results. Using ANN with tenfold cross-validation, lead-by-lead and voted results have been calculated. The lead-by-lead accuracy range of the proposed model using the ANN classifier is from 73.67 to 89.19%.
By deploying the IMV method, the general classification performance of our ternary pattern-based ECG classification model is increased from 89.19 to 96.25%. The findings and the calculated classification accuracies (single lead and voted) clearly demonstrated the success of the proposed ternary pattern-based advanced signal processing model. By using this model, a new wearable device can be proposed.
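One common reading of iterative majority voting, used here and in several of the other abstracts, is sketched below (details such as the starting k = 3 are conventional assumptions, not taken from the paper):

```python
from collections import Counter

def iterative_majority_voting(preds, truth):
    """IMV sketch: sort prediction vectors by accuracy, then majority-vote
    the top-k vectors for k = 3..n and keep every voted result; the caller
    picks the most accurate voted vector as the general result."""
    acc = lambda p: sum(a == b for a, b in zip(p, truth)) / len(truth)
    ranked = sorted(preds, key=acc, reverse=True)
    voted = []
    for k in range(3, len(ranked) + 1):
        vote = [Counter(col).most_common(1)[0][0] for col in zip(*ranked[:k])]
        voted.append(vote)
    return voted

truth = [0, 1, 1, 0, 1]
preds = [[0, 1, 1, 0, 0],   # 80% accurate
         [0, 1, 1, 1, 1],   # 80%
         [1, 1, 1, 0, 1],   # 80%
         [0, 0, 0, 0, 1]]   # 60%
voted = iterative_majority_voting(preds, truth)
```

In this toy run the voted vectors recover the ground truth even though no single lead-wise prediction is perfect, mirroring the 89.19% to 96.25% improvement reported above.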
ABSTRACT
Background and Aim: In the era of deep learning, numerous models have emerged in the literature and various application domains. Transformer architectures, particularly, have gained popularity in deep learning, with diverse transformer-based computer vision algorithms. Attention convolutional neural networks (CNNs) have been introduced to enhance image classification capabilities. In this context, we propose a novel attention convolutional model with the primary objective of detecting bipolar disorder using optical coherence tomography (OCT) images. Materials and Methods: To facilitate our study, we curated a unique OCT image dataset, initially comprising two distinct cases. For the development of an automated OCT image detection system, we introduce a new attention convolutional neural network named "TurkerNeXt". This proposed Attention TurkerNeXt encompasses four key modules: (i) the patchify stem block, (ii) the Attention TurkerNeXt block, (iii) the patchify downsampling block, and (iv) the output block. In line with the Swin Transformer, we employed a patchify operation in this study. The design of the attention block, Attention TurkerNeXt, draws inspiration from ConvNeXt, with an added shortcut operation to mitigate the vanishing gradient problem. The overall architecture is influenced by ResNet18. Results: The dataset comprises two distinctive cases: (i) top to bottom and (ii) left to right. Each case contains 987 training and 328 test images. Our newly proposed Attention TurkerNeXt achieved 100% test and validation accuracies for both cases. Conclusions: We curated a novel OCT dataset and introduced a new CNN, named TurkerNeXt, in this research. Based on the research findings and classification results, our proposed TurkerNeXt model demonstrated excellent classification performance. This investigation distinctly underscores the potential of OCT images as a biomarker for bipolar disorder.
ABSTRACT
BACKGROUND: Ankylosing spondylitis (AS) is a chronic, painful, progressive disease usually seen in the spine. Traditional diagnostic methods have limitations in detecting the early stages of AS. The early diagnosis of AS can improve patients' quality of life. This study aims to diagnose AS with a pre-trained hybrid model using magnetic resonance imaging (MRI). MATERIALS AND METHODS: In this research, we collected a new MRI dataset comprising three cases. Furthermore, we introduced a novel deep feature engineering model. Within this model, we utilized three renowned pretrained convolutional neural networks (CNNs): DenseNet201, ResNet50, and ShuffleNet. Through these pretrained CNNs, deep features were generated using the transfer learning approach. For each pretrained network, two feature vectors were generated from an MRI. Three feature selectors were employed during the feature selection phase, expanding the six feature vectors into 18 (= 6 × 3) selected feature vectors. The k-nearest neighbors (kNN) classifier was utilized in the classification phase to determine classification results. In the voting phase, the iterative majority voting (IMV) algorithm was applied to secure voted results, and our model selected the output with the highest classification accuracy. In this manner, we have introduced a self-organized deep feature engineering model. RESULTS: We have applied the presented model to the collected dataset. The proposed method yielded 99.80%, 99.60%, 100%, and 99.80% results for accuracy, recall, precision, and F1-score for the collected axial images dataset. The collected coronal image dataset yielded 99.45%, 99.20%, 99.70%, and 99.45% results for accuracy, recall, precision, and F1-score, respectively. As for contrast-enhanced images, accuracy of 95.62%, recall of 80.72%, precision of 94.24%, and an F1-score of 86.96% were attained.
CONCLUSIONS: Based on the results, the proposed method for classifying AS disease has demonstrated successful outcomes using MRI. The model has been tested on three cases, and its consistently high classification performance across all cases underscores the model's general robustness. Furthermore, the ability to diagnose AS disease using only axial images, without the need for contrast-enhanced MRI, represents a significant advancement in both healthcare and economic terms.
ABSTRACT
Detecting neurological abnormalities such as brain tumors and Alzheimer's disease (AD) using magnetic resonance imaging (MRI) images is an important research topic in the literature. Numerous machine learning models have been used to detect brain abnormalities accurately. This study addresses the problem of detecting neurological abnormalities in MRI. The motivation behind this problem lies in the need for accurate and efficient methods to assist neurologists in the diagnosis of these disorders. In addition, many deep learning techniques have been applied to MRI to develop accurate brain abnormality detection models, but these networks have high time complexity. Hence, a novel hand-modeled feature-based learning network is presented to reduce the time complexity and obtain high classification performance. The model proposed in this work uses a new feature generation architecture named pyramid and fixed-size patch (PFP). The main aim of the proposed PFP structure is to attain high classification performance using essential feature extractors with both multilevel and local features. Furthermore, the PFP feature extractor generates low- and high-level features using a handcrafted extractor. To obtain the high discriminative feature extraction ability of the PFP, we have used histograms of oriented gradients (HOG); hence, it is named PFP-HOG. Furthermore, the iterative Chi2 (IChi2) is utilized to choose the clinically significant features. Finally, the k-nearest neighbors (kNN) with tenfold cross-validation is used for automated classification. Four MRI neurological databases (AD dataset, brain tumor dataset 1, brain tumor dataset 2, and merged dataset) have been utilized to develop our model. PFP-HOG and IChi2-based models attained 100%, 94.98%, 98.19%, and 97.80% using the AD dataset, brain tumor dataset 1, brain tumor dataset 2, and merged brain MRI dataset, respectively.
These findings not only provide an accurate and robust classification of various neurological disorders using MRI but also hold the potential to assist neurologists in validating manual MRI brain abnormality screening.
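The abstract does not give the PFP construction in detail; one plausible sketch of a pyramid-plus-fixed-size-patch generator (average-pooling pyramid, parameters illustrative) is:

```python
import numpy as np

def pyramid_fixed_patches(img, patch=2, levels=3):
    """PFP sketch: build an average-pooling pyramid, then cut every level
    into non-overlapping fixed-size patches, so one image yields both
    multilevel (coarse) and local (fine) regions for feature extraction."""
    patches, level = [], img.astype(float)
    for _ in range(levels):
        h, w = level.shape
        for r in range(0, h - patch + 1, patch):
            for c in range(0, w - patch + 1, patch):
                patches.append(level[r:r + patch, c:c + patch])
        # 2x2 average pooling produces the next, coarser pyramid level
        level = level[: h // 2 * 2, : w // 2 * 2]
        level = (level[0::2, 0::2] + level[0::2, 1::2]
                 + level[1::2, 0::2] + level[1::2, 1::2]) / 4.0
    return patches

img = np.arange(64.0).reshape(8, 8)
patches = pyramid_fixed_patches(img, patch=2, levels=3)
print(len(patches))  # 16 + 4 + 1 = 21
```

In the described model, HOG would then be applied to each patch and the resulting histograms concatenated before IChi2 selection.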
Subject(s)
Alzheimer Disease, Brain Neoplasms, Humans, Magnetic Resonance Imaging/methods, Neuroimaging, Brain/diagnostic imaging, Brain Neoplasms/diagnostic imaging, Brain Neoplasms/pathology, Machine Learning, Alzheimer Disease/diagnostic imaging
ABSTRACT
BACKGROUND: Epilepsy is one of the most common neurological conditions globally, and the fourth most common in the United States. It is characterized by recurrent unprovoked seizures, which have huge quality-of-life and financial impacts on affected individuals. A rapid and accurate diagnosis is essential in order to instigate and monitor optimal treatments. There is also a compelling need for the accurate interpretation of epilepsy due to the current scarcity of neurologist diagnosticians and a global inequity in access and outcomes. Furthermore, the existing clinical and traditional machine learning diagnostic methods exhibit limitations, warranting the need to create an automated system using a deep learning model for epilepsy detection and monitoring using a huge database. METHOD: The EEG signals from 35 channels were used to train the deep learning-based transformer model named EpilepsyNet. For each training iteration, 1-min-long data were randomly sampled from each participant. Thereafter, each 5-s epoch was mapped to a matrix using the Pearson correlation coefficient (PCC), such that the lower triangle of the matrix was discarded and only the upper triangle was vectorized as input data. PCC is a reliable method used to measure the statistical relationship between two variables. Based on the 5 s of data, single embedding was performed thereafter to generate a one-dimensional array of signals. In the final stage, a positional encoding with learnable parameters was added to each correlation coefficient's embedding before being fed to the developed EpilepsyNet as input data. The ten-fold cross-validation technique was used to develop and evaluate the model. RESULTS: Our transformer-based model (EpilepsyNet) yielded high classification accuracy, sensitivity, specificity and positive predictive values of 85%, 82%, 87%, and 82%, respectively.
CONCLUSION: The proposed method is both accurate and robust since ten-fold cross-validation was employed to evaluate the performance of the model. Compared to the deep models used in existing studies for epilepsy diagnosis, our proposed method is simple and less computationally intensive. To our knowledge, this is the first study to apply positional encoding with learnable parameters to each correlation coefficient's embedding together with a deep transformer model, using a large database of 121 participants for epilepsy detection. With the training and validation of the model using a larger dataset, the same study approach can be extended for the detection of other neurological conditions, with a transformative impact on neurological diagnostics worldwide.
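The PCC input mapping described in METHOD can be sketched directly (the 256 Hz sampling rate is an assumption for illustration):

```python
import numpy as np

def pcc_upper_triangle(epoch):
    """Map a (channels x samples) EEG epoch to the vectorized upper
    triangle of its Pearson correlation matrix, as in the described
    transformer input pipeline."""
    r = np.corrcoef(epoch)                 # channels x channels PCC matrix
    iu = np.triu_indices(r.shape[0], k=1)  # drop diagonal and lower triangle
    return r[iu]

rng = np.random.default_rng(0)
epoch = rng.standard_normal((35, 1280))    # 35 channels, 5 s at assumed 256 Hz
vec = pcc_upper_triangle(epoch)
print(vec.shape)  # (595,) = 35 * 34 / 2
```

Discarding the redundant lower triangle and the all-ones diagonal halves the input length without losing information, since the correlation matrix is symmetric.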
Subject(s)
Epilepsy, Quality of Life, Humans, Epilepsy/diagnosis, Factual Databases, Machine Learning, Electroencephalography
ABSTRACT
The distance education system was widely adopted during the Covid-19 pandemic by many institutions of learning. To measure the effectiveness of this system, it is essential to evaluate the performance of the lecturers. To this end, an automated speech emotion recognition model is a solution. This research aims to develop an accurate speech emotion recognition model that will check the lecturers/instructors' emotional state during lecture presentations. A new speech emotion dataset is collected, and an automated speech emotion recognition (SER) model is proposed to achieve this aim. The presented SER model contains three main phases, which are (i) feature extraction using multi-level discrete wavelet transform (DWT) and one-dimensional orbital local binary pattern (1D-OLBP), (ii) feature selection using neighborhood component analysis (NCA), (iii) classification using support vector machine (SVM) with ten-fold cross-validation. The proposed 1D-OLBP and NCA-based model is tested on the collected dataset, containing three emotional states with 7101 sound segments. The presented 1D-OLBP and NCA-based technique achieved a 93.40% classification accuracy using the proposed model on the new dataset. Moreover, the proposed architecture has been tested on the three publicly available speech emotion recognition datasets to highlight the general classification ability of this self-organized model. We reached over 70% classification accuracies for all three public datasets, and these results demonstrated the success of this model.
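The multi-level DWT front end of this pipeline can be illustrated with the Haar wavelet (the abstract does not name its mother wavelet; Haar is the simplest choice):

```python
import math

def haar_dwt(signal):
    """Single-level Haar DWT: split a signal into approximation (low-pass)
    and detail (high-pass) coefficients."""
    s = math.sqrt(2.0)
    approx = [(signal[i] + signal[i + 1]) / s for i in range(0, len(signal) - 1, 2)]
    detail = [(signal[i] - signal[i + 1]) / s for i in range(0, len(signal) - 1, 2)]
    return approx, detail

def multilevel_dwt(signal, levels):
    """Multi-level decomposition: re-decompose the approximation band,
    collecting one detail band per level; a pattern extractor such as
    1D-OLBP would then encode each band."""
    bands = []
    for _ in range(levels):
        signal, detail = haar_dwt(signal)
        bands.append(detail)
    bands.append(signal)  # final approximation band
    return bands

bands = multilevel_dwt([4.0, 2.0, 6.0, 8.0, 1.0, 3.0, 5.0, 7.0], 2)
```

The orthonormal scaling by the square root of 2 preserves signal energy across the bands, which keeps band-wise features comparable in magnitude.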
ABSTRACT
Electroencephalography (EEG) may detect early changes in Alzheimer's disease (AD), a debilitating progressive neurodegenerative disease. We have developed an automated AD detection model using a novel directed graph for local texture feature extraction with EEG signals. The proposed graph was created from a topological map of the macroscopic connectome, i.e., neuronal pathways linking anatomo-functional brain segments involved in visual object recognition and motor response in the primate brain. This primate brain pattern (PBP)-based model was tested on a public AD EEG signal dataset. The dataset comprised 16-channel EEG signal recordings of 12 AD patients and 11 healthy controls. While PBP could generate 448 low-level features per one-dimensional EEG signal, combining it with tunable q-factor wavelet transform created a multilevel feature extractor (which mimicked deep models) to generate 8,512 (= 448 × 19) features per signal input. Iterative neighborhood component analysis was used to choose the most discriminative features (the number of optimal features varied among the individual EEG channels) to feed to a weighted k-nearest neighbor (KNN) classifier for binary classification into AD vs. healthy using both leave-one subject-out (LOSO) and tenfold cross-validations. Iterative majority voting was used to compute subject-level general performance results from the individual channel classification outputs. Channel-wise, as well as subject-level general results demonstrated exemplary performance. In addition, the model attained 100% and 92.01% accuracy for AD vs. healthy classification using the KNN classifier with tenfold and LOSO cross-validations, respectively. Our developed multilevel PBP-based model extracted discriminative features from EEG signals and paved the way for further development of models inspired by the brain connectome.
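INCA, used here and throughout these papers, pairs a feature ranking with an iterative subset search. Below is a self-contained sketch, substituting a simple Fisher-score ranking for the NCA weights and a leave-one-out 1-NN evaluator for the kNN loss calculator (both are stand-ins, not the paper's exact components):

```python
import numpy as np

def fisher_scores(X, y):
    """Stand-in feature ranking (Fisher score); the paper uses NCA weights."""
    classes = np.unique(y)
    mu = X.mean(0)
    num = sum((X[y == c].mean(0) - mu) ** 2 * (y == c).sum() for c in classes)
    den = sum(((X[y == c] - X[y == c].mean(0)) ** 2).sum(0) for c in classes) + 1e-12
    return num / den

def loo_1nn_accuracy(X, y):
    """Leave-one-out 1-NN accuracy, the loss used to pick the best subset."""
    d = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d, np.inf)
    return (y[d.argmin(1)] == y).mean()

def inca(X, y, k_range):
    """Iterative selection: evaluate the top-k ranked features for each k
    and keep the subset with the best accuracy."""
    order = np.argsort(fisher_scores(X, y))[::-1]
    best_k = max(k_range, key=lambda k: loo_1nn_accuracy(X[:, order[:k]], y))
    return order[:best_k]

rng = np.random.default_rng(1)
y = np.array([0] * 20 + [1] * 20)
X = rng.standard_normal((40, 10))
X[:, 3] += y * 3.0                 # make feature 3 informative
sel = inca(X, y, range(1, 6))      # feature 3 is ranked first
```

This is why the number of optimal features can vary among channels, as noted above: each channel's loop lands on its own best k.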
ABSTRACT
Microscopic examination of urinary sediments is a common laboratory procedure. Automated image-based classification of urinary sediments can reduce analysis time and costs. Inspired by cryptographic mixing protocols and computer vision, we developed an image classification model that combines a novel Arnold Cat Map (ACM)- and fixed-size patch-based mixer algorithm with transfer learning for deep feature extraction. Our study dataset comprised 6,687 urinary sediment images belonging to seven classes: Cast, Crystal, Epithelia, Epithelial nuclei, Erythrocyte, Leukocyte, and Mycete. The developed model consists of four layers: (1) an ACM-based mixer to generate mixed images from resized 224 × 224 input images using fixed-size 16 × 16 patches; (2) DenseNet201 pre-trained on ImageNet1K to extract 1,920 features each from the raw input image and its six corresponding mixed images, which were concatenated to form a final feature vector of length 13,440 (= 1,920 × 7); (3) iterative neighborhood component analysis to select the most discriminative feature vector of optimal length 342, determined using a k-nearest neighbor (kNN)-based loss function calculator; and (4) shallow kNN-based classification with ten-fold cross-validation. Our model achieved 98.52% overall accuracy for seven-class classification, outperforming published models for urinary cell and sediment analysis. We demonstrated the feasibility and accuracy of deep feature engineering using an ACM-based mixer algorithm for image preprocessing combined with pre-trained DenseNet201 for feature extraction. The classification model was both demonstrably accurate and computationally lightweight, making it ready for implementation in real-world image-based urine sediment analysis applications.
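The ACM mixing step builds on the classic Arnold cat map, a bijective chaotic shuffle of a square image; the paper applies it with 16 × 16 patches, while the sketch below shows the bare map itself:

```python
import numpy as np

def arnold_cat_map(img, iterations=1):
    """Arnold cat map for a square N x N image: pixel (x, y) moves to
    ((x + y) mod N, (x + 2y) mod N) — a bijective, chaotic shuffle that
    preserves every pixel value while scrambling positions."""
    n = img.shape[0]
    out = img
    for _ in range(iterations):
        x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
        mixed = np.empty_like(out)
        mixed[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = mixed
    return out

img = np.arange(16).reshape(4, 4)
mixed = arnold_cat_map(img, 1)
```

Because the map's matrix has determinant 1, it is invertible and eventually periodic, so mixing never loses pixel information — only spatial arrangement.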
Subject(s)
Algorithms, Computer-Assisted Image Processing, Computer-Assisted Image Processing/methods, Microscopy
ABSTRACT
PURPOSE: The classification of medical images is an important priority for clinical research and helps to improve the diagnosis of various disorders. This work aims to classify the neuroradiological features of patients with Alzheimer's disease (AD) using an automatic hand-modeled method with high accuracy. MATERIALS AND METHODS: This work uses two (private and public) datasets. The private dataset consists of 3807 magnetic resonance imaging (MRI) and computed tomography (CT) images belonging to two (normal and AD) classes. The second, public (Kaggle AD) dataset contains 6400 MR images. The presented classification model comprises three fundamental phases: feature extraction using an exemplar hybrid feature extractor, neighborhood component analysis-based feature selection, and classification utilizing eight different classifiers. The novelty of this model lies in the feature extraction phase, which is inspired by vision transformers; hence, 16 exemplars are generated. Histogram of oriented gradients (HOG), local binary pattern (LBP) and local phase quantization (LPQ) feature extraction functions have been applied to each exemplar/patch and the raw brain image. Finally, the created features are merged, and the best features are selected using neighborhood component analysis (NCA). These features are fed to eight classifiers to obtain the highest classification performance using our proposed method. The presented image classification model uses exemplar histogram-based features; hence, it is called ExHiF. RESULTS: We have developed the ExHiF model with a ten-fold cross-validation strategy using two (private and public) datasets with shallow classifiers. We have obtained 100% classification accuracy using cubic support vector machine (CSVM) and fine k-nearest neighbor (FkNN) classifiers for both datasets.
CONCLUSIONS: Our developed model is ready to be validated with more datasets and has the potential to be employed in mental hospitals to assist neurologists in confirming their manual screening of AD using MRI/CT images.
Subject(s)
Alzheimer Disease, Humans, Alzheimer Disease/diagnostic imaging, Alzheimer Disease/pathology, Magnetic Resonance Imaging/methods, Computer-Assisted Image Interpretation/methods, Brain/diagnostic imaging, X-Ray Computed Tomography
ABSTRACT
Modern computer vision algorithms are based on convolutional neural networks (CNNs), and both end-to-end learning and transfer learning modes have been used with CNNs for image classification. Thus, automated brain tumor classification models have been proposed by deploying CNNs to help medical professionals. Our primary objective is to increase the classification performance using CNNs. Therefore, a patch-based deep feature engineering model has been proposed in this work. Nowadays, patch division techniques are used to attain high classification performance, and variable-sized patches have achieved good results. In this work, we have used three types of patches of different sizes (32 × 32, 56 × 56, 112 × 112). Six feature vectors have been obtained using these patches and two layers of the pretrained ResNet50 (global average pooling and fully connected layers). In the feature selection phase, three selectors-neighborhood component analysis (NCA), Chi2, and ReliefF-have been used, and 18 final feature vectors have been obtained. By deploying k-nearest neighbors (kNN), 18 results have been calculated. Iterative hard majority voting (IHMV) has been applied to compute the general classification accuracy of this framework. This model uses different patches, feature extractors (two layers of the ResNet50 have been utilized as feature extractors), and selectors, making this a framework that we have named PatchResNet. A public brain image dataset containing four classes (glioblastoma multiforme (GBM), meningioma, pituitary tumor, healthy) has been used to develop the proposed PatchResNet model. Our proposed PatchResNet attained 98.10% classification accuracy using the public brain tumor image dataset. The developed PatchResNet model obtained high classification accuracy and has the advantage of being a self-organized framework. Therefore, the proposed method can choose the best-performing validated prediction vectors and achieve high image classification performance.
Subject(s)
Brain Neoplasms, Neural Networks (Computer), Humans, Algorithms, Brain Neoplasms/diagnostic imaging, Magnetic Resonance Imaging, Brain
ABSTRACT
Incidental adrenal masses are seen in 5% of abdominal computed tomography (CT) examinations. Accurate discrimination of the possible differential diagnoses has important therapeutic and prognostic significance. A new handcrafted machine learning method has been developed for the automated and accurate classification of adrenal gland CT images. A new dataset comprising 759 adrenal gland CT image slices from 96 subjects was analyzed. Experts labeled the collected images into four classes: normal, pheochromocytoma, lipid-poor adenoma, and metastasis. The images were preprocessed and resized, and the image features were extracted using the center-symmetric local binary pattern (CS-LBP) method. CT images were next divided into 16 × 16 fixed-size patches, and further feature extraction using CS-LBP was performed on these patches. Next, extracted features were selected using neighborhood component analysis (NCA) to obtain the most meaningful ones for downstream classification. Finally, the selected features were classified using k-nearest neighbor (kNN), support vector machine (SVM), and neural network (NN) classifiers to obtain the optimum performing model. Our proposed method obtained an accuracy of 99.87%, 99.21%, and 98.81% with the kNN, SVM, and NN classifiers, respectively. Hence, the kNN classifier yielded the highest classification results, with no pathological image misclassified as normal. Our developed fixed-patch CS-LBP-based automatic classification of adrenal gland pathologies on CT images is highly accurate and has low time complexity. It has the potential to be used for screening of adrenal gland disease classes with CT images.
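CS-LBP is a standard descriptor: each interior pixel is coded by comparing its four center-symmetric neighbor pairs, giving 16 codes instead of plain LBP's 256. A minimal implementation:

```python
import numpy as np

def cs_lbp(img, threshold=0.0):
    """Center-symmetric LBP: for each interior pixel, compare the four
    center-symmetric neighbor pairs; bit = 1 when the difference exceeds
    the threshold, giving 2**4 = 16 possible codes."""
    p = img.astype(float)
    pairs = [
        (p[:-2, :-2], p[2:, 2:]),     # NW vs SE
        (p[:-2, 1:-1], p[2:, 1:-1]),  # N  vs S
        (p[:-2, 2:], p[2:, :-2]),     # NE vs SW
        (p[1:-1, 2:], p[1:-1, :-2]),  # E  vs W
    ]
    code = np.zeros((p.shape[0] - 2, p.shape[1] - 2), dtype=int)
    for bit, (a, b) in enumerate(pairs):
        code |= ((a - b) > threshold).astype(int) << bit
    return code

tile = np.array([[1, 2, 3],
                 [4, 5, 6],
                 [7, 8, 9]])
print(cs_lbp(tile))  # [[8]]: only the E-vs-W pair (6 > 4) sets a bit
```

The short 16-bin code keeps the downstream histogram compact, which is part of why the overall model has low time complexity.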
Subject(s)
Adenoma , Adrenal Gland Diseases , Humans , Tomography, X-Ray Computed/methods , Neural Networks, Computer , Machine Learning
ABSTRACT
Objective. Schizophrenia (SZ) is a severe, chronic psychiatric-cognitive disorder. The primary objective of this work is to present a handcrafted model using a state-of-the-art technique to detect SZ accurately with EEG signals. Approach. In our proposed work, the features are generated using a histogram-based generator and an iterative decomposition model. The graph-based molecular structure of the carbon chain is employed to generate low-level features. Hence, the developed feature generation model is called the carbon chain pattern (CCP). An iterative tunable q-factor wavelet transform (ITQWT) technique is implemented in the feature extraction phase to generate various sub-bands of the EEG signal. The CCP was applied to the generated sub-bands to obtain several feature vectors. The clinically significant features were selected using iterative neighborhood component analysis (INCA). The selected features were then classified using the k nearest neighbor (kNN) classifier with a 10-fold cross-validation strategy. Finally, the iterative weighted majority method was used to obtain the results in multiple channels. Main results. The presented CCP-ITQWT and INCA-based automated model achieved an accuracy of 95.84% and 99.20% using a single channel and the majority voting method, respectively, with the kNN classifier. Significance. Our results highlight the success of the proposed CCP-ITQWT and INCA-based model in the automated detection of SZ using EEG signals.
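Iterative neighborhood component analysis (INCA) appears as the feature selector throughout these pipelines. The sketch below shows the iterative part (rank features, then evaluate top-k subsets with a 1-NN loss and keep the best subset); as a stand-in for learned NCA weights it uses a simple Fisher-style relevance score, which is our substitution, not the NCA algorithm itself.

```python
import numpy as np

def one_nn_accuracy(X, y):
    """Leave-one-out 1-NN accuracy, used here as the selection loss."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    np.fill_diagonal(D, np.inf)                  # exclude each sample itself
    return np.mean(y[D.argmin(axis=1)] == y)

def inca(X, y, k_min=2):
    """Sketch of INCA: rank features by a relevance score (a Fisher-style
    score stands in for NCA weights here), then iteratively evaluate the
    top-k feature subsets and keep the one with the best 1-NN accuracy."""
    m0, m1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
    score = np.abs(m0 - m1) / (X.std(axis=0) + 1e-9)
    order = np.argsort(score)[::-1]              # most relevant features first
    best_acc, best_idx = -1.0, None
    for k in range(k_min, X.shape[1] + 1):       # try top-2, top-3, ... subsets
        idx = order[:k]
        acc = one_nn_accuracy(X[:, idx], y)
        if acc > best_acc:
            best_acc, best_idx = acc, idx
    return best_idx, best_acc
```

The selected indices would then restrict the full feature matrix before the kNN classification stage.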
Subject(s)
Cognitive Dysfunction , Schizophrenia , Humans , Electroencephalography/methods , Schizophrenia/diagnosis , Wavelet Analysis , Carbon , Algorithms
ABSTRACT
Background: Chest computed tomography (CT) has a high sensitivity for detecting COVID-19 lung involvement and is widely used for diagnosis and disease monitoring. We proposed a new image classification model, swin-textural, that combined swin-based patch division with textural feature extraction for automated diagnosis of COVID-19 on chest CT images. The main objective of this work is to evaluate the performance of the swin architecture in feature engineering. Material and method: We used a public dataset comprising 2167, 1247, and 757 (total 4171) transverse chest CT images belonging to 80, 80, and 50 (total 210) subjects with COVID-19, other non-COVID lung conditions, and normal lung findings. In our model, resized 420 × 420 input images were divided using uniform square patches of incremental dimensions, which yielded ten feature extraction layers. At each layer, local binary pattern and local phase quantization operations extracted textural features from individual patches as well as the undivided input image. Iterative neighborhood component analysis was used to select the most informative set of features to form ten selected feature vectors and also used to select the 11th vector from among the top selected feature vectors with accuracy >97.5%. The downstream kNN classifier calculated 11 prediction vectors. From these, iterative hard majority voting generated another nine voted prediction vectors. Finally, the best result among the twenty was determined using a greedy algorithm. Results: Swin-textural attained 98.71% three-class classification accuracy, outperforming published deep learning models trained on the same dataset. The model has linear time complexity. Conclusions: Our handcrafted computationally lightweight swin-textural model can detect COVID-19 accurately on chest CT images with low misclassification rates. The model can be implemented in hospitals for efficient automated screening of COVID-19 on chest CT images.
Moreover, findings demonstrate that our presented swin-textural is a self-organized, highly accurate, and lightweight image classification model and is better than the compared deep learning models for this dataset.
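The layered patch-division scheme above can be sketched as follows: at each layer, a textural extractor is applied to every non-overlapping patch and to the undivided image, and the histograms are concatenated. This is a minimal sketch; the example patch sizes and the toy histogram extractor are our assumptions (the paper uses ten layers with LBP and LPQ on 420 × 420 inputs).

```python
import numpy as np

def divide_patches(image, patch_size):
    """Split an image into non-overlapping patch_size x patch_size tiles."""
    h, w = image.shape
    return [image[r:r + patch_size, c:c + patch_size]
            for r in range(0, h - patch_size + 1, patch_size)
            for c in range(0, w - patch_size + 1, patch_size)]

def layered_features(image, patch_sizes, extractor):
    """Sketch of the multi-layer scheme: at each layer, the extractor is
    applied to the undivided image and to every patch of that layer's
    size, and the resulting feature vectors are concatenated."""
    layers = []
    for p in patch_sizes:
        parts = [extractor(image)] + [extractor(t) for t in divide_patches(image, p)]
        layers.append(np.concatenate(parts))
    return layers
```

In the paper's pipeline, each layer's concatenated vector would then pass through INCA selection before the kNN classifier.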
ABSTRACT
Specific language impairment (SLI) is one of the most common disorders in children, and early diagnosis enables timely, cost-effective therapy. It is difficult and time-consuming for clinicians to accurately detect SLI through standard clinical assessments. Hence, machine learning algorithms have been developed to assist in the accurate diagnosis of SLI. This work aims to investigate a feature extraction function based on the molecular graph of favipiravir and propose an accurate SLI detection model using vowels. We proposed a novel handcrafted machine learning framework. This architecture comprises the favipiravir molecular structure pattern, a statistical feature extractor, wavelet packet decomposition (WPD), iterative neighborhood component analysis (INCA), and a support vector machine (SVM) classifier. Two feature extraction models, statistical and textural, are employed in the handcrafted feature generation methodology. A new nature-inspired graph-based feature extractor that uses the chemical depiction of favipiravir (which became widely known during the COVID-19 pandemic) is employed for feature extraction. The proposed favipiravir pattern, the statistical feature extractor, and WPD are used together to create a feature vector. The WPD generates multilevel features, and the most meaningful features are selected using the INCA feature selector. Finally, these chosen features are fed to the SVM classifier for automated classification. Two validation methods, (i) leave-one-subject-out (LOSO) and (ii) tenfold cross-validation (CV), are used to obtain robust classification results. Our proposed favipiravir pattern-based model developed using a vowel dataset can detect SLI in children with an accuracy of 99.87% and 98.86% using tenfold and LOSO CV strategies, respectively.
These results demonstrated the high vowel classification ability of the proposed favipiravir pattern-based model.
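The multilevel feature generation via wavelet packet decomposition can be sketched with a Haar filter: every node of the tree is split into approximation and detail children, giving 2^levels leaf sub-bands on which the pattern extractor would run. The Haar mother wavelet is our assumption; the abstract does not state which wavelet was used.

```python
import numpy as np

def haar_step(x):
    """One orthonormal Haar analysis step: approximation and detail halves."""
    x = x[:len(x) // 2 * 2]                 # drop a trailing odd sample
    a = (x[0::2] + x[1::2]) / np.sqrt(2)    # low-pass (approximation)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)    # high-pass (detail)
    return a, d

def haar_wpd(x, levels):
    """Sketch of wavelet packet decomposition: unlike the plain wavelet
    transform, *every* node (not just the approximation) is split again,
    yielding 2**levels leaf bands of equal length."""
    nodes = [np.asarray(x, dtype=float)]
    for _ in range(levels):
        nodes = [half for node in nodes for half in haar_step(node)]
    return nodes
```

Because each Haar step is orthonormal, the total energy of the leaf bands equals the energy of the input signal, which makes the decomposition easy to sanity-check.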
ABSTRACT
Myocardial infarction (MI) is detected using electrocardiography (ECG) signals. Machine learning (ML) models have been used for automated MI detection on ECG signals. Deep learning models generally yield high classification performance but are computationally intensive. We have developed a novel multilevel hybrid feature extraction-based classification model with low time complexity for MI classification. The study dataset, comprising 12-lead ECGs belonging to one healthy and 10 MI classes, was downloaded from a public ECG signal databank. The model architecture comprised multilevel hybrid feature extraction, iterative feature selection, classification, and iterative majority voting (IMV). In the hybrid handcrafted feature (HHF) generation phase, both textural and statistical feature extraction functions were used to extract low-level features from ECG beats. A new pooling-based multilevel decomposition model was presented to enable high-level feature generation. This model used average and maximum pooling to create decomposed signals. Using these pooling functions, an unbalanced tree was obtained. Therefore, this model was named the multilevel unbalanced pooling tree transformation (MUPTT). On the feature extraction side, two extractors (functions) were used to generate both statistical and textural features. To generate statistical features, 20 commonly used moments were used. A new, improved symmetric binary pattern function was proposed to generate textural features. Both feature extractors were applied to the original MI signal and the decomposed signals generated by the MUPTT. The most valuable features from among the extracted feature vectors were selected using iterative neighborhood component analysis (INCA). In the classification phase, a one-dimensional nearest neighbor classifier with ten-fold cross-validation was used to obtain lead-wise results.
The computed lead-wise results derived from all 12 leads of the same beat were input to the IMV algorithm to generate ten voted results. The most representative result was chosen using a greedy technique to calculate the overall classification performance of the model. The HHF-MUPTT-based ECG beat classification model attained excellent performance, with the best lead-wise accuracy of 99.85% observed in Lead III and 99.94% classification accuracy using the IMV algorithm. The results confirmed the high MI classification ability of the presented computationally lightweight HHF-MUPTT-based model.
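The pooling-based decomposition behind MUPTT can be sketched as follows: each level applies both average and maximum pooling to the current signal, and only one branch is decomposed further, which is what makes the tree unbalanced. The exact branching rule is not given in the abstract; continuing down the average branch is our assumption.

```python
import numpy as np

def avg_pool(x, w=2):
    """Non-overlapping average pooling with window w."""
    x = np.asarray(x, dtype=float)[:len(x) // w * w].reshape(-1, w)
    return x.mean(axis=1)

def max_pool(x, w=2):
    """Non-overlapping maximum pooling with window w."""
    x = np.asarray(x, dtype=float)[:len(x) // w * w].reshape(-1, w)
    return x.max(axis=1)

def muptt(signal, levels):
    """Sketch of the multilevel unbalanced pooling tree transformation:
    at each level, the current signal is pooled with both average and
    maximum pooling; only the average branch is decomposed further
    (an assumption), so the tree grows down one side only."""
    decomposed = []
    current = np.asarray(signal, dtype=float)
    for _ in range(levels):
        a, m = avg_pool(current), max_pool(current)
        decomposed.extend([a, m])
        current = a
    return decomposed
```

In the full pipeline, the statistical and textural extractors would run on the original beat plus every decomposed signal returned here.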