Results 1 - 20 of 60

1.
Sensors (Basel) ; 23(16)2023 Aug 08.
Article in English | MEDLINE | ID: mdl-37631569

ABSTRACT

Attention deficit hyperactivity disorder (ADHD) is a neurodevelopmental disorder characterized by a persistent pattern of hyperactivity, impulsivity, and inattention, and is frequently accompanied by anxiety, learning disabilities, and depression. For the early diagnosis of ADHD, electroencephalogram (EEG) signals are widely used. However, direct analysis of an EEG is highly challenging, as the signal is nonlinear and nonstationary in nature and manual review is time-consuming. Thus, in this paper, a novel approach (LSGP-USFNet) is developed based on the patterns obtained from Ulam's spiral and Sophie Germain's prime numbers. The EEG signals are initially filtered to remove the noise and segmented with a non-overlapping sliding window of 512 samples. Then, a time-frequency analysis approach, namely the continuous wavelet transform, is applied to each channel of the segmented EEG signal to interpret it in the time and frequency domains. The obtained time-frequency representation is saved as a time-frequency image, and a non-overlapping n × n sliding window is applied to this image for patch extraction. An n × n Ulam's spiral is localized on each patch, and the gray levels are acquired from the patch positions where Sophie Germain's primes are located in Ulam's spiral. All gray tones from all patches are concatenated to construct the features for the ADHD and normal classes. A gray tone selection algorithm, namely ReliefF, is employed on the representative features to acquire the final, most important gray tones. The support vector machine classifier is used with 10-fold cross-validation. Our proposed approach, LSGP-USFNet, was developed using a publicly available dataset and obtained an accuracy of 97.46% in detecting ADHD automatically. Our generated model is ready to be validated using a larger database, and it can also be applied to detect other childhood neurological disorders.
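The Ulam-spiral feature step above can be sketched roughly in NumPy (a generic illustration, not the authors' code; the function names and the odd patch size are assumptions):

```python
import numpy as np

def is_prime(n):
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    i = 3
    while i * i <= n:
        if n % i == 0:
            return False
        i += 2
    return True

def ulam_spiral(n):
    """n x n grid filled with 1..n^2 along an outward square spiral (odd n)."""
    grid = np.zeros((n, n), dtype=int)
    x = y = n // 2
    grid[y, x] = 1
    v, step, d = 2, 1, 0
    dirs = [(1, 0), (0, -1), (-1, 0), (0, 1)]  # right, up, left, down
    while v <= n * n:
        for _ in range(2):  # each step length is walked twice
            dx, dy = dirs[d % 4]
            for _ in range(step):
                if v > n * n:
                    break
                x, y = x + dx, y + dy
                grid[y, x] = v
                v += 1
            d += 1
        step += 1
    return grid

def sg_features(patch):
    """Gray levels of `patch` at Sophie Germain prime positions of the spiral
    (p is a Sophie Germain prime when both p and 2p + 1 are prime)."""
    n = patch.shape[0]
    spiral = ulam_spiral(n)
    mask = np.vectorize(lambda v: is_prime(v) and is_prime(2 * v + 1))(spiral)
    return patch[mask.astype(bool)]
```

The concatenation of such per-patch gray tones, followed by ReliefF and an SVM, would mirror the pipeline the abstract describes.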


Subjects
Attention Deficit Disorder with Hyperactivity, Child, Humans, Attention Deficit Disorder with Hyperactivity/diagnosis, Electroencephalography, Algorithms, Anxiety, Anxiety Disorders, Niacinamide
2.
Sensors (Basel) ; 23(14)2023 Jul 21.
Article in English | MEDLINE | ID: mdl-37514877

ABSTRACT

Screening programs for early lung cancer diagnosis are uncommon, primarily due to the challenge of reaching at-risk patients located in rural areas far from medical facilities. To overcome this obstacle, a comprehensive approach is needed that combines mobility, low cost, speed, accuracy, and privacy. One potential solution lies in combining the chest X-ray imaging mode with federated deep learning, ensuring that no single data source can bias the model adversely. This study presents a pre-processing pipeline designed to debias chest X-ray images, thereby enhancing internal classification and external generalization. The pipeline employs a pruning mechanism to train a deep learning model for nodule detection, utilizing the most informative images from a publicly available lung nodule X-ray dataset. Histogram equalization is used to remove systematic differences in image brightness and contrast. Model training is then performed using combinations of lung field segmentation, close cropping, and rib/bone suppression. The resulting deep learning models, generated through this pre-processing pipeline, demonstrate successful generalization on an independent lung nodule dataset. By eliminating confounding variables in chest X-ray images and suppressing signal noise from the bone structures, the proposed deep learning lung nodule detection algorithm achieves an external generalization accuracy of 89%. This approach paves the way for the development of a low-cost and accessible deep learning-based clinical system for lung cancer screening.
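The histogram-equalization step above can be sketched for 8-bit images (a generic NumPy implementation, not the pipeline's exact code):

```python
import numpy as np

def equalize_histogram(img):
    """Spread an 8-bit image's gray levels so the cumulative histogram is
    approximately uniform, removing systematic brightness/contrast offsets."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]          # first non-zero CDF value
    scale = cdf[-1] - cdf_min
    if scale == 0:                     # constant image: nothing to equalize
        return img.copy()
    lut = np.clip(np.round((cdf - cdf_min) / scale * 255.0), 0, 255)
    return lut.astype(np.uint8)[img]
```

After equalization, the lowest occurring gray level maps to 0 and the highest to 255, which is what removes per-scanner brightness bias between X-ray sources.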


Subjects
Deep Learning, Lung Neoplasms, Humans, Neural Networks, Computer, X-Rays, Early Detection of Cancer, Lung Neoplasms/diagnostic imaging, Lung
3.
J Digit Imaging ; 36(3): 879-892, 2023 06.
Article in English | MEDLINE | ID: mdl-36658376

ABSTRACT

Incidental adrenal masses are seen in 5% of abdominal computed tomography (CT) examinations. Accurate discrimination of the possible differential diagnoses has important therapeutic and prognostic significance. A new handcrafted machine learning method has been developed for the automated and accurate classification of adrenal gland CT images. A new dataset comprising 759 adrenal gland CT image slices from 96 subjects was analyzed. Experts labeled the collected images into four classes: normal, pheochromocytoma, lipid-poor adenoma, and metastasis. The images were preprocessed and resized, and image features were extracted using the center-symmetric local binary pattern (CS-LBP) method. CT images were next divided into 16 × 16 fixed-size patches, and further feature extraction using CS-LBP was performed on these patches. Next, the extracted features were selected using neighborhood component analysis (NCA) to obtain the most meaningful ones for downstream classification. Finally, the selected features were classified using k-nearest neighbor (kNN), support vector machine (SVM), and neural network (NN) classifiers to obtain the optimum-performing model. Our proposed method obtained accuracies of 99.87%, 99.21%, and 98.81% with the kNN, SVM, and NN classifiers, respectively. Hence, the kNN classifier yielded the highest classification results, with no pathological image misclassified as normal. Our developed fixed-patch, CS-LBP-based automatic classification of adrenal gland pathologies on CT images is highly accurate and has low time complexity [Formula: see text]. It has the potential to be used for screening adrenal gland diseases on CT images.
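The CS-LBP step above can be sketched as follows (a generic implementation of center-symmetric LBP with four opposite neighbour pairs; the histogram feature and patch size are illustrative assumptions):

```python
import numpy as np

def cs_lbp(patch, threshold=0.0):
    """Center-symmetric LBP: each interior pixel compares its 4 diametrically
    opposite neighbour pairs, giving a 4-bit code (16 possible values)."""
    p = patch.astype(np.float64)
    pairs = [
        (p[:-2, 1:-1], p[2:, 1:-1]),   # N  vs S
        (p[:-2, 2:],   p[2:, :-2]),    # NE vs SW
        (p[1:-1, 2:],  p[1:-1, :-2]),  # E  vs W
        (p[2:, 2:],    p[:-2, :-2]),   # SE vs NW
    ]
    codes = np.zeros((p.shape[0] - 2, p.shape[1] - 2), dtype=np.uint8)
    for bit, (a, b) in enumerate(pairs):
        codes |= ((a - b) > threshold).astype(np.uint8) << bit
    return codes

def cs_lbp_histogram(patch):
    """16-bin normalized histogram of CS-LBP codes, used as the patch feature."""
    h = np.bincount(cs_lbp(patch).ravel(), minlength=16).astype(float)
    return h / h.sum()
```

Applying `cs_lbp_histogram` to each 16 × 16 patch and concatenating the histograms would approximate the patch-wise feature extraction described in the abstract.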


Subjects
Adenoma, Adrenal Gland Diseases, Humans, Tomography, X-Ray Computed/methods, Neural Networks, Computer, Machine Learning
4.
J Digit Imaging ; 36(3): 973-987, 2023 06.
Article in English | MEDLINE | ID: mdl-36797543

ABSTRACT

Modern computer vision algorithms are based on convolutional neural networks (CNNs), and both end-to-end learning and transfer learning modes have been used with CNNs for image classification. Thus, automated brain tumor classification models have been proposed by deploying CNNs to help medical professionals. Our primary objective is to increase the classification performance using CNNs. Therefore, a patch-based deep feature engineering model has been proposed in this work. Patch division techniques have been used to attain high classification performance, and variable-sized patches have achieved good results. In this work, we have used three patch sizes (32 × 32, 56 × 56, 112 × 112). Six feature vectors have been obtained using these patches and two layers of the pretrained ResNet50 (the global average pooling and fully connected layers). In the feature selection phase, three selectors, neighborhood component analysis (NCA), Chi2, and ReliefF, have been used, and 18 final feature vectors have been obtained. By deploying k-nearest neighbors (kNN), 18 results have been calculated. Iterative hard majority voting (IHMV) has been applied to compute the general classification accuracy of this framework. This model uses different patches, feature extractors (two layers of the ResNet50 are utilized as feature extractors), and selectors, making it a framework that we have named PatchResNet. A public brain image dataset containing four classes (glioblastoma multiforme (GBM), meningioma, pituitary tumor, healthy) has been used to develop the proposed PatchResNet model. Our proposed PatchResNet attained 98.10% classification accuracy on this dataset. The developed PatchResNet model obtained high classification accuracy and has the advantage of being a self-organized framework: it automatically chooses the best-performing prediction vectors to achieve high image classification performance.
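Iterative hard majority voting as described above can be sketched like this (a plausible reading of IHMV, not the authors' code; the k = 3 starting point and integer class labels are assumptions):

```python
import numpy as np

def iterative_hard_majority_voting(pred_vectors, labels):
    """Sort prediction vectors by accuracy, then majority-vote over the top-k
    vectors for k = 3..len(pred_vectors); return the best voted predictions.
    Assumes at least three prediction vectors with integer class labels."""
    labels = np.asarray(labels)
    accs = [np.mean(p == labels) for p in pred_vectors]
    order = np.argsort(accs)[::-1]            # best-performing vectors first
    best_acc, best_pred = -1.0, None
    for k in range(3, len(pred_vectors) + 1):
        stacked = np.stack([pred_vectors[i] for i in order[:k]])
        voted = np.apply_along_axis(
            lambda col: np.bincount(col).argmax(), 0, stacked)
        acc = np.mean(voted == labels)
        if acc > best_acc:
            best_acc, best_pred = acc, voted
    return best_pred, best_acc
```

With the framework's 18 kNN prediction vectors as input, the loop would return the voted result with the highest overall accuracy.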


Subjects
Brain Neoplasms, Neural Networks, Computer, Humans, Algorithms, Brain Neoplasms/diagnostic imaging, Magnetic Resonance Imaging, Brain
5.
J Digit Imaging ; 36(6): 2441-2460, 2023 12.
Article in English | MEDLINE | ID: mdl-37537514

ABSTRACT

Detecting neurological abnormalities such as brain tumors and Alzheimer's disease (AD) using magnetic resonance imaging (MRI) images is an important research topic in the literature. Numerous machine learning models have been used to detect brain abnormalities accurately. This study addresses the problem of detecting neurological abnormalities in MRI. The motivation behind this problem lies in the need for accurate and efficient methods to assist neurologists in the diagnosis of these disorders. In addition, many deep learning techniques have been applied to MRI to develop accurate brain abnormality detection models, but these networks have high time complexity. Hence, a novel hand-modeled feature-based learning network is presented to reduce the time complexity and obtain high classification performance. The model proposed in this work uses a new feature generation architecture named pyramid and fixed-size patch (PFP). The main aim of the proposed PFP structure is to attain high classification performance using essential feature extractors with both multilevel and local features. Furthermore, the PFP feature extractor generates low- and high-level features using a handcrafted extractor. To obtain the high discriminative feature extraction ability of the PFP, we have used histograms of oriented gradients (HOG); hence, it is named PFP-HOG. Furthermore, iterative Chi2 (IChi2) is utilized to choose the clinically significant features. Finally, k-nearest neighbors (kNN) with tenfold cross-validation is used for automated classification. Four MRI neurological databases (AD dataset, brain tumor dataset 1, brain tumor dataset 2, and merged dataset) have been utilized to develop our model. The PFP-HOG and IChi2-based models attained 100%, 94.98%, 98.19%, and 97.80% accuracy using the AD dataset, brain tumor dataset 1, brain tumor dataset 2, and merged brain MRI dataset, respectively.
These findings not only provide an accurate and robust classification of various neurological disorders using MRI but also hold the potential to assist neurologists in validating manual MRI brain abnormality screening.
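The iterative Chi2 selection described above can be sketched with scikit-learn (the digits dataset stands in for the PFP-HOG features; the k values and the 1-NN cross-validation loss are assumptions):

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.feature_selection import chi2
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def iterative_chi2(X, y, k_values, cv=10):
    """Rank features by chi2 score (X must be non-negative), then sweep k and
    keep the k top-ranked features that maximise kNN cross-validated accuracy."""
    scores, _ = chi2(X, y)
    scores = np.nan_to_num(scores)        # constant features can score NaN
    order = np.argsort(scores)[::-1]
    best_idx, best_acc = None, -1.0
    for k in k_values:
        idx = order[:k]
        acc = cross_val_score(KNeighborsClassifier(n_neighbors=1),
                              X[:, idx], y, cv=cv).mean()
        if acc > best_acc:
            best_idx, best_acc = idx, acc
    return best_idx, best_acc

X, y = load_digits(return_X_y=True)       # stand-in for the PFP-HOG features
idx, acc = iterative_chi2(X, y, k_values=[16, 32, 48, 64])
```

The same sweep-and-keep-the-best loop is the "iterative" part of IChi2: feature ranking is computed once, and only the cut-off point is searched.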


Subjects
Alzheimer Disease, Brain Neoplasms, Humans, Magnetic Resonance Imaging/methods, Neuroimaging, Brain/diagnostic imaging, Brain Neoplasms/diagnostic imaging, Brain Neoplasms/pathology, Machine Learning, Alzheimer Disease/diagnostic imaging
6.
J Digit Imaging ; 36(4): 1675-1686, 2023 08.
Article in English | MEDLINE | ID: mdl-37131063

ABSTRACT

Microscopic examination of urinary sediments is a common laboratory procedure. Automated image-based classification of urinary sediments can reduce analysis time and costs. Inspired by cryptographic mixing protocols and computer vision, we developed an image classification model that combines a novel Arnold Cat Map (ACM)- and fixed-size patch-based mixer algorithm with transfer learning for deep feature extraction. Our study dataset comprised 6,687 urinary sediment images belonging to seven classes: Cast, Crystal, Epithelia, Epithelial nuclei, Erythrocyte, Leukocyte, and Mycete. The developed model consists of four layers: (1) an ACM-based mixer to generate mixed images from resized 224 × 224 input images using fixed-size 16 × 16 patches; (2) DenseNet201 pre-trained on ImageNet1K to extract 1,920 features each from the raw input image and its six corresponding mixed images, which were concatenated to form a final feature vector of length 13,440; (3) iterative neighborhood component analysis to select the most discriminative feature vector of optimal length 342, determined using a k-nearest neighbor (kNN)-based loss function calculator; and (4) shallow kNN-based classification with ten-fold cross-validation. Our model achieved 98.52% overall accuracy for seven-class classification, outperforming published models for urinary cell and sediment analysis. We demonstrated the feasibility and accuracy of deep feature engineering using an ACM-based mixer algorithm for image preprocessing combined with pre-trained DenseNet201 for feature extraction. The classification model was both demonstrably accurate and computationally lightweight, making it ready for implementation in real-world image-based urine sediment analysis applications.
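The Arnold Cat Map mixing step above can be illustrated as follows (a generic sketch of the classic map on a square image, not the authors' exact patch-based mixer):

```python
import numpy as np

def arnold_cat_map(img, iterations=1):
    """Apply the Arnold cat map to a square N x N image: pixel (x, y) moves to
    ((x + y) mod N, (x + 2y) mod N). The map is a bijection, so it shuffles
    (mixes) pixels without losing any information."""
    n, m = img.shape[:2]
    assert n == m, "the cat map is defined on square images"
    ys, xs = np.indices((n, n))
    out = img.copy()
    for _ in range(iterations):
        nx = (xs + ys) % n
        ny = (xs + 2 * ys) % n
        mixed = np.empty_like(out)
        mixed[ny, nx] = out[ys, xs]   # bijective pixel permutation
        out = mixed
    return out
```

Because the map is a permutation, repeated applications produce progressively scrambled, information-preserving "mixed" images of the kind fed to DenseNet201 in the abstract.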


Subjects
Algorithms, Image Processing, Computer-Assisted, Image Processing, Computer-Assisted/methods, Microscopy
7.
Sensors (Basel) ; 22(5)2022 Mar 04.
Article in English | MEDLINE | ID: mdl-35271154

ABSTRACT

Recently, deep models have been very popular because they achieve excellent performance on many classification problems. However, deep networks have high computational complexity and require specific hardware. To overcome this problem (without decreasing classification ability), a hand-modeled feature selection method is proposed in this paper. A new shape-based local feature extractor is presented which uses the geometric shape of the frustum. By using a frustum pattern, textural features are generated. Moreover, statistical features are extracted in this model. Textural and statistical features are fused to form a hybrid feature extraction phase; these features are low-level. To generate high-level features, the tunable Q-factor wavelet transform (TQWT) is used. The presented hybrid feature generator creates 154 feature vectors; hence, it is named Frustum154. In the multilevel feature creation phase, this model can select the appropriate feature vectors automatically and create the final feature vector by merging them. Iterative neighborhood component analysis (INCA) chooses the best feature vector, and shallow classifiers are then used. Frustum154 has been tested on three basic hand-movement sEMG datasets. Hand-movement sEMG datasets are commonly used in biomedical engineering, but there is a recurring problem in this area: previously presented models were generally validated on only one dataset. In this work, three sEMG datasets have been used to test the performance of Frustum154. The presented model is self-organized, selecting the most informative subbands and features automatically. It achieved 98.89%, 94.94%, and 95.30% classification accuracies using shallow classifiers, indicating that Frustum154 can improve classification accuracy.
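The statistical feature extraction mentioned above can be sketched with a handful of standard time-domain sEMG descriptors (illustrative only; the abstract does not list the exact statistics used):

```python
import numpy as np

def time_domain_features(segment):
    """A few statistical descriptors commonly extracted from sEMG segments."""
    x = np.asarray(segment, dtype=float)
    d = np.diff(x)
    return {
        "mean": x.mean(),
        "std": x.std(),
        "rms": np.sqrt(np.mean(x ** 2)),                # root mean square
        "mav": np.abs(x).mean(),                        # mean absolute value
        "wl": np.abs(d).sum(),                          # waveform length
        "zc": int(np.sum(x[:-1] * x[1:] < 0)),          # zero crossings
        "ssc": int(np.sum(d[:-1] * d[1:] < 0)),         # slope sign changes
    }
```

In a Frustum154-style pipeline, such statistics would be fused with the frustum-pattern textural features before TQWT-based multilevel generation.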


Subjects
Algorithms, Wavelet Analysis, Hand, Hand Strength, Movement
8.
Pattern Recognit Lett ; 153: 67-74, 2022 Jan.
Article in English | MEDLINE | ID: mdl-34876763

ABSTRACT

Coronavirus disease (COVID-19) is severely impacting the wellness and lives of many across the globe. There are several methods currently used to detect and monitor the progress of the disease, such as radiological imaging of patients' chests, measuring the symptoms, and reverse transcription-polymerase chain reaction (RT-PCR) testing. X-ray imaging is one of the popular techniques used to visualise the impact of the virus on the lungs. Although manual detection of this disease using radiology images is more popular, it can be time-consuming and is prone to human error. Hence, automated detection of lung pathologies due to COVID-19 utilising deep learning (DL) techniques can assist with yielding accurate results for huge databases. Large volumes of data are needed to achieve generalizable DL models; however, there are very few public databases available for detecting COVID-19 disease pathologies automatically. Standard data augmentation methods can be used to enhance the models' generalizability. In this research, the Extensive COVID-19 X-ray and CT Chest Images Dataset has been used, and a generative adversarial network (GAN) coupled with a trained, semi-supervised CycleGAN (SSA-CycleGAN) has been applied to augment the training dataset. Then a newly designed and fine-tuned Inception V3 transfer learning model has been developed to train the algorithm for detecting COVID-19. The obtained results from the proposed Inception-CycleGAN model indicated Accuracy = 94.2%, Area under Curve = 92.2%, Mean Squared Error = 0.27, Mean Absolute Error = 0.16. The developed Inception-CycleGAN framework is ready to be tested with further COVID-19 chest X-ray images.

9.
Sensors (Basel) ; 21(21)2021 Oct 23.
Article in English | MEDLINE | ID: mdl-34770340

ABSTRACT

Parkinson's disease (PD) is the second most common neurodegenerative disorder affecting over 6 million people globally. Although there are symptomatic treatments that can increase the survivability of the disease, there are no curative treatments. The prevalence of PD and disability-adjusted life years continue to increase steadily, leading to a growing burden on patients, their families, society and the economy. Dopaminergic medications can significantly slow down the progression of PD when applied during the early stages. However, these treatments often become less effective with the disease progression. Early diagnosis of PD is crucial for immediate interventions so that the patients can remain self-sufficient for the longest period of time possible. Unfortunately, diagnoses are often late, due to factors such as a global shortage of neurologists skilled in early PD diagnosis. Computer-aided diagnostic (CAD) tools, based on artificial intelligence methods, that can perform automated diagnosis of PD, are gaining attention from healthcare services. In this review, we have identified 63 studies published between January 2011 and July 2021, that proposed deep learning models for an automated diagnosis of PD, using various types of modalities like brain analysis (SPECT, PET, MRI and EEG), and motion symptoms (gait, handwriting, speech and EMG). From these studies, we identify the best performing deep learning model reported for each modality and highlight the current limitations that are hindering the adoption of such CAD tools in healthcare. Finally, we propose new directions to further the studies on deep learning in the automated detection of PD, in the hopes of improving the utility, applicability and impact of such tools to improve early detection of PD globally.


Subjects
Deep Learning, Parkinson Disease, Artificial Intelligence, Gait, Humans, Parkinson Disease/diagnosis, Speech
10.
Sensors (Basel) ; 21(24)2021 Dec 20.
Article in English | MEDLINE | ID: mdl-34960599

ABSTRACT

Stroke is among the top three most common causes of death globally, affecting over 100 million people worldwide annually. There are two classes of stroke, namely ischemic stroke (due to impairment of blood supply, accounting for ~70% of all strokes) and hemorrhagic stroke (due to bleeding), both of which can result, if untreated, in permanently damaged brain tissue. The discovery that the affected brain tissue (i.e., the 'ischemic penumbra') can be salvaged from permanent damage, together with the burgeoning growth in computer-aided diagnosis, has led to major advances in stroke management. Abiding by the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, we have surveyed a total of 177 research papers published between 2010 and 2021 to highlight the current status of, and challenges faced by, computer-aided diagnosis (CAD), machine learning (ML) and deep learning (DL) based techniques for CT and MRI as prime modalities for stroke detection and lesion region segmentation. This work concludes by showcasing the current requirements of this domain, the preferred modalities, and prospective research areas.


Subjects
Stroke, Brain, Computers, Diagnosis, Computer-Assisted, Humans, Prospective Studies, Stroke/diagnostic imaging
11.
Sensors (Basel) ; 21(23)2021 Dec 01.
Article in English | MEDLINE | ID: mdl-34884045

ABSTRACT

The global pandemic of coronavirus disease (COVID-19) has caused millions of deaths and affected the livelihood of many more people. Early and rapid detection of COVID-19 is a challenging task for the medical community, but it is also crucial in stopping the spread of the SARS-CoV-2 virus. Prior substantiation of artificial intelligence (AI) in various fields of science has encouraged researchers to further address this problem. Various medical imaging modalities including X-ray, computed tomography (CT) and ultrasound (US) using AI techniques have greatly helped to curb the COVID-19 outbreak by assisting with early diagnosis. We carried out a systematic review on state-of-the-art AI techniques applied with X-ray, CT, and US images to detect COVID-19. In this paper, we discuss approaches used by various authors and the significance of these research efforts, the potential challenges, and future trends related to the implementation of an AI system for disease detection during the COVID-19 pandemic.


Subjects
COVID-19, Pandemics, Artificial Intelligence, Humans, SARS-CoV-2, Tomography, X-Ray Computed
12.
Entropy (Basel) ; 23(12)2021 Dec 08.
Article in English | MEDLINE | ID: mdl-34945957

ABSTRACT

Many learning techniques coupled with optical coherence tomography (OCT) images have been developed to diagnose retinal disorders. This work aims to develop a novel framework for extracting deep features from 18 pre-trained convolutional neural networks (CNNs) and to attain high performance using OCT images. In this work, we have developed a new framework for automated detection of retinal disorders using transfer learning. This model consists of three phases: deep fused and multilevel feature extraction using 18 pre-trained networks and tent maximal pooling; feature selection with ReliefF; and classification using the optimized classifier. The novelty of this proposed framework is the feature generation using widely used CNNs and the selection of the most suitable features for classification. The features extracted by our proposed intelligent feature extractor are fed to iterative ReliefF (IRF) to automatically select the best feature vector. The quadratic support vector machine (QSVM) is utilized as the classifier in this work. We have developed our model using two public OCT image datasets, named database 1 (DB1) and database 2 (DB2). The proposed framework attains 97.40% and 100% classification accuracies using the two OCT datasets, DB1 and DB2, respectively. These results illustrate the success of our model.
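ReliefF-style feature weighting, used above for selection, can be sketched in a simplified single-neighbour form (plain Relief rather than full ReliefF; the function name and sampling scheme are assumptions):

```python
import numpy as np

def relief_scores(X, y, n_iter=None, rng=None):
    """Simplified Relief: for each (sampled) instance, find its nearest
    neighbour of the same class (hit) and of a different class (miss), and
    reward features that separate misses more than hits."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y)
    n, d = X.shape
    rng = rng or np.random.default_rng(0)
    idxs = rng.choice(n, size=n_iter or n, replace=False)
    span = X.max(axis=0) - X.min(axis=0)
    span[span == 0] = 1.0                    # avoid division by zero
    w = np.zeros(d)
    for i in idxs:
        dist = np.abs(X - X[i]).sum(axis=1)  # L1 distance to all samples
        dist[i] = np.inf                     # exclude the instance itself
        same, diff = (y == y[i]), (y != y[i])
        hit = np.argmin(np.where(same, dist, np.inf))
        miss = np.argmin(np.where(diff, dist, np.inf))
        w += (np.abs(X[i] - X[miss]) - np.abs(X[i] - X[hit])) / span
    return w / len(idxs)
```

Features with higher scores separate classes better; the iterative ReliefF (IRF) in the abstract additionally searches over how many of the top-scored features to keep.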

13.
Comput Methods Programs Biomed ; 244: 107992, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38218118

ABSTRACT

BACKGROUND AND OBJECTIVE: Sleep staging is an essential step in sleep disorder diagnosis, but it is time-intensive and laborious for experts to perform manually. Automatic sleep stage classification methods not only relieve experts of these demanding tasks but also enhance the accuracy and efficiency of the classification process. METHODS: A novel multi-channel biosignal-based model, constructed by combining a 3D convolutional operation and a graph convolutional operation, is proposed for automated sleep stage classification using various physiological signals. Both the 3D convolution and the graph convolution can aggregate information from neighboring brain areas, which helps to learn intrinsic connections from the biosignals. Electroencephalogram (EEG), electromyogram (EMG), electrooculogram (EOG) and electrocardiogram (ECG) signals are employed to extract time-domain and frequency-domain features. Subsequently, these signals are input to the 3D convolutional and graph convolutional branches, respectively. The 3D convolution branch explores the correlations between multi-channel signals and multi-band waves in each channel over the time series, while the graph convolution branch explores the connections between each channel and each frequency band. In this work, we have developed the proposed multi-channel convolution combined sleep stage classification model (MixSleepNet) using the ISRUC datasets (Subgroup 3 and 50 random samples from Subgroup 1). RESULTS: Based on the first expert's labels, our MixSleepNet yielded accuracy, F1-score and Cohen kappa scores of 0.830, 0.821 and 0.782, respectively, for ISRUC-S3, and 0.812, 0.786, and 0.756, respectively, for ISRUC-S1.
Based on the second expert's labels, the accuracies, F1-scores, and Cohen kappa coefficients were 0.837, 0.820, and 0.789 for ISRUC-S3, and 0.829, 0.791, and 0.775 for ISRUC-S1, respectively. CONCLUSION: The proposed method outperformed all compared models on these metrics. Additional experiments were carried out on the ISRUC-S3 sub-dataset to evaluate the contribution of each module to the classification performance.
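Cohen's kappa, reported above alongside accuracy and F1, can be computed as follows (a generic implementation):

```python
import numpy as np

def cohen_kappa(a, b):
    """Cohen's kappa: observed agreement between two label sequences,
    corrected for the agreement expected by chance."""
    a, b = np.asarray(a), np.asarray(b)
    classes = np.unique(np.concatenate([a, b]))
    po = np.mean(a == b)                                  # observed agreement
    pe = sum(np.mean(a == c) * np.mean(b == c) for c in classes)
    return (po - pe) / (1.0 - pe)
```

Kappa near 0.78, as reported for ISRUC-S3, indicates substantial agreement between the model's stages and the expert's labels beyond chance.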


Subjects
Sleep Stages, Sleep, Sleep Stages/physiology, Time Factors, Electroencephalography/methods, Electrooculography/methods
14.
Comput Biol Med ; 173: 108280, 2024 May.
Article in English | MEDLINE | ID: mdl-38547655

ABSTRACT

BACKGROUND: Timely detection of neurodevelopmental and neurological conditions is crucial for early intervention. Specific Language Impairment (SLI) in children and Parkinson's disease (PD) both manifest in speech disturbances that may be exploited for diagnostic screening using recorded speech signals. We were motivated to develop an accurate yet computationally lightweight model for speech-based detection of SLI and PD, employing novel feature engineering techniques to mimic the adaptable dynamic weight assignment network capability of deep learning architectures. MATERIALS AND METHODS: In this research, we have introduced an advanced feature engineering model incorporating a novel feature extraction function, the Factor Lattice Pattern (FLP), which is a quantum-inspired method that uses a superposition-like mechanism, making it dynamic in nature. The FLP encompasses eight distinct patterns, from which the most appropriate pattern was discerned based on the data structure. Through the implementation of the FLP, we automatically extracted signal-specific textural features. Additionally, we developed a new feature engineering model to assess the efficacy of the FLP. This model is self-organizing, producing nine potential results and subsequently choosing the optimal one. Our speech classification framework consists of (1) feature extraction using the proposed FLP and a statistical feature extractor; (2) feature selection employing iterative neighborhood component analysis and an intersection-based feature selector; (3) classification via support vector machine and k-nearest neighbors; and (4) outcome determination using combinational majority voting to select the most favorable results. RESULTS: To validate the classification capabilities of our proposed feature engineering model, designed to automatically detect PD and SLI, we employed three speech datasets of PD and SLI patients.
Our presented FLP-centric model achieved classification accuracies of more than 95% on all PD datasets and 99.79% on the SLI dataset. CONCLUSIONS: Our results indicate that the proposed model is an accurate alternative to deep learning models in classifying neurological conditions using speech signals.


Subjects
Parkinson Disease, Specific Language Disorder, Child, Humans, Speech, Parkinson Disease/diagnosis, Support Vector Machine
15.
Comput Methods Programs Biomed ; 247: 108076, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38422891

ABSTRACT

BACKGROUND AND AIM: Anxiety disorder is common; early diagnosis is crucial for management. Anxiety can induce physiological changes in the brain and heart. We aimed to develop an efficient and accurate handcrafted feature engineering model for automated anxiety detection using ECG signals. MATERIALS AND METHODS: We studied open-access electrocardiography (ECG) data of 19 subjects collected via wearable sensors while they were shown videos that might induce anxiety. Using the Hamilton Anxiety Rating Scale, subjects were categorized into normal, light anxiety, moderate anxiety, and severe anxiety groups. ECGs were divided into non-overlapping 4- (Case 1), 5- (Case 2), and 6-second (Case 3) segments for analysis. We proposed a self-organized dynamic pattern-based feature extraction function, the probabilistic binary pattern (PBP), in which patterns within the function were determined by the probabilities of the input signal-dependent values. This was combined with the tunable q-factor wavelet transform to facilitate multileveled generation of feature vectors in both the spatial and frequency domains. Neighborhood component analysis and Chi2 functions were used to select features and reduce data dimensionality. Shallow k-nearest neighbors and support vector machine classifiers were used to calculate four (=2 × 2) classifier-wise results per input signal. From the latter, novel self-organized combinational majority voting was applied to calculate an additional five voted results. The optimal final model outcome was chosen from among the nine (classifier-wise and voted) results using a greedy algorithm. RESULTS: Our model achieved classification accuracies of over 98.5% for all three cases. Ablation studies confirmed the incremental accuracy of PBP-based feature engineering over traditional local binary pattern feature extraction. CONCLUSIONS: The results demonstrated the feasibility and accuracy of our PBP-based feature engineering model for anxiety classification using ECG signals.
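The non-overlapping segmentation step above can be sketched generically (the function name and the policy of discarding trailing samples are assumptions):

```python
import numpy as np

def segment_signal(signal, fs, seconds):
    """Split a 1-D signal into non-overlapping windows of `seconds` duration,
    discarding any incomplete trailing samples."""
    win = int(fs * seconds)
    n = len(signal) // win
    return np.asarray(signal, dtype=float)[: n * win].reshape(n, win)
```

With a wearable ECG sampled at, say, 100 Hz, the three cases in the abstract correspond to window lengths of 400, 500, and 600 samples.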


Subjects
Electrocardiography, Wavelet Analysis, Humans, Algorithms, Anxiety/diagnosis, Anxiety Disorders, Signal Processing, Computer-Assisted
16.
Cogn Neurodyn ; 18(4): 1609-1625, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39104684

ABSTRACT

In this study, attention deficit hyperactivity disorder (ADHD), a childhood neurodevelopmental disorder, is being studied alongside its comorbidity, conduct disorder (CD), a behavioral disorder. Because ADHD and CD share commonalities, distinguishing them is difficult, thus increasing the risk of misdiagnosis. It is crucial that these two conditions are not mistakenly identified as the same because the treatment plan varies depending on whether the patient has CD or ADHD. Hence, this study proposes an electroencephalogram (EEG)-based deep learning system known as ADHD/CD-NET that is capable of objectively distinguishing ADHD, ADHD + CD, and CD. The 12-channel EEG signals were first segmented and converted into channel-wise continuous wavelet transform (CWT) correlation matrices. The resulting matrices were then used to train the convolutional neural network (CNN) model, and the model's performance was evaluated using 10-fold cross-validation. Gradient-weighted class activation mapping (Grad-CAM) was also used to provide explanations for the prediction result made by the 'black box' CNN model. Internal private dataset (45 ADHD, 62 ADHD + CD and 16 CD) and external public dataset (61 ADHD and 60 healthy controls) were used to evaluate ADHD/CD-NET. As a result, ADHD/CD-NET achieved classification accuracy, sensitivity, specificity, and precision of 93.70%, 90.83%, 95.35% and 91.85% for the internal evaluation, and 98.19%, 98.36%, 98.03% and 98.06% for the external evaluation. Grad-CAM also identified significant channels that contributed to the diagnosis outcome. Therefore, ADHD/CD-NET can perform temporal localization and choose significant EEG channels for diagnosis, thus providing objective analysis for mental health professionals and clinicians to consider when making a diagnosis. Supplementary Information: The online version contains supplementary material available at 10.1007/s11571-023-10028-2.
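The channel-wise correlation-matrix input can be illustrated generically (the paper correlates CWT representations of the channels; here plain Pearson correlation between raw channels stands in, and the data are synthetic):

```python
import numpy as np

def channel_correlation_image(eeg):
    """Pearson correlation matrix between EEG channels; for a
    (channels, samples) segment this yields a square (channels, channels)
    'image' that a CNN can consume."""
    return np.corrcoef(eeg)

rng = np.random.default_rng(0)
segment = rng.normal(size=(12, 512))   # stand-in 12-channel EEG segment
img = channel_correlation_image(segment)
```

The symmetric 12 × 12 matrix plays the role of the channel-wise CWT correlation matrices that ADHD/CD-NET's CNN is trained on.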

17.
Cogn Neurodyn ; 18(2): 383-404, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38699621

ABSTRACT

Fibromyalgia is a soft tissue rheumatism with significant qualitative and quantitative impact on sleep macro and micro architecture. The primary objective of this study is to automatically identify healthy individuals and those with fibromyalgia using sleep electroencephalography (EEG) signals. The study focused on the automatic detection and interpretation of EEG signals obtained from fibromyalgia patients. In this work, the sleep EEG signals are divided into 15-s segments, and a total of 5358 (3411 healthy control and 1947 fibromyalgia) EEG segments are obtained from 16 fibromyalgia and 16 normal subjects. Our developed model has an advanced multilevel feature extraction architecture; hence, we used a new feature extractor called GluPat, inspired by the chemical structure of glucose, with a new pooling approach inspired by the D'Hondt selection system. Furthermore, our proposed method incorporated feature selection techniques using iterative neighborhood component analysis and iterative Chi2 methods. These selection mechanisms enabled the identification of discriminative features for accurate classification. In the classification phase, we employed support vector machine and k-nearest neighbor algorithms to classify the EEG signals with leave-one-record-out (LORO) and tenfold cross-validation (CV) techniques. All results are calculated channel-wise, and iterative majority voting is used to obtain generalized results. The best results were determined using the greedy algorithm. The developed model achieved a detection accuracy of 100% and 91.83% with tenfold and LORO CV strategies, respectively, using sleep stage (2 + 3) EEG signals. Our generated model is simple and has linear time complexity.
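Two mechanical steps of the pipeline above, fixed-length segmentation and channel-wise majority voting, can be sketched as follows. This is a minimal illustration under assumptions: the tail-dropping rule and the tie-breaking behavior (lowest label wins) are not specified in the abstract.

```python
import numpy as np

def segment(signal, fs, win_seconds=15):
    """Split a 1-D signal into non-overlapping fixed-length windows,
    dropping any incomplete tail segment."""
    n = int(fs * win_seconds)
    k = len(signal) // n
    return signal[: k * n].reshape(k, n)

def majority_vote(channel_preds):
    """Per-segment majority vote over per-channel label predictions.
    channel_preds: (n_channels, n_segments) integer label array."""
    preds = np.asarray(channel_preds)
    n_labels = preds.max() + 1
    # One bincount per segment (column) -> (n_labels, n_segments) vote tallies.
    votes = np.apply_along_axis(np.bincount, 0, preds, minlength=n_labels)
    return votes.argmax(axis=0)
```

Each channel's segment-level predictions are fused into a single subject- or segment-level decision, which is what makes the reported results "generalized" across channels.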

18.
Comput Biol Med ; 172: 108207, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38489986

ABSTRACT

Artificial Intelligence (AI) techniques are increasingly used in computer-aided diagnostic tools in medicine. These techniques can also help to identify Hypertension (HTN), a global health issue, in its early stage. Automated HTN detection uses socio-demographic data, clinical data, and physiological signals. Additionally, signs of secondary HTN can also be identified using various imaging modalities. This systematic review examines related work on automated HTN detection. We identify datasets, techniques, and classifiers used to develop AI models from clinical data, physiological signals, and fused data (a combination of both). Image-based models for assessing secondary HTN are also reviewed. Most of the studies utilized single-modality approaches, such as biological signals (e.g., electrocardiography, photoplethysmography) or medical imaging (e.g., magnetic resonance angiography, ultrasound). Surprisingly, only a small portion of the studies (22 out of 122) utilized a multi-modal fusion approach combining data from different sources. Even fewer investigated integrating clinical data, physiological signals, and medical imaging to understand the intricate relationships between these factors. Future research directions are discussed that could build better healthcare systems for early HTN detection through more integrated modeling of multi-modal data sources.
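The multi-modal fusion the review calls for can take the simple feature-level (early-fusion) form sketched below: standardize each modality's feature matrix so no modality dominates by scale, then concatenate along the feature axis. The feature names and the z-score normalization are illustrative assumptions, not a method from the review.

```python
import numpy as np

def zscore(X, eps=1e-9):
    """Column-wise standardization so no modality dominates by scale."""
    X = np.asarray(X, dtype=float)
    return (X - X.mean(axis=0)) / (X.std(axis=0) + eps)

def early_fusion(*modalities):
    """Feature-level fusion: standardize each modality's feature matrix
    (n_subjects x n_features_i) and concatenate along the feature axis."""
    return np.hstack([zscore(m) for m in modalities])
```

The fused matrix can then be fed to any conventional classifier; late fusion (combining per-modality model outputs) is the usual alternative design.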


Subjects
Hypertension, Medicine, Humans, Artificial Intelligence, Electrocardiography, Hypertension/diagnostic imaging, Magnetic Resonance Angiography
19.
Comput Biol Med ; 155: 106649, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36805219

ABSTRACT

BACKGROUND: Natural Language Processing (NLP) is widely used to extract clinical insights from Electronic Health Records (EHRs). However, the lack of annotated data, automated tools, and other challenges hinder the full utilisation of NLP for EHRs. Various Machine Learning (ML), Deep Learning (DL) and NLP techniques are studied and compared to understand the limitations and opportunities in this space comprehensively. METHODOLOGY: After screening 261 articles from 11 databases, we included 127 papers for full-text review covering seven categories of articles: (1) medical note classification, (2) clinical entity recognition, (3) text summarisation, (4) deep learning (DL) and transfer learning architecture, (5) information extraction, (6) medical language translation, and (7) other NLP applications. This study follows the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. RESULT AND DISCUSSION: EHR was the most commonly used data type among the selected articles, and the datasets were primarily unstructured. Various ML and DL methods were used, with prediction or classification being the most common application of ML or DL. The most common use cases were: International Classification of Diseases, Ninth Revision (ICD-9) classification, clinical note analysis, and named entity recognition (NER) for clinical descriptions and research on psychiatric disorders. CONCLUSION: We find that the adopted ML models were not adequately assessed. In addition, the data imbalance problem is quite important, and techniques are still needed to address this underlying problem. Future studies should address these key limitations, primarily in identifying lupus nephritis, suicide attempts, perinatal self-harm, and ICD-9 classification.
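The information-extraction category the review covers can be illustrated with a deliberately simplified example: pulling ICD-9-like codes out of free-text notes with a regular expression. The pattern below is an assumption for illustration; real extraction pipelines validate candidates against a code dictionary so that measurements that merely look like codes are filtered out.

```python
import re

# Simplified ICD-9-CM shapes: numeric codes (3 digits + optional decimals),
# V codes, and E codes. Illustrative only; not a complete grammar.
ICD9_RE = re.compile(r"\b(?:E\d{3}|V\d{2}|\d{3})(?:\.\d{1,2})?\b")

def extract_icd9(note: str):
    """Return ICD-9-like code strings found in a free-text clinical note."""
    return ICD9_RE.findall(note)
```

Rule-based extraction like this is a common baseline against which the ML/DL approaches surveyed in the review are compared.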


Subjects
Electronic Health Records, Natural Language Processing, Humans, Machine Learning, Information Storage and Retrieval, Delivery of Health Care
20.
Comput Biol Med ; 165: 107441, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37683529

ABSTRACT

Uncertainty estimation in healthcare involves quantifying and understanding the inherent uncertainty or variability associated with medical predictions, diagnoses, and treatment outcomes. In this era of Artificial Intelligence (AI) models, uncertainty estimation becomes vital to ensure safe decision-making in the medical field. Therefore, this review focuses on the application of uncertainty techniques to machine and deep learning models in healthcare. A systematic literature review was conducted using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Our analysis revealed that Bayesian methods were the predominant technique for uncertainty quantification in machine learning models, with fuzzy systems being the second most used approach. Regarding deep learning models, Bayesian methods emerged as the most prevalent approach, finding application in nearly all aspects of medical imaging. Most of the studies reported in this paper focused on medical images, highlighting the prevalent application of uncertainty quantification techniques using deep learning models compared to machine learning models. Interestingly, we observed a scarcity of studies applying uncertainty quantification to physiological signals. Thus, future research on uncertainty quantification should prioritize investigating the application of these techniques to physiological signals. Overall, our review highlights the significance of integrating uncertainty techniques in healthcare applications of machine learning and deep learning models. This can provide valuable insights and practical solutions to manage uncertainty in real-world medical data, ultimately improving the accuracy and reliability of medical diagnoses and treatment recommendations.
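Many of the Bayesian approaches surveyed above (e.g., Monte Carlo dropout or deep ensembles) reduce at inference time to the same recipe: average class probabilities over several stochastic forward passes and score the predictive entropy of the mean. The sketch below shows only that scoring step; the threshold-based referral rule is an assumption added for illustration.

```python
import numpy as np

def predictive_entropy(mc_probs):
    """Predictive entropy of the mean class distribution over T stochastic
    forward passes. mc_probs: (T, n_classes) array of softmax outputs."""
    p = np.asarray(mc_probs).mean(axis=0)
    return float(-np.sum(p * np.log(p + 1e-12)))

def flag_uncertain(mc_probs, threshold):
    """Refer a prediction for human review when entropy exceeds threshold."""
    return predictive_entropy(mc_probs) > threshold
```

A confident model concentrates its mean probability mass on one class (entropy near 0), while disagreement across passes pushes entropy toward log(n_classes), which is the signal used to defer risky predictions to a clinician.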


Subjects
Artificial Intelligence, Delivery of Health Care, Bayes Theorem, Reproducibility of Results, Uncertainty