Results 1 - 20 of 395
1.
Comput Methods Programs Biomed ; 257: 108455, 2024 Oct 11.
Article in English | MEDLINE | ID: mdl-39447439

ABSTRACT

BACKGROUND AND OBJECTIVE: Sudden cardiac death (SCD) is a critical health issue characterized by the sudden failure of heart function, often caused by ventricular fibrillation (VF). Early prediction of SCD is crucial to enable timely interventions. However, current methods predict SCD only a few minutes before its onset, limiting intervention time. This study aims to develop a deep learning-based model for the early prediction of SCD using electrocardiography (ECG) signals. METHODS: A multimodal explainable deep learning-based model was developed to analyze ECG signals at discrete intervals ranging from 5 to 30 min before SCD onset. The raw ECG signals, 2D scalograms generated through the wavelet transform, and 2D Hilbert spectra generated through the Hilbert-Huang transform (HHT) of the ECG signals were applied to multiple deep learning algorithms. For the raw ECG, a combination of 1D convolutional neural networks (1D-CNN) and long short-term memory networks was employed for feature extraction and temporal pattern recognition. In addition, a Vision Transformer (ViT) and a 2D-CNN were used to extract and analyze features from the scalograms and Hilbert spectra. RESULTS: The developed model achieved high performance, with accuracy, precision, recall, and F1-score of 98.81%, 98.83%, 98.81%, and 98.81%, respectively, for predicting SCD onset 30 min in advance. Furthermore, the proposed model can classify SCD patients and normal controls with 100% accuracy. Thus, the proposed method outperforms existing state-of-the-art methods. CONCLUSIONS: The developed model is capable of capturing diverse patterns in ECG signals recorded at multiple discrete time intervals (at 5-minute increments from 5 to 30 min) prior to SCD onset that discriminate SCD. The proposed model significantly improves early SCD prediction, providing a valuable tool for continuous ECG monitoring in high-risk patients.
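The scalogram step described above can be illustrated with a minimal NumPy sketch. The Morlet wavelet, the scale grid, and the toy signal below are assumptions for illustration only, not the paper's exact transform settings.

```python
import numpy as np

def morlet_scalogram(signal, scales, w0=6.0):
    """Compute a simple Morlet-wavelet scalogram (|CWT| image) of a 1-D signal.

    Minimal stand-in for the wavelet-transform step in the abstract; the
    paper's exact wavelet family and scale grid are not specified here.
    """
    n = len(signal)
    out = np.empty((len(scales), n))
    for i, s in enumerate(scales):
        # Morlet wavelet sampled on a grid proportional to the scale
        m = int(min(10 * s, n))
        t = np.arange(-m // 2, m // 2 + 1) / s
        wavelet = np.exp(1j * w0 * t) * np.exp(-0.5 * t**2) / np.sqrt(s)
        out[i] = np.abs(np.convolve(signal, wavelet, mode="same"))
    return out

# Toy "ECG" segment: 2 s at 250 Hz with a beat-like oscillation plus noise
fs = 250
t = np.arange(0, 2, 1 / fs)
ecg = np.sin(2 * np.pi * 1.2 * t) + 0.1 * np.random.default_rng(0).normal(size=t.size)
scalogram = morlet_scalogram(ecg, scales=np.arange(1, 33))
print(scalogram.shape)  # one row per scale, one column per sample
```

The resulting 2D array can then be rendered as an image and fed to a 2D-CNN or ViT, as the abstract describes.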

2.
Interdiscip Sci ; 16(4): 882-906, 2024 Dec.
Article in English | MEDLINE | ID: mdl-39367993

ABSTRACT

Cardiotocography (CTG) is used to assess the health of the fetus during birth or antenatally in the third trimester. It concurrently records maternal uterine contractions (UC) and fetal heart rate (FHR). Fetal distress, which may require therapeutic intervention, can be diagnosed from the baseline FHR and its reaction to uterine contractions. In this study, a pragmatic machine learning strategy based on feature reduction and hyperparameter optimization was proposed to classify the fetal states (Normal, Suspect, Pathological) from CTG. An application of this strategy can serve as a decision support tool for managing pregnancies. The model was assessed on a public dataset of 2126 CTG recordings using various standard classifiers relevant to CTG data. The proposed method improved the classifiers' accuracy, raising the model accuracy to 97.20% with Random Forest (the best classifier). In practical terms, the model correctly predicted 100% of all pathological cases and 98.8% of all normal cases in the dataset. The proposed model was also applied to another public CTG dataset of 552 CTG signals, resulting in 97.34% accuracy. If integrated with telemedicine, this model could also be used for long-distance "stay at home" fetal monitoring in high-risk pregnancies.
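The feature-reduction plus hyperparameter-optimization strategy maps naturally onto a scikit-learn pipeline. The sketch below uses synthetic data and an assumed parameter grid, not the paper's actual CTG features or search space.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline

# Synthetic stand-in for a CTG table: 21 features, 3 fetal states
X, y = make_classification(n_samples=600, n_features=21, n_informative=8,
                           n_classes=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

pipe = Pipeline([
    ("select", SelectKBest(f_classif)),           # feature reduction
    ("rf", RandomForestClassifier(random_state=0)),
])
search = GridSearchCV(pipe, {
    "select__k": [8, 12, 16],                     # hyperparameter optimization
    "rf__n_estimators": [100, 200],
}, cv=3)
search.fit(X_tr, y_tr)
acc = search.score(X_te, y_te)
print(f"held-out accuracy: {acc:.3f}")
```

Tuning the selector's `k` jointly with the classifier's hyperparameters, as one search, mirrors the combined strategy the abstract describes.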


Subjects
Cardiotocography, Machine Learning, Cardiotocography/methods, Humans, Female, Pregnancy, Fetal Monitoring/methods, Fetal Heart Rate, Algorithms
3.
Diagnostics (Basel) ; 14(17)2024 Sep 08.
Article in English | MEDLINE | ID: mdl-39272771

ABSTRACT

Electroencephalogram (EEG) signals contain information about the brain's state, as they reflect the brain's functioning. However, the manual interpretation of EEG signals is tedious and time-consuming. Therefore, automatic EEG translation models need to be proposed using machine learning methods. In this study, we proposed an innovative method to achieve high classification performance with explainable results. We introduce a channel-based transformation, a channel pattern (ChannelPat), the tkNN algorithm, and Lobish (a symbolic language). Using the channel-based transformation, EEG signals were encoded using the index of the channels. The proposed ChannelPat feature extractor encoded the transition between two channels and served as a histogram-based feature extractor. An iterative neighborhood component analysis (INCA) feature selector was employed to select the most informative features, and the selected features were fed into a new ensemble k-nearest neighbor (tkNN) classifier. To evaluate the classification capability of the proposed channel-based EEG language detection model, a new EEG language dataset comprising Arabic and Turkish was collected. Additionally, Lobish was introduced to obtain explainable outcomes from the proposed EEG language detection model. The proposed channel-based feature engineering model was applied to the collected EEG language dataset, achieving a classification accuracy of 98.59%. Lobish extracted meaningful information from the cortex of the brain for language detection.
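One plausible reading of the channel-based encoding (the paper's exact ChannelPat rule may differ): code each time sample by its dominant channel index, then histogram the transitions between consecutive codes to get a fixed-length feature vector.

```python
import numpy as np

def channel_transition_histogram(eeg):
    """Hedged sketch of a channel-based encoding.

    At each sample, record the index of the maximum-amplitude channel, then
    histogram transitions between consecutive channel indices. This is an
    assumed simplification, not the published ChannelPat definition.
    eeg: array of shape (n_channels, n_samples).
    """
    n_ch = eeg.shape[0]
    codes = np.argmax(np.abs(eeg), axis=0)      # channel index per sample
    pairs = codes[:-1] * n_ch + codes[1:]       # encode each channel transition
    hist = np.bincount(pairs, minlength=n_ch * n_ch)
    return hist / hist.sum()                    # normalized histogram feature

rng = np.random.default_rng(1)
eeg = rng.normal(size=(14, 1000))               # 14-channel toy segment
feat = channel_transition_histogram(eeg)
print(feat.shape)  # (196,) = 14 * 14 transition bins
```

A vector like this would then pass through feature selection (e.g., INCA) before classification, following the pipeline in the abstract.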

4.
Cogn Neurodyn ; 18(4): 1609-1625, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39104684

ABSTRACT

In this study, attention deficit hyperactivity disorder (ADHD), a childhood neurodevelopmental disorder, is studied alongside its comorbidity, conduct disorder (CD), a behavioral disorder. Because ADHD and CD share commonalities, distinguishing them is difficult, which increases the risk of misdiagnosis. It is crucial that these two conditions are not mistakenly identified as the same, because the treatment plan varies depending on whether the patient has CD or ADHD. Hence, this study proposes an electroencephalogram (EEG)-based deep learning system known as ADHD/CD-NET that is capable of objectively distinguishing ADHD, ADHD + CD, and CD. The 12-channel EEG signals were first segmented and converted into channel-wise continuous wavelet transform (CWT) correlation matrices. The resulting matrices were then used to train the convolutional neural network (CNN) model, and the model's performance was evaluated using 10-fold cross-validation. Gradient-weighted class activation mapping (Grad-CAM) was also used to provide explanations for the predictions made by the 'black box' CNN model. An internal private dataset (45 ADHD, 62 ADHD + CD, and 16 CD) and an external public dataset (61 ADHD and 60 healthy controls) were used to evaluate ADHD/CD-NET. As a result, ADHD/CD-NET achieved classification accuracy, sensitivity, specificity, and precision of 93.70%, 90.83%, 95.35%, and 91.85% for the internal evaluation, and 98.19%, 98.36%, 98.03%, and 98.06% for the external evaluation. Grad-CAM also identified significant channels that contributed to the diagnosis outcome. Therefore, ADHD/CD-NET can perform temporal localization and choose significant EEG channels for diagnosis, thus providing objective analysis for mental health professionals and clinicians to consider when making a diagnosis. Supplementary Information: The online version contains supplementary material available at 10.1007/s11571-023-10028-2.
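The channel-wise correlation-matrix input can be sketched as follows. For simplicity this assumed variant correlates the raw channel time series directly, whereas the paper correlates channel-wise CWT coefficients; the image shape and downstream CNN usage are the same idea.

```python
import numpy as np

def channel_correlation_image(eeg_segment):
    """Correlation-matrix 'image' for one EEG segment (channels x channels).

    Simplified, assumed variant: Pearson correlation of raw channel signals
    rather than of their CWT coefficients as in the published pipeline.
    """
    return np.corrcoef(eeg_segment)

rng = np.random.default_rng(0)
segment = rng.normal(size=(12, 512))   # one 12-channel toy EEG segment
img = channel_correlation_image(segment)
print(img.shape)  # (12, 12) matrix that a CNN could take as input
```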

5.
J Ultrasound Med ; 43(11): 2051-2068, 2024 Nov.
Article in English | MEDLINE | ID: mdl-39051752

ABSTRACT

OBJECTIVES: Breast cancer is a type of cancer caused by the uncontrolled growth of cells in the breast tissue. In some cases, erroneous diagnosis of breast cancer by specialists and unnecessary biopsies can lead to various negative consequences: radiologic examinations or clinical findings may raise the suspicion of breast cancer, yet subsequent detailed evaluations may not confirm cancer. In addition to causing unnecessary anxiety and stress to patients, such a diagnosis can also lead to unnecessary biopsy procedures, which are painful, expensive, and prone to misdiagnosis. Therefore, there is a need for more accurate and reliable methods for breast cancer diagnosis. METHODS: In this study, we proposed an artificial intelligence (AI)-based method for automatically classifying breast solid mass lesions as benign or malignant. A new breast cancer dataset (Breast-XD) was created with 791 solid mass lesions belonging to 752 different patients aged 18 to 85 years, who were examined by experienced radiologists between 2017 and 2022. RESULTS: Six classifiers, support vector machine (SVM), K-nearest neighbor (K-NN), random forest (RF), decision tree (DT), logistic regression (LR), and XGBoost, were trained on the training samples of the Breast-XD dataset. Each classifier then made predictions on 159 test samples that it had not seen before. The highest classification result was obtained by the explainable XGBoost model (X2GAI), with an accuracy of 94.34%. An explainable structure was also implemented to establish the reliability of the developed model. CONCLUSIONS: The results obtained by radiologists and the X2GAI model were compared against the diagnosis obtained from biopsy. Our developed model performed well in cases where experienced radiologists gave false positive results.


Subjects
Artificial Intelligence, Breast Neoplasms, Humans, Female, Breast Neoplasms/diagnostic imaging, Adult, Aged, Middle Aged, Aged 80 and over, Reproducibility of Results, Young Adult, Adolescent, Mammary Ultrasonography/methods, Radiologists/statistics & numerical data, Breast/diagnostic imaging, Sensitivity and Specificity, Computer-Assisted Image Interpretation/methods, Differential Diagnosis
6.
J Ultrasound Med ; 2024 Jul 19.
Article in English | MEDLINE | ID: mdl-39032010

ABSTRACT

Artificial intelligence (AI) models can play an increasingly effective role in managing patients given the explosion of digital health records available in the healthcare industry. Machine learning (ML) and deep learning (DL) techniques are two methods used to develop predictive models that serve to improve clinical processes in the healthcare industry. These models are also implemented in medical imaging machines to empower them with intelligent decision systems that aid physicians in their decisions and increase the efficiency of their routine clinical practices. The physicians who will work with these machines need insight into what happens in the background of the implemented models and how they work. More importantly, they need to be able to interpret the models' predictions, assess their performance, and compare them to find the one with the best performance and fewest errors. This review aims to provide an accessible overview of key evaluation metrics for physicians without AI expertise. In this review, we developed four real-world diagnostic AI models (two ML and two DL models) for breast cancer diagnosis using ultrasound images. Then, 23 of the most commonly used evaluation metrics were reviewed in plain language for physicians. Finally, all metrics were calculated and used practically to interpret and evaluate the outputs of the models. Accessible explanations and practical applications empower physicians to effectively interpret, evaluate, and optimize AI models to ensure safety and efficacy when integrated into clinical practice.
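Several of the metrics such a review walks through follow directly from a binary confusion matrix; the counts below are made-up numbers for illustration.

```python
def basic_metrics(tp, fp, fn, tn):
    """A few core evaluation metrics computed from a binary confusion matrix
    (true/false positives and negatives)."""
    accuracy    = (tp + tn) / (tp + fp + fn + tn)
    sensitivity = tp / (tp + fn)            # also called recall
    specificity = tn / (tn + fp)
    precision   = tp / (tp + fp)            # positive predictive value
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return dict(accuracy=accuracy, sensitivity=sensitivity,
                specificity=specificity, precision=precision, f1=f1)

m = basic_metrics(tp=90, fp=10, fn=5, tn=95)
print({k: round(v, 3) for k, v in m.items()})
# accuracy 0.925, sensitivity 0.947, specificity 0.905, precision 0.9, f1 0.923
```

Seeing the formulas side by side makes clear why, for imbalanced data, accuracy alone can be misleading while sensitivity/specificity pairs are not.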

7.
J Diabetes Metab Disord ; 23(1): 773-781, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38932891

ABSTRACT

Purpose: We applied machine learning to study associations between regional body fat distribution and diabetes mellitus in a population of community adults in order to investigate their predictive capability. We retrospectively analyzed a subset of data from the published Fasa cohort study using individual standard classifiers as well as ensemble learning algorithms. Methods: We measured segmental body composition using the Tanita Analyzer BC-418 MA (Tanita Corp, Japan). The following features were input to our machine learning model: fat-free mass, fat percentage, basal metabolic rate, total body water, right arm fat-free mass, right leg fat-free mass, trunk fat-free mass, trunk fat percentage, sex, age, right leg fat percentage, and right arm fat percentage. We performed classification into diabetes vs. no diabetes classes using linear support vector machine, decision tree, stochastic gradient descent, logistic regression, Gaussian naïve Bayes, k-nearest neighbors (k = 3 and k = 4), and multi-layer perceptron classifiers, as well as ensemble learning using random forest, gradient boosting, adaptive boosting, XGBoost, and ensemble voting classifiers with Top3 and Top4 algorithms. In total, 4661 subjects (mean age 47.64 ± 9.37 years, range 35 to 70 years; 2155 male, 2506 female) were analyzed and stratified into 571 and 4090 subjects with and without a self-declared history of diabetes, respectively. Results: Age, fat mass, and fat percentages in the legs, arms, and trunk were positively associated with diabetes; fat-free mass in the legs, arms, and trunk was negatively associated. Using XGBoost, our model attained excellent accuracy, precision, recall, and F1-score of 89.96%, 90.20%, 89.65%, and 89.91%, respectively. Conclusions: Our machine learning model showed that regional body fat composition was predictive of diabetes status.
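An ensemble voting classifier of the kind described (a "Top3"-style combination of strong individual models) can be sketched with scikit-learn. The member models, class imbalance, and synthetic features below are assumptions for illustration.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import (GradientBoostingClassifier,
                              RandomForestClassifier, VotingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the 12 body-composition features, with an
# imbalanced diabetes/no-diabetes split roughly like the cohort's
X, y = make_classification(n_samples=400, n_features=12,
                           weights=[0.88, 0.12], random_state=0)

# Soft voting averages the members' predicted probabilities
vote = VotingClassifier([
    ("rf", RandomForestClassifier(random_state=0)),
    ("gb", GradientBoostingClassifier(random_state=0)),
    ("lr", LogisticRegression(max_iter=1000)),
], voting="soft")
scores = cross_val_score(vote, X, y, cv=3)
print(f"mean CV accuracy: {scores.mean():.3f}")
```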

8.
Comput Methods Programs Biomed ; 254: 108253, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38861878

ABSTRACT

BACKGROUND AND OBJECTIVES: Optical coherence tomography (OCT) has ushered in a transformative era in the domain of ophthalmology, offering non-invasive, high-resolution imaging for ocular disease detection. The frequent use of OCT in diagnosing fundamental ocular pathologies, such as glaucoma and age-related macular degeneration (AMD), has played an important role in the widespread adoption of this technology. Apart from glaucoma and AMD, we also investigate pertinent pathologies, such as epiretinal membrane (ERM), macular hole (MH), macular dystrophy (MD), vitreomacular traction (VMT), diabetic maculopathy (DMP), cystoid macular edema (CME), central serous chorioretinopathy (CSC), diabetic macular edema (DME), diabetic retinopathy (DR), drusen, glaucomatous optic neuropathy (GON), neovascular AMD (nAMD), myopic macular degeneration (MMD), and choroidal neovascularization (CNV). This comprehensive review examines the role that OCT-derived images play in detecting, characterizing, and monitoring eye diseases. METHOD: The 2020 PRISMA guideline was used to structure a systematic review of research on various eye conditions using machine learning (ML) or deep learning (DL) techniques. A thorough search across the IEEE, PubMed, Web of Science, and Scopus databases yielded 1787 publications, of which 1136 remained after removing duplicates. Subsequent exclusion of conference papers, review papers, and non-open-access articles reduced the selection to 511 articles. Further scrutiny led to the exclusion of 435 more articles due to lower-quality indexing or irrelevance, resulting in 76 journal articles for the review. RESULTS: During our investigation, we found that a major challenge for ML-based decision support is the abundance of features and the determination of their significance. In contrast, DL-based decision support is characterized by a plug-and-play nature rather than relying on a trial-and-error approach.
Furthermore, we observed that pre-trained networks are practical and especially useful when working on complex images such as OCT. Consequently, pre-trained deep networks were frequently utilized for classification tasks. Currently, medical decision support aims to reduce the workload of ophthalmologists and retina specialists during routine tasks. In the future, it might be possible to create continuous learning systems that can predict ocular pathologies by identifying subtle changes in OCT images.


Subjects
Artificial Intelligence, Retina, Optical Coherence Tomography, Humans, Optical Coherence Tomography/methods, Retina/diagnostic imaging, Retinal Diseases/diagnostic imaging, Machine Learning, Deep Learning
9.
Cogn Neurodyn ; 18(2): 383-404, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38699621

ABSTRACT

Fibromyalgia is a soft tissue rheumatism with significant qualitative and quantitative impact on sleep macro- and microarchitecture. The primary objective of this study is to automatically identify healthy individuals and those with fibromyalgia using sleep electroencephalography (EEG) signals. The study focused on the automatic detection and interpretation of EEG signals obtained from fibromyalgia patients. In this work, the sleep EEG signals were divided into 15-s windows, yielding a total of 5358 EEG segments (3411 healthy control and 1947 fibromyalgia) from 16 fibromyalgia and 16 normal subjects. The developed model has an advanced multilevel feature extraction architecture; hence, we used a new feature extractor called GluPat, inspired by the chemical structure of glucose, with a new pooling approach inspired by the D'Hondt selection system. Furthermore, our proposed method incorporated feature selection using the iterative neighborhood component analysis and iterative Chi2 methods. These selection mechanisms enabled the identification of discriminative features for accurate classification. In the classification phase, we employed support vector machine and k-nearest neighbor algorithms to classify the EEG signals with leave-one-record-out (LORO) and tenfold cross-validation (CV) techniques. All results were calculated channel-wise, and iterative majority voting was used to obtain generalized results. The best results were determined using a greedy algorithm. The developed model achieved detection accuracies of 100% and 91.83% with the tenfold and LORO CV strategies, respectively, using sleep stage (2 + 3) EEG signals. Our generated model is simple and has linear time complexity.
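Leave-one-record-out CV corresponds to scikit-learn's `LeaveOneGroupOut`, with one group per subject so that no subject's segments leak between training and test folds. The data below are synthetic and only illustrate the fold structure.

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.neighbors import KNeighborsClassifier

# Toy features: segments from 8 "subjects"; LORO holds out every segment
# of one subject together, preventing subject-level leakage.
rng = np.random.default_rng(0)
X = rng.normal(size=(160, 20))
groups = np.repeat(np.arange(8), 20)     # subject id for each segment
y = groups % 2                           # label is fixed per subject
X[y == 1] += 0.8                         # make the classes separable

logo = LeaveOneGroupOut()
scores = cross_val_score(KNeighborsClassifier(3), X, y,
                         cv=logo, groups=groups)
print(f"LORO folds: {len(scores)}, mean accuracy: {scores.mean():.3f}")
```

The usual pattern, as in this abstract, is that LORO accuracy is lower than k-fold accuracy because k-fold folds can mix segments of the same subject.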

10.
Physiol Meas ; 45(5)2024 May 21.
Article in English | MEDLINE | ID: mdl-38697206

ABSTRACT

Objective. Myocarditis poses a significant health risk, often precipitated by viral infections like coronavirus disease, and can lead to fatal cardiac complications. As a less invasive alternative to endomyocardial biopsy, the standard diagnostic practice, which is highly invasive and thus limited to severe cases, cardiac magnetic resonance (CMR) imaging offers a promising solution for detecting myocardial abnormalities. Approach. This study introduces a deep model called ELRL-MD that combines ensemble learning and reinforcement learning (RL) for effective myocarditis diagnosis from CMR images. The model begins with pre-training via the artificial bee colony (ABC) algorithm to enhance the starting point for learning. An array of convolutional neural networks (CNNs) then works in concert to extract and integrate features from CMR images for accurate diagnosis. Leveraging the Z-Alizadeh Sani myocarditis CMR dataset, the model employs RL to navigate the dataset's imbalance by conceptualizing diagnosis as a decision-making process. Main results. ELRL-MD demonstrates remarkable efficacy, surpassing other deep learning, conventional machine learning, and transfer learning models, achieving an F-measure of 88.2% and a geometric mean of 90.6%. Extensive experimentation helped pinpoint the optimal reward function settings and the optimal number of CNNs. Significance. The study addresses the primary technical challenge of inherent data imbalance in CMR imaging datasets and the risk of models converging on local optima due to suboptimal initial weight settings. Further analysis, leaving out the ABC and RL components, confirmed their contributions to the model's overall performance, underscoring the effectiveness of addressing these critical technical challenges.


Subjects
Deep Learning, Magnetic Resonance Imaging, Myocarditis, Myocarditis/diagnostic imaging, Humans, Computer-Assisted Image Processing/methods, Neural Networks (Computer)
11.
Comput Methods Programs Biomed ; 250: 108200, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38677080

ABSTRACT

BACKGROUND AND OBJECTIVES: Artificial intelligence (AI) models trained on multi-centric and multi-device studies can provide more robust insights and research findings compared to single-center studies. However, variability in acquisition protocols and equipment can introduce inconsistencies that hamper the effective pooling of multi-source datasets. This systematic review evaluates strategies for image harmonization, which standardizes appearances to enable reliable AI analysis of multi-source medical imaging. METHODS: A literature search following PRISMA guidelines was conducted to identify relevant papers published between 2013 and 2023 analyzing multi-centric and multi-device medical imaging studies that utilized image harmonization approaches. RESULTS: Common image harmonization techniques included grayscale normalization (improving classification accuracy by up to 24.42%), resampling (increasing the percentage of robust radiomics features from 59.5% to 89.25%), and color normalization (enhancing AUC by up to 0.25 in external test sets). Initially, mathematical and statistical methods dominated, but machine and deep learning adoption has risen recently. Color imaging modalities like digital pathology and dermatology have remained prominent application areas, though harmonization efforts have expanded to diverse fields including radiology, nuclear medicine, and ultrasound imaging. In all the modalities covered by this review, image harmonization improved AI performance, with gains of up to 24.42% in classification accuracy and 47% in segmentation Dice scores. CONCLUSIONS: Continued progress in image harmonization represents a promising strategy for advancing healthcare by enabling large-scale, reliable analysis of integrated multi-source datasets using AI.
Standardizing imaging data across clinical settings can help realize personalized, evidence-based care supported by data-driven technologies while mitigating biases associated with specific populations or acquisition protocols.
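Grayscale normalization, one of the harmonization families above, can be as simple as mapping each image's intensity distribution to a shared target mean and spread. This is a minimal assumed sketch, not any specific method from the reviewed papers.

```python
import numpy as np

def harmonize_grayscale(img, target_mean=0.5, target_std=0.15):
    """Shift and scale an image's intensities to a shared target mean/std,
    so images from different scanners land on a comparable scale."""
    img = img.astype(float)
    z = (img - img.mean()) / (img.std() + 1e-8)   # per-image z-score
    return np.clip(z * target_std + target_mean, 0.0, 1.0)

rng = np.random.default_rng(0)
scanner_a = rng.normal(0.3, 0.05, size=(64, 64))  # dim, low-contrast scanner
scanner_b = rng.normal(0.7, 0.20, size=(64, 64))  # bright, high-contrast scanner
ha, hb = harmonize_grayscale(scanner_a), harmonize_grayscale(scanner_b)
print(round(float(ha.mean()), 2), round(float(hb.mean()), 2))  # common scale
```

More elaborate variants (histogram matching, learned style transfer) follow the same principle of removing scanner-specific intensity statistics before pooling.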


Subjects
Artificial Intelligence, Diagnostic Imaging, Humans, Diagnostic Imaging/standards, Computer-Assisted Image Processing/methods, Multicenter Studies as Topic
12.
Comput Biol Med ; 173: 108280, 2024 May.
Article in English | MEDLINE | ID: mdl-38547655

ABSTRACT

BACKGROUND: Timely detection of neurodevelopmental and neurological conditions is crucial for early intervention. Specific Language Impairment (SLI) in children and Parkinson's disease (PD) manifests in speech disturbances that may be exploited for diagnostic screening using recorded speech signals. We were motivated to develop an accurate yet computationally lightweight model for speech-based detection of SLI and PD, employing novel feature engineering techniques to mimic the adaptable dynamic weight assignment network capability of deep learning architectures. MATERIALS AND METHODS: In this research, we have introduced an advanced feature engineering model incorporating a novel feature extraction function, the Factor Lattice Pattern (FLP), which is a quantum-inspired method and uses a superposition-like mechanism, making it dynamic in nature. The FLP encompasses eight distinct patterns, from which the most appropriate pattern was discerned based on the data structure. Through the implementation of the FLP, we automatically extracted signal-specific textural features. Additionally, we developed a new feature engineering model to assess the efficacy of the FLP. This model is self-organizing, producing nine potential results and subsequently choosing the optimal one. Our speech classification framework consists of (1) feature extraction using the proposed FLP and a statistical feature extractor; (2) feature selection employing iterative neighborhood component analysis and an intersection-based feature selector; (3) classification via support vector machine and k-nearest neighbors; and (4) outcome determination using combinational majority voting to select the most favorable results. RESULTS: To validate the classification capabilities of our proposed feature engineering model, designed to automatically detect PD and SLI, we employed three speech datasets of PD and SLI patients. 
The presented FLP-based model achieved classification accuracies of more than 95% on the PD datasets and 99.79% on the SLI dataset. CONCLUSIONS: Our results indicate that the proposed model is an accurate alternative to deep learning models in classifying neurological conditions using speech signals.


Subjects
Parkinson Disease, Specific Language Disorder, Child, Humans, Speech, Parkinson Disease/diagnosis, Support Vector Machine
13.
Comput Biol Med ; 172: 108207, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38489986

ABSTRACT

Artificial intelligence (AI) techniques are increasingly used in computer-aided diagnostic tools in medicine. These techniques can also help to identify hypertension (HTN), a global health issue, in its early stage. Automated HTN detection uses socio-demographic data, clinical data, and physiological signals. Additionally, signs of secondary HTN can be identified using various imaging modalities. This systematic review examines related work on automated HTN detection. We identify the datasets, techniques, and classifiers used to develop AI models from clinical data, physiological signals, and fused data (a combination of both). Image-based models for assessing secondary HTN are also reviewed. The majority of the studies have primarily utilized single-modality approaches, such as biological signals (e.g., electrocardiography, photoplethysmography) and medical imaging (e.g., magnetic resonance angiography, ultrasound). Surprisingly, only a small portion of the studies (22 out of 122) utilized a multi-modal fusion approach combining data from different sources. Even fewer investigated integrating clinical data, physiological signals, and medical imaging to understand the intricate relationships between these factors. Future research directions are discussed that could build better healthcare systems for early HTN detection through more integrated modeling of multi-modal data sources.


Subjects
Hypertension, Medicine, Humans, Artificial Intelligence, Electrocardiography, Hypertension/diagnostic imaging, Magnetic Resonance Angiography
14.
Comput Methods Programs Biomed ; 247: 108076, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38422891

ABSTRACT

BACKGROUND AND AIM: Anxiety disorder is common; early diagnosis is crucial for management. Anxiety can induce physiological changes in the brain and heart. We aimed to develop an efficient and accurate handcrafted feature engineering model for automated anxiety detection using ECG signals. MATERIALS AND METHODS: We studied open-access electrocardiography (ECG) data of 19 subjects collected via wearable sensors while they were shown videos that might induce anxiety. Using the Hamilton Anxiety Rating Scale, subjects were categorized into normal, light anxiety, moderate anxiety, and severe anxiety groups. ECGs were divided into non-overlapping 4- (Case 1), 5- (Case 2), and 6-second (Case 3) segments for analysis. We proposed a self-organized dynamic pattern-based feature extraction function, the probabilistic binary pattern (PBP), in which the patterns within the function were determined by the probabilities of the input signal-dependent values. This was combined with the tunable q-factor wavelet transform to facilitate multileveled generation of feature vectors in both the spatial and frequency domains. Neighborhood component analysis and Chi2 functions were used to select features and reduce data dimensionality. Shallow k-nearest neighbors and support vector machine classifiers were used to calculate four (2 × 2) classifier-wise results per input signal. From the latter, a novel self-organized combinational majority voting scheme was applied to calculate an additional five voted results. The optimal final model outcome was chosen from among the nine (classifier-wise and voted) results using a greedy algorithm. RESULTS: Our model achieved classification accuracies of over 98.5% for all three cases. Ablation studies confirmed the incremental accuracy of PBP-based feature engineering over traditional local binary pattern feature extraction. CONCLUSIONS: The results demonstrate the feasibility and accuracy of our PBP-based feature engineering model for anxiety classification using ECG signals.
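The non-overlapping 4/5/6-second segmentation step above can be sketched as follows; the sampling rate and toy record are assumptions for illustration.

```python
import numpy as np

def segment_ecg(signal, fs, seconds):
    """Split an ECG record into non-overlapping fixed-length windows,
    as in the 4/5/6-second cases; the tail remainder is dropped."""
    win = int(fs * seconds)
    n = len(signal) // win
    return signal[: n * win].reshape(n, win)

fs = 256                                                  # assumed sampling rate
record = np.random.default_rng(0).normal(size=fs * 61)    # 61 s of toy "ECG"
for sec in (4, 5, 6):                                     # Cases 1-3
    segs = segment_ecg(record, fs, sec)
    print(sec, segs.shape)   # rows = segments, columns = samples per segment
```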


Subjects
Electrocardiography, Wavelet Analysis, Humans, Algorithms, Anxiety/diagnosis, Anxiety Disorders, Computer-Assisted Signal Processing
15.
Med Eng Phys ; 124: 104107, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38418014

ABSTRACT

Today, depression is a common problem that affects many people all over the world. It can impact a person's mood and quality of life unless identified and treated immediately. Owing to the hectic and stressful nature of modern life, depression has become a leading cause of mental health illness. Signals from electroencephalograms (EEG) are frequently used to detect depression, but manual detection through EEG data analysis is difficult, time-consuming, and requires a high level of skill. Hence, an automated depression detection system using EEG signals is proposed in this study. The study uses a clinical dataset provided by the Department of Psychiatry at the Government Medical College (GMC) in Kozhikode, Kerala, India, consisting of 15 depressed patients and 15 healthy subjects, and the publicly available Multi-modal Open Dataset for Mental-disorder Analysis (MODMA), hosted on the UK Data Service ReShare, consisting of 24 depressed patients and 29 healthy subjects. We developed a novel Deep Wavelet Scattering Network (DWSN) for the automated detection of depression in EEG signals. The extracted features were fed into several machine learning algorithms to choose the best-performing classifier. For the clinical GMC dataset, a Medium Neural Network (MNN) achieved the highest accuracy of 99.95% with a Kappa value of 0.999; the precision, recall, and F1-score were all 1. For the MODMA dataset, a Wide Neural Network (WNN) achieved the highest accuracy of 99.3% with a Kappa value of 0.987; the precision, recall, and F1-score were all 0.99. The performance of the proposed approach is superior to all current methodologies. The proposed method can be used to automatically diagnose depression both at home and in clinical settings.
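Cohen's kappa, reported alongside accuracy above, corrects agreement for chance. A small illustration with scikit-learn (the labels are made up):

```python
from sklearn.metrics import accuracy_score, cohen_kappa_score

# Ten toy subject labels: 1 = depressed, 0 = healthy
y_true = [1, 1, 1, 0, 0, 0, 1, 0, 1, 0]
y_pred = [1, 1, 1, 0, 0, 0, 1, 0, 0, 0]   # one depressed subject missed

print(accuracy_score(y_true, y_pred))                  # 0.9
print(round(cohen_kappa_score(y_true, y_pred), 3))     # 0.8
```

With 50/50 true classes the chance agreement is about 0.5, so kappa (0.8) sits well below raw accuracy (0.9); on imbalanced data the gap can be far larger, which is why kappa is a useful companion metric.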


Subjects
Depression, Quality of Life, Humans, Depression/diagnosis, Neural Networks (Computer), Algorithms, Machine Learning, Electroencephalography/methods
16.
Physiol Meas ; 2024 Jan 18.
Article in English | MEDLINE | ID: mdl-38237198

ABSTRACT

Insomnia is a prevalent sleep disorder characterized by difficulties in initiating sleep or experiencing non-restorative sleep. It is a multifaceted condition that impacts both the quantity and quality of an individual's sleep. Recent advancements in machine learning (ML) and deep learning (DL) have enabled automated sleep analysis using physiological signals. This has led to the development of technologies for more accurate detection of various sleep disorders, including insomnia. This paper explores the algorithms and techniques for automatic insomnia detection. Methods: We followed the recommendations given in the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) during our process of content discovery. Our review encompasses research papers published between 2015 and 2023, with a specific emphasis on automating the identification of insomnia. From a selection of well-regarded journals, we included more than 30 publications dedicated to insomnia detection. In our analysis, we assessed the performance of various methods for detecting insomnia, considering different datasets and physiological signals. A common thread across all the papers we reviewed was the utilization of artificial intelligence (AI) models, trained and tested using annotated physiological signals. Upon closer examination, we identified the utilization of 15 distinct algorithms for this detection task. Results: The major goal of this research is to conduct a thorough study to categorize, compare, and assess the key traits of automated systems for identifying insomnia. Our analysis offers complete and in-depth information. The essential components under investigation in the automated technique include the data input source, objective, machine learning (ML) and deep learning (DL) network, training framework, and references to databases. We classified pertinent research studies based on ML and DL model perspectives, considering factors like learning structure and input data types. Conclusion: Based on our review of the studies featured in this paper, we have identified a notable research gap in the current methods for identifying insomnia and opportunities for future advancements in the automation of insomnia detection. While the current techniques have shown promising results, there is still room for improvement in terms of accuracy and reliability. Future developments in technology and machine learning algorithms could help address these limitations and enable more effective and efficient identification of insomnia.

17.
Comput Methods Programs Biomed ; 244: 107992, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38218118

ABSTRACT

BACKGROUND AND OBJECTIVE: Sleep staging is an essential step in sleep disorder diagnosis, a task that is time-intensive and laborious when performed manually by experts. Automatic sleep stage classification methods not only relieve experts of these demanding tasks but also enhance the accuracy and efficiency of the classification process. METHODS: A novel multi-channel biosignal-based model, combining a 3D convolutional operation and a graph convolutional operation, is proposed for automated sleep staging using various physiological signals. Both the 3D convolution and the graph convolution can aggregate information from neighboring brain areas, which helps to learn intrinsic connections from the biosignals. Electroencephalogram (EEG), electromyogram (EMG), electrooculogram (EOG) and electrocardiogram (ECG) signals are employed to extract time-domain and frequency-domain features. These features are then input to the 3D convolutional and graph convolutional branches, respectively. The 3D convolution branch explores the correlations between multi-channel signals and between multi-band waves within each channel over the time series, while the graph convolution branch explores the connections between each channel and each frequency band. In this work, we developed the proposed multi-channel convolution combined sleep stage classification model (MixSleepNet) using the ISRUC datasets (Subgroup 3 and 50 random samples from Subgroup 1). RESULTS: Based on the first expert's labels, MixSleepNet yielded accuracy, F1-score and Cohen kappa scores of 0.830, 0.821 and 0.782, respectively, on ISRUC-S3, and 0.812, 0.786 and 0.756, respectively, on ISRUC-S1. Based on the second expert's labels, the accuracies, F1-scores and Cohen kappa coefficients were 0.837, 0.820 and 0.789 on ISRUC-S3, and 0.829, 0.791 and 0.775 on ISRUC-S1. CONCLUSION: The proposed method outperformed all compared models on these metrics. Additional experiments on the ISRUC-S3 sub-dataset evaluated the contribution of each module to the classification performance.
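The abstract does not include code; as a rough illustration of what the graph-convolution branch does (aggregating features across neighboring recording channels before projecting them), here is a minimal NumPy sketch of one graph-convolution layer. The ring adjacency, channel count, feature sizes and weights are all invented for the example and are not the authors' MixSleepNet configuration:

```python
import numpy as np

def graph_conv(H, A, W):
    # One graph-convolution layer: aggregate per-channel features from
    # neighboring channels via the symmetrically normalized adjacency,
    # then apply a learnable projection and ReLU.
    A_hat = A + np.eye(A.shape[0])                  # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ H @ W, 0.0)          # ReLU activation

# 6 recording channels arranged in a ring; 4 frequency-band features each
n = 6
A = np.zeros((n, n))
for i in range(n):                                  # ring adjacency
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0

rng = np.random.default_rng(0)
H = rng.standard_normal((n, 4))                     # per-channel features
W = rng.standard_normal((4, 8))                     # learnable projection
out = graph_conv(H, A, W)
print(out.shape)                                    # (6, 8)
```

Each output row mixes a channel's own features with those of its graph neighbors, which is how such a branch can model inter-channel connections that a plain per-channel network would miss.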


Subjects
Sleep Stages , Sleep , Sleep Stages/physiology , Time Factors , Electroencephalography/methods , Electrooculography/methods
18.
J Clin Ultrasound ; 52(2): 131-143, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37983736

ABSTRACT

PURPOSE: The quality of ultrasound images is degraded by speckle and Gaussian noise. This study aims to develop a deep-learning (DL)-based filter for ultrasound image denoising. METHODS: A novel DL-based filter using adaptive residual (AdaRes) learning was proposed. Five image quality metrics (IQMs) and 27 radiomics features were used to evaluate the denoising results. The effect of the proposed filter, AdaRes, on four pre-trained convolutional neural network (CNN) classification models and three radiologists was assessed. RESULTS: The AdaRes filter was tested on both natural-image and ultrasound-image databases. IQM results indicate that AdaRes removed noise at three different noise levels with the highest performance. In addition, a radiomics study showed that AdaRes did not distort tissue textures and preserved most radiomics features. AdaRes also improved the classification performance of CNNs in different settings. Finally, AdaRes improved the mean overall performance (AUC) of three radiologists from 0.494 to 0.702 in the classification of benign and malignant lesions. CONCLUSIONS: AdaRes filters out noise from ultrasound images effectively and can be used as an auxiliary preprocessing step in computer-aided diagnosis systems. Radiologists may use it to remove unwanted noise and improve ultrasound image quality before interpretation.
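The residual-learning idea behind filters of this kind is that the network predicts the noise map and subtracts it from the input, rather than regressing the clean image directly. A toy NumPy sketch of that pattern follows; the 3x3 local-mean "noise estimator", image size and noise level are placeholders for illustration, not the AdaRes network itself:

```python
import numpy as np

def residual_denoise(noisy, predict_noise):
    # Residual learning: estimate the noise component and subtract it,
    # instead of predicting the clean image directly.
    return noisy - predict_noise(noisy)

def toy_noise_estimator(img):
    # Stand-in for a trained CNN: estimate the noise as the deviation
    # from a 3x3 local mean (a crude smoother, purely illustrative).
    pad = np.pad(img, 1, mode="edge")
    local_mean = sum(
        pad[i:i + img.shape[0], j:j + img.shape[1]]
        for i in range(3) for j in range(3)
    ) / 9.0
    return img - local_mean

rng = np.random.default_rng(1)
clean = np.zeros((32, 32))
clean[8:24, 8:24] = 1.0                        # a bright "lesion"
noisy = clean + 0.2 * rng.standard_normal(clean.shape)
denoised = residual_denoise(noisy, toy_noise_estimator)

mse = lambda a, b: float(np.mean((a - b) ** 2))
print(mse(noisy, clean), mse(denoised, clean))
```

With a trained network in place of the toy estimator, the same subtraction step applies; predicting the residual is often easier to learn because the noise map has simpler statistics than the full anatomy.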


Subjects
Artificial Intelligence , Deep Learning , Humans , Radiomics , Signal-To-Noise Ratio , Ultrasonography
19.
Comput Methods Programs Biomed ; 244: 107932, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38008040

ABSTRACT

BACKGROUND AND OBJECTIVES: Non-alcoholic fatty liver disease (NAFLD) is a common liver disease with a rapidly growing incidence worldwide. For prognostication and therapeutic decisions, it is important to distinguish the pathological stages of NAFLD: steatosis, steatohepatitis, and liver fibrosis, which are definitively diagnosed on invasive biopsy. Non-invasive ultrasound (US) imaging, including the US elastography technique, and clinical parameters can be used to diagnose and grade NAFLD and its complications. Artificial intelligence (AI) is increasingly being harnessed for developing NAFLD diagnostic models based on clinical, biomarker, or imaging data. In this work, we systematically reviewed the literature for AI-enabled NAFLD diagnostic models based on US (including elastography) and clinical (including serological) data. METHODS: We performed a comprehensive search on Google Scholar, Scopus, and PubMed search engines for articles published between January 2005 and June 2023 related to AI models for NAFLD diagnosis based on US and/or clinical parameters using the following search terms: "non-alcoholic fatty liver disease", "non-alcoholic steatohepatitis", "deep learning", "machine learning", "artificial intelligence", "ultrasound imaging", "sonography", "clinical information". RESULTS: We reviewed 64 published models that used either US (including elastography) or clinical data input to detect the presence of NAFLD, non-alcoholic steatohepatitis, and/or fibrosis, and in some cases, the severity of steatosis, inflammation, and/or fibrosis as well. The performances of the published models were summarized and stratified by data input and algorithm used, which could be broadly divided into machine and deep learning approaches. CONCLUSION: AI models based on US imaging and clinical data can reliably detect NAFLD and its complications, thereby reducing diagnostic costs and the need for invasive liver biopsy.
The models offer advantages of efficiency, accuracy, and accessibility, and serve as virtual assistants for specialists to accelerate disease diagnosis and reduce treatment costs for patients and healthcare systems.


Subjects
Non-alcoholic Fatty Liver Disease , Humans , Non-alcoholic Fatty Liver Disease/diagnostic imaging , Non-alcoholic Fatty Liver Disease/pathology , Artificial Intelligence , Liver Cirrhosis , Biomarkers , Ultrasonography , Liver/diagnostic imaging , Biopsy
20.
Physiol Meas ; 44(12)2023 Dec 29.
Article in English | MEDLINE | ID: mdl-38081126

ABSTRACT

Objective. Pre-participation medical screening of athletes is necessary to pinpoint individuals susceptible to cardiovascular events. Approach. The article presents a reinforcement learning (RL)-based multilayer perceptron, termed MLP-RL-CRD, designed to detect cardiovascular risk among athletes. The model underwent training using a publicized dataset that included the anthropological measurements (such as height and weight) and biomedical metrics (covering blood pressure and pulse rate) of 26,002 athletes. To address the data imbalance, a novel RL-based technique was adopted. The problem was framed as a series of sequential decisions in which an agent classified a received instance and received a reward at each level. To resolve the sensitivity of conventional gradient-based learning methods to initialization, a mutual learning-based artificial bee colony (ML-ABC) was proposed. Main Results. The model outcomes were validated against positive (P) and negative (N) ECG findings that had been labeled by experts to signify individuals 'at risk' and 'not at risk,' respectively. The MLP-RL-CRD approach achieves superior outcomes (F-measure 87.4%; geometric mean 89.6%) compared with other deep models and traditional machine learning techniques. Optimal values for crucial parameters, including the reward function, were identified for the model based on experiments on the study dataset. Ablation studies, which omitted elements of the suggested model, affirmed the autonomous, positive, stepwise influence of these components on the model's performance. Significance. This study introduces a novel, effective method for early cardiovascular risk detection in athletes, merging reinforcement learning and multilayer perceptrons, advancing medical screening and predictive healthcare. The results could have far-reaching implications for athlete health management and the broader field of predictive healthcare analytics.
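The core idea of rewarding the agent more for correctly handling rare "at risk" cases can be sketched far more simply than the full MLP-RL-CRD pipeline. Below is a loose illustration using a reward-weighted logistic classifier on synthetic imbalanced data; the reward values, data distribution and training settings are invented for the sketch and this is not the authors' architecture or their ML-ABC initializer:

```python
import numpy as np

def train_reward_weighted(X, y, rewards, lr=0.1, epochs=200):
    # Logistic classifier whose per-instance gradient is scaled by a
    # "reward": minority-class mistakes cost more, mimicking the
    # reward-shaping idea used against class imbalance.
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probabilities
        g = rewards * (p - y)                    # reward-scaled residuals
        w -= lr * (X.T @ g) / len(y)
        b -= lr * g.mean()
    return w, b

rng = np.random.default_rng(2)
# imbalanced toy screening data: 180 "not at risk" vs 20 "at risk"
X = np.vstack([rng.normal(-1.0, 1.0, (180, 2)),
               rng.normal(+1.0, 1.0, (20, 2))])
y = np.r_[np.zeros(180), np.ones(20)]
rewards = np.where(y == 1, 9.0, 1.0)             # minority weighted 9x
w, b = train_reward_weighted(X, y, rewards)
pred = (X @ w + b) > 0
print("minority recall:", round(float(pred[y == 1].mean()), 2))
```

Without the reward weighting, a plain classifier on such data tends to favor the majority class; scaling the minority-class gradient pushes the decision boundary toward the majority and raises recall on the rare "at risk" group, which is exactly the failure mode screening applications care about.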


Subjects
Cardiovascular Diseases , Humans , Cardiovascular Diseases/diagnosis , Risk Factors , Neural Networks, Computer , Machine Learning , Athletes