Results 1 - 14 of 14
1.
J Ultrasound Med ; 2024 Jul 19.
Article in English | MEDLINE | ID: mdl-39032010

ABSTRACT

Artificial intelligence (AI) models can play a more effective role in managing patients given the explosion of digital health records available in the healthcare industry. Machine-learning (ML) and deep-learning (DL) techniques are two methods used to develop predictive models that serve to improve clinical processes in the healthcare industry. These models are also implemented in medical imaging machines to empower them with intelligent decision systems that aid physicians in their decisions and increase the efficiency of their routine clinical practices. The physicians who are going to work with these machines need insight into what happens in the background of the implemented models and how they work. More importantly, they need to be able to interpret the models' predictions, assess their performance, and compare them to find the one with the best performance and fewest errors. This review aims to provide an accessible overview of key evaluation metrics for physicians without AI expertise. In this review, we developed four real-world diagnostic AI models (two ML and two DL models) for breast cancer diagnosis using ultrasound images. Then, 23 of the most commonly used evaluation metrics were reviewed in plain terms for physicians. Finally, all metrics were calculated and used practically to interpret and evaluate the outputs of the models. Accessible explanations and practical applications empower physicians to effectively interpret, evaluate, and optimize AI models to ensure safety and efficacy when integrated into clinical practice.
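
The review above walks through standard evaluation metrics. As an illustrative sketch (not the authors' code, and on hypothetical labels), the core binary-classification metrics can be computed directly from a confusion-matrix tally:

```python
# Illustrative sketch: computing accuracy, precision, recall (sensitivity),
# specificity, and F1 from paired true/predicted binary labels.
# The label lists below are hypothetical, not data from the reviewed study.

def binary_metrics(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0      # a.k.a. sensitivity
    specificity = tn / (tn + fp) if tn + fp else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "specificity": specificity, "f1": f1}

# Hypothetical labels: 1 = malignant, 0 = benign
m = binary_metrics([1, 0, 1, 1, 0, 0, 1, 0], [1, 0, 0, 1, 0, 1, 1, 0])
```

The same tally generalizes to the multi-class case by computing per-class scores and averaging them.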

2.
J Ultrasound Med ; 2024 Jul 25.
Article in English | MEDLINE | ID: mdl-39051752

ABSTRACT

OBJECTIVES: Breast cancer is a type of cancer caused by the uncontrolled growth of cells in the breast tissue. In some cases, erroneous diagnosis of breast cancer by specialists and unnecessary biopsies can lead to various negative consequences: radiologic examinations or clinical findings may raise the suspicion of breast cancer, but subsequent detailed evaluations may not confirm cancer. In addition to causing unnecessary anxiety and stress to patients, such diagnoses can also lead to unnecessary biopsy procedures, which are painful, expensive, and prone to misdiagnosis. Therefore, there is a need for the development of more accurate and reliable methods for breast cancer diagnosis. METHODS: In this study, we proposed an artificial intelligence (AI)-based method for automatically classifying breast solid mass lesions as benign vs malignant. For this purpose, a new breast cancer dataset (Breast-XD) was created with 791 solid mass lesions belonging to 752 different patients aged 18 to 85 years, which were examined by experienced radiologists between 2017 and 2022. RESULTS: Six classifiers, support vector machine (SVM), K-nearest neighbor (K-NN), random forest (RF), decision tree (DT), logistic regression (LR), and XGBoost, were trained on the training samples of the Breast-XD dataset. Then, each classifier made predictions on 159 test samples that it had not seen before. The highest classification result was obtained using the explainable XGBoost model (X2GAI) with an accuracy of 94.34%. An explainable structure was also implemented to build the reliability of the developed model. CONCLUSIONS: The results obtained by radiologists and the X2GAI model were compared against the diagnoses obtained from biopsy. It was observed that our developed model performed well in cases where experienced radiologists gave false positive results.

3.
Comput Methods Programs Biomed ; 244: 107992, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38218118

ABSTRACT

BACKGROUND AND OBJECTIVE: Sleep staging is an essential step in sleep disorder diagnosis, and it is time-intensive and laborious for experts to perform this work manually. Automatic sleep stage classification methods not only relieve experts of these demanding tasks but also enhance the accuracy and efficiency of the classification process. METHODS: A novel multi-channel biosignal-based model, constructed by combining a 3D convolutional operation and a graph convolutional operation, is proposed for automated sleep staging using various physiological signals. Both the 3D convolution and graph convolution can aggregate information from neighboring brain areas, which helps to learn intrinsic connections from the biosignals. Electroencephalogram (EEG), electromyogram (EMG), electrooculogram (EOG) and electrocardiogram (ECG) signals are employed to extract time domain and frequency domain features. Subsequently, these signals are input to the 3D convolutional and graph convolutional branches, respectively. The 3D convolution branch can explore the correlations between multi-channel signals and multi-band waves in each channel in the time series, while the graph convolution branch can explore the connections between each channel and each frequency band. In this work, we developed the proposed multi-channel convolution combined sleep stage classification model (MixSleepNet) using the ISRUC datasets (Subgroup 3 and 50 random samples from Subgroup 1). RESULTS: Based on the first expert's labels, MixSleepNet yielded accuracy, F1-score and Cohen kappa scores of 0.830, 0.821 and 0.782, respectively, for ISRUC-S3, and 0.812, 0.786, and 0.756, respectively, for ISRUC-S1. Based on the second expert's labels, the accuracies, F1-scores, and Cohen kappa coefficients for ISRUC-S3 and ISRUC-S1 were 0.837, 0.820, 0.789 and 0.829, 0.791, 0.775, respectively. CONCLUSION: The performance metrics of the proposed method are much better than those of all the compared models. Additional experiments were carried out on the ISRUC-S3 sub-dataset to evaluate the contribution of each module to the classification performance.
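
Cohen's kappa, reported above alongside accuracy and F1, corrects raw agreement for agreement expected by chance. A minimal sketch on hypothetical sleep-stage label sequences (not study data):

```python
# Sketch of Cohen's kappa: (observed agreement - chance agreement)
# divided by (1 - chance agreement). The two label sequences are
# hypothetical stand-ins for a model's output and an expert's scoring.
from collections import Counter

def cohen_kappa(a, b):
    n = len(a)
    observed = sum(1 for x, y in zip(a, b) if x == y) / n
    ca, cb = Counter(a), Counter(b)
    # Chance agreement: sum over labels of the product of marginal rates.
    expected = sum((ca[l] / n) * (cb[l] / n) for l in set(a) | set(b))
    return (observed - expected) / (1 - expected)

# Hypothetical sleep-stage labels (stages coded as integers):
k = cohen_kappa([0, 1, 2, 2, 3, 1, 0, 2], [0, 1, 2, 2, 3, 1, 1, 2])
```

Here 7 of 8 labels agree (observed 0.875), but kappa lands lower (about 0.83) because some agreement would occur by chance.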


Subject(s)
Sleep Stages , Sleep , Sleep Stages/physiology , Time Factors , Electroencephalography/methods , Electrooculography/methods
4.
Med Eng Phys ; 124: 104107, 2024 02.
Article in English | MEDLINE | ID: mdl-38418014

ABSTRACT

Today, depression is a common problem that affects many people all over the world. It can impact a person's mood and quality of life unless identified and treated immediately. Due to the hectic and stressful nature of modern life, depression has become a leading cause of mental health illnesses. Signals from electroencephalograms (EEG) are frequently used to detect depression. Manually detecting depression through EEG data analysis is difficult, time-consuming, and requires high skill. Hence, an automated depression detection system using EEG signals is proposed in this study. The proposed study uses a clinical dataset provided by the Department of Psychiatry at the Government Medical College (GMC) in Kozhikode, Kerala, India, which consisted of 15 depressed patients and 15 healthy subjects, and the publicly available Multi-modal Open Dataset for Mental-disorder Analysis (MODMA), available at UK Data Service ReShare, which consisted of 24 depressed patients and 29 healthy subjects. In this study, we developed a novel Deep Wavelet Scattering Network (DWSN) for the automated detection of depression from EEG signals. The best-performing classifier is then chosen by feeding the features into several machine-learning algorithms. For the clinical GMC dataset, a Medium Neural Network (MNN) achieved the highest accuracy of 99.95% with a Kappa value of 0.999. Using the suggested methods, the precision, recall, and F1-score are all 1. For the MODMA dataset, a Wide Neural Network (WNN) achieved the highest accuracy of 99.3% with a Kappa value of 0.987. Using the suggested methods, the precision, recall, and F1-score are all 0.99. In comparison to all current methodologies, the performance of the suggested research is superior. The proposed method can be used to automatically diagnose depression both at home and in clinical settings.


Subject(s)
Depression , Quality of Life , Humans , Depression/diagnosis , Neural Networks, Computer , Algorithms , Machine Learning , Electroencephalography/methods
5.
Physiol Meas ; 2024 Jan 18.
Article in English | MEDLINE | ID: mdl-38237198

ABSTRACT

Insomnia is a prevalent sleep disorder characterized by difficulties in initiating sleep or experiencing non-restorative sleep. It is a multifaceted condition that impacts both the quantity and quality of an individual's sleep. Recent advancements in machine learning (ML) and deep learning (DL) have enabled automated sleep analysis using physiological signals. This has led to the development of technologies for more accurate detection of various sleep disorders, including insomnia. This paper explores the algorithms and techniques for automatic insomnia detection. Methods: We followed the recommendations given in the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) during our process of content discovery. Our review encompasses research papers published between 2015 and 2023, with a specific emphasis on automating the identification of insomnia. From a selection of well-regarded journals, we included more than 30 publications dedicated to insomnia detection. In our analysis, we assessed the performance of various methods for detecting insomnia, considering different datasets and physiological signals. A common thread across all the papers we reviewed was the utilization of artificial intelligence (AI) models, trained and tested using annotated physiological signals. Upon closer examination, we identified the utilization of 15 distinct algorithms for this detection task. Results: The major goal of this research is to conduct a thorough study to categorize, compare, and assess the key traits of automated systems for identifying insomnia. Our analysis offers complete and in-depth information. The essential components under investigation in the automated technique include the data input source, objective, machine learning (ML) and deep learning (DL) network, training framework, and references to databases. We classified pertinent research studies based on ML and DL model perspectives, considering factors like learning structure and input data types. Conclusion: Based on our review of the studies featured in this paper, we have identified a notable research gap in the current methods for identifying insomnia and opportunities for future advancements in the automation of insomnia detection. While the current techniques have shown promising results, there is still room for improvement in terms of accuracy and reliability. Future developments in technology and machine learning algorithms could help address these limitations and enable more effective and efficient identification of insomnia.

6.
Comput Methods Programs Biomed ; 254: 108253, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38861878

ABSTRACT

BACKGROUND AND OBJECTIVES: Optical coherence tomography (OCT) has ushered in a transformative era in the domain of ophthalmology, offering non-invasive imaging with high resolution for ocular disease detection. OCT, which is frequently used in diagnosing fundamental ocular pathologies, such as glaucoma and age-related macular degeneration (AMD), plays an important role in the widespread adoption of this technology. Apart from glaucoma and AMD, we will also investigate pertinent pathologies, such as epiretinal membrane (ERM), macular hole (MH), macular dystrophy (MD), vitreomacular traction (VMT), diabetic maculopathy (DMP), cystoid macular edema (CME), central serous chorioretinopathy (CSC), diabetic macular edema (DME), diabetic retinopathy (DR), drusen, glaucomatous optic neuropathy (GON), neovascular AMD (nAMD), myopia macular degeneration (MMD) and choroidal neovascularization (CNV) diseases. This comprehensive review examines the role that OCT-derived images play in detecting, characterizing, and monitoring eye diseases. METHOD: The 2020 PRISMA guideline was used to structure a systematic review of research on various eye conditions using machine learning (ML) or deep learning (DL) techniques. A thorough search across IEEE, PubMed, Web of Science, and Scopus databases yielded 1787 publications, of which 1136 remained after removing duplicates. Subsequent exclusion of conference papers, review papers, and non-open-access articles reduced the selection to 511 articles. Further scrutiny led to the exclusion of 435 more articles due to lower-quality indexing or irrelevance, resulting in 76 journal articles for the review. RESULTS: During our investigation, we found that a major challenge for ML-based decision support is the abundance of features and the determination of their significance. In contrast, DL-based decision support is characterized by a plug-and-play nature rather than relying on a trial-and-error approach. 
Furthermore, we observed that pre-trained networks are practical and especially useful when working on complex images such as OCT. Consequently, pre-trained deep networks were frequently utilized for classification tasks. Currently, medical decision support aims to reduce the workload of ophthalmologists and retina specialists during routine tasks. In the future, it might be possible to create continuous learning systems that can predict ocular pathologies by identifying subtle changes in OCT images.


Subject(s)
Artificial Intelligence , Retina , Tomography, Optical Coherence , Humans , Tomography, Optical Coherence/methods , Retina/diagnostic imaging , Retinal Diseases/diagnostic imaging , Machine Learning , Deep Learning
7.
Comput Methods Programs Biomed ; 247: 108076, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38422891

ABSTRACT

BACKGROUND AND AIM: Anxiety disorder is common; early diagnosis is crucial for management. Anxiety can induce physiological changes in the brain and heart. We aimed to develop an efficient and accurate handcrafted feature engineering model for automated anxiety detection using ECG signals. MATERIALS AND METHODS: We studied open-access electrocardiography (ECG) data of 19 subjects collected via wearable sensors while they were shown videos that might induce anxiety. Using the Hamilton Anxiety Rating Scale, subjects are categorized into normal, light anxiety, moderate anxiety, and severe anxiety groups. ECGs were divided into non-overlapping 4- (Case 1), 5- (Case 2), and 6-second (Case 3) segments for analysis. We proposed a self-organized dynamic pattern-based feature extraction function-probabilistic binary pattern (PBP)-in which patterns within the function were determined by the probabilities of the input signal-dependent values. This was combined with tunable q-factor wavelet transform to facilitate multileveled generation of feature vectors in both spatial and frequency domains. Neighborhood component analysis and Chi2 functions were used to select features and reduce data dimensionality. Shallow k-nearest neighbors and support vector machine classifiers were used to calculate four (=2 × 2) classifier-wise results per input signal. From the latter, novel self-organized combinational majority voting was applied to calculate an additional five voted results. The optimal final model outcome was chosen from among the nine (classifier-wise and voted) results using a greedy algorithm. RESULTS: Our model achieved classification accuracies of over 98.5 % for all three cases. Ablation studies confirmed the incremental accuracy of PBP-based feature engineering over traditional local binary pattern feature extraction. CONCLUSIONS: The results demonstrated the feasibility and accuracy of our PBP-based feature engineering model for anxiety classification using ECG signals.
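
The abstract's pipeline fuses several classifier-wise results by majority voting. A minimal sketch of plain hard majority voting (the paper's version is more elaborate and self-organized; the prediction lists below are hypothetical):

```python
# Minimal sketch of hard majority voting: for each test segment, take the
# most frequent label across several classifiers' predictions.
# The three prediction lists are hypothetical, not the study's outputs.
from collections import Counter

def majority_vote(*prediction_lists):
    fused = []
    for preds in zip(*prediction_lists):
        fused.append(Counter(preds).most_common(1)[0][0])
    return fused

# Hypothetical anxiety-level predictions (0 = normal .. 3 = severe)
# from three classifier configurations on five ECG segments:
fused = majority_vote([0, 2, 1, 3, 0],
                      [0, 2, 2, 3, 0],
                      [1, 2, 1, 3, 0])
```

With an odd number of voters, binary ties cannot occur; for multi-class ties, `Counter.most_common` falls back to first-seen order, so a real system would define an explicit tie-breaking rule.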


Subject(s)
Electrocardiography , Wavelet Analysis , Humans , Algorithms , Anxiety/diagnosis , Anxiety Disorders , Signal Processing, Computer-Assisted
8.
Comput Methods Programs Biomed ; 250: 108200, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38677080

ABSTRACT

BACKGROUND AND OBJECTIVES: Artificial intelligence (AI) models trained on multi-centric and multi-device studies can provide more robust insights and research findings compared to single-center studies. However, variability in acquisition protocols and equipment can introduce inconsistencies that hamper the effective pooling of multi-source datasets. This systematic review evaluates strategies for image harmonization, which standardizes appearances to enable reliable AI analysis of multi-source medical imaging. METHODS: A literature search using PRISMA guidelines was conducted to identify relevant papers published between 2013 and 2023 analyzing multi-centric and multi-device medical imaging studies that utilized image harmonization approaches. RESULTS: Common image harmonization techniques included grayscale normalization (improving classification accuracy by up to 24.42 %), resampling (increasing the percentage of robust radiomics features from 59.5 % to 89.25 %), and color normalization (enhancing AUC by up to 0.25 in external test sets). Initially, mathematical and statistical methods dominated, but machine and deep learning adoption has risen recently. Color imaging modalities like digital pathology and dermatology have remained prominent application areas, though harmonization efforts have expanded to diverse fields including radiology, nuclear medicine, and ultrasound imaging. In all the modalities covered by this review, image harmonization improved AI performance, with increases of up to 24.42 % in classification accuracy and 47 % in segmentation Dice scores. CONCLUSIONS: Continued progress in image harmonization represents a promising strategy for advancing healthcare by enabling large-scale, reliable analysis of integrated multi-source datasets using AI.
Standardizing imaging data across clinical settings can help realize personalized, evidence-based care supported by data-driven technologies while mitigating biases associated with specific populations or acquisition protocols.


Subject(s)
Artificial Intelligence , Diagnostic Imaging , Humans , Diagnostic Imaging/standards , Image Processing, Computer-Assisted/methods , Multicenter Studies as Topic
9.
Physiol Meas ; 45(5)2024 May 21.
Article in English | MEDLINE | ID: mdl-38697206

ABSTRACT

Objective. Myocarditis poses a significant health risk, often precipitated by viral infections like coronavirus disease, and can lead to fatal cardiac complications. As a less invasive alternative to the standard diagnostic practice of endomyocardial biopsy, which is highly invasive and thus limited to severe cases, cardiac magnetic resonance (CMR) imaging offers a promising solution for detecting myocardial abnormalities. Approach. This study introduces a deep model called ELRL-MD that combines ensemble learning and reinforcement learning (RL) for effective myocarditis diagnosis from CMR images. The model begins with pre-training via the artificial bee colony (ABC) algorithm to enhance the starting point for learning. An array of convolutional neural networks (CNNs) then works in concert to extract and integrate features from CMR images for accurate diagnosis. Leveraging the Z-Alizadeh Sani myocarditis CMR dataset, the model employs RL to navigate the dataset's imbalance by conceptualizing diagnosis as a decision-making process. Main results. ELRL-MD demonstrates remarkable efficacy, surpassing other deep learning, conventional machine learning, and transfer learning models, achieving an F-measure of 88.2% and a geometric mean of 90.6%. Extensive experimentation helped pinpoint the optimal reward function settings and the optimal number of CNNs. Significance. The study addresses the primary technical challenge of inherent data imbalance in CMR imaging datasets and the risk of models converging on local optima due to suboptimal initial weight settings. Further analysis, leaving out the ABC and RL components, confirmed their contributions to the model's overall performance, underscoring the effectiveness of addressing these critical technical challenges.


Subject(s)
Deep Learning , Magnetic Resonance Imaging , Myocarditis , Myocarditis/diagnostic imaging , Humans , Image Processing, Computer-Assisted/methods , Neural Networks, Computer
10.
Comput Biol Med ; 173: 108280, 2024 May.
Article in English | MEDLINE | ID: mdl-38547655

ABSTRACT

BACKGROUND: Timely detection of neurodevelopmental and neurological conditions is crucial for early intervention. Specific Language Impairment (SLI) in children and Parkinson's disease (PD) manifests in speech disturbances that may be exploited for diagnostic screening using recorded speech signals. We were motivated to develop an accurate yet computationally lightweight model for speech-based detection of SLI and PD, employing novel feature engineering techniques to mimic the adaptable dynamic weight assignment network capability of deep learning architectures. MATERIALS AND METHODS: In this research, we have introduced an advanced feature engineering model incorporating a novel feature extraction function, the Factor Lattice Pattern (FLP), which is a quantum-inspired method and uses a superposition-like mechanism, making it dynamic in nature. The FLP encompasses eight distinct patterns, from which the most appropriate pattern was discerned based on the data structure. Through the implementation of the FLP, we automatically extracted signal-specific textural features. Additionally, we developed a new feature engineering model to assess the efficacy of the FLP. This model is self-organizing, producing nine potential results and subsequently choosing the optimal one. Our speech classification framework consists of (1) feature extraction using the proposed FLP and a statistical feature extractor; (2) feature selection employing iterative neighborhood component analysis and an intersection-based feature selector; (3) classification via support vector machine and k-nearest neighbors; and (4) outcome determination using combinational majority voting to select the most favorable results. RESULTS: To validate the classification capabilities of our proposed feature engineering model, designed to automatically detect PD and SLI, we employed three speech datasets of PD and SLI patients. 
Our presented FLP-centric model achieved classification accuracies of more than 95% on the PD datasets and 99.79% on the SLI dataset. CONCLUSIONS: Our results indicate that the proposed model is an accurate alternative to deep learning models in classifying neurological conditions using speech signals.


Subject(s)
Parkinson Disease , Specific Language Disorder , Child , Humans , Speech , Parkinson Disease/diagnosis , Support Vector Machine
11.
Cogn Neurodyn ; 18(4): 1609-1625, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39104684

ABSTRACT

In this study, attention deficit hyperactivity disorder (ADHD), a childhood neurodevelopmental disorder, is being studied alongside its comorbidity, conduct disorder (CD), a behavioral disorder. Because ADHD and CD share commonalities, distinguishing them is difficult, thus increasing the risk of misdiagnosis. It is crucial that these two conditions are not mistakenly identified as the same because the treatment plan varies depending on whether the patient has CD or ADHD. Hence, this study proposes an electroencephalogram (EEG)-based deep learning system known as ADHD/CD-NET that is capable of objectively distinguishing ADHD, ADHD + CD, and CD. The 12-channel EEG signals were first segmented and converted into channel-wise continuous wavelet transform (CWT) correlation matrices. The resulting matrices were then used to train the convolutional neural network (CNN) model, and the model's performance was evaluated using 10-fold cross-validation. Gradient-weighted class activation mapping (Grad-CAM) was also used to provide explanations for the prediction result made by the 'black box' CNN model. Internal private dataset (45 ADHD, 62 ADHD + CD and 16 CD) and external public dataset (61 ADHD and 60 healthy controls) were used to evaluate ADHD/CD-NET. As a result, ADHD/CD-NET achieved classification accuracy, sensitivity, specificity, and precision of 93.70%, 90.83%, 95.35% and 91.85% for the internal evaluation, and 98.19%, 98.36%, 98.03% and 98.06% for the external evaluation. Grad-CAM also identified significant channels that contributed to the diagnosis outcome. Therefore, ADHD/CD-NET can perform temporal localization and choose significant EEG channels for diagnosis, thus providing objective analysis for mental health professionals and clinicians to consider when making a diagnosis. Supplementary Information: The online version contains supplementary material available at 10.1007/s11571-023-10028-2.
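
ADHD/CD-NET's inputs are channel-wise correlation matrices. The paper correlates continuous wavelet transform coefficients; as a simplified sketch, the same inter-channel structure can be illustrated with plain Pearson correlation of raw channels (the three short "EEG channels" below are hypothetical):

```python
# Sketch of a channel-wise Pearson correlation matrix: one row/column
# per channel, entry (i, j) giving the correlation between channels i
# and j. Toy data; the study correlates CWT coefficients instead.
import math

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def correlation_matrix(channels):
    return [[pearson(ci, cj) for cj in channels] for ci in channels]

# Three hypothetical EEG channels, four samples each:
channels = [[1.0, 2.0, 3.0, 4.0],
            [2.0, 4.0, 6.0, 8.0],
            [4.0, 3.0, 2.0, 1.0]]
corr = correlation_matrix(channels)
```

The resulting square matrix (here 3x3; 12x12 in the study) is what gets fed to the CNN as an image-like input.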

12.
Cogn Neurodyn ; 18(2): 383-404, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38699621

ABSTRACT

Fibromyalgia is a soft tissue rheumatism with significant qualitative and quantitative impact on sleep macro and micro architecture. The primary objective of this study is to automatically identify healthy individuals and those with fibromyalgia using sleep electroencephalography (EEG) signals. The study focused on the automatic detection and interpretation of EEG signals obtained from fibromyalgia patients. In this work, the sleep EEG signals are divided into 15-s segments, and a total of 5358 (3411 healthy control and 1947 fibromyalgia) EEG segments are obtained from 16 fibromyalgia and 16 normal subjects. Our developed model has an advanced multilevel feature extraction architecture; hence, we used a new feature extractor called GluPat, inspired by the chemical structure of glucose, with a new pooling approach inspired by the D'Hondt selection system. Furthermore, our proposed method incorporated feature selection techniques using iterative neighborhood component analysis and iterative Chi2 methods. These selection mechanisms enabled the identification of discriminative features for accurate classification. In the classification phase, we employed support vector machine and k-nearest neighbor algorithms to classify the EEG signals with leave-one-record-out (LORO) and tenfold cross-validation (CV) techniques. All results are calculated channel-wise, and iterative majority voting is used to obtain generalized results. The best results were determined using a greedy algorithm. The developed model achieved detection accuracies of 100% and 91.83% with the tenfold and LORO CV strategies, respectively, using sleep stage (2 + 3) EEG signals. Our generated model is simple and has linear time complexity.
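
Leave-one-record-out (LORO) validation, used above, holds out all segments from one subject at a time so that no subject contributes to both training and test folds. A minimal splitting sketch on hypothetical record IDs:

```python
# Sketch of leave-one-record-out (LORO) cross-validation splitting:
# every segment from the held-out record goes to the test fold, all
# other segments to the training fold. Record IDs are hypothetical.

def loro_splits(record_ids):
    """Yield (held_out_record, train_indices, test_indices) per record."""
    for held_out in sorted(set(record_ids)):
        train = [i for i, r in enumerate(record_ids) if r != held_out]
        test = [i for i, r in enumerate(record_ids) if r == held_out]
        yield held_out, train, test

# Six EEG segments belonging to three hypothetical subjects:
segments_by_record = ["s1", "s1", "s2", "s2", "s3", "s3"]
splits = list(loro_splits(segments_by_record))
```

LORO scores are usually lower than plain tenfold CV (as in the 91.83% vs 100% results above) because tenfold folds can mix segments from the same subject, leaking subject-specific patterns into training.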

13.
Comput Biol Med ; 172: 108207, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38489986

ABSTRACT

Artificial Intelligence (AI) techniques are increasingly used in computer-aided diagnostic tools in medicine. These techniques can also help to identify Hypertension (HTN) in its early stage, as it is a global health issue. Automated HTN detection uses socio-demographic, clinical data, and physiological signals. Additionally, signs of secondary HTN can also be identified using various imaging modalities. This systematic review examines related work on automated HTN detection. We identify datasets, techniques, and classifiers used to develop AI models from clinical data, physiological signals, and fused data (a combination of both). Image-based models for assessing secondary HTN are also reviewed. The majority of the studies have primarily utilized single-modality approaches, such as biological signals (e.g., electrocardiography, photoplethysmography), and medical imaging (e.g., magnetic resonance angiography, ultrasound). Surprisingly, only a small portion of the studies (22 out of 122) utilized a multi-modal fusion approach combining data from different sources. Even fewer investigated integrating clinical data, physiological signals, and medical imaging to understand the intricate relationships between these factors. Future research directions are discussed that could build better healthcare systems for early HTN detection through more integrated modeling of multi-modal data sources.


Subject(s)
Hypertension , Medicine , Humans , Artificial Intelligence , Electrocardiography , Hypertension/diagnostic imaging , Magnetic Resonance Angiography
14.
J Diabetes Metab Disord ; 23(1): 773-781, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38932891

ABSTRACT

Purpose: We applied machine learning to study associations between regional body fat distribution and diabetes mellitus in a population of community adults in order to investigate their predictive capability. We retrospectively analyzed a subset of data from the published Fasa cohort study using individual standard classifiers as well as ensemble learning algorithms. Methods: We measured segmental body composition using the Tanita Analyzer BC-418 MA (Tanita Corp, Japan). The following features were input to our machine learning model: fat-free mass, fat percentage, basal metabolic rate, total body water, right arm fat-free mass, right leg fat-free mass, trunk fat-free mass, trunk fat percentage, sex, age, right leg fat percentage, and right arm fat percentage. We performed classification into diabetes vs. no diabetes classes using linear support vector machine, decision tree, stochastic gradient descent, logistic regression, Gaussian naïve Bayes, k-nearest neighbors (k = 3 and k = 4), and multi-layer perceptron classifiers, as well as ensemble learning using random forest, gradient boosting, adaptive boosting, XGBoost, and ensemble voting classifiers with Top3 and Top4 algorithms. 4661 subjects (mean age 47.64 ± 9.37 years, range 35 to 70 years; 2155 male, 2506 female) were analyzed and stratified into 571 and 4090 subjects with and without a self-declared history of diabetes, respectively. Results: Age, fat mass, and fat percentages in the legs, arms, and trunk were positively associated with diabetes; fat-free mass in the legs, arms, and trunk was negatively associated. Using XGBoost, our model attained the best accuracy, precision, recall, and F1-score of 89.96%, 90.20%, 89.65%, and 89.91%, respectively. Conclusions: Our machine learning model showed that regional body fat composition was predictive of diabetes status.
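
Among the baselines compared above are k-nearest-neighbors classifiers with k = 3 and k = 4. A self-contained sketch of the k-NN idea on hypothetical two-feature rows (e.g., age and trunk fat percentage; not the study's code or data):

```python
# Sketch of a k-nearest-neighbors classifier: predict the majority label
# among the k training rows closest (Euclidean distance) to the query.
# Features and labels below are hypothetical toy data.
import math

def knn_predict(train_X, train_y, x, k=3):
    dists = sorted(
        (math.dist(row, x), label) for row, label in zip(train_X, train_y)
    )
    top = [label for _, label in dists[:k]]
    return max(set(top), key=top.count)  # majority label among k nearest

# Hypothetical rows: [age, trunk fat %]; 1 = diabetes, 0 = no diabetes
train_X = [[40, 20], [45, 22], [60, 35], [65, 38], [50, 25]]
train_y = [0, 0, 1, 1, 0]
pred = knn_predict(train_X, train_y, [62, 36], k=3)
```

In practice, features on different scales (age in years vs. mass in kg) should be standardized first, since Euclidean distance otherwise over-weights the larger-valued feature.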
