1.
Comput Biol Med ; 164: 107312, 2023 09.
Article in English | MEDLINE | ID: mdl-37597408

ABSTRACT

BACKGROUND: Epilepsy is one of the most common neurological conditions globally, and the fourth most common in the United States. It is characterized by recurrent unprovoked seizures, which impose a substantial quality-of-life and financial burden on affected individuals. A rapid and accurate diagnosis is essential to initiate and monitor optimal treatment. Accurate automated interpretation is also a compelling need, given the current scarcity of diagnosing neurologists and global inequities in access and outcomes. Furthermore, existing clinical and traditional machine learning diagnostic methods have limitations, warranting an automated deep learning system for epilepsy detection and monitoring trained on a large database. METHOD: EEG signals from 35 channels were used to train the deep learning-based transformer model, EpilepsyNet. For each training iteration, 1-min-long data were randomly sampled from each participant. Each 5-s epoch was then mapped to a channel-correlation matrix using the Pearson correlation coefficient (PCC), a reliable measure of the statistical relationship between two variables; because the matrix is symmetric, the lower triangle was discarded and only the upper triangle was vectorized as input data. A single embedding was then computed from each 5-s vector to generate a one-dimensional array of signals. In the final stage, positional encoding with learnable parameters was added to each correlation coefficient's embedding before being fed to EpilepsyNet. Ten-fold cross-validation was used to develop the model. RESULTS: Our transformer-based model (EpilepsyNet) yielded high classification accuracy, sensitivity, specificity, and positive predictive value of 85%, 82%, 87%, and 82%, respectively.
CONCLUSION: The proposed method is both accurate and robust, since ten-fold cross-validation was employed to evaluate model performance. Compared with the deep models used in existing epilepsy diagnosis studies, our method is simpler and less computationally intensive. To our knowledge, this is the first study to combine positional encoding with learnable parameters on each correlation coefficient's embedding with a deep transformer model, using a large database of 121 participants for epilepsy detection. With training and validation on larger datasets, the same approach can be extended to the detection of other neurological conditions, with a transformative impact on neurological diagnostics worldwide.
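The PCC step described above, correlating 35 EEG channels over a 5-s epoch and vectorizing only the upper triangle of the symmetric correlation matrix, can be sketched as follows (a minimal numpy illustration; the 256 Hz sampling rate and random data are assumptions for demonstration only):

```python
import numpy as np

def pcc_upper_triangle(epoch):
    """Vectorize the upper triangle of the channel-wise Pearson
    correlation matrix for one EEG epoch.

    epoch: array of shape (n_channels, n_samples).
    Returns a 1-D vector of length n_channels * (n_channels - 1) // 2.
    """
    corr = np.corrcoef(epoch)             # (n_channels, n_channels) PCC matrix
    iu = np.triu_indices_from(corr, k=1)  # indices strictly above the diagonal
    return corr[iu]

# Example: one 5-s epoch from 35 channels, assuming 256 Hz sampling
rng = np.random.default_rng(0)
epoch = rng.standard_normal((35, 5 * 256))
vec = pcc_upper_triangle(epoch)
print(vec.shape)  # (595,) = 35 * 34 / 2
```

Each epoch thus yields a fixed-length 595-element vector regardless of epoch duration, which is what makes it suitable as a transformer input token sequence.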


Subject(s)
Epilepsy , Quality of Life , Humans , Epilepsy/diagnosis , Databases, Factual , Machine Learning , Electroencephalography
2.
Cogn Neurodyn ; 17(3): 647-659, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37265658

ABSTRACT

Electroencephalography (EEG) may detect early changes in Alzheimer's disease (AD), a debilitating progressive neurodegenerative disease. We have developed an automated AD detection model using a novel directed graph for local texture feature extraction from EEG signals. The proposed graph was created from a topological map of the macroscopic connectome, i.e., neuronal pathways linking anatomo-functional brain segments involved in visual object recognition and motor response in the primate brain. This primate brain pattern (PBP)-based model was tested on a public AD EEG signal dataset comprising 16-channel EEG recordings of 12 AD patients and 11 healthy controls. While PBP alone could generate 448 low-level features per one-dimensional EEG signal, combining it with the tunable Q-factor wavelet transform created a multilevel feature extractor (which mimicked deep models), generating 8,512 (= 448 × 19) features per signal input. Iterative neighborhood component analysis was used to choose the most discriminative features (the number of optimal features varied among the individual EEG channels) to feed to a weighted k-nearest neighbor (kNN) classifier for binary classification into AD vs. healthy, using both leave-one-subject-out (LOSO) and tenfold cross-validations. Iterative majority voting was used to compute subject-level results from the individual channel classification outputs. Both channel-wise and subject-level results demonstrated exemplary performance: the model attained 100% and 92.01% accuracy for AD vs. healthy classification using the kNN classifier with tenfold and LOSO cross-validations, respectively. Our multilevel PBP-based model extracted discriminative features from EEG signals and paved the way for further development of models inspired by the brain connectome.
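The study's subject-level aggregation, classifying each held-out subject's individual outputs with a weighted kNN under leave-one-subject-out validation and then majority-voting a single label, can be sketched as follows (a minimal numpy sketch on toy data; the paper's iterative majority voting and exact kNN weighting are not fully specified here, so `k=3` and the 1/distance weights are assumptions):

```python
import numpy as np

def weighted_knn_predict(X_train, y_train, X_test, k=3):
    """Distance-weighted kNN: the k nearest neighbors vote with weight 1/distance."""
    preds = []
    for x in X_test:
        d = np.linalg.norm(X_train - x, axis=1)
        nn = np.argsort(d)[:k]
        w = 1.0 / (d[nn] + 1e-12)
        votes = np.bincount(y_train[nn], weights=w, minlength=2)
        preds.append(int(np.argmax(votes)))
    return np.array(preds)

def loso_majority_vote(X, y, groups, k=3):
    """Leave-one-subject-out CV: hold out all samples of one subject,
    classify them, then majority-vote a single subject-level label."""
    subject_preds = {}
    for g in np.unique(groups):
        test = groups == g
        sample_preds = weighted_knn_predict(X[~test], y[~test], X[test], k)
        subject_preds[g] = int(sample_preds.sum() * 2 > sample_preds.size)
    return subject_preds

# Toy, well-separated data: 10 subjects x 4 samples, 8 features each
rng = np.random.default_rng(42)
groups = np.repeat(np.arange(10), 4)
y = (groups >= 5).astype(int)                 # subject-level class labels
X = rng.standard_normal((40, 8)) + y[:, None] * 6.0
preds = loso_majority_vote(X, y, groups)
acc = np.mean([preds[g] == y[groups == g][0] for g in np.unique(groups)])
```

Holding out the subject (rather than individual epochs) prevents within-subject leakage, which is why the LOSO figure is the more conservative of the two accuracies reported above.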

3.
J Digit Imaging ; 36(4): 1675-1686, 2023 08.
Article in English | MEDLINE | ID: mdl-37131063

ABSTRACT

Microscopic examination of urinary sediments is a common laboratory procedure. Automated image-based classification of urinary sediments can reduce analysis time and costs. Inspired by cryptographic mixing protocols and computer vision, we developed an image classification model that combines a novel Arnold Cat Map (ACM)- and fixed-size patch-based mixer algorithm with transfer learning for deep feature extraction. Our study dataset comprised 6,687 urinary sediment images belonging to seven classes: Cast, Crystal, Epithelia, Epithelial nuclei, Erythrocyte, Leukocyte, and Mycete. The developed model consists of four layers: (1) an ACM-based mixer to generate mixed images from resized 224 × 224 input images using fixed-size 16 × 16 patches; (2) DenseNet201 pre-trained on ImageNet1K to extract 1,920 deep features each from the raw input image and its six corresponding mixed images, which were concatenated to form a final feature vector of length 13,440 (= 1,920 × 7); (3) iterative neighborhood component analysis to select the most discriminative feature vector of optimal length 342, determined using a k-nearest neighbor (kNN)-based loss function calculator; and (4) shallow kNN-based classification with ten-fold cross-validation. Our model achieved 98.52% overall accuracy for seven-class classification, outperforming published models for urinary cell and sediment analysis. We demonstrated the feasibility and accuracy of deep feature engineering using an ACM-based mixer algorithm for image preprocessing combined with pre-trained DenseNet201 for feature extraction. The classification model was both demonstrably accurate and computationally lightweight, making it ready for implementation in real-world image-based urine sediment analysis applications.
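The Arnold Cat Map underlying the mixer can be illustrated at the pixel level on a single 16 × 16 patch (a hedged sketch: the paper's exact patch-based mixing scheme and iteration counts are not specified here, so the pixel-level map and three iterations are assumptions; the classic ACM sends (x, y) to ((x + y) mod N, (x + 2y) mod N)):

```python
import numpy as np

def arnold_cat_map(img, iterations=1):
    """Shuffle an N x N image with the Arnold cat map:
    (x, y) -> ((x + y) mod N, (x + 2y) mod N).
    The map is a bijection, so pixel values are rearranged, never altered."""
    n = img.shape[0]
    assert img.shape[0] == img.shape[1], "ACM is defined on square images"
    x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    out = img.copy()
    for _ in range(iterations):
        nxt = np.empty_like(out)
        nxt[(x + y) % n, (x + 2 * y) % n] = out  # bijective coordinate shuffle
        out = nxt
    return out

# Mix one 16 x 16 patch, as in the mixer's fixed patch size
patch = np.arange(16 * 16).reshape(16, 16)
mixed = arnold_cat_map(patch, iterations=3)
```

Because the map is periodic and invertible, mixing destroys local spatial structure while preserving the full pixel histogram, which is what lets the downstream network see the same content under several scrambled "views".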


Subject(s)
Algorithms , Image Processing, Computer-Assisted , Image Processing, Computer-Assisted/methods , Microscopy
4.
Med Eng Phys ; 115: 103971, 2023 05.
Article in English | MEDLINE | ID: mdl-37120169

ABSTRACT

PURPOSE: The classification of medical images is an important priority for clinical research and helps to improve the diagnosis of various disorders. This work aims to classify the neuroradiological features of patients with Alzheimer's disease (AD) with high accuracy using an automated, hand-modeled method. MATERIALS AND METHOD: This work uses two datasets, one private and one public. The private dataset consists of 3,807 magnetic resonance imaging (MRI) and computed tomography (CT) images belonging to two classes (normal and AD). The public (Kaggle AD) dataset contains 6,400 MR images. The presented classification model comprises three fundamental phases: feature extraction using an exemplar hybrid feature extractor, neighborhood component analysis (NCA)-based feature selection, and classification using eight different classifiers. The novelty of this model lies in the feature extraction phase, which is inspired by vision transformers: 16 exemplar patches are generated per image, and histogram of oriented gradients (HOG), local binary pattern (LBP), and local phase quantization (LPQ) feature extraction functions are applied to each exemplar/patch and to the raw brain image. Finally, the created features are merged, and the best features are selected using NCA. These features are fed to eight classifiers to obtain the highest classification performance. Because the presented image classification model uses exemplar histogram-based features, it is called ExHiF. RESULTS: We developed the ExHiF model with a ten-fold cross-validation strategy on both datasets using shallow classifiers, obtaining 100% classification accuracy with the cubic support vector machine (CSVM) and fine k-nearest neighbor (FkNN) classifiers on both datasets.
CONCLUSIONS: Our developed model is ready to be validated with more datasets and has the potential to be employed in mental hospitals to assist neurologists in confirming their manual screening of AD using MRI/CT images.
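The exemplar step of ExHiF, splitting each image into 16 patches and featurizing every patch plus the raw image, can be sketched as follows (a minimal stand-in using plain intensity histograms in place of the HOG/LBP/LPQ extractors; the 4 × 4 grid, the 224 × 224 input size, and the 16-bin histogram are all illustrative assumptions):

```python
import numpy as np

def exemplar_features(img, grid=4, bins=16):
    """Extract a normalized intensity histogram from the full image and
    from each of the grid x grid non-overlapping exemplar patches.
    (The histogram is a lightweight stand-in for HOG/LBP/LPQ.)"""
    h, w = img.shape
    ph, pw = h // grid, w // grid
    # Raw image first, then the 16 exemplars in row-major order
    patches = [img] + [img[i * ph:(i + 1) * ph, j * pw:(j + 1) * pw]
                       for i in range(grid) for j in range(grid)]
    feats = []
    for p in patches:
        hist, _ = np.histogram(p, bins=bins, range=(0, 256))
        feats.append(hist / hist.sum())  # normalize per patch
    return np.concatenate(feats)

rng = np.random.default_rng(7)
img = rng.integers(0, 256, size=(224, 224))  # stand-in for an MRI/CT slice
feats = exemplar_features(img)
print(feats.shape)  # (1 raw image + 16 exemplars) x 16 bins = (272,)
```

Concatenating per-patch descriptors with the whole-image descriptor is what gives the "exemplar" scheme both local and global context before NCA prunes the merged vector.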


Subject(s)
Alzheimer Disease , Humans , Alzheimer Disease/diagnostic imaging , Alzheimer Disease/pathology , Magnetic Resonance Imaging/methods , Image Interpretation, Computer-Assisted/methods , Brain/diagnostic imaging , Tomography, X-Ray Computed
5.
Med Eng Phys ; 108: 103895, 2022 10.
Article in English | MEDLINE | ID: mdl-36195364

ABSTRACT

Ultrasound (US) is an important imaging modality used to assess breast lesions for malignant features. In the past decade, many machine learning models have been developed for automated discrimination of breast cancer versus normal tissue on US images, but few have classified the images according to the Breast Imaging Reporting and Data System (BI-RADS) classes. This work aimed to develop a model for classifying US breast lesions within a BI-RADS classification framework using a new multi-class US image dataset. We proposed a deep model that combined a novel pyramid triple deep feature generator (PTDFG) with transfer learning based on three pre-trained networks for creating deep features. Bilinear interpolation was applied to decompose the input image into four images of successively smaller dimensions, constituting a four-level pyramid for downstream feature generation with the pre-trained networks. Neighborhood component analysis was applied to the generated features to select each network's 1,000 most informative features, which were fed to a support vector machine classifier for automated classification with a ten-fold cross-validation strategy. Our proposed model was validated on a new US image dataset containing 1,038 images divided into eight BI-RADS classes, with histopathological results. We defined three classification schemes: Case 1 involved the classification of all images into eight categories; Case 2, classification of breast US images into five BI-RADS classes; and Case 3, classification of BI-RADS 4 lesions into benign versus malignant classes. Our PTDFG-based transfer learning model attained accuracy rates of 79.29%, 80.42%, and 88.67% for Case 1, Case 2, and Case 3, respectively.
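The four-level pyramid built by bilinear interpolation in the PTDFG can be sketched with a minimal numpy resizer (the halving factor per level and the 224 × 224 input size are assumptions; the three pre-trained feature extractors applied at each level are omitted):

```python
import numpy as np

def bilinear_resize(img, new_h, new_w):
    """Resize a 2-D image with bilinear interpolation."""
    h, w = img.shape
    ys = np.linspace(0, h - 1, new_h)
    xs = np.linspace(0, w - 1, new_w)
    y0 = np.floor(ys).astype(int)
    x0 = np.floor(xs).astype(int)
    y1 = np.minimum(y0 + 1, h - 1)
    x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]   # vertical interpolation weights
    wx = (xs - x0)[None, :]   # horizontal interpolation weights
    top = img[y0][:, x0] * (1 - wx) + img[y0][:, x1] * wx
    bot = img[y1][:, x0] * (1 - wx) + img[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy

def image_pyramid(img, levels=4):
    """Halve the image repeatedly to build a multi-level pyramid."""
    pyramid = [img]
    for _ in range(levels - 1):
        h, w = pyramid[-1].shape
        pyramid.append(bilinear_resize(pyramid[-1], h // 2, w // 2))
    return pyramid

img = np.add.outer(np.arange(224.0), np.arange(224.0))  # smooth test ramp
pyr = image_pyramid(img)
print([p.shape for p in pyr])  # [(224, 224), (112, 112), (56, 56), (28, 28)]
```

Extracting deep features at every pyramid level exposes the networks to the lesion at multiple effective scales, which is the design rationale behind the multi-level generator.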


Subject(s)
Breast Neoplasms , Ultrasonography, Mammary , Breast/diagnostic imaging , Breast/pathology , Breast Neoplasms/diagnostic imaging , Breast Neoplasms/pathology , Female , Humans , Machine Learning , Ultrasonography , Ultrasonography, Mammary/methods