ABSTRACT
Detecting neurological abnormalities such as brain tumors and Alzheimer's disease (AD) on magnetic resonance imaging (MRI) is an important research topic in the literature. Numerous machine learning models have been used to detect brain abnormalities accurately. This study addresses the problem of detecting neurological abnormalities in MRI, motivated by the need for accurate and efficient methods to assist neurologists in diagnosing these disorders. Many deep learning techniques have been applied to MRI to develop accurate brain abnormality detection models, but these networks have high time complexity. Hence, a novel hand-modeled feature-based learning network is presented to reduce the time complexity and obtain high classification performance. The proposed model uses a new feature generation architecture named pyramid and fixed-size patch (PFP). The main aim of the proposed PFP structure is to attain high classification performance using essential feature extractors with both multilevel and local features. The PFP feature extractor generates low- and high-level features using a handcrafted extractor. To obtain high discriminative feature extraction ability, the histogram of oriented gradients (HOG) is used within the PFP; hence, the method is named PFP-HOG. Furthermore, iterative Chi2 (IChi2) feature selection is utilized to choose the clinically significant features. Finally, k-nearest neighbors (kNN) with tenfold cross-validation is used for automated classification. Four MRI neurological databases (AD dataset, brain tumor dataset 1, brain tumor dataset 2, and merged dataset) have been utilized to develop our model. The PFP-HOG and IChi2-based model attained accuracies of 100%, 94.98%, 98.19%, and 97.80% on the AD dataset, brain tumor dataset 1, brain tumor dataset 2, and merged brain MRI dataset, respectively.
These findings not only provide an accurate and robust classification of various neurological disorders using MRI but also hold the potential to assist neurologists in validating manual MRI brain abnormality screening.
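The pyramid and fixed-size patch (PFP) idea above can be sketched in a few lines: build an image pyramid and tile every level into fixed-size patches, each of which would then feed a HOG-style descriptor. This is a minimal illustration; the 32 × 32 patch size, three-level depth, and 2 × 2 average-pooling step between levels are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

def avg_pool2(img):
    """Halve an image with 2x2 average pooling (one pyramid step)."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    img = img[:h, :w]
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def pfp_patches(img, patch=32, levels=3):
    """Pyramid + fixed-size patch (PFP) decomposition: at each pyramid
    level, tile the image into non-overlapping patch x patch blocks."""
    patches = []
    level_img = img.astype(float)
    for _ in range(levels):
        for r in range(0, level_img.shape[0] - patch + 1, patch):
            for c in range(0, level_img.shape[1] - patch + 1, patch):
                patches.append(level_img[r:r + patch, c:c + patch])
        level_img = avg_pool2(level_img)
    return patches

img = np.random.rand(128, 128)
patches = pfp_patches(img, patch=32, levels=3)
# 128x128 -> 16 patches, 64x64 -> 4 patches, 32x32 -> 1 patch
print(len(patches))  # 21
```

Because the patch size stays fixed while the image shrinks, later levels cover progressively larger receptive fields, which is how the scheme mixes local and multilevel information.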
Subjects
Alzheimer Disease , Brain Neoplasms , Humans , Magnetic Resonance Imaging/methods , Neuroimaging , Brain/diagnostic imaging , Brain Neoplasms/diagnostic imaging , Brain Neoplasms/pathology , Machine Learning , Alzheimer Disease/diagnostic imaging
ABSTRACT
PURPOSE: The classification of medical images is an important priority for clinical research and helps to improve the diagnosis of various disorders. This work aims to classify the neuroradiological features of patients with Alzheimer's disease (AD) using an automatic hand-modeled method with high accuracy. MATERIALS AND METHODS: This work uses two (private and public) datasets. The private dataset consists of 3807 magnetic resonance imaging (MRI) and computed tomography (CT) images belonging to two (normal and AD) classes. The second, public (Kaggle AD) dataset contains 6400 MR images. The presented classification model comprises three fundamental phases: feature extraction using an exemplar hybrid feature extractor, neighborhood component analysis-based feature selection, and classification utilizing eight different classifiers. The novelty of this model is its feature extraction. This phase is inspired by vision transformers; hence, 16 exemplars are generated. Histogram of oriented gradients (HOG), local binary pattern (LBP), and local phase quantization (LPQ) feature extraction functions have been applied to each exemplar/patch and to the raw brain image. Finally, the created features are merged, and the best features are selected using neighborhood component analysis (NCA). These features are fed to eight classifiers to obtain the highest classification performance using our proposed method. The presented image classification model uses exemplar histogram-based features; hence, it is called ExHiF. RESULTS: We have developed the ExHiF model with a ten-fold cross-validation strategy using two (private and public) datasets with shallow classifiers. We obtained 100% classification accuracy using cubic support vector machine (CSVM) and fine k-nearest neighbor (FkNN) classifiers for both datasets.
CONCLUSIONS: Our developed model is ready to be validated with more datasets and has the potential to be employed in mental hospitals to assist neurologists in confirming their manual screening of AD using MRI/CT images.
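The exemplar-generation step of ExHiF can be illustrated as follows: the image is split into a 4 × 4 grid of 16 exemplars, a histogram descriptor is computed per exemplar and for the raw image, and the results are merged into one vector. A plain 32-bin intensity histogram stands in here for the HOG/LBP/LPQ descriptors, and the 224 × 224 input size is an illustrative assumption.

```python
import numpy as np

def exemplars_4x4(img):
    """Split an image into 16 non-overlapping exemplars (4x4 grid),
    in the spirit of vision-transformer patching."""
    h, w = img.shape
    ph, pw = h // 4, w // 4
    return [img[r * ph:(r + 1) * ph, c * pw:(c + 1) * pw]
            for r in range(4) for c in range(4)]

def hist_feat(patch, bins=32):
    """Normalized intensity histogram; a stand-in for HOG/LBP/LPQ."""
    hist, _ = np.histogram(patch, bins=bins, range=(0.0, 1.0))
    return hist / max(hist.sum(), 1)

img = np.random.rand(224, 224)
parts = exemplars_4x4(img) + [img]   # 16 exemplars + the raw image
feats = np.concatenate([hist_feat(p) for p in parts])
print(feats.shape)  # (544,) = 17 parts x 32 bins
```

In the full ExHiF pipeline the merged vector would then pass through NCA selection before reaching the eight classifiers.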
Subjects
Alzheimer Disease , Humans , Alzheimer Disease/diagnostic imaging , Alzheimer Disease/pathology , Magnetic Resonance Imaging/methods , Image Interpretation, Computer-Assisted/methods , Brain/diagnostic imaging , Tomography, X-Ray Computed
ABSTRACT
Ultrasound (US) is an important imaging modality used to assess breast lesions for malignant features. In the past decade, many machine learning models have been developed for automated discrimination of breast cancer versus normal on US images, but few have classified the images based on the Breast Imaging Reporting and Data System (BI-RADS) classes. This work aimed to develop a model for classifying US breast lesions using a BI-RADS classification framework with a new multi-class US image dataset. We proposed a deep model that combined a novel pyramid triple deep feature generator (PTDFG) with transfer learning based on three pre-trained networks for creating deep features. Bilinear interpolation was applied to decompose the input image into four images of successively smaller dimensions, constituting a four-level pyramid for downstream feature generation with the pre-trained networks. Neighborhood component analysis was applied to the generated features to select each network's 1,000 most informative features, which were fed to a support vector machine classifier for automated classification using a ten-fold cross-validation strategy. Our proposed model was validated using a new US image dataset containing 1,038 images divided into eight BI-RADS classes, together with histopathological results. We defined three classification schemes: Case 1 involved the classification of all images into eight categories; Case 2, classification of breast US images into five BI-RADS classes; and Case 3, classification of BI-RADS 4 lesions into benign versus malignant classes. Our PTDFG-based transfer learning model attained accuracy rates of 79.29%, 80.42%, and 88.67% for Case 1, Case 2, and Case 3, respectively.
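The four-level bilinear pyramid that feeds PTDFG can be sketched without any deep learning machinery; each level would then be passed to the pre-trained networks for deep feature generation. The resize routine below is a minimal hand-rolled bilinear interpolation, and the 256 × 256 input size and halving factor are illustrative assumptions.

```python
import numpy as np

def bilinear_resize(img, out_h, out_w):
    """Bilinear interpolation resize for a grayscale image."""
    h, w = img.shape
    ys = np.linspace(0, h - 1, out_h)
    xs = np.linspace(0, w - 1, out_w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]        # vertical interpolation weights
    wx = (xs - x0)[None, :]        # horizontal interpolation weights
    top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
    bot = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy

def pyramid4(img):
    """Four-level pyramid: original plus three successively halved copies."""
    levels = [img]
    for _ in range(3):
        h, w = levels[-1].shape
        levels.append(bilinear_resize(levels[-1], h // 2, w // 2))
    return levels

img = np.random.rand(256, 256)
levels = pyramid4(img)
print([lvl.shape for lvl in levels])
# [(256, 256), (128, 128), (64, 64), (32, 32)]
```

Feeding all four levels to the same pre-trained network yields features at several effective scales from a single input image.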
Subjects
Breast Neoplasms , Ultrasonography, Mammary , Breast/diagnostic imaging , Breast/pathology , Breast Neoplasms/diagnostic imaging , Breast Neoplasms/pathology , Female , Humans , Machine Learning , Ultrasonography , Ultrasonography, Mammary/methods
ABSTRACT
Objectives: Fetal sex determination with ultrasound (US) examination is indicated in pregnancies at risk of X-linked genetic disorders or ambiguous genitalia. However, misdiagnoses often arise due to operator inexperience and technical difficulties while acquiring diagnostic images. We aimed to develop an efficient automated US-based fetal sex classification model that can facilitate efficient screening and reduce misclassification. Methods: We developed a novel feature engineering model termed PFP-LHCINCA that employs pyramidal fixed-size patch generation with average-pooling-based image decomposition and handcrafted feature extraction based on local phase quantization (LPQ) and the histogram of oriented gradients (HOG) to extract directional and textural features. It uses Chi-square iterative neighborhood component analysis feature selection (CINCA), which iteratively selects the most informative feature vector for each image by minimizing the k-nearest neighbor-based misclassification rate computed from the calculated feature parameters. The model was trained and tested on a sizeable expert-labeled dataset comprising 339 male and 332 female fetal US images. One transverse fetal US image per subject, zoomed to the genital area and standardized to 256 × 256 pixels, was used for analysis. Fetal sex was annotated by experts on US images and confirmed postnatally. Results: Standard model performance metrics were compared using five shallow classifiers: k-nearest neighbor (kNN), decision tree, naïve Bayes, linear discriminant, and support vector machine (SVM), with hyperparameters tuned using a Bayesian optimizer. The PFP-LHCINCA model achieved a sex classification accuracy of ≥88% with all five classifiers and the best accuracy rates (>98%) with the kNN and SVM classifiers. Conclusions: US-based fetal sex classification is feasible and accurate using the presented PFP-LHCINCA model. These salutary results support its clinical use for fetal US image screening for sex classification.
The model architecture can also be adapted into deep learning models for training on larger datasets.
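The iterative, error-driven selection idea behind CINCA can be sketched as a chi-square ranking followed by a kNN-evaluated search over top-k subsets, keeping the subset with the lowest cross-validated misclassification rate. This is a simplified stand-in, assuming plain chi-square scores in place of the paper's combined chi-square/NCA weighting; the subset-size range, 1-NN classifier, 5-fold evaluation, and the synthetic data are all illustrative assumptions.

```python
import numpy as np
from sklearn.feature_selection import chi2
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def iterative_chi2_knn(X, y, k_min=4, k_max=16):
    """Rank features with the chi-square test, then iteratively evaluate
    top-k subsets with a kNN classifier and keep the subset with the
    lowest cross-validated misclassification rate."""
    scores, _ = chi2(X, y)              # chi2 requires non-negative X
    order = np.argsort(scores)[::-1]    # best-scoring features first
    best_err, best_idx = 1.0, order[:k_min]
    for k in range(k_min, k_max + 1):
        idx = order[:k]
        acc = cross_val_score(KNeighborsClassifier(n_neighbors=1),
                              X[:, idx], y, cv=5).mean()
        if 1.0 - acc < best_err:
            best_err, best_idx = 1.0 - acc, idx
    return best_idx, best_err

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 120)
X = rng.random((120, 30))
X[:, 5] += y                # make feature 5 strongly class-informative
idx, err = iterative_chi2_knn(X, y)
print(5 in idx)
```

The loop trades a few extra kNN evaluations for an automatically chosen feature count, which is the core of the "iterative" selectors named in these abstracts.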
Subjects
Support Vector Machine , Bayes Theorem , Female , Humans , Male , Pregnancy , Ultrasonography
ABSTRACT
OBJECTIVE: Parkinson's disease (PD) is a common neurological disorder with variable clinical manifestations and magnetic resonance imaging (MRI) findings. We propose a handcrafted image classification model that can accurately (i) classify different PD stages, (ii) detect comorbid dementia, and (iii) discriminate PD-related motor symptoms. METHODS: Selected image datasets from three PD studies were used to develop the classification model. Our proposed novel automated system was developed in four phases: (i) in the feature extraction phase, texture features are extracted from non-fixed-size patches using a pyramid histogram of oriented gradients (PHOG) image descriptor; (ii) in the feature selection phase, four feature selectors (neighborhood component analysis (NCA), Chi2, minimum redundancy maximum relevance (mRMR), and ReliefF) are used to generate four feature vectors; (iii) in the classification phase, two classifiers, k-nearest neighbor (kNN) and support vector machine (SVM), are used, and a ten-fold cross-validation technique is used to validate the results; (iv) eight predicted vectors are generated from the four selected feature vectors and two classifiers, and iterative majority voting (IMV) is applied to attain the general classification results. The model is therefore named nested patch-PHOG-multiple feature selectors and multiple classifiers-IMV (NP-PHOG-MFSMCIMV). RESULTS: Our presented NP-PHOG-MFSMCIMV model achieved accuracies of 99.22%, 98.70%, and 99.53% on the collected PD stages, PD dementia, and PD symptoms classification datasets, respectively. SIGNIFICANCE: The obtained accuracies (over 98% for all states) demonstrate the performance of the developed NP-PHOG-MFSMCIMV model in automated PD state classification.
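The iterative majority voting (IMV) step in phase (iv) can be sketched as follows: the predicted label vectors are sorted by accuracy, majority votes are taken over the best 3, 4, ..., 8 vectors, and the most accurate result (single best vector or one of the votes) is kept. The greedy, accuracy-sorted scheme and the synthetic predictions below are assumptions made for illustration.

```python
import numpy as np

def majority_vote(preds):
    """Column-wise majority vote over stacked prediction vectors."""
    return np.array([np.bincount(col).argmax() for col in preds.T])

def iterative_majority_voting(pred_list, y_true):
    """Iterative majority voting (IMV): sort predicted vectors by
    accuracy, vote over the best 3, 4, ... vectors, and keep the most
    accurate combined result."""
    accs = [np.mean(p == y_true) for p in pred_list]
    order = np.argsort(accs)[::-1]
    best_pred, best_acc = pred_list[order[0]], accs[order[0]]
    for k in range(3, len(pred_list) + 1):
        voted = majority_vote(np.stack([pred_list[i] for i in order[:k]]))
        acc = np.mean(voted == y_true)
        if acc > best_acc:
            best_acc, best_pred = acc, voted
    return best_pred, best_acc

# Eight synthetic 3-class prediction vectors of varying quality.
rng = np.random.default_rng(1)
y_true = rng.integers(0, 3, 200)
pred_list = [np.where(rng.random(200) < p, y_true, rng.integers(0, 3, 200))
             for p in (0.6, 0.65, 0.7, 0.75, 0.8, 0.85, 0.9, 0.95)]
best_pred, best_acc = iterative_majority_voting(pred_list, y_true)
print(best_acc >= max(np.mean(p == y_true) for p in pred_list))  # True
```

By construction, the voted result can never be worse than the best single predictor, since the single best vector is the fallback.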
Subjects
Alzheimer Disease , Parkinson Disease , Humans , Magnetic Resonance Imaging/methods , Parkinson Disease/diagnostic imaging , Support Vector Machine
ABSTRACT
BACKGROUND: Alzheimer's disease (AD) is one of the most commonly seen brain ailments worldwide. Therefore, much research has been presented on AD detection and treatment. In addition, machine learning models have been proposed to detect AD promptly. MATERIALS AND METHODS: In this work, a new brain image dataset was collected. This dataset contains two categories, healthy and AD, and was collected from 1070 subjects. This work presents an automatic AD detection model that detects AD from brain images. The presented model is called a feed-forward local phase quantization network (LPQNet). LPQNet consists of (i) multilevel feature generation based on LPQ and average pooling, (ii) feature selection using neighborhood component analysis (NCA), and (iii) a classification phase. The prime objective of the presented LPQNet is to reach high accuracy with low computational complexity. LPQNet generates features on six levels. Therefore, 256 × 6 = 1536 features are generated from an image, and the most important 256 out of 1536 features are selected. The selected 256 features are classified with conventional classifiers to demonstrate the classification capability of the features generated and selected by LPQNet. RESULTS: The presented LPQNet was tested on three image datasets to demonstrate its universal classification ability. The proposed LPQNet attained 99.68%, 100%, and 99.64% classification accuracy on the collected AD image dataset, the Harvard Brain Atlas AD dataset, and the Kaggle AD dataset, respectively. Moreover, LPQNet attained 99.62% accuracy on the Kaggle AD dataset using four classes. CONCLUSIONS: The calculated results from LPQNet are compared to those of other automatic AD detection models. The comparisons, results, and findings clearly demonstrate the superiority of the presented model.
In addition, a new intelligent AD detector application can be developed for use in magnetic resonance (MR) and computed tomography (CT) devices. Using the developed automated AD detector, new-generation intelligent MR and CT devices can be developed.
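LPQNet's multilevel feature generation (256 features per level over six levels, 1536 in total) can be sketched as below: extract a 256-bin descriptor, halve the image by average pooling, and repeat six times. A plain 256-bin intensity histogram stands in for the real LPQ descriptor, and the 2 × 2 average-pooling reduction and 256 × 256 input size are illustrative assumptions.

```python
import numpy as np

def avg_pool2(img):
    """Halve an image with 2x2 average pooling (one level step)."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    img = img[:h, :w]
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def lpqnet_features(img, levels=6, bins=256):
    """Multilevel feature generation in the LPQNet style: one 256-bin
    descriptor per level, halving the image between levels. A plain
    intensity histogram stands in for the real LPQ descriptor."""
    feats = []
    cur = img.astype(float)
    for _ in range(levels):
        hist, _ = np.histogram(cur, bins=bins, range=(0.0, 1.0))
        feats.append(hist / max(hist.sum(), 1))
        cur = avg_pool2(cur)
    return np.concatenate(feats)

img = np.random.rand(256, 256)
f = lpqnet_features(img)
print(f.shape)  # (1536,) = 256 features x 6 levels
```

In the full pipeline, NCA would then reduce this 1536-dimensional vector to the 256 most important features before classification.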