Results 1 - 20 of 52
1.
Sensors (Basel) ; 22(17)2022 Aug 24.
Article in English | MEDLINE | ID: mdl-36080813

ABSTRACT

Binary object segmentation is a sub-area of semantic segmentation with a wide variety of applications. Generic semantic segmentation models can be applied to binary segmentation problems by introducing only two classes, but such models are more complex than the task actually requires. This leads to very long training times, since convolutional neural networks (CNNs) in this category usually have tens of millions of parameters to learn. This article introduces a novel abridged VGG-16 and SegNet-inspired reflected architecture adapted for binary segmentation tasks. The architecture has 27 times fewer parameters than SegNet yet yields 86% cross-intersection segmentation accuracy and 93% binary accuracy. The proposed architecture is evaluated on a large dataset of depth images collected using the Kinect device, achieving an accuracy of 99.25% in human body shape segmentation and 87% in gender recognition tasks.
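The two reported metrics can be made concrete with a small sketch. Assuming the paper's "cross-intersection accuracy" refers to intersection-over-union (an interpretation on our part), both it and binary accuracy reduce to simple pixel counts over flat binary masks:

```python
def iou(pred, truth):
    """Intersection-over-union for flat binary masks (lists of 0/1)."""
    inter = sum(p & t for p, t in zip(pred, truth))
    union = sum(p | t for p, t in zip(pred, truth))
    return inter / union if union else 1.0

def binary_accuracy(pred, truth):
    """Fraction of pixels where the predicted label matches ground truth."""
    return sum(p == t for p, t in zip(pred, truth)) / len(pred)

pred  = [1, 1, 0, 0, 1, 0]   # hypothetical predicted mask
truth = [1, 0, 0, 0, 1, 1]   # hypothetical ground-truth mask
print(iou(pred, truth))              # 2 / 4 = 0.5
print(binary_accuracy(pred, truth))  # 4 / 6 ≈ 0.667
```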


Subject(s)
Image Processing, Computer-Assisted , Neural Networks, Computer , Humans , Image Processing, Computer-Assisted/methods , Semantics
2.
Sensors (Basel) ; 22(3)2022 Jan 19.
Article in English | MEDLINE | ID: mdl-35161486

ABSTRACT

Alzheimer's disease (AD) is a neurodegenerative disease that affects brain cells, and mild cognitive impairment (MCI) has been defined as its early phase, describing the onset of AD. Early detection of MCI can help protect a patient's brain cells from further damage and direct additional medical treatment to prevent progression. Lately, the use of deep learning for the early identification of AD has generated a lot of interest. However, one limitation of such algorithms is their inability to identify changes in the functional connectivity of the functional brain network of patients with MCI. In this paper, we attempt to address this issue with randomized concatenated deep features obtained from two pre-trained models, which simultaneously learn deep features of brain functional networks from magnetic resonance imaging (MRI) images. We experimented with ResNet18 and DenseNet201 to perform the task of AD multiclass classification. A gradient class activation map was used to mark the discriminating region of the image for the proposed model's prediction. Accuracy, precision, and recall were used to assess the performance of the proposed system. The experimental analysis showed that the proposed model achieved 98.86% accuracy, 98.94% precision, and 98.89% recall in multiclass classification. The findings indicate that advanced deep learning with MRI images can be used to classify and predict neurodegenerative brain diseases such as AD.
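The accuracy, precision, and recall figures quoted above are standard multiclass metrics; a minimal sketch (with hypothetical stage labels, not the paper's data or averaging choice) shows how macro-averaged precision and recall are computed:

```python
def macro_precision_recall(pred, truth, classes):
    """One-vs-rest precision/recall per class, macro-averaged."""
    precs, recs = [], []
    for c in classes:
        tp = sum(1 for p, t in zip(pred, truth) if p == c and t == c)
        fp = sum(1 for p, t in zip(pred, truth) if p == c and t != c)
        fn = sum(1 for p, t in zip(pred, truth) if p != c and t == c)
        precs.append(tp / (tp + fp) if tp + fp else 0.0)
        recs.append(tp / (tp + fn) if tp + fn else 0.0)
    return sum(precs) / len(classes), sum(recs) / len(classes)

pred  = [0, 0, 1, 2, 1, 2]   # hypothetical predicted AD-stage labels
truth = [0, 1, 1, 2, 1, 2]   # hypothetical ground-truth labels
prec, rec = macro_precision_recall(pred, truth, [0, 1, 2])
acc = sum(p == t for p, t in zip(pred, truth)) / len(pred)
```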


Subject(s)
Alzheimer Disease , Cognitive Dysfunction , Neurodegenerative Diseases , Alzheimer Disease/diagnostic imaging , Brain/diagnostic imaging , Cognitive Dysfunction/diagnostic imaging , Humans , Magnetic Resonance Imaging , Neuroimaging
3.
Sensors (Basel) ; 22(10)2022 May 20.
Article in English | MEDLINE | ID: mdl-35632297

ABSTRACT

One of the most important strategies for preventive factory maintenance is anomaly detection without the need for dedicated sensors on each industrial unit. Implementing sound-data-based anomaly detection is an unduly complicated process, since factory-collected sound data are frequently corrupted and affected by ordinary production noises. The use of acoustic methods to detect irregularities in systems has a long history; unfortunately, few references to the implementation of the acoustic approach can be found in the failure detection of industrial machines. This paper presents a systematic review of acoustic approaches to mechanical failure detection in terms of recent implementations and structural extensions. The 52 articles were selected from the IEEE Xplore, ScienceDirect, and SpringerLink databases following the PRISMA methodology for performing systematic literature reviews. The study identifies the research gaps while considering the potential in responding to the challenges of mechanical failure detection in industrial machines. The results of this study reveal that the use of acoustic emission is still dominant in the research community. In addition, based on the 52 selected articles, research that discusses failure detection in noisy conditions is still very limited, and this will remain a challenge in the future.


Subject(s)
Acoustics , Noise
4.
Sensors (Basel) ; 22(9)2022 May 06.
Article in English | MEDLINE | ID: mdl-35591221

ABSTRACT

The identification of human activities from videos is important for many applications. For such a task, three-dimensional (3D) depth images or image sequences (videos) can be used; these represent the positioning information of the objects in a 3D scene obtained from depth sensors. This paper presents a framework to create foreground-background masks from depth images for human body segmentation. The framework can be used to speed up the manual depth image annotation process with no semantics known beforehand; it applies segmentation using a performant algorithm while the user only adjusts the parameters, corrects the automatic segmentation results, or gives the algorithm hints by drawing a boundary of the desired object. The approach has been tested on two different datasets with a human in a real-world closed environment. The solution has provided promising results in reducing the manual segmentation time, in terms of both the processing time and the human input time.
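Stripped of the interactive parts, the core of a depth-based foreground-background mask is a thresholding step over the depth values; a minimal sketch with hypothetical near/far limits in millimeters (not the framework's actual algorithm):

```python
def depth_mask(depth, near, far):
    """Mark pixels whose depth (mm) lies within [near, far] as foreground."""
    return [[1 if near <= d <= far else 0 for d in row] for row in depth]

# hypothetical 2x3 depth frame in millimeters
frame = [[900, 1500, 4000],
         [1200, 1800, 3500]]
mask = depth_mask(frame, near=1000, far=2000)
print(mask)  # [[0, 1, 0], [1, 1, 0]]
```

In an interactive setting, the user-adjustable parameters would be `near` and `far`, refined until the mask matches the person in the scene.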


Subject(s)
Algorithms , Human Body , Computers , Humans , Image Processing, Computer-Assisted/methods , Semantics
5.
Sensors (Basel) ; 22(6)2022 Mar 13.
Article in English | MEDLINE | ID: mdl-35336395

ABSTRACT

Current research endeavors in the application of artificial intelligence (AI) methods to the diagnosis of COVID-19 have proven indispensable, with very promising results. Despite these promising results, there are still limitations in real-time detection of COVID-19 using reverse transcription polymerase chain reaction (RT-PCR) test data, such as limited datasets, imbalanced classes, a high misclassification rate of models, and the need for specialized research in identifying the best features and thus improving prediction rates. This study aims to investigate and apply the ensemble learning approach to develop prediction models for effective detection of COVID-19 using routine laboratory blood test results. Hence, an ensemble machine learning-based COVID-19 detection system is presented, aiming to aid clinicians in diagnosing this virus effectively. The experiment was conducted using custom convolutional neural network (CNN) models as a first-stage classifier and 15 supervised machine learning algorithms as second-stage classifiers: K-Nearest Neighbors, Support Vector Machine (Linear and RBF), Naive Bayes, Decision Tree, Random Forest, MultiLayer Perceptron, AdaBoost, ExtraTrees, Logistic Regression, Linear and Quadratic Discriminant Analysis (LDA/QDA), and the Passive Aggressive, Ridge, and Stochastic Gradient Descent classifiers. Our findings show that an ensemble learning model based on DNN and ExtraTrees achieved a mean accuracy of 99.28% and an area under the curve (AUC) of 99.4%, while AdaBoost gave a mean accuracy of 99.28% and an AUC of 98.8% on the San Raffaele Hospital dataset. A comparison of the proposed COVID-19 detection approach with other state-of-the-art approaches using the same dataset shows that the proposed method outperforms several other COVID-19 diagnostic methods.
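The two-stage ensemble idea can be sketched at its simplest: each second-stage classifier emits a label, and the labels are combined. This only illustrates ensemble voting (the paper actually stacks CNN-derived features into 15 trained classifiers; the classifier outputs below are hypothetical):

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-classifier labels for one sample by majority vote."""
    return Counter(predictions).most_common(1)[0][0]

# hypothetical labels from three second-stage classifiers for four samples
stage2 = [["pos", "pos", "neg"],
          ["neg", "neg", "neg"],
          ["pos", "neg", "pos"],
          ["neg", "pos", "neg"]]
print([majority_vote(p) for p in stage2])  # ['pos', 'neg', 'pos', 'neg']
```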


Subject(s)
Artificial Intelligence , COVID-19 , Bayes Theorem , COVID-19/diagnosis , Hematologic Tests , Humans , Machine Learning
6.
Sensors (Basel) ; 22(9)2022 May 01.
Article in English | MEDLINE | ID: mdl-35591146

ABSTRACT

Pedestrian occurrences in images and videos must be accurately recognized in a number of applications that may improve the quality of human life. Radar can be used to identify pedestrians: when distinct portions of an object move in front of a radar, micro-Doppler signals are produced that may be utilized to identify the object. Using a deep-learning network and time-frequency analysis, we offer a method for classifying pedestrians and animals based on their micro-Doppler radar signature features. Based on these signatures, we employed a convolutional neural network (CNN) to recognize pedestrians and animals. The proposed approach was evaluated on the MAFAT Radar Challenge dataset. Encouraging results were obtained, with an AUC (Area Under Curve) value of 0.95 on the public test set and over 0.85 on the final (private) test set. In contrast to the more common shallow CNN architectures, the proposed DNN architecture is one of the first attempts to use such an approach in the domain of radar data. The use of synthetic radar data, which greatly improved the final result, is the other novel aspect of our work.
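A micro-Doppler signature is typically obtained by time-frequency analysis of the radar return; a minimal short-time Fourier transform sketch (not the MAFAT pipeline, with a synthetic tone standing in for a radar signal) illustrates the spectrogram step:

```python
import cmath, math

def dft_mag(window):
    """Magnitudes of the first n/2 DFT bins of one window."""
    n = len(window)
    return [abs(sum(window[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n)))
            for k in range(n // 2)]

def spectrogram(signal, win=8, hop=4):
    """Short-time Fourier magnitude: one DFT per overlapping window."""
    return [dft_mag(signal[i:i + win])
            for i in range(0, len(signal) - win + 1, hop)]

# synthetic tone sitting exactly in frequency bin 2 of an 8-sample window
sig = [math.sin(2 * math.pi * 2 * t / 8) for t in range(32)]
spec = spectrogram(sig)
# every frame peaks at bin 2, mimicking a constant Doppler component
```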


Subject(s)
Deep Learning , Pedestrians , Animals , Humans , Neural Networks, Computer , Radar , Ultrasonography, Doppler
7.
Medicina (Kaunas) ; 58(8)2022 Aug 12.
Article in English | MEDLINE | ID: mdl-36013557

ABSTRACT

Background and Objectives: Clinical diagnosis has become very significant in today's health system. Brain cancer, one of the most serious diseases and a leading cause of cancer mortality globally, is a key research topic in the field of medical imaging. The examination and prognosis of brain tumors can be improved by an early and precise diagnosis based on magnetic resonance imaging. For computer-aided diagnosis methods to assist radiologists in the proper detection of brain tumors, medical imagery must be detected, segmented, and classified. Manual brain tumor detection is a monotonous and error-prone procedure for radiologists; hence, it is very important to implement an automated method. As a result, a precise brain tumor detection and classification method is presented. Materials and Methods: The proposed method has five steps. In the first step, linear contrast stretching is used to bring out the edges in the source image. In the second step, a custom 17-layer deep neural network architecture is developed for the segmentation of brain tumors. In the third step, a modified MobileNetV2 architecture is used for feature extraction and is trained using transfer learning. In the fourth step, an entropy-based controlled method is used along with a multiclass support vector machine (M-SVM) for best-feature selection. In the final step, the M-SVM is used for brain tumor classification, identifying meningioma, glioma, and pituitary images. Results: The proposed method was demonstrated on the BraTS 2018 and Figshare datasets. The experimental study shows that the proposed brain tumor detection and classification method outperforms other methods both visually and quantitatively, obtaining accuracies of 97.47% and 98.92%, respectively. Finally, we adopt an eXplainable Artificial Intelligence (XAI) method to explain the results. Conclusions: Our proposed approach for brain tumor detection and classification has outperformed prior methods, achieving improved accuracy in both visual and quantitative evaluation.
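The first step of the pipeline, linear contrast stretching, can be sketched directly; the input grid and output range below are hypothetical, not values from the paper:

```python
def stretch(img, out_min=0, out_max=255):
    """Linearly map the pixel range [min, max] onto [out_min, out_max]."""
    lo = min(min(row) for row in img)
    hi = max(max(row) for row in img)
    scale = (out_max - out_min) / (hi - lo)
    return [[round(out_min + (p - lo) * scale) for p in row] for row in img]

# hypothetical low-contrast 2x2 image
print(stretch([[50, 100], [150, 200]]))  # [[0, 85], [170, 255]]
```

Stretching the full intensity range this way makes edges easier to pick out in the subsequent segmentation step.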


Subject(s)
Brain Neoplasms , Support Vector Machine , Artificial Intelligence , Brain Neoplasms/diagnostic imaging , Humans , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Neural Networks, Computer
8.
Sensors (Basel) ; 21(12)2021 Jun 08.
Article in English | MEDLINE | ID: mdl-34201039

ABSTRACT

The majority of current research focuses on reconstructing a single static object from a given point cloud. However, the existing approaches are not applicable to real-world applications such as dynamic and morphing scene reconstruction. To solve this, we propose a novel two-tiered deep neural network architecture, which is capable of reconstructing self-obstructed human-like morphing shapes from a depth frame in conjunction with the camera's intrinsic parameters. The tests were performed on a custom dataset generated using a combination of the AMASS and MoVi datasets. The proposed network achieved a Jaccard index of 0.7907 for the first tier, which is used to extract the region of interest from the point cloud. The second tier of the network achieved an Earth Mover's distance of 0.0256 and a Chamfer distance of 0.276, indicating good experimental results. Further, subjective inspection of the reconstruction results shows strong predictive capabilities of the network, with the solution being able to reconstruct limb positions from very few object details.


Subject(s)
Imaging, Three-Dimensional , Neural Networks, Computer , Extremities , Humans
9.
Sensors (Basel) ; 21(11)2021 Jun 03.
Article in English | MEDLINE | ID: mdl-34205120

ABSTRACT

Diabetic retinopathy (DR) is the main cause of blindness in diabetic patients. Early and accurate diagnosis can improve the analysis and prognosis of the disease. One of the earliest symptoms of DR is the appearance of hemorrhages in the retina. Therefore, we propose a new method for accurate hemorrhage detection from retinal fundus images. First, the proposed method uses a modified contrast enhancement method to improve the edge details of the input retinal fundus images. In the second stage, a new convolutional neural network (CNN) architecture is proposed to detect hemorrhages, and a modified pre-trained CNN model is used to extract features from the detected hemorrhages. In the third stage, all extracted feature vectors are fused using the convolutional sparse image decomposition method, and finally, the best features are selected using the multi-logistic regression controlled entropy variance approach. The proposed method is evaluated on 1509 images from the HRF, DRIVE, STARE, MESSIDOR, DIARETDB0, and DIARETDB1 databases and achieves an average accuracy of 97.71%, which is superior to previous works. Moreover, the proposed hemorrhage detection system attains better performance, in terms of visual quality and quantitative analysis with high accuracy, in comparison with state-of-the-art methods.


Subject(s)
Deep Learning , Diabetes Mellitus , Algorithms , Fundus Oculi , Hemorrhage , Humans , Neural Networks, Computer , Retina
10.
Sensors (Basel) ; 21(11)2021 May 26.
Article in English | MEDLINE | ID: mdl-34073427

ABSTRACT

With the majority of research on 3D object reconstruction focusing on single static synthetic object reconstruction, there is a need for a method capable of reconstructing morphing objects in dynamic scenes without external influence. However, such research requires the time-consuming creation of real-world object ground truths. To solve this, we propose a novel three-stage deep adversarial neural network architecture capable of denoising and refining real-world depth sensor input for full human body posture reconstruction. The proposed network achieved Earth Mover and Chamfer distances of 0.059 and 0.079 on synthetic datasets, respectively, which indicates experimental results on par with other approaches, in addition to the ability to reconstruct from maskless real-world depth frames. Additional visual inspection of the reconstructed point clouds has shown that the suggested approach manages to deal with the majority of real-world depth sensor noise, with the exception of large deformities in the depth field.


Subject(s)
Algorithms , Neural Networks, Computer , Humans , Recreation
11.
Sensors (Basel) ; 21(21)2021 Nov 02.
Article in English | MEDLINE | ID: mdl-34770595

ABSTRACT

In healthcare, a multitude of data is collected from medical sensors and devices, such as X-ray machines, magnetic resonance imaging, computed tomography (CT), and so on, which can be analyzed by artificial intelligence methods for the early diagnosis of diseases. Recently, the outbreak of the COVID-19 disease caused many deaths. Computer vision researchers support medical doctors by employing deep learning techniques on medical images to diagnose COVID-19 patients, and various methods have been proposed for COVID-19 case classification. Here, a new automated technique is proposed using parallel fusion and optimization of deep learning models. The proposed technique starts with contrast enhancement using a combination of top-hat and Wiener filters. Two pre-trained deep learning models (AlexNet and VGG16) are employed and fine-tuned according to the target classes (COVID-19 and healthy). Features are extracted and fused using a parallel fusion approach (parallel positive correlation). Optimal features are selected using the entropy-controlled firefly optimization method, and the selected features are classified using machine learning classifiers such as the multiclass support vector machine (MC-SVM). Experiments were carried out using the Radiopaedia database and achieved an accuracy of 98%. Moreover, a detailed analysis is conducted, showing the improved performance of the proposed scheme.


Subject(s)
COVID-19 , Deep Learning , Animals , Artificial Intelligence , Entropy , Fireflies , Humans , SARS-CoV-2 , Tomography, X-Ray Computed
12.
Sensors (Basel) ; 21(1)2020 Dec 24.
Article in English | MEDLINE | ID: mdl-33374461

ABSTRACT

We propose a deep learning method based on the Region-Based Convolutional Neural Network (R-CNN) architecture for the evaluation of sperm head motility in human semen videos. The neural network performs the segmentation of sperm heads, while the proposed central coordinate tracking algorithm allows us to calculate the movement speed of sperm heads. We achieved 91.77% (95% CI, 91.11-92.43%) accuracy of sperm head detection on the VISEM (A Multimodal Video Dataset of Human Spermatozoa) sperm sample video dataset. The mean absolute error (MAE) of sperm head vitality prediction was 2.92 (95% CI, 2.46-3.37), while the Pearson correlation between actual and predicted sperm head vitality was 0.969. The experimental results show the applicability of the proposed method in an automated artificial insemination workflow.
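The central coordinate tracking step reduces to measuring centroid displacement across frames; a minimal sketch with hypothetical per-frame centroids (the frame rate is a placeholder, not a VISEM parameter):

```python
import math

def head_speed(track, fps):
    """Mean centroid displacement per frame, scaled to units per second."""
    steps = [math.dist(a, b) for a, b in zip(track, track[1:])]
    return sum(steps) / len(steps) * fps

# hypothetical sperm-head centroids (pixels) in three consecutive frames
track = [(0, 0), (3, 4), (6, 8)]
print(head_speed(track, fps=1))  # 5.0 pixels per frame at fps=1
```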


Subject(s)
Deep Learning , Insemination, Artificial , Semen Analysis , Humans , Male , Neural Networks, Computer , Spermatozoa
13.
Sensors (Basel) ; 20(11)2020 Jun 06.
Article in English | MEDLINE | ID: mdl-32517223

ABSTRACT

We present a model for digital neural impairment screening and self-assessment, which can evaluate cognitive and motor deficits in patients with symptoms of central nervous system (CNS) disorders, such as mild cognitive impairment (MCI), Parkinson's disease (PD), Huntington's disease (HD), or dementia. The data were collected with an Android mobile application that can track cognitive, hand tremor, energy expenditure, and speech features of subjects. We extracted 238 features as the model inputs using 16 tasks, 12 of which were based on a self-administered cognitive testing (SAGE) methodology, while the others used finger tapping and voice features acquired from the sensors of a smart mobile device (smartphone or tablet). Fifteen subjects were involved in the investigation: 7 patients with neurological disorders (1 with Parkinson's disease, 3 with Huntington's disease, 1 with early dementia, 1 with cerebral palsy, and 1 post-stroke) and 8 healthy subjects. The finger tapping, SAGE, energy expenditure, and speech analysis features were used for neural impairment evaluations. The best results were achieved using a fusion of 13 classifiers for the combined finger tapping and SAGE features (96.12% accuracy), and using a bidirectional long short-term memory (BiLSTM) network (94.29% accuracy) for the speech analysis features.
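Finger-tapping features of the kind mentioned above typically derive from inter-tap intervals; a minimal sketch with hypothetical tap timestamps (not the study's 238-feature set):

```python
def tap_features(timestamps):
    """Inter-tap intervals: mean and variability (population std, seconds)."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    mean = sum(gaps) / len(gaps)
    var = sum((g - mean) ** 2 for g in gaps) / len(gaps)
    return mean, var ** 0.5

# hypothetical tap times (s) from a screen-tapping task
taps = [0.00, 0.21, 0.40, 0.62, 0.80]
mean_gap, gap_std = tap_features(taps)
```

A tremor-affected subject would be expected to show a larger interval variability than a healthy one, which is the intuition behind such features.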


Subject(s)
Central Nervous System Diseases/diagnosis , Cognition , Mobile Applications , Motor Disorders/diagnosis , Speech , Humans , Neuropsychological Tests
14.
Sensors (Basel) ; 20(7)2020 Apr 03.
Article in English | MEDLINE | ID: mdl-32260316

ABSTRACT

State-of-the-art intelligent versatile applications provoke the usage of full 3D, depth-based streams, especially in the scenarios of intelligent remote control and communications, where virtual and augmented reality will soon become outdated and are forecast to be replaced by point cloud streams providing explorable 3D environments of communication and industrial data. One of the most novel approaches employed in modern object reconstruction methods is to use a priori knowledge of the objects that are being reconstructed. Our approach is different, as we strive to reconstruct a 3D object within the much more difficult scenario of limited data availability: the data stream is often limited by insufficient depth camera coverage and, as a result, objects are occluded and data is lost. Our proposed hybrid artificial neural network modifications have improved the reconstruction results by 8.53%, allowing for much more precise filling of occluded object sides and reduction of noise during the process. Furthermore, the addition of object segmentation masks and individual object instance classification is a leap forward towards general-purpose scene reconstruction, as opposed to a single-object reconstruction task, due to the ability to mask out overlapping object instances and use only the masked object area in the reconstruction process.

15.
Sensors (Basel) ; 19(16)2019 Aug 16.
Article in English | MEDLINE | ID: mdl-31426441

ABSTRACT

We propose a method for generating synthetic images of human embryo cells that could later be used for classification, analysis, and training, thus resulting in the creation of new synthetic image datasets for research areas lacking real-world data. Our focus was not only to generate a generic image of a cell, but to make sure that it has all the necessary attributes of a real cell image, providing a fully realistic synthetic version. We use human embryo images obtained during cell development processes for training a deep neural network (DNN). The proposed algorithm used a generative adversarial network (GAN) to generate one-, two-, and four-cell stage images. We achieved a misclassification rate of 12.3% for the generated images, while the expert evaluation showed a true recognition rate (TRR) of 80.00% (for four-cell images), 86.8% (for two-cell images), and 96.2% (for one-cell images). Texture-based comparison using the Haralick features showed that there are no statistically significant (Student's t-test, p < 0.01) differences between the real and synthetic embryo images, except for the sum of variance (for one-cell and four-cell images) and the variance and sum of average (for two-cell images) features. The obtained synthetic images can later be adapted to facilitate the development, training, and evaluation of new algorithms for embryo image processing tasks.

16.
Sensors (Basel) ; 19(9)2019 May 07.
Article in English | MEDLINE | ID: mdl-31067769

ABSTRACT

The ability to precisely locate and navigate a partially impaired or a blind person within a building is increasingly important for a wide variety of public safety and localization services. In this paper, we explore indoor localization algorithms using Bluetooth Low Energy (BLE) beacons. We propose using the BLE beacon's received signal strength indication (RSSI) and the geometric distance from the current beacon to the fingerprint point in the framework of fuzzy logic for calculating the Euclidean distance for the subsequent determination of location. According to our results, the fingerprinting algorithm with fuzzy logic type-2 (hesitant fuzzy sets) is fit for use as an indoor localization method with BLE beacons. The average error of localization is only 0.43 m, and the algorithm obtains a navigation precision of 98.2 ± 1%. This precision confirms that the algorithms provide great aid to a visually impaired person in unknown spaces, especially those designed without physical tactile guides, as confirmed by low Fréchet and Hausdorff distance values and high navigation efficiency index (NEI) scores.
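Fingerprinting at its crispest is a nearest-neighbor search over stored RSSI vectors; the sketch below shows only that baseline (the paper's actual contribution, the type-2 fuzzy weighting, is not reproduced, and the beacon values are hypothetical):

```python
import math

def locate(rssi, fingerprints):
    """Return the fingerprint point whose stored RSSI vector is closest
    (Euclidean distance) to the live BLE reading."""
    return min(fingerprints, key=lambda p: math.dist(rssi, fingerprints[p]))

# hypothetical fingerprint database: point name -> RSSI (dBm) per beacon
fp = {"hall": [-60, -72, -80],
      "room": [-75, -60, -65]}
print(locate([-62, -70, -79], fp))  # "hall"
```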


Subject(s)
Fuzzy Logic , Spatial Navigation/physiology , Visually Impaired Persons , Wireless Technology , Adult , Algorithms , Female , Humans , Male , Middle Aged , Young Adult
17.
Sensors (Basel) ; 19(7)2019 Mar 31.
Article in English | MEDLINE | ID: mdl-30935104

ABSTRACT

Depth-based reconstruction of three-dimensional (3D) shape of objects is one of core problems in computer vision with a lot of commercial applications. However, the 3D scanning for point cloud-based video streaming is expensive and is generally unattainable to an average user due to required setup of multiple depth sensors. We propose a novel hybrid modular artificial neural network (ANN) architecture, which can reconstruct smooth polygonal meshes from a single depth frame, using a priori knowledge. The architecture of neural network consists of separate nodes for recognition of object type and reconstruction thus allowing for easy retraining and extension for new object types. We performed recognition of nine real-world objects using the neural network trained on the ShapeNetCore model dataset. The results evaluated quantitatively using the Intersection-over-Union (IoU), Completeness, Correctness and Quality metrics, and qualitative evaluation by visual inspection demonstrate the robustness of the proposed architecture with respect to different viewing angles and illumination conditions.

18.
Sensors (Basel) ; 18(5)2018 May 14.
Article in English | MEDLINE | ID: mdl-29757988

ABSTRACT

The Internet of Things (IoT) introduces many new challenges which cannot be solved using traditional cloud and host computing models. A new architecture known as fog computing is emerging to address these technological and security gaps. Traditional security paradigms focused on providing perimeter-based protections and client/server point-to-point protocols (e.g., Transport Layer Security (TLS)) are no longer the best choices for addressing new security challenges in fog computing end devices, where energy and computational resources are limited. In this paper, we present a lightweight secure streaming protocol for the fog computing "Fog Node-End Device" layer. This protocol is lightweight, connectionless, supports broadcast and multicast operations, and is able to provide data source authentication, data integrity, and confidentiality. The protocol is based on simple and energy-efficient cryptographic methods, such as Hash Message Authentication Codes (HMAC) and symmetrical ciphers, and uses modified User Datagram Protocol (UDP) packets to embed authentication data into streaming data. Data redundancy could be added to improve reliability in lossy networks. The experimental results summarized in this paper confirm that the proposed method efficiently uses energy and computational resources and at the same time provides security properties on par with the Datagram TLS (DTLS) standard.
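The HMAC side of such a scheme can be sketched with the standard library; the key, tag length, and payload below are hypothetical, and the confidentiality (symmetric cipher) layer is omitted:

```python
import hmac, hashlib

KEY = b"shared-secret"   # hypothetical pre-shared key

def seal(payload: bytes) -> bytes:
    """Append a truncated HMAC-SHA256 tag to a datagram payload."""
    tag = hmac.new(KEY, payload, hashlib.sha256).digest()[:8]
    return payload + tag

def verify(packet: bytes):
    """Return the payload if its tag checks out, else None."""
    payload, tag = packet[:-8], packet[-8:]
    expected = hmac.new(KEY, payload, hashlib.sha256).digest()[:8]
    return payload if hmac.compare_digest(tag, expected) else None

pkt = seal(b"sensor:42")
assert verify(pkt) == b"sensor:42"          # authentic packet accepted
assert verify(b"X" + pkt[1:]) is None       # tampered payload rejected
```

Truncating the tag trades some security margin for smaller packets, which matches the energy-constrained setting the paper targets.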

19.
Brain Sci ; 14(4)2024 Apr 14.
Article in English | MEDLINE | ID: mdl-38672031

ABSTRACT

This paper presents a novel approach to improving the detection of mild cognitive impairment (MCI) through the use of super-resolved structural magnetic resonance imaging (MRI) and optimized deep learning models. The study introduces enhancements to the perceptual quality of super-resolved 2D structural MRI images using advanced loss functions, modifications to the upscaler part of the generator, and experiments with various discriminators within a generative adversarial training setting. It empirically demonstrates the effectiveness of super-resolution in the MCI detection task, showcasing performance improvements across different state-of-the-art classification models. The paper also addresses the challenge of accurately capturing perceptual image quality, particularly when images contain checkerboard artifacts, and proposes a methodology that incorporates hyperparameter optimization through a Pareto optimal Markov blanket (POMB). This approach systematically explores the hyperparameter space, focusing on reducing overfitting and enhancing model generalizability. The research findings contribute to the field by demonstrating that super-resolution can significantly improve the quality of MRI images for MCI detection, highlighting the importance of choosing an adequate discriminator and the potential of super-resolution as a preprocessing step to boost classification model performance.

20.
Heliyon ; 10(15): e34402, 2024 Aug 15.
Article in English | MEDLINE | ID: mdl-39145034

ABSTRACT

The threat posed by Alzheimer's disease (AD) to human health has grown significantly. However, the precise diagnosis and classification of AD stages remain a challenge. Neuroimaging methods such as structural magnetic resonance imaging (sMRI) and fluorodeoxyglucose positron emission tomography (FDG-PET) have been used to diagnose and categorize AD. However, feature selection approaches that are frequently used to extract additional data from multimodal imaging are prone to errors. This paper suggests using a static pulse-coupled neural network and a Laplacian pyramid to combine sMRI and FDG-PET data. After that, the fused images are used to train the Mobile Vision Transformer (MViT), optimized with Pareto-Optimal Quantum Dynamic Optimization for Neural Architecture Search, while the fused images are augmented to avoid overfitting; the model then classifies unfused MRI and FDG-PET images obtained from the AD Neuroimaging Initiative (ADNI) and Open Access Series of Imaging Studies (OASIS) datasets into various stages of AD. The architectural hyperparameters of MViT are optimized using Quantum Dynamic Optimization, which ensures a Pareto-optimal solution. The Peak Signal-to-Noise Ratio (PSNR), the Mean Squared Error (MSE), and the Structural Similarity Index Measure (SSIM) are used to measure the quality of the fused image. We found that the fused image was consistent in all metrics, with an SSIM of 0.64, a PSNR of 35.60, and an MSE of 0.21 for the FDG-PET image. In the classification of AD vs. cognitively normal (CN), AD vs. mild cognitive impairment (MCI), and CN vs. MCI, the precision of the proposed method is 94.73%, 92.98%, and 89.36%, respectively. The sensitivity is 90.70%, 90.70%, and 90.91%, while the specificity is 100%, 100%, and 85.71%, respectively, on the ADNI MRI test data.
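The PSNR and MSE fusion-quality metrics above follow standard definitions; a minimal sketch over hypothetical 8-bit pixel lists (not the paper's images):

```python
import math

def mse(a, b):
    """Mean squared error between two equal-length pixel lists."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def psnr(a, b, peak=255):
    """Peak signal-to-noise ratio in dB for 8-bit images."""
    e = mse(a, b)
    return float("inf") if e == 0 else 10 * math.log10(peak ** 2 / e)

ref  = [52, 55, 61, 59]   # hypothetical reference pixels
test = [54, 55, 60, 58]   # hypothetical fused-image pixels
print(mse(ref, test))     # (4 + 0 + 1 + 1) / 4 = 1.5
print(round(psnr(ref, test), 2))  # ≈ 46.37 dB
```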
