ABSTRACT
Medical image fusion aims to merge the important information from images of the same organ of the human body acquired with different modalities, in order to create a more informative fused image. In recent years, deep learning (DL) methods have achieved significant breakthroughs in the field of image fusion and have become an active research topic due to their strong feature extraction and data representation abilities. In this work, the stacked sparse auto-encoder (SSAE), a general category of deep neural networks, is exploited for medical image fusion. The SSAE is an efficient technique for unsupervised feature extraction with a high capability for representing complex data. The proposed fusion method is carried out as follows. Firstly, the source images are decomposed into low- and high-frequency coefficient sub-bands with the non-subsampled contourlet transform (NSCT). The NSCT is a flexible multi-scale decomposition technique that is superior to traditional decomposition techniques in several aspects. After that, the SSAE is employed for feature extraction to obtain a sparse and deep representation of the high-frequency coefficients. Then, the spatial frequencies of the obtained features are computed and used to fuse the high-frequency coefficients. A maximum-based fusion rule is applied to fuse the low-frequency sub-band coefficients. The final integrated image is acquired by applying the inverse NSCT. The proposed method has been applied and assessed on various groups of medical image modalities. Experimental results prove that the proposed method can effectively merge multimodal medical images while preserving detail information.
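A minimal sketch of the spatial-frequency-driven fusion of two high-frequency sub-bands is given below. It is not the authors' code: in the paper the spatial frequency is computed on the SSAE features, whereas here, for brevity, it is applied directly to the NSCT coefficients, assumed to be 2-D NumPy arrays whose dimensions are multiples of the block size.

```python
# Illustrative sketch: block-wise spatial frequency decides which source sub-band wins.
import numpy as np

def spatial_frequency(block):
    """SF = sqrt(RF^2 + CF^2), with RF/CF the row/column gradient energies."""
    rf = np.sqrt(np.mean(np.diff(block, axis=1) ** 2))
    cf = np.sqrt(np.mean(np.diff(block, axis=0) ** 2))
    return np.sqrt(rf ** 2 + cf ** 2)

def fuse_high_freq(coef_a, coef_b, block=8):
    """Pick, block by block, the sub-band with the larger activity (SF)."""
    fused = np.empty_like(coef_a)
    for i in range(0, coef_a.shape[0], block):
        for j in range(0, coef_a.shape[1], block):
            a = coef_a[i:i + block, j:j + block]
            b = coef_b[i:i + block, j:j + block]
            fused[i:i + block, j:j + block] = a if spatial_frequency(a) >= spatial_frequency(b) else b
    return fused
```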
Subjects
Algorithms , Neural Networks, Computer , Humans

ABSTRACT
The security of information is necessary for the success of any system, so there is a need for a robust mechanism that verifies any person before allowing access to stored data. To increase the security level and the privacy of users against attacks, cancelable biometrics can be utilized. The principal objective of cancelable biometrics is to generate new distorted biometric templates to be stored in biometric databases instead of the original ones. This paper presents effective methods based on different discrete transforms, such as the Discrete Fourier Transform (DFT), Fractional Fourier Transform (FrFT), Discrete Cosine Transform (DCT), and Discrete Wavelet Transform (DWT), in addition to matrix rotation, to generate cancelable biometric templates that meet revocability and prevent the restoration of the original templates from the generated cancelable ones. Rotated versions of the images are generated in either the spatial or the transform domain and added together to eliminate the ability to recover the original biometric templates. The cancelability performance is evaluated and tested through extensive simulations for all proposed methods on different face and fingerprint datasets. Low Equal Error Rate (EER) values with high Area under the Receiver Operating Characteristic curve (AROC) values reflect the efficiency of the proposed methods, especially those dependent on the DCT and FrFT. Moreover, a comparative study is performed to evaluate the proposed method with all transformations and to select the best one from the security perspective. Furthermore, a comparative analysis is carried out to test the performance of the proposed schemes against existing schemes. The obtained outcomes reveal the efficiency of the proposed cancelable biometric schemes, with an average AROC of 0.998, an EER of 0.0023, a FAR of 0.008, and an FRR of 0.003.
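The rotation-and-addition idea can be illustrated with the short sketch below, which shows only the DCT variant; the paper also applies it with the DFT, FrFT, and DWT. A square grayscale biometric image is assumed, and the number of rotated copies is an illustrative choice, not the authors' exact setting.

```python
# Hedged sketch: cancelable template from the DCT of an image plus DCTs of its rotated copies.
import numpy as np
from scipy.fftpack import dct

def cancelable_template_dct(image, num_rotations=3):
    """Sum the 2-D DCT of a (square) image with the DCTs of its 90-degree-rotated copies.

    Adding several rotated versions makes recovering the original image from the
    stored template ill-posed, which provides the revocability property sought here.
    """
    def dct2(x):
        return dct(dct(x, axis=0, norm='ortho'), axis=1, norm='ortho')

    template = dct2(image)
    for k in range(1, num_rotations + 1):
        template = template + dct2(np.rot90(image, k))
    return template
```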
ABSTRACT
As the demand for high-bandwidth Internet connections continues to surge, industries are exploring innovative ways to harness this connectivity, and smart agriculture stands at the forefront of this evolution. In this paper, we examine the challenges faced by Internet Service Providers (ISPs) in efficiently managing bandwidth and traffic within their networks. We propose a synergy between two pivotal technologies, Multi-Protocol Label Switching-Traffic Engineering (MPLS-TE) and Differentiated Services Quality of Service (Diffserv-QoS), which have implications beyond traditional networks and resonate strongly with the realm of smart agriculture. The increasing adoption of technology in agriculture relies heavily on real-time data, remote monitoring, and automated processes. This dynamic nature requires robust and reliable high-bandwidth connections to facilitate data flow between sensors, devices, and central management systems. By optimizing bandwidth utilization through MPLS-TE and implementing traffic control mechanisms with Diffserv-QoS, ISPs can create a resilient network foundation for smart agriculture applications. The integration of MPLS-TE and Diffserv-QoS results in significant enhancements in throughput and a considerable reduction in jitter. With IPv4 headers, the proposed configuration achieves a throughput of 5.83 Mbps and reduces jitter to 3 ms.
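As a side note, the two reported metrics can be computed from a packet trace as in the sketch below. This is not taken from the paper; it assumes lists of packet arrival times (in seconds) and packet sizes (in bytes), and uses a simplified mean-deviation definition of jitter rather than the RFC 3550 estimator.

```python
# Illustrative helpers for the two reported metrics: throughput (Mbps) and jitter (ms).
def throughput_mbps(arrival_times, sizes_bytes):
    """Total bits delivered divided by the capture duration, in Mbps."""
    duration = arrival_times[-1] - arrival_times[0]
    return (8 * sum(sizes_bytes)) / (duration * 1e6)

def mean_jitter_ms(arrival_times):
    """Mean absolute deviation of inter-arrival gaps from their average, in ms."""
    gaps = [t2 - t1 for t1, t2 in zip(arrival_times, arrival_times[1:])]
    mean_gap = sum(gaps) / len(gaps)
    return 1e3 * sum(abs(g - mean_gap) for g in gaps) / len(gaps)
```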
Subjects
Algorithms , Computer Communication Networks , Computer Simulation , Wireless Technology , Agriculture

ABSTRACT
Reinforcement of Internet of Medical Things (IoMT) network security has become extremely significant, as these networks enable both patients and healthcare providers to communicate with each other by exchanging medical signals, data, and vital reports in a safe way. To ensure the safe transmission of sensitive information, robust and secure access mechanisms are paramount. Vulnerabilities in these networks, particularly at the access points, could expose patients to significant risks. Among the possible security measures, biometric authentication is becoming a more feasible choice, with a focus on leveraging regularly monitored biomedical signals such as Electrocardiogram (ECG) signals due to their unique characteristics. A notable challenge within all biometric authentication systems is the risk of losing the original biometric traits if hackers successfully compromise the biometric template storage. Current research endorses the replacement of the original biometrics used in access control with cancellable templates. These are produced using encryption or non-invertible transformation, which improves security by enabling the biometric templates to be changed in case unwanted access is detected. This study presents a comprehensive framework for ECG-based recognition with cancellable templates that can be used for accessing IoMT networks. An innovative methodology is introduced through non-invertible modification of ECG signals using blind signal separation and lightweight encryption. The basic idea depends on the assumption that, if the ECG signal and an auxiliary audio signal of the same person are subjected to a separation algorithm, the algorithm will yield two uncorrelated components through the minimization of a correlation cost function. Hence, the outputs of the separation algorithm are distorted versions of the ECG and audio signals. The distorted versions of the ECG signals can be treated with a lightweight encryption stage and used as cancellable templates. Security enhancement is achieved through a lightweight encryption stage based on a user-specific pattern and an XOR operation, thereby reducing the processing burden associated with conventional encryption methods. The efficacy of the proposed framework is demonstrated through its application on the ECG-ID and MIT-BIH datasets, yielding promising results. The experimental evaluation reveals an Equal Error Rate (EER) of 0.134 on the ECG-ID dataset and 0.4 on the MIT-BIH dataset, alongside an exceptionally large Area under the Receiver Operating Characteristic curve (AROC) of 99.96% for both datasets. These results underscore the framework's potential in securing IoMT networks through cancellable biometrics, offering a hybrid security model that combines the strengths of non-invertible transformations and lightweight encryption.
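A hedged sketch of the lightweight-encryption stage alone is shown below (the blind-separation step is omitted): the distorted ECG component is quantized to bytes and XOR-ed with a repeated user-specific key pattern. The function name and the 8-bit quantization are illustrative assumptions, not the authors' exact settings.

```python
# Sketch: XOR-based lightweight encryption of a distorted (separated) ECG component.
import numpy as np

def xor_encrypt_template(distorted_ecg, user_key):
    """distorted_ecg: 1-D float signal; user_key: bytes holding the user-specific pattern."""
    x = np.asarray(distorted_ecg, dtype=float)
    q = (255 * (x - x.min()) / (np.ptp(x) + 1e-12)).astype(np.uint8)   # 8-bit quantization
    key = np.resize(np.frombuffer(user_key, dtype=np.uint8), q.shape)  # repeat pattern to signal length
    return np.bitwise_xor(q, key)

# Revoking a compromised template only requires issuing a new user_key (new XOR pattern).
```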
Subjects
Computer Security , Electrocardiography , Internet of Things , Electrocardiography/methods , Humans , Algorithms , Signal Processing, Computer-Assisted , Biometric Identification/methods

ABSTRACT
Diabetic retinopathy (DR) is a significant cause of vision impairment, emphasizing the critical need for early detection and timely intervention to avert visual deterioration. Diagnosing DR is inherently complex, as it necessitates the meticulous examination of intricate retinal images by experienced specialists, which makes early, accurate diagnosis essential for effective treatment and prevention of eventual blindness. Traditional diagnostic methods, relying on human interpretation of medical images, face challenges in terms of accuracy and efficiency. In the present research, we introduce a method that offers superior precision in DR diagnosis compared to traditional methods by employing advanced deep learning techniques. Central to this approach is transfer learning: pre-existing, well-established models, specifically InceptionResNetv2 and Inceptionv3, are used to extract features, with selected layers fine-tuned to the requirements of this specific diagnostic task. Concurrently, we present a newly devised model, DiaCNN, which is tailored for the classification of eye diseases. To prove the efficacy of the proposed methodology, we leveraged the Ocular Disease Intelligent Recognition (ODIR) dataset, which comprises eight eye disease categories. The results are promising. The InceptionResNetv2 model, incorporating transfer learning, registered 97.5% accuracy in both the training and testing phases. Its counterpart, the Inceptionv3 model, achieved 99.7% accuracy during training and 97.5% during testing. Remarkably, the DiaCNN model achieved 100% accuracy in training and 98.3% in testing. These figures represent a significant leap in classification accuracy compared with existing state-of-the-art diagnostic methods. By facilitating earlier detection and more timely interventions, the proposed approach stands to reduce the incidence of blindness associated with DR and to improve patient outcomes.
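The transfer-learning setup can be sketched as below in TensorFlow/Keras. This is not the authors' exact configuration: the input size, optimizer, dropout rate, and the choice to keep the whole backbone frozen are assumptions; only the backbone (InceptionResNetV2 pre-trained on ImageNet) and the eight ODIR output classes come from the abstract.

```python
# Illustrative transfer-learning sketch: frozen InceptionResNetV2 backbone + trainable head.
import tensorflow as tf

base = tf.keras.applications.InceptionResNetV2(
    weights='imagenet', include_top=False, input_shape=(299, 299, 3))
base.trainable = False                                    # freeze the pre-trained backbone

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(8, activation='softmax'),       # 8 ODIR disease categories
])
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
```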
Subjects
Deep Learning , Diabetes Mellitus , Diabetic Retinopathy , Humans , Diabetic Retinopathy/diagnosis , Retina , Algorithms , Blindness

ABSTRACT
Acute lower respiratory infection is a leading cause of death in developing countries. Hence, progress has been made toward early detection and treatment, but there is still a need for improved diagnostic and therapeutic strategies, particularly in resource-limited settings. Chest X-ray and computed tomography (CT) have the potential to serve as effective screening tools for lower respiratory infections, but the use of artificial intelligence (AI) in these areas is limited. To address this gap, we present a computer-aided diagnostic system for chest X-ray and CT images of several common pulmonary diseases, including COVID-19, viral pneumonia, bacterial pneumonia, tuberculosis, lung opacity, and various types of carcinoma. The proposed system depends on super-resolution (SR) techniques to enhance image details. Deep learning (DL) techniques are used for both SR reconstruction and classification, with the InceptionResNetv2 model used as a feature extractor in conjunction with a multi-class support vector machine (MCSVM) classifier. In this paper, we compare the performance of the proposed model with those of other classification models, such as ResNet101 and Inceptionv3, and evaluate the effectiveness of using both softmax and MCSVM classifiers. The proposed system was tested on three publicly available datasets of CT and X-ray images, and it achieved a classification accuracy of 98.028% using a combination of SR and InceptionResNetv2. Overall, the system has the potential to serve as a valuable screening tool for lower respiratory disorders and to assist clinicians in interpreting chest X-ray and CT images. In resource-limited settings, it can also provide valuable diagnostic support.
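A sketch of the "deep features + multi-class SVM" stage alone follows (the SR step is omitted). The pooling layer choice, SVM kernel, and C value are illustrative assumptions; only the use of InceptionResNetV2 as a feature extractor feeding an MCSVM comes from the abstract.

```python
# Illustrative sketch: InceptionResNetV2 pooled features classified by a multi-class SVM.
import tensorflow as tf
from sklearn.svm import SVC

extractor = tf.keras.applications.InceptionResNetV2(
    weights='imagenet', include_top=False, pooling='avg')

def deep_features(images):
    """images: float array of shape (N, 299, 299, 3)."""
    x = tf.keras.applications.inception_resnet_v2.preprocess_input(images)
    return extractor.predict(x, verbose=0)

# Multi-class SVM on the extracted feature vectors (one-vs-one by default in scikit-learn).
svm = SVC(kernel='rbf', C=10.0)
# svm.fit(deep_features(train_images), train_labels)
# preds = svm.predict(deep_features(test_images))
```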
ABSTRACT
This paper explores the issue of COVID-19 detection from X-ray images. X-ray images, in general, suffer from low quality and low resolution, which is why the detection of different diseases from X-ray images requires sophisticated algorithms. First, machine learning (ML) classifiers are applied to features extracted manually from the X-ray images, and twelve classifiers are compared for this task. Simulation results reveal the superiority of the Gaussian process (GP) and random forest (RF) classifiers. To extend the feasibility of this study, we modify the feature extraction strategy to produce deep features. Four pre-trained models, namely ResNet50, ResNet101, Inception-v3, and InceptionResnet-v2, are adopted in this study. Simulation results prove that InceptionResnet-v2 and ResNet101 with the GP classifier achieve the best performance. Moreover, transfer learning (TL) is introduced to enhance the COVID-19 detection process. The selected classification hierarchy is also compared with a convolutional neural network (CNN) model built from scratch to prove its classification quality. Simulation results prove that deep features and TL methods provide the best performance, reaching 100% accuracy.
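The classifier-comparison step can be sketched with scikit-learn as below, assuming a precomputed feature matrix (hand-crafted or deep features) and labels; only the two best-performing classifiers reported in the abstract are shown, and the 5-fold cross-validation is an illustrative choice.

```python
# Illustrative sketch: scoring GP and RF classifiers on the same feature matrix.
from sklearn.ensemble import RandomForestClassifier
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.model_selection import cross_val_score

def compare_classifiers(features, labels):
    """features: (N, D) array of extracted features; labels: (N,) class labels."""
    for name, clf in [('GP', GaussianProcessClassifier()),
                      ('RF', RandomForestClassifier(n_estimators=200))]:
        scores = cross_val_score(clf, features, labels, cv=5)
        print(f'{name}: mean accuracy = {scores.mean():.3f}')
```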
ABSTRACT
Smart health surveillance technology has attracted wide attention among patients, professionals, and specialists for providing early detection of critical abnormal situations without the need for direct contact with the patient. This paper presents a secure, portable, smart multi-vital-sign monitoring system based on Internet-of-Things (IoT) technology. The implemented system is designed to measure the key health parameters, namely heart rate (HR), blood oxygen saturation (SpO2), and body temperature, simultaneously. The captured physiological signals are processed and encrypted using the Advanced Encryption Standard (AES) algorithm before being sent to the cloud. An ESP8266 integrated unit is used for processing, encryption, and providing connectivity to the cloud over Wi-Fi. On the other side, trusted medical organization servers receive and decrypt the measurements and display the values on a monitoring dashboard for the authorized specialists. The measurements of the proposed system are compared with those of a number of commercial medical devices, and the results demonstrate that the proposed system's measurements lie within the 95% confidence interval. Moreover, the Root Mean Squared Error (RMSE), Mean Absolute Error (MAE), and Mean Relative Error (MRE) of the proposed system are 1.44, 1.12, and 0.012, respectively, for HR; 1.13, 0.92, and 0.009, respectively, for SpO2; and 0.13, 0.11, and 0.003, respectively, for body temperature. These results demonstrate the high accuracy and reliability of the proposed system.
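For reference, the three reported agreement metrics can be computed as in the sketch below, assuming equal-length lists of readings from the proposed system and a commercial reference device; this is not taken from the paper.

```python
# Illustrative sketch: RMSE, MAE, and MRE between system readings and a reference device.
import numpy as np

def agreement_metrics(system, reference):
    s, r = np.asarray(system, float), np.asarray(reference, float)
    rmse = np.sqrt(np.mean((s - r) ** 2))          # Root Mean Squared Error
    mae = np.mean(np.abs(s - r))                   # Mean Absolute Error
    mre = np.mean(np.abs(s - r) / np.abs(r))       # Mean Relative Error
    return rmse, mae, mre
```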
Subjects
Cloud Computing , Internet of Things , Communication , Humans , Oxygen Saturation , Reproducibility of Results

ABSTRACT
For high-accuracy classification of DNA sequences with Convolutional Neural Networks (CNNs), it is essential to use an efficient sequence representation that can accelerate similarity comparison between DNA sequences. In addition, CNNs can be improved by avoiding the dimensionality problem associated with multi-layer CNN features. This paper presents a new approach for the classification of bacterial DNA sequences based on a custom layer. A CNN is used with the Frequency Chaos Game Representation (FCGR) of DNA. The FCGR is adopted as a sequence representation method, with a suitable choice of the length k of the words whose occurrence frequencies are counted in the DNA sequences. The DNA sequence is mapped using FCGR, which produces an image of the gene sequence that displays both local and global patterns. A pre-trained CNN is used for image classification. First, the image is converted to feature maps through convolutional layers. This is sometimes followed by a down-sampling operation that reduces the spatial size of the feature maps and removes redundant spatial information using pooling layers. Random Projection (RP) with an activation function, which preserves data variety while introducing some randomness, is suggested as a replacement for the pooling layers. Feature reduction is achieved while keeping high accuracy in classifying bacteria into taxonomic levels. The simulation results show that the proposed RP-based CNN offers a trade-off between accuracy and processing time.
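The FCGR representation can be sketched as below, with implementation details (corner assignment, normalization, handling of ambiguous bases) chosen for illustration rather than taken from the paper: each k-mer of the sequence is mapped to one cell of a 2^k x 2^k grid, and the cell counts form the image fed to the CNN.

```python
# Illustrative FCGR sketch: k-mer frequency matrix of a DNA sequence.
import numpy as np

CORNERS = {'A': (0, 0), 'C': (0, 1), 'G': (1, 1), 'T': (1, 0)}   # chaos-game corner bits

def fcgr(sequence, k=6):
    """Return the (2^k, 2^k) normalized frequency matrix of k-mers in `sequence`."""
    size = 2 ** k
    grid = np.zeros((size, size))
    seq = sequence.upper()
    for i in range(len(seq) - k + 1):
        kmer = seq[i:i + k]
        if any(b not in CORNERS for b in kmer):
            continue                                # skip ambiguous bases such as N
        x = y = 0
        for b in kmer:                              # build the cell index bit by bit
            cx, cy = CORNERS[b]
            x = (x << 1) | cx
            y = (y << 1) | cy
        grid[y, x] += 1
    return grid / max(grid.max(), 1)                # normalize to [0, 1] for the CNN
```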