ABSTRACT
We propose a model-free time delay signature (TDS) extraction method for optical chaos systems. The TDS can be identified from time series without prior knowledge of the underlying physical processes. In optical chaos secure communication systems, the chaos carrier is usually generated by a laser diode subject to opto-electronic or all-optical time-delayed feedback, and one of the most important security considerations is the concealment of the TDS. To date, statistical analysis methods such as the autocorrelation function (ACF) and delayed mutual information (DMI) have typically been used to unveil the TDS. However, the effectiveness of these methods is reduced as the nonlinearity of the chaos system increases, and certain TDS concealment strategies have been designed specifically to defeat statistical analysis. In our previous work, a convolutional neural network proved effective for TDS extraction from chaos systems with high loop nonlinearity, but that method relies on knowledge of the detailed structure of the chaos system. In this work, we formulate a blind identification method based on a long short-term memory neural network (LSTM-NN) model. The method is validated against the two major types of optical chaos systems, i.e., the opto-electronic oscillator (OEO) chaos system and the laser chaos system based on internal nonlinearity. Moreover, several security-enhanced chaotic systems are also studied. The results show that the proposed method has high tolerance to additive noise and requires less data than existing methods.
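As a point of reference for the statistical baseline mentioned above, the sketch below illustrates ACF-based TDS identification on a synthetic delayed-feedback time series: the lag of the dominant autocorrelation peak is taken as the delay estimate. The surrogate signal, sampling length, delay, and feedback strength are illustrative assumptions, not data or parameters from the paper, and the sketch does not implement the proposed LSTM-NN method.

```python
import numpy as np

def acf_tds_estimate(x, max_lag):
    """Locate the dominant autocorrelation peak of a chaotic time series and
    return its lag (in samples) as the time delay signature estimate."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    acf = np.array([np.dot(x[:-k], x[k:]) for k in range(1, max_lag + 1)])
    acf /= np.dot(x, x)                        # normalize so that ACF(0) = 1
    return int(np.argmax(np.abs(acf))) + 1, acf

# Illustrative surrogate: white noise driven through a delayed nonlinear
# feedback with a delay of 3000 samples (all parameters are assumptions).
rng = np.random.default_rng(0)
n, delay = 100_000, 3000
x = rng.standard_normal(n)
for t in range(delay, n):
    x[t] += 0.6 * np.sin(x[t - delay])         # crude delayed-feedback coupling

lag_hat, _ = acf_tds_estimate(x, max_lag=6000)
print("estimated TDS lag:", lag_hat)           # expected to be close to 3000
```

In security-enhanced systems the corresponding ACF peak is deliberately suppressed, which is exactly the regime in which this statistical baseline degrades and a learned extractor becomes attractive.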
ABSTRACT
The spread of sensors and industrial systems has fostered widespread real-time data processing applications. Massive vector field data (MVFD) are generated by vast numbers of distributed sensors and are characterized by high distribution, high velocity, and high volume. As a result, computing such data on a centralized cloud faces unprecedented challenges, especially in processing delay due to the distance between the data source and the cloud. Taking advantage of data source proximity and vast distribution, edge computing is ideal for timely computing on MVFD. We are therefore motivated to propose an edge computing based MVFD processing framework. In particular, we observe that the high-volume feature of MVFD results in high data transmission delay. To solve this problem, we introduce a Data Fluidization Schedule (DFS) in our framework to reduce the data block volume and the Input/Output (I/O) latency. We evaluated the efficiency of our framework in a practical application on massive wind field data processing for cyclone recognition. The high efficiency of our framework is verified by the fact that it significantly outperforms the classical big data processing frameworks Spark and MapReduce.
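To make the block-volume idea concrete, the following sketch splits a large two-component wind field into small row blocks and processes them concurrently, so that no single transfer or computation has to wait for the full array. It only illustrates the general principle of reducing per-block volume to cut I/O latency; the block size, the thread pool, and the magnitude computation are assumptions for illustration and not the DFS mechanism of the framework itself.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def block_magnitudes(block):
    """Per-block work: wind-speed magnitude from (u, v) components."""
    u, v = block[..., 0], block[..., 1]
    return np.sqrt(u * u + v * v)

def process_in_blocks(field, block_rows=256, workers=4):
    """Slice a 2-D vector field into small row blocks and process them
    concurrently, so each block can move through I/O and compute without
    waiting for the whole array.

    field: array of shape (rows, cols, 2) holding (u, v) wind components.
    """
    blocks = [field[r:r + block_rows] for r in range(0, field.shape[0], block_rows)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(block_magnitudes, blocks))
    return np.concatenate(results, axis=0)

# Hypothetical wind field: 2048 x 2048 grid of (u, v) components.
field = np.random.default_rng(1).standard_normal((2048, 2048, 2)).astype(np.float32)
speed = process_in_blocks(field)
print(speed.shape)   # (2048, 2048)
```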
ABSTRACT
Accurate knowledge of network topology is vital for network monitoring and management. Network tomography can probe the underlying topologies of the intervening networks solely by sending and receiving packets between end hosts: the performance correlations of the end-to-end paths between each pair of end hosts can be mapped to the lengths of their shared paths, which can in turn be used to identify the interior nodes and links. However, such performance correlations are usually heavily affected by time-varying cross-traffic, making it hard to keep the estimated lengths consistent across different measurement periods; once inconsistent measurements are collected, a biased inference of the network topology is produced. In this paper, we prove sufficient conditions under which the network topology can be identified accurately despite time-varying cross-traffic. Our insight is that even though the estimated length of the shared path between two paths might be "zoomed in or out" by the cross-traffic, the network topology can still be recovered faithfully as long as we obtain the relative lengths of the shared paths among any three paths accurately.
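The following sketch illustrates why relative shared-path lengths alone can pin down a logical tree: leaves are merged greedily by the largest shared length, so only the ordering of these lengths matters and a uniform "zoom in or out" of all estimates leaves the recovered topology unchanged. The merging rule and the toy matrix are illustrative assumptions, not the inference algorithm analyzed in the paper.

```python
import numpy as np

def build_logical_tree(shared):
    """Reconstruct a binary logical tree from pairwise shared-path lengths.

    shared[i][j] holds the length of the segment that the root-to-i and
    root-to-j paths have in common. Only the *ordering* of these values is
    used, mirroring the idea that relative shared-path lengths among any
    three paths suffice.
    """
    shared = np.asarray(shared, dtype=float)
    clusters = {i: i for i in range(shared.shape[0])}    # cluster id -> subtree
    active = list(clusters)
    S = {(i, j): shared[i, j] for i in active for j in active if i < j}
    next_id = shared.shape[0]
    while len(active) > 1:
        # Merge the pair sharing the longest common path (deepest join point).
        a, b = max(S, key=S.get)
        clusters[next_id] = (clusters[a], clusters[b])
        active = [x for x in active if x not in (a, b)]
        # Shared length from the merged subtree to any other leaf: take the
        # smaller of the two children's values (robust under noisy estimates).
        newS = {(x, next_id): min(S[(min(a, x), max(a, x))],
                                  S[(min(b, x), max(b, x))]) for x in active}
        S = {k: v for k, v in S.items() if a not in k and b not in k}
        S.update(newS)
        active.append(next_id)
        next_id += 1
    return clusters[active[0]]

# Toy example: paths to leaves 0 and 1 share a long segment (length 3),
# while the path to leaf 2 branches off early (length 1).
shared = [[0, 3, 1],
          [3, 0, 1],
          [1, 1, 0]]
print(build_logical_tree(shared))   # (2, (0, 1)): 0 and 1 join deepest
```

Because the reconstruction uses only comparisons of shared lengths, scaling every estimate by a common cross-traffic factor produces exactly the same tree.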
ABSTRACT
Over 50 million people globally suffer from Alzheimer's disease (AD), emphasizing the need for efficient, early diagnostic tools. Traditional methods such as Magnetic Resonance Imaging (MRI) and Computed Tomography (CT) scans are expensive, bulky, and slow. Microwave-based techniques offer a cost-effective, non-invasive, and portable alternative to conventional neuroimaging practices. This article introduces a deep learning approach for monitoring AD, using realistic numerical brain phantoms to simulate scattered signals via the CST Studio Suite. The obtained data are preprocessed using normalization, standardization, and outlier removal to ensure data integrity. Furthermore, we propose a novel data augmentation technique to enrich the dataset across the various AD stages. Our deep learning approach combines Recursive Feature Elimination (RFE) with Principal Component Analysis (PCA) and Autoencoders (AE) for optimal feature selection. A Convolutional Neural Network (CNN) is combined with a Gated Recurrent Unit (GRU), Bidirectional Long Short-Term Memory (Bidirectional-LSTM), and Long Short-Term Memory (LSTM) to improve classification performance. The integration of RFE-PCA-AE significantly elevates performance, with the CNN+GRU model achieving an 87% accuracy rate, thus outperforming existing studies.
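As a rough illustration of how such a dimensionality-reduction front end can feed a convolutional-recurrent classifier, the sketch below chains scikit-learn's RFE and PCA into a small Keras CNN+GRU. The feature counts, layer sizes, synthetic data, and the omission of the autoencoder stage are all assumptions made for illustration; this is not the paper's trained pipeline or its reported 87% model.

```python
import numpy as np
import tensorflow as tf
from sklearn.decomposition import PCA
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for preprocessed scattered-signal features:
# 600 samples x 128 features, 4 hypothetical AD stages.
rng = np.random.default_rng(0)
X = rng.standard_normal((600, 128)).astype("float32")
y = rng.integers(0, 4, size=600)

# Stage 1: recursive feature elimination keeps the 64 most useful features.
rfe = RFE(LogisticRegression(max_iter=1000), n_features_to_select=64, step=8)
X_sel = rfe.fit_transform(X, y)

# Stage 2: PCA compresses the selected features to 32 components.
X_pca = PCA(n_components=32).fit_transform(X_sel).astype("float32")

# Stage 3: CNN + GRU classifier over the reduced feature sequence.
X_seq = X_pca[..., np.newaxis]                       # (samples, 32, 1)
model = tf.keras.Sequential([
    tf.keras.layers.Conv1D(32, 3, activation="relu", input_shape=(32, 1)),
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.GRU(64),
    tf.keras.layers.Dense(4, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X_seq, y, epochs=3, batch_size=32, verbose=0)
```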
ABSTRACT
Alzheimer's disease is a progressive neurodegenerative disorder that leads to cognitive impairment and ultimately death. To select the most effective treatment options, it is crucial to diagnose and classify the disease early, as current treatments can only delay its progression. However, previous research on Alzheimer's disease (AD) has had limitations, such as inaccuracies and reliance on small, unbalanced binary datasets. In this study, we aimed to evaluate the early stages of AD using three multiclass datasets: OASIS, EEG, and ADNI MRI. The research consisted of three phases: pre-processing, feature extraction, and classification using hybrid learning techniques. For the OASIS and ADNI MRI datasets, we computed the mean RGB value and used an averaging filter to enhance the images, and we balanced and augmented the datasets to increase their size. For the EEG dataset, we applied a band-pass filter to reduce noise and balanced the dataset using random oversampling. To extract and classify features, we utilized a hybrid technique consisting of four algorithms: AlexNet-MLP, AlexNet-ETC, AlexNet-AdaBoost, and AlexNet-NB. The results showed that the AlexNet-ETC hybrid algorithm achieved the highest accuracy of 95.32% on the OASIS dataset. On the EEG dataset, the AlexNet-MLP hybrid algorithm outperformed the other approaches with the highest accuracy of 97.71%. On the ADNI MRI dataset, the AlexNet-MLP hybrid algorithm achieved an accuracy of 92.59%. Comparing these results with the current state of the art demonstrates the effectiveness of our findings.
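The hybrid pattern described above (a deep network acting as feature extractor for a classical classifier) can be sketched as follows, here with a pretrained torchvision AlexNet supplying 4096-dimensional features to an Extra Trees classifier. The random stand-in images, label count, and classifier settings are assumptions for illustration; the paper's actual preprocessing, training procedure, and reported accuracies are not reproduced here.

```python
import numpy as np
import torch
from torchvision import models
from sklearn.ensemble import ExtraTreesClassifier

# Pretrained AlexNet used only as a fixed feature extractor
# (ImageNet weights are downloaded on first use).
alexnet = models.alexnet(weights=models.AlexNet_Weights.DEFAULT)
alexnet.classifier = alexnet.classifier[:-1]   # drop the final 1000-way layer
alexnet.eval()

def alexnet_features(images):
    """images: float tensor of shape (N, 3, 224, 224)."""
    with torch.no_grad():
        return alexnet(images).numpy()         # (N, 4096) feature vectors

# Hypothetical stand-ins for preprocessed scans and 3 AD-stage labels;
# real inputs would be normalized MRI slices or EEG spectrogram images.
images = torch.randn(16, 3, 224, 224)
labels = np.random.randint(0, 3, size=16)

feats = alexnet_features(images)
clf = ExtraTreesClassifier(n_estimators=200, random_state=0).fit(feats, labels)
print(clf.predict(feats[:4]))
```

Swapping the Extra Trees classifier for an MLP, AdaBoost, or Naive Bayes model gives the other three hybrid variants named in the abstract.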