Results 1 - 20 of 35
1.
Opt Lett ; 49(8): 2097-2100, 2024 Apr 15.
Article in English | MEDLINE | ID: mdl-38621085

ABSTRACT

The exploitation of the full structure of multimode light fields enables compelling capabilities in many fields, including classical and quantum information science. We exploit data encoding on the optical phase of the pulses of a femtosecond laser source for a photonic implementation of a reservoir computing protocol. Rather than relying on intensity detection, data readout is performed via homodyne detection, which accesses combinations of the amplitude and phase of the field. Numerical and experimental results on nonlinear autoregressive moving average (NARMA) tasks and laser dynamics prediction are shown. We discuss perspectives for quantum-enhanced protocols.
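The NARMA benchmark referenced in this abstract has a standard recursive definition; the following is a minimal sketch of NARMA-10 target generation under the usual conventions (variable names and the input range are illustrative, not taken from the paper):

```python
import numpy as np

def narma10(u):
    """NARMA-10 target: a 10th-order nonlinear moving-average recursion on u."""
    y = np.zeros(len(u))
    for t in range(9, len(u) - 1):
        y[t + 1] = (0.3 * y[t]
                    + 0.05 * y[t] * y[t - 9:t + 1].sum()
                    + 1.5 * u[t] * u[t - 9]
                    + 0.1)
    return y

rng = np.random.default_rng(0)
u = rng.uniform(0.0, 0.5, 1000)  # i.i.d. drive; range chosen for stability
y = narma10(u)                   # the sequence a reservoir is trained to emit
```

A reservoir computer is then scored by how well a trained linear readout of its internal states reproduces `y` from `u`.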

2.
Opt Express ; 32(4): 6733-6747, 2024 Feb 12.
Article in English | MEDLINE | ID: mdl-38439372

ABSTRACT

Squeezing is known to be a quantum resource in many applications in metrology, cryptography, and computing, being related to entanglement in multimode settings. In this work, we address the effects of squeezing in neuromorphic machine learning for time-series processing. In particular, we consider a loop-based photonic architecture for reservoir computing and address the effect of squeezing in the reservoir, considering a Hamiltonian with both active and passive coupling terms. Interestingly, squeezing can be either detrimental or beneficial for quantum reservoir computing when moving from ideal to realistic models, accounting for experimental noise. We demonstrate that multimode squeezing enhances its accessible memory, which improves the performance in several benchmark temporal tasks. The origin of this improvement is traced back to the robustness of the reservoir to readout noise, which is increased with squeezing.

3.
Opt Express ; 31(12): 19255-19265, 2023 Jun 05.
Article in English | MEDLINE | ID: mdl-37381344

ABSTRACT

Artificial neural networks (ANNs) are a groundbreaking technology massively employed in a plethora of fields. Currently, ANNs are mostly implemented on electronic digital computers, but analog photonic implementations are very attractive, mainly because of their low power consumption and high bandwidth. We recently demonstrated a photonic neuromorphic computing system based on frequency multiplexing that executes ANN algorithms such as reservoir computing and extreme learning machines. Neuron signals are encoded in the amplitudes of the lines of a frequency comb, and neuron interconnections are realized through frequency-domain interference. Here we present an integrated programmable spectral filter designed to manipulate the optical frequency comb in our frequency-multiplexing neuromorphic computing platform. The programmable filter controls the attenuation of 16 independent wavelength channels with a 20 GHz spacing. We discuss the design and the results of the chip characterization, and we demonstrate preliminarily, through numerical simulation, that the produced chip is suitable for the envisioned neuromorphic computing application.

4.
Phys Rev E ; 108(6-2): 065302, 2023 Dec.
Article in English | MEDLINE | ID: mdl-38243479

ABSTRACT

The main motivation of this paper is to introduce ordinal diversity, a symbolic tool able to quantify the degree of diversity of multiple time series. Analytical, numerical, and experimental analyses illustrate the utility of this measure to quantify how diverse, from an ordinal perspective, a set of many time series is. We show that ordinal diversity is able to characterize dynamical richness and dynamical transitions in stochastic processes and deterministic systems, including chaotic regimes. This ordinal tool also serves to identify optimal operating conditions in the machine learning approach of reservoir computing. These results allow us to envision potential applications for the handling and characterization of large amounts of data, paving the way for addressing some of the most pressing issues of the current big data paradigm.
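Ordinal tools of this kind are built from permutation (ordinal) patterns. A minimal sketch of extracting the ordinal-pattern distribution from a single series — the building block such measures share; the ordinal-diversity measure itself is not reproduced here:

```python
import numpy as np
from itertools import permutations

def ordinal_distribution(x, d=3):
    """Relative frequencies of the d! order-d ordinal patterns in series x."""
    counts = {p: 0 for p in permutations(range(d))}
    for i in range(len(x) - d + 1):
        # the pattern is the rank ordering of d consecutive samples
        counts[tuple(np.argsort(x[i:i + d]))] += 1
    n = len(x) - d + 1
    return {p: c / n for p, c in counts.items()}

rng = np.random.default_rng(1)
dist = ordinal_distribution(rng.normal(size=5000), d=3)
# white noise visits all six order-3 patterns with near-equal probability
```

Structured or deterministic dynamics skews this distribution, which is what ordinal quantifiers exploit.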

5.
Phys Rev E ; 106(4-1): 044211, 2022 Oct.
Article in English | MEDLINE | ID: mdl-36397530

ABSTRACT

We design scalable neural networks adapted to translational symmetries in dynamical systems, capable of inferring untrained high-dimensional dynamics for different system sizes. We train these networks to predict the dynamics of delay-dynamical and spatiotemporal systems for a single size. Then, we drive the networks by their own predictions. We demonstrate that by scaling the size of the trained network, we can predict the complex dynamics for larger or smaller system sizes. Thus, the network learns from a single example and by exploiting symmetry properties infers entire bifurcation diagrams.

6.
IEEE Trans Neural Netw Learn Syst ; 33(6): 2664-2675, 2022 Jun.
Article in English | MEDLINE | ID: mdl-34460401

ABSTRACT

Reservoir computing has emerged as a powerful machine learning paradigm for harvesting nontrivial information processing out of disordered physical systems driven by sequential inputs. To this end, the system observables must become nonlinear functions of the input history. We show that encoding the input to quantum or classical fluctuations of a network of interacting harmonic oscillators can lead to a high performance comparable to that of a standard echo state network in several nonlinear benchmark tasks. This equivalence in performance holds even with a linear Hamiltonian and a readout linear in the system observables. Furthermore, we find that the performance of the network of harmonic oscillators in nonlinear tasks is robust to errors both in input and reservoir observables caused by external noise. For any reservoir computing system with a linear readout, the magnitude of trained weights can either amplify or suppress noise added to reservoir observables. We use this general result to explain why the oscillators are robust to noise and why having precise control over reservoir memory is important for noise robustness in general. Our results pave the way toward reservoir computing harnessing fluctuations in disordered linear systems.
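For context, the echo state network used as the performance baseline above has a simple structure: a fixed random recurrent network driven by the input, with only a linear readout trained, here by ridge regression. A minimal sketch on a short-term memory task (all sizes and scalings are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
N, T = 100, 2000
W = rng.normal(size=(N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius 0.9
w_in = rng.uniform(-0.5, 0.5, N)

u = rng.uniform(-1, 1, T)       # input sequence
target = np.roll(u, 3)          # task: recall the input 3 steps back
x = np.zeros(N)
states = np.empty((T, N))
for t in range(T):
    x = np.tanh(W @ x + w_in * u[t])  # echo state update
    states[t] = x

# linear readout trained by ridge regression (first 100 steps as washout)
S, y = states[100:], target[100:]
w_out = np.linalg.solve(S.T @ S + 1e-6 * np.eye(N), S.T @ y)
pred = S @ w_out
```

Only `w_out` is trained; the recurrent weights stay fixed, which is the defining feature of the reservoir computing paradigm the abstract compares against.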

7.
IEEE Trans Neural Netw Learn Syst ; 33(6): 2714-2725, 2022 Jun.
Article in English | MEDLINE | ID: mdl-34662281

ABSTRACT

Physical dynamical systems are able to process information in a nontrivial manner. The machine learning paradigm of reservoir computing (RC) provides a suitable framework for information processing in (analog) dynamical systems. The potential of dynamical systems for RC can be quantitatively characterized by the information processing capacity (IPC) measure. Here, we evaluate the IPC of a reservoir computer based on a single analog nonlinear node coupled with delay. We link the extracted IPC measures to the dynamical regime of the reservoir, reporting an experimentally measured nonlinear memory of up to seventh order. In addition, we find a nonhomogeneous distribution of the linear and nonlinear contributions to the IPC as a function of the system's operating conditions. Finally, we unveil the role of noise in the IPC of the analog implementation by performing ad hoc numerical simulations. In this manner, we identify the so-called edge of stability as the most promising operating condition of the experimental implementation for RC purposes in terms of computational power and noise robustness. Similarly, a strong input drive is shown to have beneficial properties, albeit with a reduced memory depth.

8.
Phys Rev Lett ; 127(10): 100502, 2021 Sep 03.
Article in English | MEDLINE | ID: mdl-34533342

ABSTRACT

Closed quantum systems exhibit different dynamical regimes, like many-body localization or thermalization, which determine the mechanisms of spread and processing of information. Here we address the impact of these dynamical phases in quantum reservoir computing, an unconventional computing paradigm recently extended into the quantum regime that exploits dynamical systems to solve nonlinear and temporal tasks. We establish that the thermal phase is naturally adapted to the requirements of quantum reservoir computing and report an increased performance at the thermalization transition for the studied tasks. Uncovering the underlying physical mechanisms behind optimal information processing capabilities of spin networks is essential for future experimental implementations and provides a new perspective on dynamical phases.

9.
Entropy (Basel) ; 23(8)2021 Jul 27.
Article in English | MEDLINE | ID: mdl-34441109

ABSTRACT

Time-delayed interactions naturally appear in a multitude of real-world systems due to the finite propagation speed of physical quantities. Often, the time scales of the interactions are unknown to an external observer and need to be inferred from time series of observed data. We explore, in this work, the properties of several ordinal-based quantifiers for the identification of time delays from time series. To that end, we generate artificial time series of stochastic and deterministic time-delay models. We find that the presence of a nonlinearity in the generating model has consequences for the distribution of ordinal patterns and, consequently, for the delay-identification qualities of the quantifiers. Here, we put forward a novel ordinal-based quantifier that is particularly sensitive to nonlinearities in the generating model and compare it with previously defined quantifiers. We conclude from our analysis of artificially generated data that the proper identification of the presence of a time delay and of its precise value from time series benefits from the complementary use of ordinal-based quantifiers and the standard autocorrelation function. We further validate these tools with a practical example on real-world data originating from the North Atlantic Oscillation weather phenomenon.
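The autocorrelation baseline for this delay-identification problem can be sketched on an artificial linear stochastic delay model of the kind described: the dominant autocorrelation peak recovers the feedback delay (feedback strength, noise level, and delay value are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
tau, T = 25, 10000
x = rng.normal(size=T)                 # initial history is just noise
for t in range(tau, T):
    x[t] = 0.7 * x[t - tau] + 0.5 * rng.normal()  # delayed linear feedback

x -= x.mean()
# sample autocorrelation for lags 1..199, normalized by the lag-0 value
acf = np.array([np.dot(x[:-k], x[k:]) for k in range(1, 200)]) / np.dot(x, x)
lag = 1 + int(np.argmax(acf))          # location of the dominant peak
```

For nonlinear generating models this peak can be much less pronounced, which is why the abstract advocates complementing the autocorrelation function with ordinal-based quantifiers.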

10.
Phys Rev E ; 100(4-1): 042215, 2019 Oct.
Article in English | MEDLINE | ID: mdl-31770914

ABSTRACT

In this paper, we introduce a model to describe the decay of the number of unobserved ordinal patterns as a function of the time series length in noisy chaotic dynamics. More precisely, we show that a stretched exponential model fits the decay of the number of unobserved ordinal patterns for both discrete and continuous chaotic systems contaminated with observational noise, independently of the noise level and the sampling time. Numerical simulations, obtained from the logistic map and the x coordinate of the Lorenz system, both operating in a fully chaotic regime, were used as test beds. In addition, we contrast our results with those obtained from purely stochastic dynamics. The fitting parameters, namely the stretching exponent and the characteristic decay rate, are used to distinguish whether the dynamical nature of the data sequence is stochastic or chaotic. Finally, the analysis of experimental records associated with the hyperchaotic pulsations of an optoelectronic oscillator allows us to illustrate the applicability of the proposed approach in a practical context.
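The quantity being modeled — the number of ordinal patterns not yet observed after N data points — is easy to compute directly. A minimal sketch on the fully chaotic logistic map (the stretched-exponential fit itself is omitted; pattern order and series lengths are illustrative):

```python
import numpy as np
from itertools import permutations
from math import factorial

def unobserved(x, d):
    """Number of order-d ordinal patterns never visited by series x."""
    seen = {tuple(np.argsort(x[i:i + d])) for i in range(len(x) - d + 1)}
    return factorial(d) - len(seen)

x = np.empty(3000)
x[0] = 0.4
for t in range(len(x) - 1):
    x[t + 1] = 4.0 * x[t] * (1.0 - x[t])   # logistic map, fully chaotic

counts = [unobserved(x[:n], d=4) for n in (100, 500, 3000)]
# the count decays with series length but saturates above zero: noise-free
# chaotic maps have forbidden patterns that are never visited
```

Observational noise makes forbidden patterns slowly appear, which is why the decay curve — rather than the asymptotic count — carries the stochastic-versus-chaotic signature studied in the paper.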

11.
Chaos ; 29(11): 113111, 2019 Nov.
Article in English | MEDLINE | ID: mdl-31779344

ABSTRACT

Forecasting the dynamics of chaotic systems from the analysis of their output signals is a challenging problem with applications in most fields of modern science. In this work, we use a laser model to compare the performance of several machine learning algorithms for forecasting the amplitude of upcoming emitted chaotic pulses. We simulate the dynamics of an optically injected semiconductor laser that presents a rich variety of dynamical regimes when changing the parameters. We focus on a particular dynamical regime that can show ultrahigh intensity pulses, reminiscent of rogue waves. We compare the goodness of the forecast for several popular methods in machine learning, namely, deep learning, support vector machine, nearest neighbors, and reservoir computing. Finally, we analyze how their performance for predicting the height of the next optical pulse depends on the amount of noise and the length of the time series used for training.
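Of the methods compared in this abstract, nearest-neighbors forecasting is the simplest to sketch: predict the next value by averaging the successors of the past delay vectors most similar to the current one. A generic sketch, not the paper's configuration (embedding dimension, neighbor count, and the test signal are illustrative):

```python
import numpy as np

def knn_forecast(series, m=3, k=5):
    """Average the successors of the k past windows closest to the last one."""
    X = np.array([series[i:i + m] for i in range(len(series) - m)])
    dist = np.linalg.norm(X - series[-m:], axis=1)  # compare to latest window
    idx = np.argsort(dist)[:k]                      # k nearest analogues
    return series[idx + m].mean()                   # average their successors

t = np.arange(500)
series = np.sin(0.3 * t)               # a predictable test signal
pred = knn_forecast(series, m=3, k=5)  # estimate of the next sample
```

For chaotic pulse trains the same idea applies to windows preceding each pulse, with the pulse height as the successor value.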

12.
Comput Methods Programs Biomed ; 169: 1-8, 2019 Feb.
Article in English | MEDLINE | ID: mdl-30638588

ABSTRACT

BACKGROUND AND OBJECTIVE: In this work, we develop a fully automatic and real-time ventricular heartbeat classifier based on a single ECG lead. Single-lead ECG classifiers can be especially useful for wearable technologies that provide continuous and long-term monitoring of the electrocardiogram. These wearables usually have a few non-standard leads, and the quality of the signals depends on the user's physical activity. METHODS: The proposed method uses an Echo State Network (ESN) to classify ECG signals following the Association for the Advancement of Medical Instrumentation (AAMI) recommendations with an inter-patient scheme. To achieve real-time classification, the classifier itself and the feature extraction approach are fast and computationally efficient. In addition, our approach allows transferring the knowledge from one database to another without additional training. RESULTS: The classification performance of the proposed model is validated on the MIT-BIH arrhythmia and INCART databases. The sensitivity and precision of the proposed method for the MIT-BIH arrhythmia database are 95.3 and 88.8 for the modified lead II, and 90.9 and 89.2 for the V1 lead. The reported results are further compared to existing methodologies in the literature. Our methodology is a competitive single-lead ventricular heartbeat classifier, comparable to state-of-the-art algorithms using multiple leads. CONCLUSIONS: The proposed fully automated, single-lead, real-time classifier of ventricular heartbeats reports improved classification accuracy in different leads of the evaluated databases in comparison with other single-lead heartbeat classifiers. These results open the possibility of applying our methodology to wearable long-term monitoring devices with an unconventional placement of the electrodes.


Subject(s)
Electrocardiography , Heart Rate , Signal Processing, Computer-Assisted , Wearable Electronic Devices , Heart Ventricles
13.
PLoS One ; 13(5): e0197597, 2018.
Article in English | MEDLINE | ID: mdl-29795611

ABSTRACT

This work focuses on the experimental analysis of electroencephalography (EEG) data, in which multiple sensors record oscillatory voltage time series. The EEG data analyzed in this manuscript were acquired using a low-cost commercial headset, the Emotiv EPOC+. Our goal is to compare different techniques for the optimal estimation of collective rhythms from EEG data. To this end, a traditional method, principal component analysis (PCA), is compared to more recent approaches for extracting a collective rhythm from phase-synchronized data. Here, we extend the work by Schwabedal and Kantz (PRL 116, 104101 (2016)), evaluating the performance of the Kosambi-Hilbert torsion (KHT) method in extracting a collective rhythm from multivariate oscillatory time series and comparing it to results obtained from PCA. The KHT method takes advantage of the singular value decomposition, accounts for possible phase lags among the different time series, and allows the analysis to be focused on a specific spectral band, optimally amplifying the signal-to-noise ratio of a common rhythm. We evaluate the performance of these methods on two particular sets of data: EEG data recorded with closed eyes and EEG data recorded while observing a screen flickering at 15 Hz. We found an improvement in the signal-to-noise ratio of the collective signal for the KHT over the PCA, particularly when random temporal shifts are added to the channels.
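The PCA baseline for extracting a collective rhythm can be sketched with synthetic multichannel data: a shared oscillation with per-channel gains plus independent channel noise. All numbers below (channel count, sampling rate, noise level) are illustrative, not from the study:

```python
import numpy as np

rng = np.random.default_rng(7)
t = np.arange(2000) / 200.0                      # 10 s at 200 Hz
common = np.sin(2 * np.pi * 15 * t)              # shared 15 Hz rhythm

# 8 channels: the common rhythm with random gains plus independent noise
gains = rng.uniform(0.5, 1.5, 8)
X = np.outer(gains, common) + 0.5 * rng.normal(size=(8, len(t)))

# PCA via SVD of the mean-centered channel matrix
Xc = X - X.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
collective = Vt[0]                               # first principal component
corr = abs(np.corrcoef(collective, common)[0, 1])
```

PCA breaks down when channels carry the rhythm with phase lags, which is the case the KHT method is designed to handle.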


Subject(s)
Electroencephalography/instrumentation , Electroencephalography/standards , Signal-To-Noise Ratio , Algorithms , Brain Waves , Electroencephalography/methods , Humans , Principal Component Analysis , Signal Processing, Computer-Assisted
14.
Front Neuroinform ; 11: 43, 2017.
Article in English | MEDLINE | ID: mdl-28713260

ABSTRACT

Certain differences between the brain networks of healthy and epileptic subjects have been reported even during interictal activity, in which no epileptic seizures occur. Here, magnetoencephalography (MEG) data recorded in the resting state are used to discriminate between healthy subjects and patients with either idiopathic generalized epilepsy or frontal focal epilepsy. Signal features extracted from interictal periods without any epileptiform activity are used to train a machine learning algorithm to draw a diagnosis. This is potentially relevant for patients without frequent or easily detectable spikes. To analyze the data, we use an up-to-date machine learning algorithm and explore the benefits of including different features obtained from the MEG data as inputs to the algorithm. We find that the relative power spectral density of the MEG time series is sufficient to distinguish between healthy and epileptic subjects with high prediction accuracy. We also find that a combination of features, such as the phase-locking value and the relative power spectral density, allows discrimination between generalized and focal epilepsy when these features are calculated over a filtered version of the signals in certain frequency bands. Machine learning algorithms are currently being applied to the analysis and classification of brain signals. It is, however, less straightforward to identify which features of these signals are suitable as inputs to such machine learning algorithms. Here, we evaluate the influence of input feature selection in a clinical scenario for distinguishing between healthy and epileptic subjects. Our results indicate that such a distinction is possible with high accuracy (86%), allowing the discrimination between idiopathic generalized and frontal focal epilepsy types.
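The relative power spectral density feature this abstract highlights can be sketched with a plain FFT periodogram: the fraction of total spectral power falling in each frequency band. Band definitions, sampling rate, and the test signal below are illustrative; the study's exact pipeline is not reproduced:

```python
import numpy as np

def relative_band_power(sig, fs, bands):
    """Fraction of total spectral power in each named frequency band."""
    freqs = np.fft.rfftfreq(len(sig), 1.0 / fs)
    psd = np.abs(np.fft.rfft(sig)) ** 2
    total = psd[1:].sum()                      # exclude the DC component
    return {name: psd[(freqs >= lo) & (freqs < hi)].sum() / total
            for name, (lo, hi) in bands.items()}

fs = 250.0
t = np.arange(int(10 * fs)) / fs
# 10 Hz oscillation plus broadband noise, mimicking a dominant alpha rhythm
sig = np.sin(2 * np.pi * 10 * t) + 0.3 * np.random.default_rng(5).normal(size=t.size)
bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
rel = relative_band_power(sig, fs, bands)
```

Normalizing by total power makes the feature insensitive to overall amplitude differences between sensors and subjects.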

15.
Sci Rep ; 7: 43428, 2017 02 24.
Article in English | MEDLINE | ID: mdl-28233876

ABSTRACT

We present a novel encryption scheme wherein an encryption key is generated by two distant complex nonlinear units, forced into synchronization by a chaotic driver. The concept is sufficiently generic to be implemented on photonic, optoelectronic, or electronic platforms. The method for generating the key bitstream from the chaotic signals is reconfigurable. Although derived from a deterministic process, the obtained bit series fulfill the randomness conditions defined by the National Institute of Standards and Technology (NIST) test suite. We demonstrate the feasibility of our concept on an electronic delay-oscillator circuit and test the robustness against attacks using a state-of-the-art system identification method.

16.
Opt Express ; 25(3): 2401-2412, 2017 Feb 06.
Article in English | MEDLINE | ID: mdl-29519086

ABSTRACT

Photonic implementations of reservoir computing (RC) have been receiving considerable attention due to their excellent performance, hardware and energy efficiency, and speed. Here, we study a particularly attractive all-optical system using optical information injection into a semiconductor laser with delayed feedback. We connect its injection-locking, consistency, and memory properties to the RC performance in a nonlinear prediction task. We find that partial injection locking achieves a good combination of consistency and memory. Therefore, we are able to provide a physical basis for identifying operational parameters suitable for prediction.

17.
Opt Lett ; 41(12): 2871-4, 2016 Jun 15.
Article in English | MEDLINE | ID: mdl-27304310

ABSTRACT

We experimentally demonstrate a key exchange cryptosystem based on the phenomenon of identical chaos synchronization. In our protocol, the private key is symmetrically generated by the two communicating partners. It is built up from the synchronized bits occurring between two current-modulated bidirectionally coupled semiconductor lasers with additional self-feedback. We analyze the security of the exchanged key and discuss the amplification of its privacy. We demonstrate private key generation rates up to 11 Mbit/s over a public channel.

18.
Article in English | MEDLINE | ID: mdl-26172773

ABSTRACT

For delay systems the sign of the sub-Lyapunov exponent (sub-LE) determines key dynamical properties. This includes the properties of strong and weak chaos and of consistency. Here we present a robust algorithm based on reconstruction of the local linearized equations of motion, which allows for calculating the sub-LE from time series. The algorithm is inspired by a method introduced by Pyragas for a nondelayed drive-response scheme [K. Pyragas, Phys. Rev. E 56, 5183 (1997)]. In the presented extension to delay systems, the delayed feedback takes over the role of the drive, whereas the response of the low-dimensional node leads to the sub-Lyapunov exponent. Our method is based on a low-dimensional representation of the delay system. We introduce the basic algorithm for a discrete scalar map, extend the concept to scalar continuous delay systems, and give an outlook to the case of a full vector-state system, from which only a scalar observable is recorded.

19.
Article in English | MEDLINE | ID: mdl-26082714

ABSTRACT

Learning and mimicking how the brain processes information has been a major research challenge for decades. Despite these efforts, little is known about how we encode, maintain, and retrieve information. One hypothesis assumes that transient states are generated in our intricate network of neurons when the brain is stimulated by a sensory input. Based on this idea, powerful computational schemes have been developed. These schemes, known as machine-learning techniques, include artificial neural networks, support vector machines, and reservoir computing, among others. In this paper, we concentrate on the reservoir computing (RC) technique using delay-coupled systems. Unlike traditional RC, where information is processed in large recurrent networks of interconnected artificial neurons, we choose a minimal design, implemented via a simple nonlinear dynamical system subject to a self-feedback loop with delay. This design is not intended to represent an actual brain circuit, but aims at finding the minimum ingredients that allow the development of an efficient information processor. This simple scheme not only allows us to address fundamental questions but also permits simple hardware implementations. By reducing the neuro-inspired reservoir computing approach to its bare essentials, we find that nonlinear transient responses of the simple dynamical system enable the processing of information with excellent performance and at unprecedented speed. We specifically explore different hardware implementations and, in doing so, learn about the role of nonlinearity, noise, system responses, connectivity structure, and the quality of projection onto the required high-dimensional state space. Besides its relevance for the understanding of basic mechanisms, this scheme opens direct technological opportunities that could not be addressed with previous approaches.
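The minimal delay-based design described above is often modeled, in discrete time, as a ring of virtual nodes driven through a random input mask — a simple cycle reservoir that emulates the delay line. A simplified sketch with a trained linear readout (all parameters are illustrative, not the paper's hardware):

```python
import numpy as np

rng = np.random.default_rng(8)
Nv, T = 50, 1200                     # virtual nodes, time steps
mask = rng.choice([-0.1, 0.1], Nv)   # fixed random input mask

u = rng.uniform(-1, 1, T)
states = np.empty((T, Nv))
x = np.zeros(Nv)
for t in range(T):
    # ring coupling mimics the delay line: each node sees its neighbor's
    # previous state plus the masked input sample
    x = np.tanh(0.9 * np.roll(x, 1) + mask * u[t])
    states[t] = x

# linear readout trained to recall the previous input (a memory task)
S, y = states[200:], np.roll(u, 1)[200:]
w = np.linalg.lstsq(S, y, rcond=None)[0]
nmse = np.mean((S @ w - y) ** 2) / np.var(y)
```

In hardware, the ring corresponds to time-multiplexing a single physical nonlinear node across one delay interval, which is what makes the single-node scheme practical.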

20.
Nat Commun ; 6: 7425, 2015 Jun 17.
Article in English | MEDLINE | ID: mdl-26081000

ABSTRACT

In many dynamical systems and complex networks, time delays appear naturally in feedback loops or coupling connections of individual elements. Moreover, in a whole class of systems, these delay times can depend on the state of the system. Nevertheless, so far the understanding of the impact of such state-dependent delays remains poor, with a particular lack of systematic experimental studies. Here we fill this gap by introducing a conceptually simple photonic system that exhibits dynamics of self-organized switching between two loops with two different delay times, depending on the state of the system. On the basis of experiments and modelling on semiconductor lasers with frequency-selective feedback mirrors, we characterize the switching between the states defined by the individual delays. Our approach opens new perspectives for the study of this class of dynamical systems and enables applications in which the self-organized switching can be exploited.
