Results 1 - 20 of 38
1.
Biomimetics (Basel); 9(6), 2024 Jun 14.
Article in English | MEDLINE | ID: mdl-38921237

ABSTRACT

Recurrent neural networks (RNNs) transmit information over time through recurrent connections. In contrast, biological neural networks use many other temporal processing mechanisms. One of these mechanisms is the inter-neuron delays caused by varying axon properties. Recently, this feature was implemented in echo state networks (ESNs), a type of RNN, by assigning spatial locations to neurons and introducing distance-dependent inter-neuron delays. These delays were shown to significantly improve ESN task performance. However, it has so far remained unclear why distance-based delay networks (DDNs) perform better than ESNs. In this paper, we show that optimizing the inter-node delays tunes the memory capacity of the network to match the memory requirements of the task. As such, networks concentrate their memory capabilities at the points in the past which contain the most information for the task at hand. Moreover, we show that DDNs have a greater total linear memory capacity, with the same amount of non-linear processing power.
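As a minimal illustration of this delay mechanism (a sketch of the idea, not the authors' implementation), the snippet below adds per-connection integer delays to a standard echo state network update, so that each neuron reads its neighbours' states from a history buffer. The network size, delay range and weight distributions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N, max_delay = 100, 10                       # neurons, longest inter-node delay (illustrative)
W = rng.normal(0, 1 / np.sqrt(N), (N, N))    # recurrent weights
W_in = rng.normal(0, 1, N)                   # input weights for a scalar input
D = rng.integers(1, max_delay + 1, (N, N))   # per-connection delay, e.g. distance-dependent

def run_ddn(u):
    """Run the delayed ESN over input sequence u; returns the (T, N) state matrix."""
    T = len(u)
    X = np.zeros((T + max_delay, N))         # state history; first max_delay rows = zero padding
    for t in range(T):
        # neuron j reads neuron i's state from D[j, i] steps in the past
        pre = np.array([W[j] @ X[t + max_delay - D[j], np.arange(N)] for j in range(N)])
        X[t + max_delay] = np.tanh(pre + W_in * u[t])
    return X[max_delay:]

states = run_ddn(rng.uniform(-1, 1, 500))
```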

2.
Sci Rep; 13(1): 21399, 2023 Dec 04.
Article in English | MEDLINE | ID: mdl-38049625

ABSTRACT

Photonics-based computing approaches combined with wavelength division multiplexing offer a potential solution to modern data and bandwidth needs. This paper experimentally takes an important step towards wavelength division multiplexing in an integrated waveguide-based photonic reservoir computing platform by using a single set of readout weights for at least 3 ITU-T channels, efficiently scaling the data bandwidth when processing a nonlinear signal equalization task on a 28 Gbps modulated on-off keying signal. Using multiple-wavelength training, we obtain bit error rates well below the [Formula: see text] forward error correction limit at a high fiber input power of 18 dBm, which results in strong nonlinear distortion. The results of the reservoir chip are compared to those of a tapped delay line filter and clearly show that the system performs nonlinear equalization. This was achieved using only limited post-processing, which in future work can be implemented in optical hardware as well.
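The standard way to train a reservoir readout is ridge regression on recorded reservoir states. A minimal sketch of the multiple-wavelength training idea, fitting one weight set on data pooled over all channels, might look as follows; the data layout and names are assumptions for illustration, not the paper's code.

```python
import numpy as np

def train_shared_readout(states_per_channel, targets_per_channel, alpha=1e-4):
    """Fit one linear readout on data pooled over all wavelength channels.

    states_per_channel: list of (T, N) reservoir-state arrays, one per ITU channel
    targets_per_channel: list of (T,) target arrays (e.g. the transmitted bits)
    """
    X = np.vstack(states_per_channel)         # pooled states
    y = np.concatenate(targets_per_channel)   # pooled targets
    N = X.shape[1]
    # ridge regression: w = (X^T X + alpha I)^-1 X^T y
    w = np.linalg.solve(X.T @ X + alpha * np.eye(N), X.T @ y)
    return w

# usage: bits are then recovered per channel with the same weights,
# e.g. y_hat = (states_ch_k @ w) > threshold
```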

3.
Opt Express; 31(21): 34843-34854, 2023 Oct 09.
Article in English | MEDLINE | ID: mdl-37859231

ABSTRACT

Integrated photonic reservoir computing has been demonstrated to tackle a variety of problems owing to its neural-network nature. A key advantage of photonic reservoir computing over other neuromorphic paradigms is its straightforward readout system, which facilitates both rapid training and robust, fabrication-variation-insensitive photonic integrated hardware implementation for real-time processing. We present our recent development of a fully optical, coherent photonic reservoir chip integrated with an optical readout system, capitalizing on these benefits. Alongside the integrated system, we also demonstrate a weight update strategy that is suitable for the integrated optical readout hardware. Using this online training scheme, we successfully solved 3-bit header recognition and delayed XOR tasks at 20 Gbps in real time, all within the optical domain and without excess delays.

4.
Opt Express; 30(9): 15634-15647, 2022 Apr 25.
Article in English | MEDLINE | ID: mdl-35473279

ABSTRACT

Existing work on coherent photonic reservoir computing (PRC) mostly concentrates on single-wavelength solutions. In this paper, we discuss the opportunities and challenges related to exploiting the wavelength dimension in integrated photonic reservoir computing systems. We present different strategies for processing several wavelengths in parallel using the same readout. Additionally, we present multiwavelength training techniques that increase the stable operating wavelength range by at least a factor of two. We show that a single-readout photonic reservoir system can perform with ≈0% BER on several WDM channels in parallel for bit-level tasks and nonlinear signal equalization, even when taking manufacturing deviations and laser wavelength drift into account.

5.
J Chem Theory Comput; 18(3): 1672-1691, 2022 Mar 08.
Article in English | MEDLINE | ID: mdl-35171606

ABSTRACT

Explicit-electron force fields introduce electrons or electron pairs as semiclassical particles in force fields or empirical potentials, which are suitable for molecular dynamics simulations. Even though semiclassical electrons are a drastic simplification compared to a quantum-mechanical electronic wave function, they still retain a relatively detailed electronic model compared to conventional polarizable and reactive force fields. The ability of explicit-electron models to describe chemical reactions and electronic response properties has already been demonstrated, yet the description of short-range interactions for a broad range of chemical systems remains challenging. In this work, we present the electron machine learning potential (eMLP), a new explicit-electron force field in which the short-range interactions are modeled with machine learning. The electron pair particles are located at well-defined positions, derived from localized molecular orbitals or Wannier centers, naturally imposing the correct dielectric and piezoelectric behavior of the system. The eMLP is benchmarked on two newly constructed data sets: eQM7, an extension of the QM7 data set for small molecules, and a data set for crystalline β-glycine. We show that the eMLP can predict dipole moments, polarizabilities, and IR spectra of unseen molecules with high precision. Furthermore, a variety of response properties, for example, stiffness or piezoelectric constants, can be accurately reproduced.

6.
Sci Rep; 11(1): 24152, 2021 Dec 17.
Article in English | MEDLINE | ID: mdl-34921207

ABSTRACT

Nonlinear activation is a crucial building block of most machine-learning systems. However, unlike in the digital electrical domain, applying a saturating nonlinear function in a neural network in the analog optical domain is far from easy, especially in integrated systems. In this paper, we first investigate in detail the photodetector nonlinearity in the two main readout schemes: electrical readout and optical readout. On a 3-bit delayed XOR task, we show that optical readout trained with backpropagation gives the best performance. Furthermore, we propose an additional saturating nonlinearity coming from a deliberately non-ideal voltage amplifier after the detector. Compared to an all-optical nonlinearity, these two kinds of nonlinearities are extremely easy to obtain at no additional cost, since photodiodes and voltage amplifiers are present in any system. Moreover, not having to design ideal linear amplifiers could relax their design requirements. We show through simulation that for long-distance nonlinear fiber distortion compensation, using only the photodiode nonlinearity in an optical readout delivers bit error rate (BER) improvements of over three orders of magnitude. Combined with the amplifier saturation nonlinearity, we obtain a further three orders of magnitude of improvement in BER.
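Both nonlinearities discussed here are simple to state: a photodiode is a square-law detector whose photocurrent follows the optical intensity |E|², and a non-ideal voltage amplifier saturates. In the sketch below, the tanh saturation is an illustrative modelling assumption, not the device model from the paper.

```python
import numpy as np

def photodiode(E, responsivity=1.0):
    """Square-law detection: output current is proportional to intensity |E|^2."""
    return responsivity * np.abs(E) ** 2

def saturating_amplifier(v, gain=10.0, v_sat=1.0):
    """Deliberately non-ideal voltage amplifier; the tanh saturation is an
    illustrative choice, not the device model used in the paper."""
    return v_sat * np.tanh(gain * v / v_sat)

E = 0.5 * np.exp(1j * 0.3)       # a complex optical field sample
v = saturating_amplifier(photodiode(E))
```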

7.
Opt Express; 29(20): 30991-30997, 2021 Sep 27.
Article in English | MEDLINE | ID: mdl-34615201

ABSTRACT

Nonlinearity mitigation in optical fiber networks is typically handled by electronic Digital Signal Processing (DSP) chips. Such DSP chips are costly, power-hungry and can introduce high latencies. Therefore, optical techniques are being investigated that are more efficient in both power consumption and processing cost. One such machine-learning technique is optical reservoir computing, in which a photonic chip can be trained on certain tasks, with the potential advantages of higher speed, reduced power consumption and lower latency compared to its electronic counterparts. In this paper, experimental results are presented in which nonlinear distortions in a 32 Gbps OOK signal are mitigated to below the 0.2 × 10⁻³ FEC limit using a photonic reservoir. Furthermore, the results of the reservoir chip are compared to those of a tapped delay line filter, clearly showing that the system performs nonlinear equalisation.
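For context, the tapped delay line filter used as a baseline is a linear FIR structure trained by least squares; being linear in the delayed samples, it cannot undo nonlinear distortion, which is what the comparison demonstrates. A minimal sketch with illustrative names:

```python
import numpy as np

def tapped_delay_line(x, n_taps):
    """Build the (T, n_taps) matrix of delayed copies of signal x."""
    T = len(x)
    X = np.zeros((T, n_taps))
    for k in range(n_taps):
        X[k:, k] = x[:T - k]
    return X

def train_fir_equalizer(received, transmitted, n_taps=15, alpha=1e-6):
    """Least-squares (ridge-regularized) fit of the tap weights."""
    X = tapped_delay_line(received, n_taps)
    w = np.linalg.solve(X.T @ X + alpha * np.eye(n_taps), X.T @ transmitted)
    return w   # equalized signal: tapped_delay_line(received, n_taps) @ w
```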

8.
Sci Rep; 11(1): 3102, 2021 Feb 04.
Article in English | MEDLINE | ID: mdl-33542496

ABSTRACT

Using optical hardware for neuromorphic computing has recently become increasingly popular due to its efficient high-speed data processing capabilities and low power consumption. However, some obstacles remain on the way to a completely optical neuromorphic computer. One of them is that, depending on the technology used, optical weighting elements may not offer the same resolution as their counterparts in the electrical domain. Moreover, noise in the weighting elements is an important consideration as well. In this article, we investigate a new method for improving the performance of optical weighting components, even in the presence of noise and at very low resolution. Our method uses an iterative training procedure to select weight connections that are more robust to quantization and noise. As a result, even with only 8 to 32 levels of resolution in noisy weighting environments, the method can outperform both nearest-rounding and random-rounding low-resolution weighting by up to several orders of magnitude in terms of bit error rate, and can deliver performance very close to that of full-resolution weighting elements.
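The abstract does not spell out the training procedure, so the following is a generic illustration of one quantization-aware scheme in this spirit (an assumption, not the authors' algorithm): iteratively freeze the readout weights that round to a quantization level with the smallest error, and refit the remaining free weights.

```python
import numpy as np

def iterative_quantized_readout(X, y, levels=16, alpha=1e-4):
    """Illustrative quantization-aware readout training (hypothetical scheme):
    per round, freeze the weights closest to a quantization level and refit
    the remaining free weights by ridge regression on the residual."""
    N = X.shape[1]
    w = np.linalg.solve(X.T @ X + alpha * np.eye(N), X.T @ y)   # full-precision start
    grid = np.linspace(w.min(), w.max(), levels)                # fixed quantization grid
    free = np.ones(N, dtype=bool)
    while free.any():
        q = grid[np.abs(w[:, None] - grid[None, :]).argmin(axis=1)]
        err = np.abs(w - q)
        # freeze half of the free weights: those that round with least error
        k = max(1, free.sum() // 2)
        idx = np.where(free)[0][np.argsort(err[free])[:k]]
        w[idx], free[idx] = q[idx], False
        if free.any():                                          # refit the rest
            Xf = X[:, free]
            r = y - X[:, ~free] @ w[~free]
            w[free] = np.linalg.solve(Xf.T @ Xf + alpha * np.eye(free.sum()), Xf.T @ r)
    return w
```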

9.
Sci Rep; 11(1): 2701, 2021 Jan 29.
Article in English | MEDLINE | ID: mdl-33514814

ABSTRACT

Photorefractive materials exhibit an interesting plasticity under the influence of an optical field. By extending the finite-difference time-domain method to include the photorefractive effect, we explore how this property can be exploited in the context of neuromorphic computing for telecom applications. By first priming the photorefractive material with a random bit stream, the material reorganizes itself to better recognize simple patterns in the stream. We demonstrate this by simulating a typical reservoir computing setup, which, after this initial priming step, gets a significant performance boost when computing the XOR of two consecutive bits in the stream.

10.
Sci Rep; 10(1): 20724, 2020 Nov 26.
Article in English | MEDLINE | ID: mdl-33244129

ABSTRACT

Machine learning offers promising solutions for high-throughput single-particle analysis in label-free imaging microflow cytometry. However, the throughput of online operations such as cell sorting is often limited by the large computational cost of the image analysis, while offline operations may require the storage of an exceedingly large amount of data. Moreover, the training of machine learning systems can easily be biased by slight drifts of the measurement conditions, giving rise to a significant but difficult-to-detect degradation of the learned operations. We propose a simple and versatile machine learning approach to perform microparticle classification at an extremely low computational cost, showing good generalization over large variations in particle position. We present proof-of-principle classification of interference patterns projected by flowing transparent PMMA microbeads with diameters of [Formula: see text] and [Formula: see text]. To this end, a simple, cheap and compact label-free microflow cytometer is employed. We also discuss in detail the detection and prevention of machine learning bias in training and testing due to slight drifts of the measurement conditions. Moreover, we investigate the implications of modifying the projected particle pattern by means of a diffraction grating, in the context of optical extreme learning machine implementations.
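An extreme learning machine, mentioned at the end of the abstract, consists of a fixed random nonlinear projection followed by a trained linear readout; in optical implementations, scattering or diffraction plays the role of the random projection. A generic sketch with synthetic stand-in data (not the paper's pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_train(X, y, n_hidden=256, alpha=1e-3):
    """Extreme learning machine: fixed random projection + trained linear readout."""
    W = rng.normal(size=(X.shape[1], n_hidden))      # random, never trained
    H = np.tanh(X @ W)                               # random nonlinear features
    beta = np.linalg.solve(H.T @ H + alpha * np.eye(n_hidden), H.T @ y)
    return W, beta

def elm_predict(X, W, beta):
    return np.tanh(X @ W) @ beta

# demo with synthetic stand-ins: X = flattened patterns, y = +/-1 size labels
X = rng.normal(size=(200, 64))
y = np.sign(X[:, 0] + 0.1 * rng.normal(size=200))
W, beta = elm_train(X, y)
pred = np.sign(elm_predict(X, W, beta))
```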

11.
Sci Rep; 10(1): 14451, 2020 Sep 02.
Article in English | MEDLINE | ID: mdl-32879360

ABSTRACT

Physical reservoir computing approaches have gained increased attention in recent years due to their potential for low-energy, high-performance computing. Despite recent successes, there are bounds to what one can achieve simply by making physical reservoirs larger. Therefore, we argue that a switch from single-reservoir computing to multi-reservoir and even deep physical reservoir computing is desirable. Given that error backpropagation cannot be used directly to train a large class of multi-reservoir systems, we propose an alternative framework that combines the power of backpropagation with the speed and simplicity of classic training algorithms. In this work, we report the findings of an experiment conducted to evaluate the general feasibility of our approach. We train a network of three echo state networks to perform the well-known NARMA-10 task, using intermediate targets derived through backpropagation. Our results indicate that the proposed method is well suited to training multi-reservoir systems in an efficient way.
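The NARMA-10 task referenced here is a standard benchmark with a known defining recurrence, y(t+1) = 0.3 y(t) + 0.05 y(t) Σ_{i=0..9} y(t-i) + 1.5 u(t-9) u(t) + 0.1, with inputs u(t) drawn uniformly from [0, 0.5]. A generator sketch:

```python
import numpy as np

def narma10(T, seed=0):
    """Generate the standard NARMA-10 benchmark input/output series."""
    rng = np.random.default_rng(seed)
    u = rng.uniform(0, 0.5, T)
    y = np.zeros(T)
    for t in range(9, T - 1):
        y[t + 1] = (0.3 * y[t]
                    + 0.05 * y[t] * y[t - 9:t + 1].sum()   # sum over the last 10 outputs
                    + 1.5 * u[t - 9] * u[t]
                    + 0.1)
    return u, y

u, y = narma10(2000)
```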

12.
J Interv Cardiol; 2020: 9843275, 2020.
Article in English | MEDLINE | ID: mdl-32549802

ABSTRACT

Anatomic landmark detection is crucial during preoperative planning of transcatheter aortic valve implantation (TAVI) to select the proper device size and assess the risk of complications. The detection is currently a time-consuming manual process that is influenced by the image quality and subject to operator variability. In this work, we propose a novel automatic method to detect the relevant aortic landmarks from MDCT images using deep learning techniques. We trained three convolutional neural networks (CNNs) with 344 multidetector computed tomography (MDCT) acquisitions to detect five anatomical landmarks relevant for TAVI planning: the three basal attachment points of the aortic valve leaflets and the left and right coronary ostia. The detection strategy used these three CNN models to analyse a single MDCT image and yield three segmentation volumes as output. These segmentation volumes were averaged into one final segmentation volume, and the final predicted landmarks were obtained in a postprocessing step. Finally, we constructed the aortic annular plane, defined by the three predicted hinge points, and measured the distances from this plane to the predicted coronary ostia (i.e., the coronary heights). The methodology was validated on 100 patients. The automatic method detected all the landmarks with high accuracy: the median distance between ground truth and predictions was lower than the interobserver variation (1.5 mm [1.1-2.1] vs. 2.0 mm [1.3-2.8], paired difference -0.5 ± 1.3 mm, p < 0.001). Furthermore, a high correlation was observed between predicted and manually measured coronary heights (R² = 0.8 for both). The image analysis time per patient was below one second. The proposed method is accurate, fast, and reproducible. Embedding this deep-learning-based tool in the preoperative planning routine may have an impact on TAVI workflows by reducing time and cost and improving accuracy.
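The final geometric step described above, constructing the annular plane from the three hinge points and measuring the coronary heights, is plain 3-D geometry. A sketch of that computation (an illustrative helper, not the authors' code):

```python
import numpy as np

def coronary_height(hinge_a, hinge_b, hinge_c, ostium):
    """Distance from a coronary ostium to the aortic annular plane,
    the plane being defined by the three predicted hinge points.
    Coordinates are 3-D points in mm."""
    a, b, c, p = map(np.asarray, (hinge_a, hinge_b, hinge_c, ostium))
    n = np.cross(b - a, c - a)                # plane normal from two in-plane vectors
    n = n / np.linalg.norm(n)
    return abs(np.dot(p - a, n))              # point-to-plane distance

# e.g. coronary_height((0, 0, 0), (20, 0, 0), (0, 20, 0), (5, 5, 14)) -> 14.0
```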


Subject(s)
Aortic Valve Stenosis/diagnostic imaging, Aortic Valve Stenosis/surgery, Aortic Valve/diagnostic imaging, Multidetector Computed Tomography, Transcatheter Aortic Valve Replacement, Aged, Aged, 80 and over, Aortic Valve/surgery, Female, Heart Valve Prosthesis, Humans, Male, Observer Variation, Reproducibility of Results, Retrospective Studies
13.
J Interv Cardiol; 2019: 3591314, 2019.
Article in English | MEDLINE | ID: mdl-31777469

ABSTRACT

The number of transcatheter aortic valve implantation (TAVI) procedures is expected to increase significantly in the coming years. Improving efficiency will become essential for experienced operators performing large TAVI volumes, while new operators will require training and may benefit from accurate support. In this work, we present a fast deep learning method that automatically predicts the aortic annulus perimeter and area from aortic annular plane images. We propose a method combining two deep convolutional neural networks followed by a postprocessing step. The models were trained on 355 patients using modern deep learning techniques, and the method was evaluated on another 118 patients. The method was validated against an interoperator variability study of the same 118 patients. The differences between the manually obtained aortic annulus measurements and the automatic predictions were similar to the differences between two independent observers (paired difference of 3.3 ± 16.8 mm² vs. 1.3 ± 21.1 mm² for the area, and 0.6 ± 1.7 mm vs. 0.2 ± 2.5 mm for the perimeter). The area and perimeter were used to retrospectively retrieve the suggested prosthesis sizes for the Edwards Sapien 3 and the Medtronic Evolut devices. The automatically obtained device size selections agreed well with the device sizes selected by operator 1. The total analysis time from aortic annular plane to prosthesis size was below one second. This study showed that automated TAVI device size selection using the proposed method is fast, accurate, and reproducible. Comparison with the interobserver variability demonstrated the reliability of the strategy, and embedding this deep-learning-based tool in the preoperative planning routine has the potential to increase efficiency while ensuring accuracy.


Subject(s)
Aortic Valve/diagnostic imaging, Heart Valve Prosthesis, Transcatheter Aortic Valve Replacement/instrumentation, Aged, 80 and over, Aortic Valve Stenosis/surgery, Deep Learning, Female, Humans, Male, Multidetector Computed Tomography, Neural Networks, Computer, Prosthesis Design, Retrospective Studies
14.
Front Neurorobot; 13: 71, 2019.
Article in English | MEDLINE | ID: mdl-31555118

ABSTRACT

In traditional robotics, model-based controllers are usually needed to bring a robotic plant to the next desired state, but they run into critical issues when the dimensionality of the control problem increases and disturbances from the external environment affect the system behavior, particularly during locomotion tasks. It is generally accepted that the motion control of quadruped animals is performed by neural circuits located in the spinal cord that act as a Central Pattern Generator (CPG) and can generate appropriate locomotion patterns; this is thought to be the result of evolutionary processes that have optimized this network. On top of this, fine motor control is learned during the lifetime of the animal thanks to the plastic connections of the cerebellum, which provide descending corrective inputs. This research aims to understand and identify the possible advantages of using learning during an evolution-inspired optimization for finding the best locomotion patterns in a robotic locomotion task. Accordingly, we propose a comparative study between two bio-inspired control architectures for quadruped legged robots, in which learning takes place either during the evolutionary search or only after it. The evolutionary process is carried out in a simulated environment on a quadruped legged robot. To verify the possibility of overcoming the reality gap, the performance of both systems has been analyzed while changing the robot dynamics and its interaction with the external environment. Results show better performance metrics for the robotic agent whose locomotion was discovered by applying the adaptive module during the evolutionary exploration of locomotion trajectories. Even when the motion dynamics and the interaction with the environment are altered, the locomotion patterns found on the learning robotic system are more stable, both in the joint space and in the task space.

15.
Sci Rep; 9(1): 5918, 2019 Apr 11.
Article in English | MEDLINE | ID: mdl-30976036

ABSTRACT

We propose a new method for performing photonic circuit simulations based on the scatter-matrix formalism. We leverage the popular deep-learning framework PyTorch to reimagine photonic circuits as sparsely connected complex-valued neural networks. This allows for highly parallel simulation of large photonic circuits on graphical processing units in the time and frequency domains, while all parameters of each individual component can easily be optimized with well-established machine learning algorithms such as backpropagation.
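In the same spirit, a toy PyTorch example: a single 2x2 directional coupler written as a complex-valued scatter matrix whose coupling angle is optimized by backpropagation. The component model and optimization target are illustrative assumptions, not a circuit from the paper.

```python
import torch

# A 2x2 directional coupler as a differentiable scatter matrix: the component
# parameter (coupling angle) is trainable by backpropagation.
theta = torch.tensor(0.2, requires_grad=True)

def coupler(inputs, theta):
    t, k = torch.cos(theta), torch.sin(theta)
    S = torch.stack([torch.stack([t + 0j, 1j * k]),
                     torch.stack([1j * k, t + 0j])])   # unitary scatter matrix
    return S @ inputs

src = torch.tensor([1.0 + 0j, 0.0 + 0j])               # light enters port 0
target = torch.tensor([0.5, 0.5])                      # desired 50/50 power split

opt = torch.optim.Adam([theta], lr=0.1)
for _ in range(100):
    opt.zero_grad()
    out = coupler(src, theta)
    loss = ((out.abs() ** 2 - target) ** 2).sum()      # power mismatch (real-valued)
    loss.backward()
    opt.step()
```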

16.
Front Neurorobot; 13: 9, 2019.
Article in English | MEDLINE | ID: mdl-30983987

ABSTRACT

Designing controllers for compliant, underactuated robots is challenging and usually requires a learning procedure. Learning robotic control in simulated environments can speed up the process whilst lowering the risk of physical damage. Since perfect simulations are infeasible, several techniques are used to improve transfer to the real world. Here, we investigate the impact of randomizing body parameters during the learning of central pattern generator (CPG) controllers in simulation. The controllers are evaluated on our physical quadruped robot. We find that body randomization in simulation increases the chances of finding gaits that function well on the real robot.

17.
Front Neurorobot; 13: 6, 2019.
Article in English | MEDLINE | ID: mdl-30899218

ABSTRACT

An important field in robotics is the optimization of controllers. Currently, robots are often treated as a black box in this optimization process, which is why derivative-free optimization methods such as evolutionary algorithms or reinforcement learning are omnipresent. When gradient-based methods are used, models are kept small or rely on finite-difference approximations of the Jacobian, an approach that quickly grows expensive with increasing numbers of parameters, such as those found in deep learning. We propose the implementation of a modern physics engine that can differentiate with respect to control parameters. This engine is implemented for both CPU and GPU. Firstly, this paper shows how such an engine speeds up the optimization process, even for small problems. Furthermore, it explains why this is an alternative to deep Q-learning for applying deep learning in robotics. Finally, we argue that this is a big step for deep learning in robotics, as it opens up new possibilities to optimize robots, both in hardware and in software.
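The core idea, obtaining gradients of a simulation outcome with respect to control parameters directly from automatic differentiation, can be demonstrated with a toy unrolled simulation in PyTorch. This is a sketch under simplified point-mass dynamics, not the engine described in the paper.

```python
import torch

# Toy differentiable "physics engine": the gradient of the final position with
# respect to the control parameter comes straight from autograd.
force = torch.tensor([0.5], requires_grad=True)     # control parameter to optimize
dt, mass, target = 0.01, 1.0, torch.tensor([2.0])

opt = torch.optim.Adam([force], lr=0.05)
for _ in range(200):
    opt.zero_grad()
    pos = torch.zeros(1)
    vel = torch.zeros(1)
    for _ in range(100):                            # unrolled Euler integration
        acc = force / mass - 0.1 * vel              # applied force with damping
        vel = vel + dt * acc
        pos = pos + dt * vel
    loss = ((pos - target) ** 2).sum()              # final-position error
    loss.backward()
    opt.step()
```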

18.
IEEE Trans Neural Netw Learn Syst; 30(7): 1943-1953, 2019 Jul.
Article in English | MEDLINE | ID: mdl-30387749

ABSTRACT

As Moore's law comes to an end, neuromorphic approaches to computing are on the rise. One of these, passive photonic reservoir computing, is a strong candidate for computing at high bitrates (>10 Gb/s) with low energy consumption. Currently, though, both benefits are limited by the necessity to perform training and readout operations in the electrical domain. Efforts are therefore underway in the photonic community to design an integrated optical readout, which allows all operations to be performed in the optical domain. In addition to the technological challenge of designing such a readout, new algorithms have to be designed to train it. Foremost, suitable algorithms need to be able to deal with the fact that the actual on-chip reservoir states are not directly observable. In this paper, we investigate several options for such a training algorithm and propose a solution in which the complex states of the reservoir can be observed by appropriately setting the readout weights while iterating over a predefined input sequence. We perform numerical simulations to compare our method with an ideal baseline requiring full observability, as well as with an established black-box optimization approach (CMA-ES).
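To see why chosen weight settings can reveal complex states through a power detector measuring |w·x|², note that three interferometric measurements per node recover each state up to a global phase. The sketch below is an illustrative reconstruction consistent with that principle, not necessarily the exact scheme proposed in the paper.

```python
import numpy as np

def recover_states(measure, N):
    """Recover complex reservoir states (up to a global phase) from power-only
    measurements taken while replaying a fixed input sequence with chosen
    readout weights. `measure(w)` returns |w . x|^2 for weight vector w."""
    e = np.eye(N)
    mag2 = np.array([measure(e[k]) for k in range(N)])    # |x_k|^2
    x = np.zeros(N, dtype=complex)
    x[0] = np.sqrt(mag2[0])                               # fixes the global phase
    for k in range(1, N):
        p_sum = measure(e[0] + e[k])                      # |x_0 + x_k|^2
        p_rot = measure(e[0] + 1j * e[k])                 # |x_0 + i x_k|^2
        re = (p_sum - mag2[0] - mag2[k]) / 2              # Re(conj(x_0) x_k)
        im = -(p_rot - mag2[0] - mag2[k]) / 2             # Im(conj(x_0) x_k)
        x[k] = (re + 1j * im) / x[0] if x[0] != 0 else 0
    return x

# demo with a hidden random complex state vector
rng = np.random.default_rng(1)
x_true = rng.normal(size=8) + 1j * rng.normal(size=8)
x_hat = recover_states(lambda w: abs(np.dot(w, x_true)) ** 2, 8)
# x_hat matches x_true up to one global phase factor
```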

19.
Opt Express; 26(7): 7955-7964, 2018 Apr 02.
Article in English | MEDLINE | ID: mdl-29715770

ABSTRACT

We propose a new design for a passive photonic reservoir computer on a silicon photonics chip that can be used in the context of optical communication applications, and study it through detailed numerical simulations. The design consists of a photonic crystal cavity with a quarter-stadium shape, which is known to foster interesting mixing dynamics. These mixing properties turn out to be very useful for memory-dependent optical signal processing tasks, such as header recognition. The proposed ultra-compact photonic crystal cavity exhibits a memory of up to 6 bits, while simultaneously accepting bitrates across a wide region of operation. Moreover, because of the inherently low losses in a high-Q photonic crystal cavity, the proposed design is very power efficient.

20.
Neuroimage Clin; 17: 10-15, 2018.
Article in English | MEDLINE | ID: mdl-29527470

ABSTRACT

Objective: To diagnose and lateralize temporal lobe epilepsy (TLE) by building a classification system that uses directed functional connectivity patterns estimated during EEG periods without visible pathological activity. Methods: Resting-state high-density EEG recordings from 20 left TLE patients, 20 right TLE patients and 35 healthy controls were used. Epochs without interictal spikes were selected. The cortical source activity was obtained for 82 regions of interest, and whole-brain directed functional connectivity was estimated in the theta, alpha and beta frequency bands. These connectivity values were then used to build a classification system based on two two-class Random Forest classifiers: TLE vs. healthy controls, and left vs. right TLE. Feature selection and classifier training were done in a leave-one-out procedure to compute the mean classification accuracy. Results: The diagnosis and lateralization classifiers achieved high accuracy (90.7% and 90.0%, respectively), sensitivity (95.0% and 90.0%, respectively) and specificity (85.7% and 90.0%, respectively). The most important features for diagnosis were the outflows from the left and right medial temporal lobes, and for lateralization the right anterior cingulate cortex. The interaction between features was important for achieving correct classification. Significance: This is the first study to automatically diagnose and lateralize TLE based on EEG. The high accuracy achieved demonstrates the potential of directed functional connectivity estimated from EEG periods without visible pathological activity for helping in the diagnosis and lateralization of TLE.
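A skeleton of such a classification pipeline in scikit-learn, using synthetic stand-in connectivity features and labels (all data here are placeholders):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_score

# Stand-in data: rows = subjects, columns = directed functional
# connectivity features; labels 0 = healthy control, 1 = TLE.
rng = np.random.default_rng(0)
X = rng.normal(size=(75, 200))
y = rng.integers(0, 2, 75)

clf = RandomForestClassifier(n_estimators=500, random_state=0)
acc = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()
print(f"leave-one-out accuracy: {acc:.1%}")
# note: as in the study, any feature selection must happen inside the
# leave-one-out loop (e.g. via a Pipeline) to avoid optimistic bias
```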


Subject(s)
Brain Waves/physiology, Electronic Data Processing/methods, Epilepsy, Temporal Lobe/diagnosis, Epilepsy, Temporal Lobe/physiopathology, Area Under Curve, Electroencephalography, Female, Follow-Up Studies, Functional Laterality/physiology, Humans, Machine Learning, Male, Retrospective Studies