Results 1 - 20 of 38
1.
Opt Express ; 31(21): 34843-34854, 2023 Oct 09.
Article in English | MEDLINE | ID: mdl-37859231

ABSTRACT

Integrated photonic reservoir computing has been demonstrated to tackle a variety of problems thanks to its neural-network nature. A key advantage of photonic reservoir computing over other neuromorphic paradigms is its straightforward readout system, which facilitates both rapid training and robust, fabrication-variation-insensitive photonic integrated hardware implementations for real-time processing. Capitalizing on these benefits, we present our recent development of a fully optical, coherent photonic reservoir chip integrated with an optical readout system. Alongside the integrated system, we also demonstrate a weight update strategy that is suitable for the integrated optical readout hardware. Using this online training scheme, we successfully solved 3-bit header recognition and delayed XOR tasks at 20 Gbps in real time, all within the optical domain and without excess delays.
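
As an illustration of what such an online weight-update scheme can look like, here is a minimal NumPy sketch that trains a readout with an LMS-style update on a delayed-XOR bit task; the random tanh network stands in for the photonic reservoir and all parameters are assumptions.

```python
# Minimal sketch: online (LMS-style) readout training on a delayed-XOR task.
# The random network below is a stand-in for the photonic chip; all parameters
# (reservoir size, scaling, learning rate) are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
N, T = 50, 20000                      # reservoir size, number of bits
bits = rng.integers(0, 2, T)
target = bits ^ np.roll(bits, 1)      # delayed XOR: y[t] = x[t] XOR x[t-1]

W_in = rng.normal(0, 0.5, N)
W_res = rng.normal(0, 1.0, (N, N))
W_res *= 0.9 / max(abs(np.linalg.eigvals(W_res)))   # echo-state scaling

w = np.zeros(N + 1)                   # readout weights (+ bias)
mu = 1e-3                             # LMS learning rate
state = np.zeros(N)
errors = 0
for t in range(T):
    state = np.tanh(W_res @ state + W_in * bits[t])
    x = np.concatenate(([1.0], state))
    y = w @ x
    w += mu * (target[t] - y) * x     # online weight update
    if t > 1000:                      # skip the initial transient
        errors += int((y > 0.5) != target[t])
print("error rate:", errors / (T - 1001))
```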

2.
Opt Express ; 30(9): 15634-15647, 2022 Apr 25.
Article in English | MEDLINE | ID: mdl-35473279

ABSTRACT

Existing work on coherent photonic reservoir computing (PRC) mostly concentrates on single-wavelength solutions. In this paper, we discuss the opportunities and challenges related to exploiting the wavelength dimension in integrated photonic reservoir computing systems. Different strategies are presented to process several wavelengths in parallel using the same readout. Additionally, we present multiwavelength training techniques that increase the stable operating wavelength range by at least a factor of two. It is shown that a single-readout photonic reservoir system can perform with ≈0% BER on several WDM channels in parallel for bit-level tasks and nonlinear signal equalization, even when taking manufacturing deviations and laser wavelength drift into account.
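
A minimal NumPy sketch of the training principle: reservoir responses recorded at several (here synthetic) wavelength channels are stacked, and a single ridge-regression readout is trained on all of them jointly. The toy reservoir, the detuning model and the bit-level task are assumptions, not the paper's system.

```python
# Sketch: one readout trained jointly on reservoir responses from several
# wavelength channels. The per-channel responses are synthetic stand-ins; only
# the training principle (stack, then a single ridge regression) is illustrated.
import numpy as np

rng = np.random.default_rng(1)
N, T, n_wl = 40, 5000, 3
bits = rng.integers(0, 2, T).astype(float)
target = np.roll(bits, 1) * bits                     # toy bit-level task (AND with previous bit)

W0 = rng.normal(0, 1, (N, N)) * 0.1
w_in = rng.normal(0, 1, N)

def reservoir_response(u, detune):
    """Toy stand-in: the same network, slightly perturbed by wavelength detuning."""
    W = W0 * (1 + detune)
    X, s = np.zeros((len(u), N)), np.zeros(N)
    for t, ut in enumerate(u):
        s = 0.7 * s + 0.3 * np.tanh(W @ s + w_in * ut + detune)
        X[t] = s
    return X

# Stack states and targets from all wavelength channels, then train once.
X_all = np.vstack([reservoir_response(bits, 0.05 * k) for k in range(n_wl)])
y_all = np.tile(target, n_wl)
w = np.linalg.solve(X_all.T @ X_all + 1e-3 * np.eye(N), X_all.T @ y_all)

for k in range(n_wl):
    Xk = X_all[k * T:(k + 1) * T]
    err = np.mean((Xk @ w > 0.5) != (target > 0.5))
    print(f"channel {k}: error rate {err:.4f}")
```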

3.
Opt Express ; 29(20): 30991-30997, 2021 Sep 27.
Article in English | MEDLINE | ID: mdl-34615201

ABSTRACT

Nonlinearity mitigation in optical fiber networks is typically handled by electronic Digital Signal Processing (DSP) chips. Such DSP chips are costly, power-hungry and can introduce high latencies. Therefore, optical techniques are being investigated that are more efficient in both power consumption and processing cost. One such machine learning technique is optical reservoir computing, in which a photonic chip can be trained on certain tasks, with the potential advantages of higher speed, reduced power consumption and lower latency compared to its electronic counterparts. In this paper, experimental results are presented in which nonlinear distortions in a 32 Gbps OOK signal are mitigated to below the 0.2 × 10⁻³ FEC limit using a photonic reservoir. Furthermore, the results of the reservoir chip are compared to a tapped delay line filter to clearly show that the system performs nonlinear equalisation.
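
To illustrate the comparison made here, a small sketch contrasting a purely linear tapped-delay-line equalizer with the same delay line augmented by nonlinear features, on a toy distorted OOK stream. The channel model and all sizes are assumptions, not the experimental fiber link.

```python
# Sketch: linear tapped-delay-line equalizer vs. a simple nonlinear feature
# expansion on a toy nonlinearly distorted OOK stream. The distortion model and
# all sizes are illustrative assumptions, not the fiber-channel model.
import numpy as np

rng = np.random.default_rng(2)
T, taps = 50000, 7
bits = rng.integers(0, 2, T).astype(float)

# Toy channel: linear ISI followed by a memoryless cubic nonlinearity + noise.
h = np.array([0.1, 0.7, 0.2])
lin = np.convolve(bits, h, mode="same")
rx = lin + 0.3 * lin**3 + 0.05 * rng.normal(size=T)

def delay_embed(x, n):
    """Stack n delayed copies of x as columns (a tapped delay line)."""
    return np.column_stack([np.roll(x, d) for d in range(n)])

X_lin = delay_embed(rx, taps)
X_nl = np.column_stack([X_lin, X_lin**2, X_lin**3])   # simple nonlinear features

def ber(X, y, lam=1e-4):
    Xb = np.column_stack([np.ones(len(X)), X])
    w = np.linalg.solve(Xb.T @ Xb + lam * np.eye(Xb.shape[1]), Xb.T @ y)
    return np.mean((Xb @ w > 0.5) != (y > 0.5))

print("tapped delay line (linear) BER:", ber(X_lin, bits))
print("with nonlinear features    BER:", ber(X_nl, bits))
```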

4.
J Interv Cardiol ; 2020: 9843275, 2020.
Article in English | MEDLINE | ID: mdl-32549802

ABSTRACT

Anatomic landmark detection is crucial during preoperative planning of transcatheter aortic valve implantation (TAVI) to select the proper device size and assess the risk of complications. The detection is currently a time-consuming manual process influenced by image quality and subject to operator variability. In this work, we propose a novel automatic method to detect the relevant aortic landmarks from multidetector computed tomography (MDCT) images using deep learning techniques. We trained three convolutional neural networks (CNNs) with 344 MDCT acquisitions to detect five anatomical landmarks relevant for TAVI planning: the three basal attachment points of the aortic valve leaflets and the left and right coronary ostia. The detection strategy used these three CNN models to analyse a single MDCT image and yield three segmentation volumes as output. These segmentation volumes were averaged into one final segmentation volume, from which the predicted landmarks were obtained in a postprocessing step. Finally, we constructed the aortic annular plane, defined by the three predicted hinge points, and measured the distances from this plane to the predicted coronary ostia (i.e., the coronary heights). The methodology was validated on 100 patients. The automatic method detected all landmarks with high accuracy: the median distance between ground truth and prediction was lower than the interobserver variation (1.5 mm [1.1-2.1] vs. 2.0 mm [1.3-2.8]; paired difference -0.5 ± 1.3 mm, p < 0.001). Furthermore, a high correlation was observed between the predicted and manually measured coronary heights (R² = 0.8 for both). The image analysis time per patient was below one second. The proposed method is accurate, fast, and reproducible. Embedding this deep-learning-based tool in the preoperative planning routine may reduce time and cost and improve accuracy in TAVI workflows.
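
The final geometric step described above (annular plane from the three hinge points, then coronary height as a point-to-plane distance) can be sketched in a few lines; the coordinates below are made up for illustration.

```python
# Sketch of the final geometric step: fit the annular plane through the three
# predicted hinge points and measure the distance from each coronary ostium to
# that plane. All coordinates below are hypothetical.
import numpy as np

hinge_points = np.array([[12.0, 34.0, 50.0],      # hypothetical hinge points (mm)
                         [20.0, 30.0, 52.0],
                         [16.0, 40.0, 54.0]])
ostia = {"left":  np.array([15.0, 35.0, 63.0]),   # hypothetical coronary ostia (mm)
         "right": np.array([18.0, 31.0, 61.0])}

# Plane through the three hinge points: normal = cross product of two edges.
p0, p1, p2 = hinge_points
normal = np.cross(p1 - p0, p2 - p0)
normal /= np.linalg.norm(normal)

for name, p in ostia.items():
    height = abs(np.dot(p - p0, normal))          # point-to-plane distance (mm)
    print(f"{name} coronary height: {height:.1f} mm")
```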


Subjects
Aortic Valve Stenosis/diagnostic imaging; Aortic Valve Stenosis/surgery; Aortic Valve/diagnostic imaging; Multidetector Computed Tomography; Transcatheter Aortic Valve Replacement; Aged; Aged, 80 and over; Aortic Valve/surgery; Female; Heart Valve Prosthesis; Humans; Male; Observer Variation; Reproducibility of Results; Retrospective Studies
5.
J Interv Cardiol ; 2019: 3591314, 2019.
Article in English | MEDLINE | ID: mdl-31777469

ABSTRACT

The number of transcatheter aortic valve implantation (TAVI) procedures is expected to increase significantly in the coming years. Improving efficiency will become essential for experienced operators performing large TAVI volumes, while new operators will require training and may benefit from accurate support. In this work, we present a fast deep learning method that automatically predicts the aortic annulus perimeter and area from aortic annular plane images. We propose a method combining two deep convolutional neural networks followed by a postprocessing step. The models were trained on 355 patients using modern deep learning techniques, and the method was evaluated on another 118 patients. The method was validated against an interoperator variability study of the same 118 patients. The differences between the manually obtained aortic annulus measurements and the automatic predictions were similar to the differences between two independent observers (paired difference of 3.3 ± 16.8 mm² vs. 1.3 ± 21.1 mm² for the area and 0.6 ± 1.7 mm vs. 0.2 ± 2.5 mm for the perimeter). The area and perimeter were used to retrospectively retrieve the suggested prosthesis sizes for the Edwards Sapien 3 and the Medtronic Evolut devices. The automatically obtained device size selections accorded well with the device sizes selected by operator 1. The total analysis time from aortic annular plane to prosthesis size was below one second. This study showed that automated TAVI device size selection using the proposed method is fast, accurate, and reproducible. The comparison with interobserver variability demonstrates the reliability of the strategy, and embedding this deep-learning-based tool in the preoperative planning routine has the potential to increase efficiency while ensuring accuracy.
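
The sizing step amounts to a lookup from the predicted measurement to a suggested device size. A sketch of such a lookup follows; the thresholds and size labels are placeholders and deliberately do not reproduce the Edwards Sapien 3 or Medtronic Evolut sizing charts.

```python
# Sketch of the sizing step: map a predicted annulus measurement to a suggested
# device size via a lookup table. The thresholds below are placeholders only
# and do NOT reproduce any manufacturer's sizing chart.
from bisect import bisect_right

# (upper measurement bound in mm^2, suggested size label) -- hypothetical values
AREA_TABLE = [(380.0, "size A"), (450.0, "size B"), (550.0, "size C"), (700.0, "size D")]

def suggest_size(area_mm2, table=AREA_TABLE):
    bounds = [b for b, _ in table]
    i = bisect_right(bounds, area_mm2)
    if i >= len(table):
        raise ValueError("measurement outside the covered sizing range")
    return table[i][1]

print(suggest_size(430.0))   # -> "size B" with these placeholder thresholds
```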


Subjects
Aortic Valve/diagnostic imaging; Heart Valve Prosthesis; Transcatheter Aortic Valve Replacement/instrumentation; Aged, 80 and over; Aortic Valve Stenosis/surgery; Deep Learning; Female; Humans; Male; Multidetector Computed Tomography; Neural Networks, Computer; Prosthesis Design; Retrospective Studies
6.
Opt Express ; 26(7): 7955-7964, 2018 Apr 02.
Article in English | MEDLINE | ID: mdl-29715770

ABSTRACT

We propose a new design for a passive photonic reservoir computer on a silicon photonics chip that can be used in the context of optical communication applications, and study it through detailed numerical simulations. The design consists of a photonic crystal cavity with a quarter-stadium shape, which is known to foster interesting mixing dynamics. These mixing properties turn out to be very useful for memory-dependent optical signal processing tasks, such as header recognition. The proposed ultra-compact photonic crystal cavity exhibits a memory of up to 6 bits, while simultaneously accepting bitrates across a wide region of operation. Moreover, because of the inherently low losses in a high-Q photonic crystal cavity, the proposed design is very power efficient.

7.
Opt Express ; 25(24): 30526-30538, 2017 Nov 27.
Article in English | MEDLINE | ID: mdl-29221080

ABSTRACT

The computational power required to classify cell holograms is a major limit to the throughput of label-free cell sorting based on digital holographic microscopy. In this work, a simple integrated photonic stage comprising a collection of silica pillar scatterers is proposed as an effective nonlinear mixing interface between the light scattered by a cell and an image sensor. The light processing provided by the photonic stage allows for the use of a simple linear classifier implemented in the electrical domain and applied to a limited number of pixels. A proof-of-concept of the presented machine learning technique, which is based on the extreme learning machine (ELM) paradigm, is provided by the classification results on samples generated by 2D FDTD simulations of cells in a microfluidic channel.
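
A minimal sketch of the extreme-learning-machine idea used here: a fixed random (complex) mixing stage stands in for the pillar scatterers, intensities on a handful of "pixels" are detected, and only a linear classifier is trained. The data below is synthetic.

```python
# Minimal ELM-style sketch: a fixed random complex projection stands in for the
# pillar-scatterer stage; only a linear classifier on a few detected pixel
# intensities is trained. The two-class data is synthetic.
import numpy as np

rng = np.random.default_rng(3)
n_train, n_test, n_in, n_pix = 2000, 500, 64, 16
template = rng.normal(0.4, 0.1, n_in)          # fixed class-dependent signature

def make_data(n):
    y = rng.integers(0, 2, n)
    X = rng.normal(0, 1, (n, n_in)) + y[:, None] * template
    return X, y

X_tr, y_tr = make_data(n_train)
X_te, y_te = make_data(n_test)

# Fixed random "optical" mixing + intensity detection on n_pix pixels.
M = rng.normal(0, 1, (n_in, n_pix)) + 1j * rng.normal(0, 1, (n_in, n_pix))
pix_tr = np.abs(X_tr @ M) ** 2
pix_te = np.abs(X_te @ M) ** 2

# Linear readout trained with ridge regression (the only trained part).
A = np.column_stack([np.ones(n_train), pix_tr])
w = np.linalg.solve(A.T @ A + 1e-2 * np.eye(n_pix + 1), A.T @ y_tr)
pred = np.column_stack([np.ones(n_test), pix_te]) @ w > 0.5
print("test accuracy:", np.mean(pred == y_te))
```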


Subjects
Holography/methods; Machine Learning; Algorithms; Cell Physiological Phenomena
8.
Neural Comput ; 27(3): 725-47, 2015 Mar.
Article in English | MEDLINE | ID: mdl-25602769

ABSTRACT

In the quest for alternatives to traditional complementary metal-oxide-semiconductor technology, it has been suggested that digital computing efficiency and power can be improved by matching the precision to the application. Many applications do not need the high precision that is used today. In particular, large gains in area and power efficiency could be achieved by dedicated analog realizations of approximate computing engines. In this work we explore the use of memristor networks for analog approximate computation, based on a machine learning framework called reservoir computing. Most experimental investigations of memristor dynamics focus on their nonvolatile behavior, so the volatility present in the developed technologies is usually unwanted and not included in simulation models. In reservoir computing, by contrast, volatility is not only desirable but necessary. Therefore, in this work, we propose two different ways to incorporate it into memristor simulation models: the first is an extension of Strukov's model, and the second is an equivalent Wiener model approximation. We analyze and compare the dynamical properties of these models and discuss their implications for the memory and the nonlinear processing capacity of memristor networks. Our results indicate that device variability, which increasingly causes problems in traditional computer design, is an asset in the context of reservoir computing. We conclude that although both models could lead to useful memristor-based reservoir computing systems, their computational performance will differ. Therefore, experimental modeling research is required for the development of accurate volatile memristor models.
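
As a sketch of the kind of model extension discussed, the snippet below integrates Strukov's memristor drift equation with an added decay term that makes the internal state volatile; the parameter values are illustrative, not fitted to a real device, and this is not the paper's exact model.

```python
# Sketch of a volatile extension of Strukov's memristor model: the standard
# state-drift equation plus a decay term that relaxes the state when it is not
# driven. Parameter values are illustrative assumptions.
import numpy as np

R_on, R_off = 100.0, 16e3        # on/off resistances (ohm)
D, mu_v = 10e-9, 1e-14           # device thickness (m), ion mobility (m^2/(V*s))
tau = 5e-3                       # volatility: state relaxation time (s)
dt, T = 1e-6, 0.05               # time step and total simulated time (s)

t = np.arange(0, T, dt)
v = 0.5 * np.sin(2 * np.pi * 100 * t)          # applied voltage
w = 0.1 * D                                    # internal state (doped-region width)
w_trace = []
for vt in v:
    M = R_on * (w / D) + R_off * (1 - w / D)   # memristance
    i = vt / M
    dw = mu_v * R_on / D * i - w / tau         # Strukov drift term + volatile decay
    w = np.clip(w + dw * dt, 0.0, D)
    w_trace.append(w / D)
print("final normalized state w/D:", w_trace[-1])
```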

9.
Biomimetics (Basel) ; 9(6), 2024 Jun 14.
Article in English | MEDLINE | ID: mdl-38921237

ABSTRACT

Recurrent neural networks (RNNs) transmit information over time through recurrent connections. In contrast, biological neural networks use many other temporal processing mechanisms, one of which is the inter-neuron delay caused by varying axon properties. Recently, this feature was implemented in echo state networks (ESNs), a type of RNN, by assigning spatial locations to neurons and introducing distance-dependent inter-neuron delays. These delays were shown to significantly improve ESN task performance. However, it has remained unclear why distance-based delay networks (DDNs) perform better than ESNs. In this paper, we show that by optimizing inter-node delays, the memory capacity of the network is matched to the memory requirements of the task. As such, networks concentrate their memory capabilities on the points in the past that contain the most information for the task at hand. Moreover, we show that DDNs have a greater total linear memory capacity, with the same amount of nonlinear processing power.
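
A minimal sketch of the distance-based delay idea: neurons receive random 2D positions, every recurrent connection is delayed in proportion to the inter-neuron distance, and the update reads delayed states from a circular buffer. Sizes and scalings are assumptions.

```python
# Sketch of a distance-based delay network (DDN): neurons get 2D positions,
# each recurrent connection is delayed proportionally to the inter-neuron
# distance, and the update reads delayed states from a circular buffer.
import numpy as np

rng = np.random.default_rng(4)
N, T = 30, 1000
pos = rng.uniform(0, 1, (N, 2))                          # spatial neuron positions
dist = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
delays = np.maximum(1, np.round(5 * dist).astype(int))   # delay in time steps
max_d = delays.max()

W = rng.normal(0, 1, (N, N)) * (rng.random((N, N)) < 0.2)
W *= 0.9 / max(abs(np.linalg.eigvals(W)))                # echo-state scaling
w_in = rng.normal(0, 0.5, N)

u = rng.normal(0, 1, T)                                  # input signal
history = np.zeros((max_d + 1, N))                       # circular state buffer
states = np.zeros((T, N))
for t in range(T):
    # connection j -> i reads the state of neuron j from delays[i, j] steps ago
    delayed = history[(t - delays) % (max_d + 1), np.arange(N)[None, :]]
    s = np.tanh((W * delayed).sum(axis=1) + w_in * u[t])
    history[t % (max_d + 1)] = s
    states[t] = s
print("collected state matrix:", states.shape)
```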

10.
Opt Express ; 21(23): 28922-32, 2013 Nov 18.
Article in English | MEDLINE | ID: mdl-24514406

ABSTRACT

Recently, we theoretically demonstrated that optically injected microdisk lasers can be tuned into a class I excitable regime, in which they are sensitive to both inhibitory and excitatory external input pulses. In this paper, we propose, using simulations, a topology that allows the disks to react to excitations from other disks. Phase tuning of the intermediate connections allows the disk response to be controlled. Additionally, we investigate the sensitivity of the disk circuit to deviations in driving current and locking-signal wavelength detuning. Using state-of-the-art fabrication techniques for microdisk lasers, the standard deviation of the lasing wavelength is still about one order of magnitude too large. Therefore, compensation techniques, such as wavelength tuning by heating, are necessary.

11.
Opt Express ; 21(22): 26182-91, 2013 Nov 04.
Article in English | MEDLINE | ID: mdl-24216842

ABSTRACT

We demonstrate class I excitability in optically injected microdisk lasers and propose a possible optical spiking neuron design. The neuron has a clear threshold and integrating behavior, leading to an output-rate versus input-rate dependency comparable to the characteristic of sigmoidal artificial neurons. We also show that the optical phase of the input pulses influences the neuron response and can be used to create inhibitory as well as excitatory perturbations.

12.
Sci Rep ; 13(1): 21399, 2023 Dec 04.
Article in English | MEDLINE | ID: mdl-38049625

ABSTRACT

Photonics-based computing approaches, in combination with wavelength division multiplexing, offer a potential solution to modern data and bandwidth needs. This paper experimentally takes an important step towards wavelength division multiplexing in an integrated waveguide-based photonic reservoir computing platform by using a single set of readout weights for at least 3 ITU-T channels, efficiently scaling the data bandwidth when processing a nonlinear signal equalization task on a 28 Gbps on-off keying signal. Using multiple-wavelength training, we obtain bit error rates well below the [Formula: see text] forward error correction limit at high fiber input powers of 18 dBm, which result in high nonlinear distortion. The results of the reservoir chip are compared to a tapped delay line filter and clearly show that the system performs nonlinear equalization. This was achieved using only limited post-processing, which in future work can also be implemented in optical hardware.

13.
Opt Express ; 20(18): 20292-308, 2012 Aug 27.
Article in English | MEDLINE | ID: mdl-23037081

ABSTRACT

To emulate a spiking neuron, a photonic component needs to be excitable. In this paper, we theoretically simulate and experimentally demonstrate cascadable excitability near a self-pulsation regime in high-Q-factor silicon-on-insulator microrings. For the theoretical study we use Coupled Mode Theory. By neglecting the fast energy and phase dynamics of the cavity light and keeping only the temperature difference with the surroundings and the number of free carriers as dynamical variables, we still preserve the most important microring dynamics and can analyse them in a 2D phase portrait. For some wavelengths, when the input power is changed, the microring undergoes a subcritical Andronov-Hopf bifurcation at the self-pulsation onset, and the system consequently shows class II excitability. Experimental single-ring excitability and self-pulsation behaviour follow the theoretical predictions. Moreover, simulations and experiments show that this excitation mechanism is cascadable.


Subjects
Action Potentials/physiology; Biomimetics/instrumentation; Models, Neurological; Neurons/physiology; Optical Devices; Animals; Computer Simulation; Feedback; Humans
14.
J Chem Theory Comput ; 18(3): 1672-1691, 2022 Mar 08.
Article in English | MEDLINE | ID: mdl-35171606

ABSTRACT

Explicit-electron force fields introduce electrons or electron pairs as semiclassical particles in force fields or empirical potentials, which are suitable for molecular dynamics simulations. Even though semiclassical electrons are a drastic simplification compared to a quantum-mechanical electronic wave function, they still retain a relatively detailed electronic model compared to conventional polarizable and reactive force fields. The ability of explicit-electron models to describe chemical reactions and electronic response properties has already been demonstrated, yet the description of short-range interactions for a broad range of chemical systems remains challenging. In this work, we present the electron machine learning potential (eMLP), a new explicit-electron force field in which the short-range interactions are modeled with machine learning. The electron-pair particles are located at well-defined positions, derived from localized molecular orbitals or Wannier centers, naturally imposing the correct dielectric and piezoelectric behavior of the system. The eMLP is benchmarked on two newly constructed data sets: eQM7, an extension of the QM7 data set for small molecules, and a data set for crystalline β-glycine. We show that the eMLP can predict dipole moments, polarizabilities, and IR spectra of unseen molecules with high precision. Furthermore, a variety of response properties, for example stiffness or piezoelectric constants, can be accurately reproduced.
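
One appealing property of explicit electron-pair particles is that electrostatic observables follow directly from point charges; for example, the dipole moment is the charge-weighted sum of nuclear and pair positions. The sketch below shows this for a made-up water-like geometry (the positions are illustrative, not eMLP output).

```python
# Sketch: with explicit electron-pair particles (e.g. at Wannier centers), the
# molecular dipole follows directly from point charges:
#   mu = sum_i Z_i * R_i  -  2 * sum_k r_k    (atomic units, charge -2e per pair)
# The water-like geometry and pair positions below are made up for illustration.
import numpy as np

# nuclei: (atomic number Z, position in bohr)
nuclei = [(8, np.array([0.000, 0.000, 0.000])),    # O
          (1, np.array([1.430, 1.108, 0.000])),    # H
          (1, np.array([-1.430, 1.108, 0.000]))]   # H

# five electron pairs (10 electrons): hypothetical localized-pair positions
pairs = np.array([[0.00, 0.00, 0.00],     # O core pair
                  [0.70, 0.55, 0.00],     # O-H bond pair
                  [-0.70, 0.55, 0.00],    # O-H bond pair
                  [0.00, -0.35, 0.45],    # lone pair
                  [0.00, -0.35, -0.45]])  # lone pair

mu = sum(Z * R for Z, R in nuclei) - 2.0 * pairs.sum(axis=0)
print("dipole (a.u.):", mu, " |mu| =", np.linalg.norm(mu))
```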

15.
Sci Rep ; 11(1): 3102, 2021 Feb 04.
Article in English | MEDLINE | ID: mdl-33542496

ABSTRACT

Using optical hardware for neuromorphic computing has recently become increasingly popular, owing to its efficient high-speed data processing capabilities and low power consumption. However, some obstacles remain on the path to a completely optical neuromorphic computer. One of them is that, depending on the technology used, optical weighting elements may not offer the same resolution as their electrical counterparts. Moreover, noise in the weighting elements is an important consideration as well. In this article, we investigate a new method for improving the performance of optical weighting components, even in the presence of noise and at very low resolution. Our method uses an iterative training procedure and is able to select weight connections that are more robust to quantization and noise. As a result, even with only 8 to 32 resolution levels in noisy weighting environments, the method can outperform both nearest-rounding and random-rounding low-resolution weighting by up to several orders of magnitude in terms of bit error rate, and can deliver performance very close to that of full-resolution weighting elements.
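
A sketch of what such an iterative, quantization- and noise-aware training loop can look like is given below: it alternates ridge regression with projection onto a coarse weight grid and keeps the candidate that performs best under weight noise. It illustrates the general idea only, with assumed sizes and noise levels, and is not the paper's exact selection strategy.

```python
# Sketch of a quantization/noise-aware readout-training loop: alternate ridge
# regression with projection onto a coarse weight grid, and keep the quantized
# candidate that performs best under weight noise. Sizes, the number of levels
# and the noise strength are assumptions.
import numpy as np

rng = np.random.default_rng(5)
N, T, levels, noise = 30, 4000, 8, 0.02
X = rng.normal(0, 1, (T, N))                       # stand-in reservoir states
y = (X @ rng.normal(0, 1, N) > 0).astype(float)    # toy binary target

def ridge(X, y, lam=1e-2):
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

def quantize(w, levels):
    grid = np.linspace(w.min(), w.max(), levels)
    return grid[np.argmin(np.abs(w[:, None] - grid[None, :]), axis=1)]

def noisy_error(w, n_rep=20):
    errs = [np.mean(((X @ (w + noise * rng.normal(size=N))) > 0.5) != (y > 0.5))
            for _ in range(n_rep)]
    return float(np.mean(errs))

w = ridge(X, y)                                    # full-precision starting point
best_w, best_err = None, np.inf
for it in range(10):
    wq = quantize(w, levels)                       # project onto the coarse grid
    err = noisy_error(wq)                          # evaluate under weight noise
    if err < best_err:
        best_w, best_err = wq, err
    w = wq + ridge(X, y - X @ wq)                  # full-precision correction of the residual
print("best quantized, noisy error rate:", best_err)
```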

16.
Sci Rep ; 11(1): 2701, 2021 Jan 29.
Article in English | MEDLINE | ID: mdl-33514814

ABSTRACT

Photorefractive materials exhibit an interesting plasticity under the influence of an optical field. By extending the finite-difference time-domain method to include the photorefractive effect, we explore how this property can be exploited in the context of neuromorphic computing for telecom applications. When first primed with a random bit stream, the material reorganizes itself to better recognize simple patterns in that stream. We demonstrate this by simulating a typical reservoir computing setup, which, after this initial priming step, gets a significant performance boost when performing the XOR of two consecutive bits in the stream.

17.
Sci Rep ; 11(1): 24152, 2021 Dec 17.
Article in English | MEDLINE | ID: mdl-34921207

ABSTRACT

Nonlinear activation is a crucial building block of most machine-learning systems. However, unlike in the digital electrical domain, applying a saturating nonlinear function in a neural network in the analog optical domain is not straightforward, especially in integrated systems. In this paper, we first investigate in detail the photodetector nonlinearity in the two main readout schemes: electrical readout and optical readout. On a 3-bit delayed XOR task, we show that an optical readout trained with backpropagation gives the best performance. Furthermore, we propose an additional saturating nonlinearity coming from a deliberately non-ideal voltage amplifier after the detector. Compared to an all-optical nonlinearity, these two kinds of nonlinearities are extremely easy to obtain at no additional cost, since photodiodes and voltage amplifiers are present in any system. Moreover, not having to design ideal linear amplifiers could relax their design requirements. We show through simulation that for long-distance nonlinear fiber distortion compensation, using only the photodiode nonlinearity in an optical readout delivers BER improvements of more than three orders of magnitude. Combined with the amplifier saturation nonlinearity, we obtain a further three-orders-of-magnitude improvement in BER.
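
The two "free" nonlinearities can be emulated in a few lines: detection of the weighted optical sum as |E|² followed by a tanh-like amplifier, with the weights trained by backpropagation. The sketch below uses toy data (not the paper's simulation setup) and keeps real and imaginary field parts as separate real tensors for simplicity.

```python
# Sketch: photodetection of the summed optical field (|E|^2) followed by a
# saturating voltage amplifier, with the readout weights trained by
# backpropagation. The "reservoir" field traces and the task are toy assumptions.
import torch

torch.manual_seed(0)
N, T = 16, 4000
E_re, E_im = torch.randn(T, N), torch.randn(T, N)              # toy complex field states
target = ((E_re[:, 0] ** 2 + E_im[:, 0] ** 2) > 1.4).float()   # toy bit target

w = (0.1 * torch.randn(N)).requires_grad_()               # optical readout weights
gain = torch.tensor(2.0, requires_grad=True)              # amplifier drive strength
opt = torch.optim.Adam([w, gain], lr=1e-2)

for step in range(500):
    re, im = E_re @ w, E_im @ w                           # weighted optical sum
    intensity = re ** 2 + im ** 2                         # photodiode: |E|^2
    out = torch.tanh(gain * intensity)                    # saturating amplifier
    loss = torch.nn.functional.mse_loss(out, target)
    opt.zero_grad(); loss.backward(); opt.step()

print("final training loss:", loss.item())
```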

18.
Sci Rep ; 10(1): 14451, 2020 Sep 02.
Article in English | MEDLINE | ID: mdl-32879360

ABSTRACT

Physical reservoir computing approaches have gained increased attention in recent years due to their potential for low-energy, high-performance computing. Despite recent successes, there are limits to what one can achieve simply by making physical reservoirs larger. Therefore, we argue for a switch from single-reservoir computing to multi-reservoir and even deep physical reservoir computing. Given that error backpropagation cannot be used directly to train a large class of multi-reservoir systems, we propose an alternative framework that combines the power of backpropagation with the speed and simplicity of classic training algorithms. In this work we report the findings of an experiment conducted to evaluate the general feasibility of our approach. We train a network of three Echo State Networks to perform the well-known NARMA-10 task, using intermediate targets derived through backpropagation. Our results indicate that the proposed method is well suited to training multi-reservoir systems efficiently.
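
A sketch of the general idea, combining gradient flow through frozen reservoirs with a closed-form final training step, is given below; it is not the paper's exact intermediate-target algorithm and uses a toy delayed-sum task instead of NARMA-10.

```python
# Sketch: backpropagation through a chain of fixed (but differentiable) echo
# state networks shapes the intermediate signal, after which the final readout
# is refit in closed form with ridge regression. Toy task and sizes assumed.
import torch

torch.manual_seed(1)
N, T = 50, 1000
u = torch.rand(T)
y = 0.5 * u + 0.3 * torch.roll(u, 3)                # toy target requiring memory

def make_esn(n):
    W = torch.randn(n, n) * 0.1
    W *= 0.9 / torch.linalg.eigvals(W).abs().max()  # echo-state scaling
    return W, 0.5 * torch.randn(n)

def run_esn(W, w_in, drive):
    s, states = torch.zeros(W.shape[0]), []
    for x in drive:
        s = torch.tanh(W @ s + w_in * x)
        states.append(s)
    return torch.stack(states)

W1, win1 = make_esn(N)
W2, win2 = make_esn(N)
w1 = (0.1 * torch.randn(N)).requires_grad_()        # intermediate readout (trained)
w2 = (0.1 * torch.randn(N)).requires_grad_()        # final readout (trained)
opt = torch.optim.Adam([w1, w2], lr=5e-3)

X1 = run_esn(W1, win1, u).detach()                  # reservoir 1 states (fixed)
for step in range(50):
    z = X1 @ w1                                     # intermediate signal
    X2 = run_esn(W2, win2, z)                       # reservoir 2 (frozen weights)
    loss = torch.mean((X2 @ w2 - y) ** 2)
    opt.zero_grad(); loss.backward(); opt.step()

# Classic step: freeze the backprop-shaped intermediate signal and refit the
# final readout in closed form.
with torch.no_grad():
    X2 = run_esn(W2, win2, X1 @ w1)
    w2_ridge = torch.linalg.solve(X2.T @ X2 + 1e-3 * torch.eye(N), X2.T @ y)
    print("final MSE:", torch.mean((X2 @ w2_ridge - y) ** 2).item())
```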

19.
Sci Rep ; 10(1): 20724, 2020 Nov 26.
Article in English | MEDLINE | ID: mdl-33244129

ABSTRACT

Machine learning offers promising solutions for high-throughput single-particle analysis in label-free imaging microflow cytometry. However, the throughput of online operations such as cell sorting is often limited by the large computational cost of the image analysis, while offline operations may require the storage of an exceedingly large amount of data. Moreover, the training of machine learning systems can easily be biased by slight drifts of the measurement conditions, giving rise to a significant but difficult-to-detect degradation of the learned operations. We propose a simple and versatile machine learning approach to perform microparticle classification at an extremely low computational cost, showing good generalization over large variations in particle position. We present proof-of-principle classification of interference patterns projected by flowing transparent PMMA microbeads with diameters of [Formula: see text] and [Formula: see text]. To this end, a simple, cheap and compact label-free microflow cytometer is employed. We also discuss in detail the detection and prevention of machine learning bias in training and testing due to slight drifts of the measurement conditions. Moreover, we investigate the implications of modifying the projected particle pattern by means of a diffraction grating, in the context of optical extreme learning machine implementations.

20.
Sci Rep ; 9(1): 5918, 2019 Apr 11.
Article in English | MEDLINE | ID: mdl-30976036

ABSTRACT

We propose a new method for performing photonic circuit simulations based on the scatter-matrix formalism. We leverage the popular deep-learning framework PyTorch to reimagine photonic circuits as sparsely connected complex-valued neural networks. This allows for highly parallel simulation of large photonic circuits on graphics processing units in the time and frequency domains, while all parameters of each individual component can easily be optimized with well-established machine learning algorithms such as backpropagation.
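
A minimal sketch of this idea: a toy Mach-Zehnder interferometer built from 2×2 scatter matrices in PyTorch, with its phase optimized by backpropagation so that light exits a chosen port. This is a frequency-domain toy under assumed component models, not the full framework described here.

```python
# Sketch: a photonic circuit as a differentiable complex-valued network.
# A toy Mach-Zehnder interferometer is built from 2x2 scatter matrices and its
# phase is optimized by backpropagation to route light to output port 0.
import torch

C = torch.tensor([[1, 1j], [1j, 1]], dtype=torch.complex64) / 2**0.5   # 50/50 coupler
phi = torch.tensor(0.3, requires_grad=True)                            # trainable arm phase
opt = torch.optim.Adam([phi], lr=0.05)
src = torch.tensor([1, 0], dtype=torch.complex64)                      # light into port 0

for step in range(200):
    arm = torch.diag(torch.stack([torch.exp(1j * phi),
                                  torch.ones((), dtype=torch.complex64)]))
    out = C @ arm @ C @ src                # cascade of scatter matrices
    loss = -out.abs()[0] ** 2              # maximize power at output port 0
    opt.zero_grad(); loss.backward(); opt.step()

print("optimized phase:", phi.item(), " port powers:", (out.abs() ** 2).tolist())
```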
