ABSTRACT
We report a method for the phase reconstruction of an ultrashort laser pulse based on deep learning of the nonlinear spectral changes induced by self-phase modulation. The neural networks were trained on simulated pulses with random initial phases and spectra, with pulse durations between 8.5 and 65 fs. The reconstruction is valid at moderate spectral resolution and is robust to noise. The method was validated on experimental data produced by an ultrafast laser system, where near real-time phase reconstructions were performed. It can be applied to systems with known linear and nonlinear responses, even when the fluence is not known, making it ideal for difficult-to-measure beams such as the high-energy, large-aperture beams produced in petawatt systems.
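As a toy illustration of the spectral broadening that self-phase modulation induces (and that the networks learn to invert), here is a minimal numpy sketch; the pulse duration and peak nonlinear phase below are illustrative values, not parameters from the paper:

```python
import numpy as np

# Time grid (fs) and a transform-limited Gaussian pulse envelope
t = np.linspace(-200.0, 200.0, 4096)
tau = 30.0                        # pulse-duration parameter (fs), illustrative
E = np.exp(-t**2 / tau**2)

# Self-phase modulation: nonlinear phase proportional to intensity
B = 3.0                           # peak nonlinear phase (B-integral), illustrative
E_spm = E * np.exp(1j * B * np.abs(E)**2)

def rms_bandwidth(field):
    """RMS width of the power spectrum of a complex field."""
    S = np.abs(np.fft.fftshift(np.fft.fft(field)))**2
    f = np.fft.fftshift(np.fft.fftfreq(t.size, d=t[1] - t[0]))
    mean = np.sum(f * S) / np.sum(S)
    return np.sqrt(np.sum((f - mean)**2 * S) / np.sum(S))

# SPM chirps the pulse and broadens its spectrum; a network trained on
# many such (phase, spectral-change) pairs can learn to invert the mapping.
assert rms_bandwidth(E_spm) > rms_bandwidth(E)
```

The nonlinear phase here tracks the instantaneous intensity, so the spectral change encodes the temporal phase — which is what makes a learned inversion possible.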
Subjects
Deep Learning, Lasers, Light

ABSTRACT
Objective. Motor-imagery (MI) classification based on electroencephalography (EEG) has long been studied in neuroscience and, more recently, has been widely used in healthcare applications such as mobile assistive robots and neurorehabilitation. In particular, EEG-based MI classification methods that rely on convolutional neural networks (CNNs) have achieved relatively high classification accuracy. However, naively training CNNs to classify raw EEG data from all channels, especially for high-density EEG, is computationally demanding and requires huge training sets. It also often introduces many irrelevant input features, making it difficult for the CNN to extract the informative ones. This problem is compounded by a dearth of training data, which is particularly acute for MI tasks, because these are cognitively demanding and thus fatigue-inducing. Approach. To address these issues, we propose an end-to-end CNN-based neural network with an attention mechanism, together with different data augmentation (DA) techniques. We tested it on two benchmark MI datasets, brain-computer interface (BCI) competition IV 2a and 2b. In addition, we collected a new dataset, recorded using high-density EEG and containing both MI and motor execution (ME) tasks, which we share with the community. Main results. Our proposed neural-network architecture outperformed all state-of-the-art methods that we found in the literature, with and without DA, reaching average classification accuracies of 93.6% and 87.83% on BCI 2a and 2b, respectively. We also directly compare decoding of MI and ME tasks. Focusing on MI classification, we find optimal channel configurations and the best DA techniques, as well as investigate combining data across participants and the role of transfer learning. Significance. Our proposed approach improves the classification accuracy for MI in the benchmark datasets. In addition, collecting our own dataset enables us to compare MI and ME and investigate various aspects of EEG decoding critical for neuroscience and BCI.
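The abstract does not list the specific DA techniques compared; as a sketch of two common EEG augmentations (additive Gaussian noise and random temporal shifts), assuming trials shaped (channels, samples) — the function name and parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def augment_eeg(trial, noise_std=0.1, max_shift=25):
    """Return a noisy, circularly time-shifted copy of an EEG trial.

    trial: array of shape (channels, samples).
    Hypothetical augmentation; the paper's exact DA techniques
    are not specified in the abstract.
    """
    shift = int(rng.integers(-max_shift, max_shift + 1))
    shifted = np.roll(trial, shift, axis=-1)           # same shift on every channel
    noise = rng.normal(0.0, noise_std * trial.std(), trial.shape)
    return shifted + noise

trial = rng.normal(size=(22, 1000))   # e.g. 22 channels, 1000 samples
aug = augment_eeg(trial)
assert aug.shape == trial.shape       # augmentation preserves trial shape
```

Generating several such perturbed copies per recorded trial is one standard way to mitigate the small-training-set problem the abstract describes.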
Subjects
Brain-Computer Interfaces, Algorithms, Electroencephalography, Humans, Imagination, Neural Networks (Computer)

ABSTRACT
Neural networks can emulate nonlinear physical systems with high accuracy, yet they may produce physically inconsistent results when they violate fundamental constraints. Here, we introduce a systematic way of enforcing nonlinear analytic constraints in neural networks, via constraints in either the architecture or the loss function. Applied to convective processes for climate modeling, architectural constraints enforce conservation laws to within machine precision without degrading performance. Enforcing constraints also reduces errors in the subsets of the outputs most affected by the constraints.
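One standard way to enforce a conservation law architecturally, to machine precision, is to have a final layer solve for one output as the residual of the constraint; a minimal sketch with a generic sum-conservation constraint (the paper's climate-model constraints are more elaborate and nonlinear):

```python
import numpy as np

def conserve_sum(raw_outputs, total):
    """Architectural constraint layer: overwrite the last output with the
    residual so that the outputs sum exactly to `total`.

    Generic sum-conservation example; the specific conservation laws
    enforced in the paper are not spelled out in this abstract.
    """
    out = np.asarray(raw_outputs, dtype=float).copy()
    out[-1] = total - out[:-1].sum()
    return out

# Unconstrained network outputs that slightly violate conservation
y = conserve_sum([0.3, 0.5, 0.1], total=1.0)
assert abs(y.sum() - 1.0) < 1e-12   # conservation holds to machine precision
```

Because the constraint is satisfied by construction rather than penalized in the loss, it holds exactly at inference time regardless of how well the network was trained.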
ABSTRACT
Weight-sharing is one of the pillars behind convolutional neural networks and their successes. However, in physical neural systems such as the brain, weight-sharing is implausible. This discrepancy raises the fundamental question of whether weight-sharing is necessary. If so, to what degree of precision? If not, what are the alternatives? The goal of this study is to investigate these questions, primarily through simulations in which the weight-sharing assumption is relaxed. Taking inspiration from neural circuitry, we explore the use of Free Convolutional Networks and neurons with variable connection patterns. Using Free Convolutional Networks, we show that while weight-sharing is a pragmatic optimization approach, it is not a necessity in computer vision applications. Furthermore, Free Convolutional Networks match the performance observed in standard architectures when trained on properly translated data (akin to video). Under the assumption of translationally augmented data, Free Convolutional Networks learn translationally invariant representations that yield an approximate form of weight-sharing.
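The contrast between shared-weight and "free" (locally connected) convolution can be sketched in one dimension; the function names below are illustrative, and when every per-position filter is identical, the free layer reduces exactly to the shared-weight convolution:

```python
import numpy as np

def conv1d(x, w):
    """Standard valid 1-D convolution: one filter shared across positions."""
    k = w.size
    return np.array([x[i:i + k] @ w for i in range(x.size - k + 1)])

def free_conv1d(x, W):
    """'Free' convolution: weight-sharing relaxed, so each output position
    has its own independent filter. W has shape (positions, kernel)."""
    k = W.shape[1]
    return np.array([x[i:i + k] @ W[i] for i in range(x.size - k + 1)])

x = np.arange(8.0)
w = np.array([1.0, -1.0, 0.5])
W = np.tile(w, (x.size - w.size + 1, 1))  # identical filter at every position

# With identical per-position filters, the free layer equals the shared one;
# training on translated data pushes the free filters toward this regime.
assert np.allclose(free_conv1d(x, W), conv1d(x, w))
```

The free layer has many more parameters (one filter per position), which is why weight-sharing is the pragmatic choice even though, as the abstract argues, it is not strictly necessary.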