ABSTRACT
The growing trend toward high-throughput proteomics demands rapid liquid chromatography-mass spectrometry (LC-MS) cycles that limit the available time to gather the large numbers of MS/MS fragmentation spectra required for identification. Orbitrap analyzers scale performance with acquisition time and necessarily sacrifice sensitivity and resolving power to deliver higher acquisition rates. We developed a new mass spectrometer that combines a mass-resolving quadrupole, the Orbitrap, and the novel Asymmetric Track Lossless (Astral) analyzer. The new hybrid instrument enables faster acquisition of high-resolution accurate mass (HRAM) MS/MS spectra compared with state-of-the-art mass spectrometers. Accordingly, new proteomics methods were developed that leverage the strengths of each HRAM analyzer, whereby the Orbitrap analyzer performs full scans with a high dynamic range and resolution, synchronized with the Astral analyzer's acquisition of fast and sensitive HRAM MS/MS scans. Substantial improvements are demonstrated over previous methods using current state-of-the-art mass spectrometers.
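To make the throughput constraint concrete, the sketch below (Python) estimates how the MS/MS acquisition rate bounds the number of fragmentation spectra obtainable in a single LC-MS run. All rates, gradient lengths, and duty-cycle fractions here are illustrative assumptions for the arithmetic, not specifications of the instrument described in the abstract.

```python
# Illustrative back-of-the-envelope calculation: how the MS/MS acquisition
# rate bounds the number of fragmentation spectra collectable per LC run.
# All numbers below are assumed for illustration, not instrument specs.

def msms_spectra_per_run(gradient_min: float, msms_rate_hz: float,
                         msms_duty_fraction: float) -> int:
    """Upper bound on MS/MS spectra in one LC-MS run.

    gradient_min       -- LC gradient length in minutes
    msms_rate_hz       -- MS/MS scans acquired per second
    msms_duty_fraction -- fraction of cycle time spent on MS/MS
                          (the rest goes to full scans and overhead)
    """
    return int(gradient_min * 60 * msms_rate_hz * msms_duty_fraction)

# A short high-throughput gradient leaves little time for MS/MS:
for rate in (20, 50, 200):  # assumed acquisition rates in Hz, for comparison
    n = msms_spectra_per_run(gradient_min=8, msms_rate_hz=rate,
                             msms_duty_fraction=0.9)
    print(f"{rate:>4} Hz -> ~{n} MS/MS spectra in an 8 min run")
```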
ABSTRACT
Reinforcement learning (RL) enables robots to learn optimal behavioral strategies in dynamic environments based on feedback. Explicit human feedback during robot RL is advantageous, since an explicit reward function can be easily adapted. However, it is very demanding and tiresome for a human to continuously and explicitly generate feedback, so the development of implicit approaches is highly relevant. In this paper, we used the error-related potential (ErrP), an event-related activity in the human electroencephalogram (EEG), as intrinsically generated implicit feedback (reward) for RL. We first validated our approach with seven subjects in a simulated robot learning scenario. ErrPs were detected online in single trials with a balanced accuracy (bACC) of 91%, which was sufficient for the system to learn, in parallel, both gesture recognition and the correct mapping between human gestures and robot actions. Finally, we validated our approach in a real robot scenario, in which seven subjects freely chose gestures and the real robot correctly learned the mapping between gestures and actions (ErrP detection: 90% bACC). Our results demonstrate that intrinsically generated EEG-based human feedback can successfully be used in RL to implicitly improve gesture-based robot control during human-robot interaction. We call our approach intrinsic interactive RL.
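As a minimal sketch of the intrinsic interactive RL loop described above, the Python snippet below learns a gesture-to-action mapping from a simulated binary ErrP detector with 90% balanced accuracy (matching the detection performance reported in the abstract). The gestures, actions, and tabular learning rule are assumed toy choices for illustration, not the authors' implementation.

```python
import random

# Sketch of "intrinsic interactive RL": the agent learns a gesture -> action
# mapping, and the only reward signal is a noisy binary ErrP detection from
# EEG after each robot action. The ErrP classifier is simulated here with
# 90% balanced accuracy; gestures, actions, and the learning rule are an
# assumed toy setup.

GESTURES = ["wave", "point", "stop"]
ACTIONS = ["approach", "turn", "halt"]
TRUE_MAPPING = {"wave": "approach", "point": "turn", "stop": "halt"}
BACC = 0.90  # simulated single-trial ErrP detection accuracy

def simulated_errp(gesture: str, action: str) -> bool:
    """Return True if an ErrP is detected (robot action judged wrong)."""
    error_occurred = TRUE_MAPPING[gesture] != action
    correct_detection = random.random() < BACC
    return error_occurred if correct_detection else not error_occurred

# Tabular values over (gesture, action) pairs; epsilon-greedy selection.
q = {(g, a): 0.0 for g in GESTURES for a in ACTIONS}
alpha, eps = 0.3, 0.1

for trial in range(500):
    g = random.choice(GESTURES)  # the human freely chooses a gesture
    if random.random() < eps:
        a = random.choice(ACTIONS)
    else:
        a = max(ACTIONS, key=lambda act: q[(g, act)])
    # Implicit EEG feedback: ErrP detected -> negative reward, else positive.
    reward = -1.0 if simulated_errp(g, a) else 1.0
    q[(g, a)] += alpha * (reward - q[(g, a)])

learned = {g: max(ACTIONS, key=lambda a: q[(g, a)]) for g in GESTURES}
print("learned mapping:", learned)  # converges to TRUE_MAPPING with high prob.
```

Note that the noisy reward only slows, rather than prevents, convergence: with 90% detection accuracy the expected reward still favors the correct action for each gesture, which is why single-trial ErrP detection at this level suffices for learning.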