ABSTRACT
Important decisions in local agricultural policy and practice often hinge on the soil's chemical composition. Raman spectroscopy offers a rapid, noninvasive means to quantify the constituents of complex organic systems. However, applying Raman spectroscopy to soils presents a multifaceted challenge due to the compositional complexity of organic/mineral mixtures and spectral interference from overwhelming fluorescence. The present work compares methodologies with the capacity to help overcome common obstacles that arise in the analysis of soils. We created conditions representative of these challenges by combining varying proportions of six amino acids commonly found in soils with fluorescent bentonite clay and coarse mineral components. Drawing on an extensive data set of Raman spectra, we compare the performance of convolutional neural network (CNN) and partial least-squares regression (PLSR) multivariate models for predicting amino acid composition. Strategies employing volume-averaged spectral sampling and data preprocessing algorithms improve the predictive power of these models. Our average test R2 for PLSR models exceeds 0.89 and approaches 0.98, depending on the complexity of the matrix, whereas CNN yields an R2 range from 0.91 to 0.97, demonstrating that classic PLSR and CNN perform comparably, except when the signal-to-noise ratio of the organic component is very low, in which case the CNN models outperform PLSR. By artificially isolating two of the most prevalent obstacles in evaluating the Raman spectra of soils, we have characterized the effect of each obstacle on the performance of machine learning models in the absence of other complexities. These results highlight important considerations and modeling strategies necessary to improve the Raman analysis of organic compounds in complex mixtures in the presence of mineral spectral components and significant fluorescence.
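As an illustration of the kind of workflow this abstract describes, the following is a minimal Python sketch, not the authors' pipeline, of fitting a PLSR model to fluorescence-contaminated, Raman-like spectra after a Savitzky-Golay derivative preprocessing step. The synthetic data, array shapes, and parameter choices are assumptions for demonstration only.

# Minimal sketch: derivative preprocessing + PLSR on Raman-like spectra.
# Synthetic stand-in data; not the authors' pipeline.
import numpy as np
from scipy.signal import savgol_filter
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n_samples, n_wavenumbers = 200, 1000

# Hypothetical spectra: six amino acid "bands" weighted by concentration,
# plus a broad fluorescence-like background and measurement noise.
x_axis = np.linspace(0, 1, n_wavenumbers)
concentrations = rng.uniform(0, 1, size=(n_samples, 6))            # 6 amino acids
peaks = np.exp(-((x_axis[None, :] - rng.uniform(0.1, 0.9, (6, 1)))**2) / 1e-4)
spectra = concentrations @ peaks
spectra += 5 * np.exp(-x_axis[None, :])                            # broad fluorescence baseline
spectra += 0.05 * rng.standard_normal(spectra.shape)               # noise

# Preprocessing: a Savitzky-Golay first derivative suppresses the broad
# baseline while preserving the narrow Raman bands.
spectra_d1 = savgol_filter(spectra, window_length=15, polyorder=3,
                           deriv=1, axis=1)

X_train, X_test, y_train, y_test = train_test_split(
    spectra_d1, concentrations, test_size=0.3, random_state=0)

pls = PLSRegression(n_components=10)
pls.fit(X_train, y_train)
print("test R^2:", r2_score(y_test, pls.predict(X_test)))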
ABSTRACT
Smooth pursuit eye movements and visual motion perception rely on the integration of current sensory signals with past experience. Experience shapes our expectation of current visual events and can drive eye movement responses made in anticipation of a target, such as anticipatory pursuit. Previous research revealed consistent effects of expectation on anticipatory pursuit (eye movements follow the expected target direction or speed) and contrasting effects on motion perception, but most studies considered either eye movement or perceptual responses. The current study directly compared effects of direction expectation on perception and anticipatory pursuit within the same direction discrimination task to investigate whether both types of responses are affected similarly or differently. Observers (n = 10) viewed high-coherence random-dot kinematograms (RDKs) moving rightward or leftward with a probability of 50%, 70%, or 90% in a given block of trials to build up an expectation of motion direction. They were asked to judge the motion direction of interleaved low-coherence RDKs (0%-15%). Perceptual judgments were compared with changes in anticipatory pursuit eye movements as a function of probability. Results show that anticipatory pursuit velocity scaled with probability and followed direction expectation (attraction bias), whereas perceptual judgments were biased opposite to direction expectation (repulsion bias). Control experiments suggest that the repulsion bias in perception was not caused by retinal slip induced by anticipatory pursuit or by motion adaptation. We conclude that direction expectation can be processed differently for perception and anticipatory pursuit.
NEW & NOTEWORTHY We show that expectations about motion direction that are based on long-term trial history affect perception and anticipatory pursuit differently. Whereas anticipatory pursuit direction was coherent with the expected motion direction (attraction bias), perception was biased opposite to the expected direction (repulsion bias). These opposite biases potentially reveal different ways in which perception and action utilize prior information and support the idea of different information processing for perception and pursuit.
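To make the attraction/repulsion terminology concrete, the following Python sketch shows one common way such a perceptual bias could be quantified: fitting a logistic psychometric function to the proportion of "rightward" judgments as a function of signed coherence and reading the bias off the point of subjective equality (PSE). The data values, function form, and parameters are illustrative assumptions, not the authors' analysis.

# Illustrative sketch: estimating a directional perceptual bias from
# discrimination data with a logistic psychometric function.
# Hypothetical values; not the authors' analysis.
import numpy as np
from scipy.optimize import curve_fit

def psychometric(coherence, pse, slope):
    """Probability of a 'rightward' judgment as a function of signed coherence."""
    return 1.0 / (1.0 + np.exp(-slope * (coherence - pse)))

# Hypothetical data: signed coherence (negative = leftward) and the observed
# proportion of rightward responses in a block with 70% rightward motion.
coherence = np.array([-0.15, -0.10, -0.05, 0.0, 0.05, 0.10, 0.15])
p_rightward = np.array([0.02, 0.10, 0.30, 0.42, 0.65, 0.88, 0.97])

params, _ = curve_fit(psychometric, coherence, p_rightward, p0=[0.0, 30.0])
pse, slope = params

# A positive PSE means more rightward motion is needed before "rightward" is
# reported equally often, i.e. a repulsion bias away from the expected
# (rightward) direction; a negative PSE would indicate an attraction bias.
print(f"PSE = {pse:+.3f}, slope = {slope:.1f}")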
Subject(s)
Anticipation, Psychological/physiology, Motion Perception/physiology, Motivation/physiology, Photic Stimulation/methods, Pursuit, Smooth/physiology, Adult, Eye Movements/physiology, Female, Humans, Male, Young Adult
ABSTRACT
Humans and other animals move their eyes in anticipation to compensate for sensorimotor delays. Such anticipatory eye movements can be driven by the expectation of a future visual object or event. Here we investigate whether such anticipatory responses extend to ocular torsion, the eyes' rotation about the line of sight. We recorded three-dimensional eye position in head-fixed healthy human adults who tracked a rotating dot pattern moving horizontally across a computer screen. This kind of stimulus triggers smooth pursuit with a horizontal and a torsional component. In three experiments, we elicited expectation of stimulus rotation by repeatedly showing the same rotation (Experiment 1) or by using different types of higher-level symbolic cues indicating the rotation of the upcoming target (Experiments 2 and 3). Across all experiments, results reveal reliable anticipatory horizontal smooth pursuit. However, anticipatory torsion was elicited only by stimulus repetition, not by symbolic cues. In summary, torsion can be generated in anticipation of an upcoming visual event only when low-level motion signals are accumulated through repetition. Higher-level cognitive mechanisms related to a symbolic cue reliably evoked anticipatory pursuit but did not modulate torsion. These findings indicate that anticipatory torsion and anticipatory pursuit are at least partly decoupled and might be controlled separately.
Subject(s)
Motion Perception/physiology, Pursuit, Smooth/physiology, Adult, Cues, Eye Movements/physiology, Female, Humans, Imaging, Three-Dimensional, Male, Torsion, Mechanical, Young Adult
ABSTRACT
An important aspect of embodied approaches to cognition is the idea that human cognition does not occur simply in the brain, but is influenced by a complex bidirectional interplay between the brain, body, and external environment. Though embodied cognition is often studied in controlled laboratory settings, by its very nature it can arise spontaneously in everyday life (e.g., gesturing). A recent paper by Chisholm et al. suggested that leaning while playing a video game may be another instance of a natural, spontaneous expression of embodied cognition that can be studied to gain insight into a person's ongoing covert cognition. Consistent with this proposal, Chisholm et al. found that, like gesturing, leaning increases when cognitive demand is increased. However, in Chisholm et al., immersion also increased with cognitive demand. We argue that their test to exclude immersion as a contributing factor (holding cognitive demand constant while manipulating immersion) was limited. Despite their test, it remains possible and plausible that cognitive demand affects leaning only when immersion also increases. To address this issue, the present study systematically varied demand and immersion. We replicate Chisholm et al.'s finding that leaning increases with cognitive load. We also show that the effect of load is not altered by a robust and reliable change in immersion. Collectively, our results provide new and converging evidence that spontaneous overt embodiment of an individual's intention is modulated by cognitive demand, and they emphasise the utility of using natural behaviours to understand the embodiment of cognition.