Results 1 - 16 of 16
1.
NPJ Microgravity ; 10(1): 95, 2024 Oct 04.
Article in English | MEDLINE | ID: mdl-39367015

ABSTRACT

Altering posture relative to the direction of gravity, or exposure to microgravity, has been shown to affect many aspects of perception, including size perception. Our aims in this study were to investigate whether changes in posture and long-term exposure to microgravity bias the visual perception of object height and to test whether any such biases are accompanied by changes in precision. We also explored the possibility of sex/gender differences. Two cohorts of participants (12 astronauts and 20 controls, 50% women) varied the size of a virtual square in a simulated corridor until it was perceived to match a reference stick held in their hands. Astronauts performed the task before, twice during, and twice after an extended stay onboard the International Space Station. On Earth, they performed the task sitting upright and lying supine. Earth-bound controls also completed the task five times with test sessions spaced similarly to the astronauts'; to simulate the microgravity sessions on the ISS, they lay supine. In contrast to earlier studies, we found no immediate effect of microgravity exposure on perceived object height. However, astronauts robustly underestimated the height of the square relative to the haptic reference, and these estimates were significantly smaller 60 days or more after their return to Earth. No differences were found in the precision of the astronauts' judgments. Controls underestimated the height of the square when supine relative to sitting in their first test session (simulating Pre-Flight) but not in later sessions. While these results are largely inconsistent with previous results in the literature, a posture-dependent effect of simulated eye height might provide a unifying explanation. We were unable to make any firm statements related to sex/gender differences. We conclude that no countermeasures are required to mitigate the acute effects of microgravity exposure on object height perception. However, space travelers should be warned about late-emerging and potentially long-lasting changes in this perceptual skill.

2.
PLoS One ; 19(10): e0311992, 2024.
Article in English | MEDLINE | ID: mdl-39392815

ABSTRACT

It is a well-established finding that more informative optic flow (e.g., faster, denser, or presented over a larger portion of the visual field) yields decreased variability in heading judgements. Current models of heading perception further predict faster processing under such circumstances, which has, however, not been supported empirically so far. In this study, we validate a novel continuous psychophysics paradigm by replicating the effect of the speed and density of optic flow on variability in performance, and we investigate how these manipulations affect the temporal dynamics. To this end, we tested 30 participants in a continuous psychophysics paradigm administered in Virtual Reality. We immersed them in a simple virtual environment where they experienced four 90-second blocks of optic flow in which their linear heading direction (no simulated rotation) at any given moment was determined by a random walk. We asked them to continuously indicate with a joystick the direction in which they perceived themselves to be moving. In each of the four blocks they experienced a different combination of simulated self-motion speeds (SLOW and FAST) and densities of optic flow (SPARSE and DENSE). Using a Cross-Correlogram Analysis, we determined that participants reacted faster and displayed lower variability in their performance in the FAST and DENSE conditions than in the SLOW and SPARSE conditions, respectively. Using a Kalman Filter-based analysis approach, we found a similar pattern, where the fitted perceptual noise parameters were higher for SLOW and SPARSE. While replicating previous results on variability, we show that more informative optic flow can speed up heading judgements, while at the same time validating continuous psychophysics as an efficient method for studying heading perception.


Subject(s)
Motion Perception, Optic Flow, Psychophysics, Humans, Psychophysics/methods, Male, Female, Adult, Motion Perception/physiology, Young Adult, Optic Flow/physiology, Virtual Reality
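The Cross-Correlogram Analysis mentioned above can be sketched as a lagged correlation between the stimulus (the random-walk heading) and the continuous joystick response; the lag at which the correlogram peaks estimates the response delay. The function below is an illustrative reconstruction under simplified assumptions (1-D signals, a single global lag, response lagging by at least one sample), not the study's actual pipeline:

```python
import numpy as np

def response_lag(stimulus, response, dt, max_lag_s=2.0):
    """Cross-correlogram: correlate the response at time t with the
    stimulus at time t - lag, for lags of 1..max_lag samples, and
    return the lag (in seconds) and value of the peak correlation."""
    max_lag = int(max_lag_s / dt)
    lags = np.arange(1, max_lag + 1)
    corrs = np.array([np.corrcoef(stimulus[:-k], response[k:])[0, 1]
                      for k in lags])
    best = np.argmax(corrs)
    return lags[best] * dt, corrs[best]

# Synthetic check: a joystick trace that follows the random-walk
# heading with a 0.5 s delay should produce a correlogram peak there.
dt = 0.01
rng = np.random.default_rng(0)
heading = np.cumsum(rng.normal(0.0, 0.1, 10_000))  # random-walk heading
delay = int(0.5 / dt)
joystick = np.roll(heading, delay)
joystick[:delay] = heading[0]                      # remove the wrap-around
lag, peak = response_lag(heading, joystick, dt)    # lag ≈ 0.5 s
```

On the synthetic trace, the recovered lag matches the simulated delay; with real joystick data the peak is broader and noise lowers the peak correlation.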
3.
PLoS One ; 19(9): e0305661, 2024.
Article in English | MEDLINE | ID: mdl-39321156

ABSTRACT

Although estimating travel distance is essential to our ability to move through the world, our distance estimates can be inaccurate. These odometric errors occur because people tend to perceive that they have moved further than they actually have. Many of the studies investigating the perception of travel distance have primarily used forward translational movements and postulate that perceived travel distance results from integration over distance and is independent of travel speed. Speed effects would imply integration over time as well as space. To examine travel distance perception with different directions and speeds, we used virtual reality (VR) to elicit visually induced self-motion. Participants (n = 15) were physically stationary while being visually "moved" through a virtual corridor, either judging distances by stopping at a previously seen target (Move-To-Target Task) or adjusting a target to match the previous movement made (Adjust-Target Task). We measured participants' perceived travel distance over a range of speeds (1-5 m/s) and distances in four directions (up, down, forward, backward). We show that the simulated speed and direction of motion differentially affect the gain (perceived travel distance / actual travel distance). For the Adjust-Target task, forward motion was associated with smaller gains than backward, up, or down motion. For the Move-To-Target task, backward motion was associated with smaller gains than forward, up, or down motion. For both tasks, motion at the slower speed was associated with higher gains than motion at the faster speeds. These results show that transforming visual motion into travel distance differs depending on the speed and direction of the optic flow being perceived. We also found that a common model used to study the perception of travel distance was a better fit for the forward direction than for the others. This implies that the model should be modified for these non-forward motion directions.


Subject(s)
Distance Perception, Motion Perception, Humans, Male, Female, Distance Perception/physiology, Adult, Motion Perception/physiology, Young Adult, Virtual Reality, Motion, Movement/physiology
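The gain measure defined in this abstract (perceived travel distance divided by actual travel distance) is straightforward to compute; the numbers below are hypothetical responses for illustration only:

```python
import numpy as np

def distance_gain(perceived, actual):
    """Travel-distance gain: > 1 means travel felt longer than it was,
    < 1 means it felt shorter."""
    return np.asarray(perceived, float) / np.asarray(actual, float)

# Hypothetical trials: perceived vs. actual distances in meters
gains = distance_gain([6.0, 11.0, 18.0], [5.0, 10.0, 20.0])  # [1.2, 1.1, 0.9]
```

Note that the gain in item 5 below is defined the other way around (target distance / perceived travel distance), so the two measures are reciprocals.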
4.
Hum Mov Sci ; 96: 103250, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38964027

ABSTRACT

Movement sonification can improve motor control in both healthy subjects (e.g., learning or refining a sport skill) and those with sensorimotor deficits (e.g., stroke patients and deafferented individuals). It is not known whether improved motor control and learning from movement sonification are driven by feedback-based real-time ("online") trajectory adjustments, adjustments to internal models over multiple trials, or both. We searched for evidence of online trajectory adjustments (muscle twitches) in response to movement sonification feedback by comparing the kinematics and error of reaches made with online (i.e., real-time) and terminal sonification feedback. We found that reaches made with online feedback were significantly more jerky than reaches made with terminal feedback, indicating increased muscle twitching (i.e., online trajectory adjustment). Using a between-subjects design, we found that online feedback was associated with improved motor learning of a reach path and target over terminal feedback; however, using a within-subjects design, we found that switching participants who had learned with online sonification feedback to terminal feedback was associated with a decrease in error. Thus, our results suggest that, with our task and sonification, movement sonification leads to online trajectory adjustments that improve internal models over multiple trials, but that are not themselves helpful as online corrections.


Subject(s)
Psychomotor Performance, Humans, Psychomotor Performance/physiology, Male, Biomechanical Phenomena, Female, Young Adult, Adult, Sensory Feedback, Motor Skills/physiology, Orientation, Skeletal Muscle/physiology, Movement/physiology, Learning
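Jerkiness, used above as a marker of online trajectory adjustments, is conventionally quantified from the third derivative of position. A minimal sketch comparing a synthetic minimum-jerk-style reach with a "twitchy" variant; all signal parameters (reach duration, 8 Hz corrections, amplitudes) are illustrative assumptions, not values from the study:

```python
import numpy as np

def mean_squared_jerk(position, dt):
    """Scalar jerkiness index: mean squared third derivative of a
    1-D position trace, differentiated numerically."""
    jerk = np.gradient(np.gradient(np.gradient(position, dt), dt), dt)
    return np.mean(jerk ** 2)

# A smooth minimum-jerk-style reach vs. the same reach with small,
# fast corrective adjustments superimposed.
t = np.linspace(0.0, 1.0, 500)
dt = t[1] - t[0]
smooth = 10 * t**3 - 15 * t**4 + 6 * t**5             # 0 -> 1 reach
twitchy = smooth + 0.002 * np.sin(2 * np.pi * 8 * t)  # 8 Hz micro-corrections
```

Even tiny high-frequency corrections dominate the jerk measure, because triple differentiation amplifies a sinusoid's amplitude by its angular frequency cubed.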
5.
NPJ Microgravity ; 10(1): 28, 2024 Mar 13.
Article in English | MEDLINE | ID: mdl-38480736

ABSTRACT

Self-motion perception is a multi-sensory process that involves visual, vestibular, and other cues. When perception of self-motion is induced using only visual motion, vestibular cues indicate that the body remains stationary, which may bias an observer's perception. When lowering the precision of the vestibular cue by, for example, lying down or by adapting to microgravity, these biases may decrease, accompanied by a decrease in precision. To test this hypothesis, we used a move-to-target task in virtual reality. Astronauts and Earth-based controls were shown a target at a range of simulated distances. After the target disappeared, forward self-motion was induced by optic flow. Participants indicated when they thought they had arrived at the target's previously seen location. Astronauts completed the task on Earth (supine and sitting upright) prior to space travel, early and late in space, and early and late after landing. Controls completed the experiment on Earth using a similar regime with a supine posture used to simulate being in space. While variability was similar across all conditions, the supine posture led to significantly higher gains (target distance / perceived travel distance) than the sitting posture for the astronauts pre-flight and early post-flight but not late post-flight. No difference was detected between the astronauts' performance on Earth and onboard the ISS, indicating that judgments of traveled distance were largely unaffected by long-term exposure to microgravity. Overall, this constitutes mixed evidence as to whether non-visual cues to travel distance are integrated with relevant visual cues when self-motion is simulated using optic flow alone.

6.
PLoS One ; 19(3): e0295110, 2024.
Article in English | MEDLINE | ID: mdl-38483949

ABSTRACT

To interact successfully with moving objects in our environment, we need to be able to predict their behavior. Predicting the position of a moving object requires an estimate of its velocity. When flow parsing during self-motion is incomplete, that is, when some of the retinal motion created by self-motion is incorrectly attributed to object motion, object velocity estimates become biased. Further, the process of flow parsing should add noise and lead to object velocity judgements being more variable during self-motion. Biases and lowered precision in velocity estimation should then translate to biases and lowered precision in motion extrapolation. We investigated this relationship between self-motion, velocity estimation and motion extrapolation with two tasks performed in a realistic virtual reality (VR) environment. First, participants were shown a ball moving laterally that disappeared after a certain time. They then indicated by button press when they thought the ball would have hit a target rectangle positioned in the environment. While the ball was visible, participants sometimes experienced simultaneous visual lateral self-motion in either the same or the opposite direction to the ball. The second task was a two-interval forced-choice task in which participants judged which of two motions was faster: in one interval they saw the same ball they had observed in the first task, while in the other they saw a ball cloud whose speed was controlled by a PEST staircase. While observing the single ball, they were again moved visually either in the same or the opposite direction as the ball, or they remained static. We found the expected biases in estimated time-to-contact, while for the speed estimation task this was only the case when the ball and observer moved in opposite directions. Our hypotheses regarding precision were largely unsupported by the data. Overall, we draw several conclusions from this experiment. First, incomplete flow parsing can affect motion prediction. Further, it suggests that time-to-contact estimation and speed judgements are determined by partially different mechanisms. Finally, and perhaps most strikingly, there appear to be certain compensatory mechanisms at play that allow for much higher-than-expected precision when observers are experiencing self-motion, even when self-motion is simulated only visually.


Subject(s)
Motion Perception, Humans, Motion, Time Factors, Retina, Bias
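The flow-parsing account of the time-to-contact bias can be captured in a few lines. The flow-parsing gain of 0.8 and the simple lateral geometry are illustrative assumptions, not values from the study:

```python
def perceived_ttc(distance, object_speed, self_speed, flow_parsing_gain=0.8):
    """Time-to-contact prediction under incomplete flow parsing.
    With lateral self-motion, retinal object motion is roughly
    object_speed - self_speed; the observer adds back only
    flow_parsing_gain * self_speed, so the perceived object speed is
    object_speed - (1 - flow_parsing_gain) * self_speed."""
    perceived_speed = object_speed - (1.0 - flow_parsing_gain) * self_speed
    return distance / perceived_speed

# Ball 2 m from the target, moving at 1 m/s; observer static,
# moving opposite to the ball (-0.5 m/s), or with it (+0.5 m/s).
ttc_static = perceived_ttc(2.0, 1.0, 0.0)     # veridical: 2.0 s
ttc_opposite = perceived_ttc(2.0, 1.0, -0.5)  # ball seems faster: < 2 s
ttc_same = perceived_ttc(2.0, 1.0, 0.5)       # ball seems slower: > 2 s
```

Opposite-direction self-motion leaves residual retinal motion that inflates perceived ball speed and shortens the predicted time-to-contact, matching the direction of the biases described above.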
7.
Perception ; 53(3): 197-207, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38304970

ABSTRACT

Aristotle believed that objects fell at a constant velocity. However, Galileo Galilei showed that when an object falls, gravity causes it to accelerate. Regardless, Aristotle's claim raises the possibility that people's visual perception of falling motion might be biased away from acceleration towards constant velocity. We tested this idea by requiring participants to judge whether a ball moving in a simulated naturalistic setting appeared to accelerate or decelerate as a function of its motion direction and the amount of acceleration/deceleration. We found that the point of subjective constant velocity (PSCV) differed between up and down but not between left and right motion directions. The PSCV difference between up and down indicated that more acceleration was needed for a downward-falling object to appear at constant velocity than for an upward "falling" object. We found no significant differences in sensitivity to acceleration for the different motion directions. Generalized linear mixed modeling determined that participants relied predominantly on acceleration when making these judgments. Our results support the idea that Aristotle's belief may in part be due to a bias that reduces the perceived magnitude of acceleration for falling objects, a bias not revealed in previous studies of the perception of visual motion.


Subject(s)
Motion Perception, Humans, Acceleration, Visual Perception, Gravitation
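The PSCV is the acceleration level at which "accelerating" and "decelerating" responses are equally likely. Below is a sketch of how it could be estimated from choice data with a logistic psychometric function; the brute-force grid fit and the simulated observer are illustrative (the study itself used generalized linear mixed modeling):

```python
import numpy as np

def fit_pscv(accelerations, p_accelerating):
    """Estimate the point of subjective constant velocity: the
    acceleration at which 'accelerating' responses cross 50%, via a
    brute-force maximum-likelihood fit of a logistic function."""
    acc = np.asarray(accelerations, float)
    q = np.asarray(p_accelerating, float)
    best = (np.inf, None)
    for mu in np.linspace(acc.min(), acc.max(), 201):
        for s in np.linspace(0.05, 2.0, 40):
            p = 1.0 / (1.0 + np.exp(-(acc - mu) / s))
            p = np.clip(p, 1e-12, 1 - 1e-12)       # avoid log(0)
            nll = -np.sum(q * np.log(p) + (1 - q) * np.log(1 - p))
            if nll < best[0]:
                best = (nll, mu)
    return best[1]

# Simulated observer whose responses cross 50% at +0.4 m/s^2, i.e.,
# downward motion must accelerate to look like constant velocity.
acc_levels = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])
p_resp = 1.0 / (1.0 + np.exp(-(acc_levels - 0.4) / 0.3))
pscv = fit_pscv(acc_levels, p_resp)   # ≈ 0.4
```

A positive PSCV for downward motion is exactly the pattern reported above: real gravitational acceleration is perceptually discounted.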
8.
Sci Rep ; 13(1): 20075, 2023 11 16.
Article in English | MEDLINE | ID: mdl-37974023

ABSTRACT

Changes in perceived eye height influence visually perceived object size in both the real world and in virtual reality. In virtual reality, conflicts can arise between the eye height in the real world and the eye height simulated in a VR application. We hypothesized that participants would be influenced more by variation in simulated eye height when they had a clear expectation about their eye height in the real world, such as when sitting or standing, and less so when they did not have a clear estimate of the distance between their eyes and the real-life ground plane, e.g., when lying supine. Using virtual reality, 40 participants compared the height of a red square simulated at three different distances (6, 12, and 18 m) against the length of a physical stick (38.1 cm) held in their hands. They completed this task in all combinations of four real-life postures (supine, sitting, standing, standing on a table) and three simulated eye heights that corresponded to each participant's real-world eye height (on average, 123 cm sitting, 161 cm standing, and 201 cm on the table). Confirming previous results, the square's perceived size varied inversely with simulated eye height. Variations in simulated eye height affected participants' perception of size significantly more when sitting than in the other postures (supine, standing, standing on a table). This shows that real-life posture can influence the perception of size in VR. However, since simulated eye height did not affect size estimates less in the supine than in the standing position, our hypothesis that humans would be more influenced by variations in eye height when they had a reliable estimate of the distance between their eyes and the ground plane in the real world was not fully confirmed.


Subject(s)
Posture, Size Perception, Humans, Standing Position, Eye, Sitting Position
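One way to formalize the eye-height account is the horizon-ratio relation: the horizon intersects any object standing on the ground plane at eye height, so the rendered image fixes the ratio of object height to simulated eye height, and perceived height then scales with the eye height the observer assumes for themselves. A simplified sketch; the specific numbers reuse the stick length and mean eye heights from the abstract purely for illustration:

```python
def perceived_height(rendered_height, simulated_eye_height, assumed_eye_height):
    """Horizon-ratio sketch: the rendered image fixes the ratio of
    object height to simulated eye height; the observer scales that
    ratio by the eye height they assume for themselves."""
    horizon_ratio = rendered_height / simulated_eye_height
    return assumed_eye_height * horizon_ratio

# A 0.381 m square: camera at seated eye height vs. a raised camera,
# judged by an observer assuming a seated (1.23 m) eye height.
h_matched = perceived_height(0.381, 1.23, 1.23)  # veridical
h_raised = perceived_height(0.381, 2.01, 1.23)   # looks smaller
```

This reproduces the inverse relation reported above: raising the simulated camera shrinks the square's perceived size even though its rendered size is unchanged.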
9.
PLoS One ; 18(1): e0267983, 2023.
Article in English | MEDLINE | ID: mdl-36716328

ABSTRACT

To interact successfully with moving objects in our environment, we need to be able to predict their behavior. Predicting the position of a moving object requires an estimate of its velocity. When flow parsing during self-motion is incomplete, that is, when some of the retinal motion created by self-motion is incorrectly attributed to object motion, object velocity estimates become biased. Further, the process of flow parsing should add noise and lead to object velocity judgements being more variable during self-motion. Biases and lowered precision in velocity estimation should then translate to biases and lowered precision in motion extrapolation. We investigate this relationship between self-motion, velocity estimation and motion extrapolation with two tasks performed in a realistic virtual reality (VR) environment. First, participants are shown a ball moving laterally that disappears after a certain time. They then indicate by button press when they think the ball would have hit a target rectangle positioned in the environment. While the ball is visible, participants sometimes experience simultaneous visual lateral self-motion in either the same or the opposite direction to the ball. The second task is a two-interval forced-choice task in which participants judge which of two motions is faster: in one interval they see the same ball they observed in the first task, while in the other they see a ball cloud whose speed is controlled by a PEST staircase. While observing the single ball, they are again moved visually either in the same or the opposite direction as the ball, or they remain static. We expect participants to overestimate the speed of a ball that moves opposite to their simulated self-motion (speed estimation task), which should then lead them to underestimate the time it takes the ball to reach the target rectangle (prediction task). Seeing the ball during visually simulated self-motion should increase variability in both tasks. We expect performance in both tasks to be correlated, both in accuracy and precision.


Subject(s)
Motion Perception, Humans, Motion, Time Factors, Retina, Bias
10.
Atten Percept Psychophys ; 84(1): 25-46, 2022 Jan.
Article in English | MEDLINE | ID: mdl-34704212

ABSTRACT

Judging object speed during observer self-motion requires disambiguating retinal stimulation from two sources: self-motion and object motion. According to the Flow Parsing hypothesis, observers estimate their own motion, subtract the corresponding retinal motion from the total retinal stimulation, and interpret the remaining stimulation as pertaining to object motion. Subtracting noisier self-motion information from retinal input should lead to a decrease in precision. Furthermore, when self-motion is only simulated visually, self-motion is likely to be underestimated, yielding an overestimation of target speed when target and observer move in opposite directions and an underestimation when they move in the same direction. We tested this hypothesis with a two-alternative forced-choice task in which participants judged which of two motions, presented in an immersive 3D environment, was faster. One motion interval contained a ball cloud whose speed was selected dynamically according to a PEST staircase, while the other contained one big target travelling laterally at a fixed speed. While viewing the big target, participants were either static or experienced visually simulated lateral self-motion in the same or opposite direction of the target. Participants were not significantly biased in either motion profile, and precision was only significantly lower when participants moved visually in the direction opposite to the target. We conclude that, when immersed in an ecologically valid 3D environment with rich self-motion cues, participants perceive an object's speed accurately at a small precision cost, even when self-motion is simulated only visually.


Subject(s)
Motion Perception, Cues, Humans, Motion, Photic Stimulation, Retina, Visual Perception
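A PEST staircase, as used above to control the comparison speed, adaptively homes in on the point of subjective equality. The sketch below implements only the core idea, halving the step size at reversals, and omits the full PEST rules (Wald-style run tests, step doubling); the noise-free observer is a deterministic stand-in for illustration:

```python
def run_staircase(judged_faster, start=1.0, step=0.4, min_step=0.01,
                  trials=60):
    """Adaptive staircase in the spirit of PEST: move the comparison
    toward the point of subjective equality and halve the step size
    at every reversal. `judged_faster(x)` returns True when the
    comparison at intensity x is judged faster than the standard."""
    x, last_direction = start, 0
    for _ in range(trials):
        direction = -1 if judged_faster(x) else 1  # faster -> step down
        if last_direction != 0 and direction != last_direction:
            step = max(step / 2.0, min_step)       # reversal: halve step
        x += direction * step
        last_direction = direction
    return x

# Noise-free observer whose point of subjective equality is 0.62
pse_estimate = run_staircase(lambda x: x > 0.62)   # converges near 0.62
```

With a stochastic observer, the final value (or the average of the last reversals) estimates the PSE rather than hitting it exactly.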
12.
Sci Rep ; 11(1): 7108, 2021 03 29.
Article in English | MEDLINE | ID: mdl-33782443

ABSTRACT

In a 2-alternative forced-choice protocol, observers judged the duration of ball motions shown on an immersive virtual-reality display as approaching in the sagittal plane along parabolic trajectories compatible with Earth gravity effects. In different trials, the ball shifted along the parabolas with one of three different laws of motion: constant tangential velocity, constant vertical velocity, or gravitational acceleration. Only the latter motion was fully consistent with Newton's laws in the Earth gravitational field, whereas the motions with constant velocity profiles obeyed the spatio-temporal constraint of parabolic paths dictated by gravity but violated the kinematic constraints. We found that the discrimination of duration was accurate and precise for all types of motions, but the discrimination for the trajectories at constant tangential velocity was slightly but significantly more precise than that for the trajectories at gravitational acceleration or constant vertical velocity. The results are compatible with a heuristic internal representation of gravity effects that can be engaged when viewing projectiles shifting along parabolic paths compatible with Earth gravity, irrespective of the specific kinematics. Opportunistic use of a moving frame attached to the target may favour visual tracking of targets with constant tangential velocity, accounting for the slightly superior duration discrimination.
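The distinction between the motion laws above is purely temporal: all three traverse the same spatial parabola, but only gravitational kinematics slows toward the apex, whereas constant tangential velocity covers equal arc length per frame. A sketch of how such stimuli could be generated (launch velocities and sampling choices are illustrative):

```python
import numpy as np

def parabola_gravity(v0x, v0y, g=9.81, n=100):
    """Parabola sampled at equal time steps under true gravitational
    kinematics: large steps near the ends, small steps at the apex."""
    t = np.linspace(0.0, 2 * v0y / g, n)
    return v0x * t, v0y * t - 0.5 * g * t**2

def constant_tangential_speed(x, y):
    """Resample the same spatial path at equal arc-length steps,
    i.e., constant tangential speed at a fixed frame rate."""
    s = np.concatenate([[0.0], np.cumsum(np.hypot(np.diff(x), np.diff(y)))])
    s_uniform = np.linspace(0.0, s[-1], len(s))
    return np.interp(s_uniform, s, x), np.interp(s_uniform, s, y)

x, y = parabola_gravity(3.0, 4.0)                # illustrative launch velocities
xc, yc = constant_tangential_speed(x, y)
step_grav = np.hypot(np.diff(x), np.diff(y))     # varies along the path
step_const = np.hypot(np.diff(xc), np.diff(yc))  # nearly uniform
```

Both trajectories obey the parabolic spatial constraint of gravity; only the gravitational version also satisfies its kinematic constraint.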

13.
PLoS One ; 15(8): e0236732, 2020.
Article in English | MEDLINE | ID: mdl-32813686

ABSTRACT

Humans expect downward-moving objects to accelerate and upward-moving objects to decelerate. These results have been interpreted as evidence that humans maintain an internal model of gravity. We have previously suggested an interpretation of these results within a Bayesian framework of perception: earth gravity could be represented as a Strong Prior that overrules noisy sensory information (Likelihood) and therefore attracts the final percept (Posterior) very strongly. Based on this framework, we use published data from a timing task involving gravitational motion to determine the mean and the standard deviation of the Strong Earth Gravity Prior. To obtain its mean, we refine a model of mean timing errors we proposed in a previous paper (Jörges & López-Moliner, 2019), while expanding the range of conditions under which it yields adequate predictions of performance. This underscores our previous conclusion that the gravity prior is likely to be very close to 9.81 m/s². To obtain the standard deviation, we identify different sources of sensory and motor variability reflected in timing errors. We then model timing responses based on quantitative assumptions about these sensory and motor errors for a range of standard deviations of the earth gravity prior, and find that a standard deviation of around 2 m/s² makes for the best fit. This value is likely to represent an upper bound, as there are strong theoretical reasons, along with supporting empirical evidence, for the standard deviation of the earth gravity prior being lower than this value.


Subject(s)
Gravitation, Statistical Models, Adult, Earth (Planet), Female, Humans, Male, Young Adult
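In the Gaussian case, the Strong Prior account reduces to precision-weighted averaging of the prior and the sensory likelihood. A sketch using the paper's estimates (prior mean 9.81 m/s², standard deviation 2 m/s² as an upper bound); the sensory noise levels are illustrative:

```python
def posterior_gravity(sensed_g, sensory_sd, prior_mean=9.81, prior_sd=2.0):
    """Gaussian prior-likelihood fusion: the posterior mean is the
    precision-weighted average of the earth-gravity prior and the
    sensed gravity. prior_sd=2.0 is the paper's upper-bound estimate."""
    w_prior = (1 / prior_sd**2) / (1 / prior_sd**2 + 1 / sensory_sd**2)
    return w_prior * prior_mean + (1 - w_prior) * sensed_g

# A noisy observation of 0 g is captured almost entirely by the prior;
# a reliable observation resists it.
g_weak_evidence = posterior_gravity(0.0, sensory_sd=6.0)    # ≈ 8.83
g_strong_evidence = posterior_gravity(0.0, sensory_sd=0.5)  # ≈ 0.58
```

The narrower the prior's standard deviation, the more strongly earth-discrepant sensory evidence is pulled back toward 9.81 m/s², which is the formal sense in which the prior is "strong".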
14.
Sci Rep ; 9(1): 14094, 2019 Oct 01.
Article in English | MEDLINE | ID: mdl-31575901

ABSTRACT

There is evidence that humans rely on an earth gravity (9.81 m/s²) prior for a series of tasks involving perception and action, the reason being that gravity helps predict future positions of moving objects. Eye movements, in turn, are partially guided by predictions about observed motion. Thus, the question arises whether knowledge about gravity is also used to guide eye movements: if humans rely on a representation of earth gravity for the control of eye movements, earth-gravity-congruent motion should elicit improved visual pursuit. In a pre-registered experiment, we presented participants (n = 10) with parabolic motion governed by six different gravities (-1/0.7/0.85/1/1.15/1.3 g), two initial vertical velocities, and two initial horizontal velocities in a 3D environment. Participants were instructed to follow the target with their eyes. We tracked their gaze and computed the visual gain (velocity of the eyes divided by velocity of the target) as a proxy for the quality of pursuit. An LMM analysis with gravity condition as fixed effect and intercepts varying per subject showed that the gain was lower for -1 g than for 1 g (by -0.13, SE = 0.005). This model was significantly better than a null model without gravity as a fixed effect (p < 0.001), supporting our hypothesis. A comparison of 1 g and the remaining gravity conditions revealed that 1.15 g (by 0.043, SE = 0.005) and 1.3 g (by 0.065, SE = 0.005) were associated with lower gains, while 0.7 g (by 0.054, SE = 0.005) and 0.85 g (by 0.029, SE = 0.005) were associated with higher gains. This model was again significantly better than a null model (p < 0.001), contradicting our hypothesis. Post-hoc analyses revealed that confounds in the 0.7/0.85/1/1.15/1.3 g conditions may be responsible for these contradictory results. Despite these discrepancies, our data thus provide some support for the hypothesis that internalized knowledge about earth gravity guides eye movements.


Subject(s)
Eye Movements, Gravitation, Adult, Eye Movements/physiology, Female, Humans, Male, Motion, Motion Perception, Photic Stimulation, Young Adult
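The visual gain used above as the pursuit-quality proxy is the ratio of eye velocity to target velocity. A minimal sketch on synthetic position traces; the 90% tracking figure is illustrative, not a value from the study:

```python
import numpy as np

def pursuit_gain(eye_position, target_position, dt):
    """Visual (pursuit) gain: median ratio of eye velocity to target
    velocity; 1.0 is perfect pursuit, lower values mean the eye lags."""
    eye_v = np.gradient(eye_position, dt)
    target_v = np.gradient(target_position, dt)
    moving = np.abs(target_v) > 1e-6    # ignore stationary samples
    return np.median(eye_v[moving] / target_v[moving])

# Synthetic traces: target at 10 deg/s, eye tracking at 9 deg/s
t = np.arange(0.0, 1.0, 0.001)
gain = pursuit_gain(9 * t, 10 * t, 0.001)   # 0.9
```

Real gaze data would first need saccades removed, since the ratio is only meaningful during smooth pursuit segments.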
15.
Vision Res ; 149: 47-58, 2018 08.
Article in English | MEDLINE | ID: mdl-29913247

ABSTRACT

Evidence suggests that humans rely on an earth gravity prior for sensory-motor tasks like catching or reaching. Even under earth-discrepant conditions, this prior biases perception and action towards assuming a gravitational downwards acceleration of 9.81 m/s². This can be particularly detrimental in interactions with virtual environments that employ earth-discrepant gravity conditions for their visual presentation. The present study thus investigates how well humans discriminate visually presented gravities and which cues they use to extract gravity from the visual scene. To this end, we employed a Two-Interval Forced-Choice design. In Experiment 1, participants had to judge which of two presented parabolas had the higher underlying gravity. We used two initial vertical velocities, two horizontal velocities, and a constant target size. Experiment 2 added a manipulation of the reliability of the target size. Experiment 1 shows that participants have generally high discrimination thresholds for visually presented gravities, with Weber fractions from 13% to beyond 30%. We identified the rate of change of the elevation angle (γ) and of the visual angle (θ) as major cues. Experiment 2 suggests furthermore that size variability has a small influence on discrimination thresholds, while at the same time larger size variability increases reliance on γ and decreases reliance on θ. All in all, even though humans use all available information, they display low precision when extracting the governing gravity from a visual scene, which might further impair our ability to adapt to earth-discrepant gravity conditions with visual information alone.


Subject(s)
Gravitation, Judgment/physiology, Motion Perception/physiology, Acceleration, Adult, Bayes Theorem, Cues, Discrimination (Psychology), Female, Humans, Male, Middle Aged, Optic Flow/physiology, Photic Stimulation/methods, Reproducibility of Results, Virtual Reality, Young Adult
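The Weber fractions of 13% to beyond 30% reported above express discrimination thresholds as a proportion of the standard gravity. A minimal illustration; the 1.3 m/s² threshold is a made-up example near the lower end of that range:

```python
def weber_fraction(jnd, standard):
    """Weber fraction: just-noticeable difference expressed as a
    proportion of the standard stimulus magnitude."""
    return jnd / standard

# Illustrative: a ~1.3 m/s^2 discrimination threshold around earth
# gravity corresponds to a Weber fraction of about 13%.
wf = weber_fraction(1.3, 9.81)
```

By comparison, Weber fractions for visual speed discrimination are typically well under 10%, which is the sense in which gravity discrimination here counts as imprecise.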
16.
Front Hum Neurosci ; 11: 203, 2017.
Article in English | MEDLINE | ID: mdl-28503140

ABSTRACT

In the future, humans are likely to be exposed to environments with altered gravity conditions, be it only visually (Virtual and Augmented Reality) or visually and bodily (space travel). As visually and bodily perceived gravity, as well as an internalized representation of earth gravity, are involved in a series of tasks such as catching, grasping, body orientation estimation and spatial inferences, humans will need to adapt to these new gravity conditions. Performance under earth-gravity-discrepant conditions has been shown to be relatively poor, and the few studies conducted on gravity adaptation are rather discouraging. Especially in VR on earth, conflicts between bodily and visual gravity cues seem to make a full adaptation to visually perceived earth-discrepant gravities nearly impossible, and even in space, when visual and bodily cues are congruent, adaptation is extremely slow. We invoke a Bayesian framework for gravity-related perceptual processes, in which earth gravity holds the status of a so-called "strong prior". Like other strong priors, the gravity prior has developed through years and years of experience in an earth gravity environment. For this reason, the reliability of this representation is extremely high, and it overrules any sensory information to the contrary. While other factors, such as the multisensory nature of gravity perception, also need to be taken into account, we present the strong prior account as a unifying explanation for empirical results in gravity perception and adaptation to earth-discrepant gravities.
