Results 1 - 20 of 111
1.
NPJ Microgravity ; 10(1): 28, 2024 Mar 13.
Article in English | MEDLINE | ID: mdl-38480736

ABSTRACT

Self-motion perception is a multi-sensory process that involves visual, vestibular, and other cues. When perception of self-motion is induced using only visual motion, vestibular cues indicate that the body remains stationary, which may bias an observer's perception. When the precision of the vestibular cue is lowered, for example by lying down or by adapting to microgravity, these biases may decrease, accompanied by a decrease in precision. To test this hypothesis, we used a move-to-target task in virtual reality. Astronauts and Earth-based controls were shown a target at a range of simulated distances. After the target disappeared, forward self-motion was induced by optic flow. Participants indicated when they thought they had arrived at the target's previously seen location. Astronauts completed the task on Earth (supine and sitting upright) prior to space travel, early and late in space, and early and late after landing. Controls completed the experiment on Earth using a similar regime, with a supine posture used to simulate being in space. While variability was similar across all conditions, the supine posture led to significantly higher gains (target distance/perceived travel distance) than the sitting posture for the astronauts pre-flight and early post-flight, but not late post-flight. No difference was detected between the astronauts' performance on Earth and onboard the ISS, indicating that judgments of traveled distance were largely unaffected by long-term exposure to microgravity. Overall, this constitutes mixed evidence as to whether non-visual cues to travel distance are integrated with relevant visual cues when self-motion is simulated using optic flow alone.

2.
PLoS One ; 19(3): e0295110, 2024.
Article in English | MEDLINE | ID: mdl-38483949

ABSTRACT

To interact successfully with moving objects in our environment we need to be able to predict their behavior. Predicting the position of a moving object requires an estimate of its velocity. When flow parsing during self-motion is incomplete (that is, when some of the retinal motion created by self-motion is incorrectly attributed to object motion), object velocity estimates become biased. Further, the process of flow parsing should add noise and lead to object velocity judgements being more variable during self-motion. Biases and lowered precision in velocity estimation should then translate to biases and lowered precision in motion extrapolation. We investigated this relationship between self-motion, velocity estimation and motion extrapolation with two tasks performed in a realistic virtual reality (VR) environment. First, participants were shown a ball moving laterally which disappeared after a certain time. They then indicated by button press when they thought the ball would have hit a target rectangle positioned in the environment. While the ball was visible, participants sometimes experienced simultaneous visual lateral self-motion in either the same or the opposite direction to the ball. The second task was a two-interval forced-choice task in which participants judged which of two motions was faster: in one interval they saw the same ball they observed in the first task, while in the other they saw a ball cloud whose speed was controlled by a PEST staircase. While observing the single ball, they were again moved visually either in the same or opposite direction as the ball, or they remained static. We found the expected biases in estimated time-to-contact; for the speed estimation task, however, biases appeared only when the ball and observer were moving in opposite directions. Our hypotheses regarding precision were largely unsupported by the data.
Overall, we draw several conclusions from this experiment. First, incomplete flow parsing can affect motion prediction. Further, our results suggest that time-to-contact estimation and speed judgements are determined by partially different mechanisms. Finally, and perhaps most strikingly, there appear to be certain compensatory mechanisms at play that allow for much higher-than-expected precision when observers are experiencing self-motion, even when self-motion is simulated only visually.
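The flow-parsing account above reduces to a subtraction: perceived object velocity is the total retinal motion minus the observer's estimate of the self-motion component, and incomplete parsing corresponds to subtracting only a fraction of that component. A minimal sketch under that assumption; the `gain` parameter and the velocity values are illustrative, not taken from the study:

```python
def parse_flow(retinal_velocity: float, self_motion_component: float,
               gain: float = 1.0) -> float:
    """Flow-parsing sketch: perceived object velocity (deg/s) is total
    retinal motion minus a gain-weighted estimate of the self-motion
    component; gain < 1 models incomplete parsing."""
    return retinal_velocity - gain * self_motion_component

# Hypothetical numbers: 5 deg/s total retinal motion, of which 3 deg/s
# is due to (opposite-direction) self-motion; true object motion is 2 deg/s.
print(parse_flow(5.0, 3.0, gain=1.0))            # complete parsing: 2.0
print(round(parse_flow(5.0, 3.0, gain=0.7), 2))  # incomplete: 2.9, biased fast
```

With a gain below 1, part of the self-motion flow is misattributed to the object, reproducing the overestimation reported when ball and observer move in opposite directions.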


Subjects
Motion Perception, Humans, Motion (Physics), Time Factors, Retina, Bias
3.
Perception ; 53(3): 197-207, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38304970

ABSTRACT

Aristotle believed that objects fell at a constant velocity. However, Galileo Galilei showed that when an object falls, gravity causes it to accelerate. Regardless, Aristotle's claim raises the possibility that people's visual perception of falling motion might be biased away from acceleration towards constant velocity. We tested this idea by requiring participants to judge whether a ball moving in a simulated naturalistic setting appeared to accelerate or decelerate as a function of its motion direction and the amount of acceleration/deceleration. We found that the point of subjective constant velocity (PSCV) differed between up and down but not between left and right motion directions. The PSCV difference between up and down indicated that more acceleration was needed for a downward-falling object to appear at constant velocity than for an upward "falling" object. We found no significant differences in sensitivity to acceleration for the different motion directions. Generalized linear mixed modeling determined that participants relied predominantly on acceleration when making these judgments. Our results support the idea that Aristotle's belief may in part be due to a bias that reduces the perceived magnitude of acceleration for falling objects, a bias not revealed in previous studies of the perception of visual motion.


Subjects
Motion Perception, Humans, Acceleration, Visual Perception, Gravitation
4.
Sci Rep ; 13(1): 20075, 2023 11 16.
Article in English | MEDLINE | ID: mdl-37974023

ABSTRACT

Changes in perceived eye height influence visually perceived object size in both the real world and virtual reality. In virtual reality, conflicts can arise between the eye height in the real world and the eye height simulated in a VR application. We hypothesized that participants would be influenced more by variation in simulated eye height when they had a clear expectation about their eye height in the real world, such as when sitting or standing, and less so when they did not have a clear estimate of the distance between their eyes and the real-life ground plane, e.g., when lying supine. Using virtual reality, 40 participants compared the height of a red square simulated at three different distances (6, 12, and 18 m) against the length of a physical stick (38.1 cm) held in their hands. They completed this task in all combinations of four real-life postures (supine, sitting, standing, standing on a table) and three simulated eye heights that corresponded to each participant's real-world eye height (on average, 123 cm sitting, 161 cm standing, and 201 cm on the table). Confirming previous results, the square's perceived size varied inversely with simulated eye height. Variations in simulated eye height affected participants' perception of size significantly more when sitting than in the other postures (supine, standing, standing on a table). This shows that real-life posture can influence the perception of size in VR. However, since simulated eye height did not affect size estimates less in the supine than in the standing posture, our hypothesis that humans would be more influenced by variations in eye height when they had a reliable estimate of the distance between their eyes and the ground plane in the real world was not fully confirmed.


Subjects
Posture, Size Perception, Humans, Standing Position, Eye, Sitting Position
5.
PLoS One ; 18(10): e0293554, 2023.
Article in English | MEDLINE | ID: mdl-37906616

ABSTRACT

Fear of falling (FoF) is a major concern among older adults and is associated with negative outcomes, such as decreased quality of life and increased risk of falls. Despite several systematic reviews conducted on various specific domains of FoF and its related interventions, the research area has only been minimally covered by scoping reviews, and a comprehensive scoping review mapping the range and scope of the research area is still lacking. This review aims to provide such a comprehensive investigation of the existing literature and to identify main topics, gaps in the literature, and potential opportunities for bridging different strains of research. Using the PRISMA-ScR guidelines, we searched the Cochrane Database of Systematic Reviews, CINAHL, Embase, MEDLINE, PsycInfo, Scopus, and Web of Science databases. Following the screening process, 969 titles and abstracts were retained for the review. Pre-processing steps included stop word removal, stemming, and term frequency-inverse document frequency (TF-IDF) vectorization. Using the Non-negative Matrix Factorization algorithm, we identified seven main topics and created a conceptual mapping of FoF research. The analysis also revealed that most studies focused on physical health-related factors, particularly balance and gait, with less attention paid to cognitive, psychological, social, and environmental factors. Moreover, more research could be done on demographic factors beyond gender and age, in interdisciplinary collaboration with the social sciences. The review highlights the need for a more nuanced and comprehensive understanding of FoF and calls for more research in less-studied areas.


Subjects
Accidental Falls, Fear, Accidental Falls/prevention & control, Fear/psychology, Quality of Life, Natural Language Processing
6.
NPJ Microgravity ; 9(1): 42, 2023 Jun 10.
Article in English | MEDLINE | ID: mdl-37301926

ABSTRACT

Neutral buoyancy has been used as an analog for microgravity from the earliest days of human spaceflight. Compared to other options on Earth, neutral buoyancy is relatively inexpensive and presents little danger to astronauts while simulating some aspects of microgravity. Neutral buoyancy removes somatosensory cues to the direction of gravity but leaves vestibular cues intact. Removing both somatosensory and gravity-direction cues while floating in microgravity, or using virtual reality to establish conflicts between them, has been shown to affect the perception of distance traveled in response to visual motion (vection) and the perception of distance. Does removal of somatosensory cues alone by neutral buoyancy similarly impact these perceptions? During neutral buoyancy we found no significant difference in either perceived distance traveled or perceived size relative to Earth-normal conditions. This contrasts with differences in linear vection reported between short- and long-duration microgravity and Earth-normal conditions. These results indicate that neutral buoyancy is not an effective analog for microgravity for these perceptual effects.

7.
NPJ Microgravity ; 9(1): 52, 2023 Jun 28.
Article in English | MEDLINE | ID: mdl-37380706

ABSTRACT

The perceptual upright results from the multisensory integration of the directions indicated by vision and gravity, as well as a prior assumption that upright is towards the head. The direction of gravity is signalled by multiple cues, the predominant ones being the otoliths of the vestibular system and somatosensory information from contact with the support surface. Here, we used neutral buoyancy to remove somatosensory information while retaining vestibular cues, thus "splitting the gravity vector" and leaving only the vestibular component. In this way, neutral buoyancy can be used as a microgravity analogue. We assessed spatial orientation using the oriented character recognition test (OChaRT, which yields the perceptual upright, PU) under both neutrally buoyant and terrestrial conditions. The effect of visual cues to upright (the visual effect) was reduced under neutral buoyancy compared to on land, but the influence of gravity was unaffected. We found no significant change in the relative weighting of vision, gravity, or body cues, in contrast to results found both in long-duration microgravity and during head-down bed rest. These results indicate a relatively minor role for somatosensation in determining the perceptual upright in the presence of vestibular cues. Short-duration neutral buoyancy is a weak analogue for microgravity exposure in terms of its perceptual consequences compared to long-duration head-down bed rest.

8.
PLoS One ; 18(3): e0282975, 2023.
Article in English | MEDLINE | ID: mdl-36920954

ABSTRACT

Perceiving our orientation and motion requires sensory information provided by vision, our body and acceleration. Normally, these cues are redundant; however, in some situations they can conflict. Here, we created a visual-vestibular conflict by simulating a body-upright virtual world while participants were either standing (no conflict), supine or prone (conflict), and assessed the perception of "forward" distance travelled induced by visual motion. Some participants felt they were standing upright even when lying, indicating a visual reorientation illusion (VRI). We previously showed that when experiencing a VRI, visually induced self-motion is enhanced. Here, we determined whether there was a relationship between VRI vulnerability and sensory weighting. Confirming our previous findings, the VRI-vulnerable group showed enhanced self-motion perception. We then assessed the relative weightings of visual and non-visual cues in VRI-vulnerable and VRI-resistant individuals using the Oriented Character Recognition Test. Surprisingly, VRI-vulnerable individuals weighted visual cues less and gravity cues more than VRI-resistant individuals. These findings are in line with robust integration: when the difference between two cues is large, the discrepant cue (here gravity) is ignored. Ignoring the gravity cue then leads to relatively more emphasis being placed on visual information and thus a higher gain.


Subjects
Illusions, Motion Perception, Vestibular Labyrinth, Humans, Cues (Psychology), Motion (Physics), Ocular Vision, Visual Perception, Photic Stimulation
9.
Epilepsy Behav Rep ; 21: 100588, 2023.
Article in English | MEDLINE | ID: mdl-36794093

ABSTRACT

People with epilepsy (PwE) are at a greater risk of comorbid anxiety, which is often related to the fear of having another seizure for safety or social reasons. While virtual reality (VR) exposure therapy (ET) has been successfully used to treat several anxiety disorders, no studies to date have investigated its use in this population. This paper discusses Phase 1 of the three-phase AnxEpiVR pilot study. In Phase 1, we aimed to explore and validate scenarios that provoke epilepsy/seizure-specific (ES) interictal anxiety and provide recommendations that lay the foundation for designing VR-ET scenarios to treat this condition in PwE. An anonymous online questionnaire (including open- and closed-ended questions) that targeted PwE and those affected by it (e.g., through a family member, friend, or as a healthcare professional) was promoted by a major epilepsy foundation in Toronto, Canada. Responses from n = 18 participants were analyzed using grounded theory and the constant comparison method. Participants described anxiety-provoking scenes, which were categorized under the following themes: location, social setting, situational, activity, physiological, and previous seizure. While scenes tied to previous seizures were typically highly personalized and idiosyncratic, public settings and social situations were commonly reported fears. Factors consistently found to increase ES-interictal anxiety included the potential for danger (physical injury or inability to get help), social factors (increased number of unfamiliar people, social pressures), and specific triggers (stress, sensory, physiological, and medication-related). We make recommendations for incorporating different combinations of anxiety-related factors to achieve a customizable selection of graded exposure scenarios suitable for VR-ET. Subsequent phases of this study will include creating a set of VR-ET hierarchies (Phase 2) and rigorously evaluating their feasibility and effectiveness (Phase 3).

10.
JMIR Res Protoc ; 12: e41523, 2023 Jan 24.
Article in English | MEDLINE | ID: mdl-36692939

ABSTRACT

BACKGROUND: Anxiety is one of the most common psychiatric comorbidities in people with epilepsy and often involves fears specifically related to the condition, such as anxiety related to the fear of having another seizure. These epilepsy- or seizure-related fears have been reported as being more disabling than the seizures themselves and significantly impact quality of life. Although research has suggested that exposure therapy (ET) is helpful in decreasing anxiety in people with epilepsy, no research to our knowledge has been conducted on ET in people with epilepsy using virtual reality (VR). The use of novel technologies such as an immersive VR head-mounted display for ET in this population offers several benefits. Indeed, using VR can increase accessibility for people with epilepsy with transportation barriers (eg, those who live outside urban centers or who have a suspended driver's license owing to their condition), among other advantages. In the present research protocol, we describe the design of an innovative VR-ET program administered in the home that focuses on decreasing anxiety in people with epilepsy, specifically anxiety related to their epilepsy or seizures. OBJECTIVE: Our primary objective is to examine the feasibility of the study protocol and proposed treatment as well as identify suggestions for improvement when designing subsequent larger clinical trials. Our secondary objective is to evaluate whether VR-ET is effective in decreasing anxiety in a pilot study. We hypothesize that levels of anxiety in people with epilepsy will decrease from using VR-ET. METHODS: This mixed methods study comprises 3 phases. Phase 1 involves engaging with those with lived experience through a web-based questionnaire to validate assumptions about anxiety in people with epilepsy. 
Phase 2 involves filming videos using a 360° camera for the VR-ET intervention (likely consisting of 3 sets of scenes, each with 3 intensity levels) based on the epilepsy- and seizure-related fears most commonly reported in the phase 1 questionnaire. Finally, phase 3 involves evaluating the at-home VR-ET intervention and study methods using a series of validated scales, as well as semistructured interviews. RESULTS: This pilot study was funded in November 2021. Data collection for phase 1 was completed as of August 7, 2022, and had a final sample of 18 participants. CONCLUSIONS: Our findings will add to the limited body of knowledge on anxiety in people with epilepsy and the use of VR in this population. We anticipate that the insights gained from this study will lay the foundation for a novel and accessible VR intervention for this underrecognized and undertreated comorbidity in people with epilepsy. TRIAL REGISTRATION: ClinicalTrials.gov NCT05296057; https://clinicaltrials.gov/ct2/show/NCT05296057. INTERNATIONAL REGISTERED REPORT IDENTIFIER (IRRID): DERR1-10.2196/41523.

11.
PLoS One ; 18(1): e0267983, 2023.
Article in English | MEDLINE | ID: mdl-36716328

ABSTRACT

To interact successfully with moving objects in our environment we need to be able to predict their behavior. Predicting the position of a moving object requires an estimate of its velocity. When flow parsing during self-motion is incomplete (that is, when some of the retinal motion created by self-motion is incorrectly attributed to object motion), object velocity estimates become biased. Further, the process of flow parsing should add noise and lead to object velocity judgements being more variable during self-motion. Biases and lowered precision in velocity estimation should then translate to biases and lowered precision in motion extrapolation. We investigate this relationship between self-motion, velocity estimation and motion extrapolation with two tasks performed in a realistic virtual reality (VR) environment. First, participants are shown a ball moving laterally which disappears after a certain time. They then indicate by button press when they think the ball would have hit a target rectangle positioned in the environment. While the ball is visible, participants sometimes experience simultaneous visual lateral self-motion in either the same or the opposite direction to the ball. The second task is a two-interval forced-choice task in which participants judge which of two motions is faster: in one interval they see the same ball they observed in the first task, while in the other they see a ball cloud whose speed is controlled by a PEST staircase. While observing the single ball, they are again moved visually either in the same or opposite direction as the ball, or they remain static. We expect participants to overestimate the speed of a ball that moves opposite to their simulated self-motion (speed estimation task), which should then lead them to underestimate the time it takes the ball to reach the target rectangle (prediction task). Seeing the ball during visually simulated self-motion should increase variability in both tasks.
We expect performance in the two tasks to be correlated, in both accuracy and precision.


Subjects
Motion Perception, Humans, Motion (Physics), Time Factors, Retina, Bias
13.
Front Aging Neurosci ; 14: 816512, 2022.
Article in English | MEDLINE | ID: mdl-36092809

ABSTRACT

Self-motion perception (e.g., when walking/driving) relies on the integration of multiple sensory cues including visual, vestibular, and proprioceptive signals. Changes in the efficacy of multisensory integration have been observed in older adults (OA), which can sometimes lead to errors in perceptual judgments and have been associated with functional declines such as increased falls risk. The objectives of this study were to determine whether passive, visual-vestibular self-motion heading perception could be improved by providing feedback during multisensory training, and whether training-related effects might be more apparent in OAs vs. younger adults (YA). We also investigated the extent to which training might transfer to improved standing-balance. OAs and YAs were passively translated and asked to judge their direction of heading relative to straight-ahead (left/right). Each participant completed three conditions: (1) vestibular-only (passive physical motion in the dark), (2) visual-only (cloud-of-dots display), and (3) bimodal (congruent vestibular and visual stimulation). Measures of heading precision and bias were obtained for each condition. Over the course of 3 days, participants were asked to make bimodal heading judgments and were provided with feedback ("correct"/"incorrect") on 900 training trials. Post-training, participants' biases, and precision in all three sensory conditions (vestibular, visual, bimodal), and their standing-balance performance, were assessed. Results demonstrated improved overall precision (i.e., reduced JNDs) in heading perception after training. Pre- vs. post-training difference scores showed that improvements in JNDs were only found in the visual-only condition. 
Particularly notable is that 27% of OAs initially could not discriminate their heading at all in the visual-only condition pre-training, but subsequently obtained thresholds in the visual-only condition post-training that were similar to those of the other participants. While OAs seemed to show optimal integration pre- and post-training (i.e., did not show significant differences between predicted and observed JNDs), YAs only showed optimal integration post-training. There were no significant effects of training for bimodal or vestibular-only heading estimates, nor standing-balance performance. These results indicate that it may be possible to improve unimodal (visual) heading perception using a multisensory (visual-vestibular) training paradigm. The results may also help to inform interventions targeting tasks for which effective self-motion perception is important.
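The predicted-vs.-observed JND comparison used above to test for optimal integration comes from the standard maximum-likelihood cue-combination model, in which the bimodal variance is the reciprocal sum of the unimodal variances. A minimal sketch with hypothetical threshold values (not the study's data):

```python
import math

def predicted_bimodal_jnd(jnd_visual: float, jnd_vestibular: float) -> float:
    """Maximum-likelihood integration: 1/sigma_bi^2 = 1/sigma_vis^2 + 1/sigma_ves^2,
    so the predicted bimodal JND is always below the smaller unimodal JND."""
    var_vis, var_ves = jnd_visual ** 2, jnd_vestibular ** 2
    return math.sqrt(var_vis * var_ves / (var_vis + var_ves))

# Hypothetical heading-discrimination thresholds, in degrees.
print(round(predicted_bimodal_jnd(3.0, 4.0), 2))  # 2.4 -- below both unimodal JNDs
```

Observed bimodal JNDs at or near this prediction are taken as evidence of optimal integration; observed JNDs no better than the best single cue suggest the cues are not being combined optimally.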

14.
Vision (Basel) ; 6(2)2022 May 04.
Article in English | MEDLINE | ID: mdl-35645379

ABSTRACT

Depth information is limited in a 2D scene, and to perceive the distance of an object people need to rely on pictorial cues such as perspective, size constancy, and elevation in the scene. In this study, we tested whether people could use an object's size and its position in a 2D image to determine its distance. In a series of online experiments, participants viewed a target representing their smartphone rendered within a 2D scene. They either positioned it in the scene at the distance they thought was correct based on its size, or adjusted the target to the correct size based on its position in the scene. In all experiments, the adjusted target sizes and positions were inconsistent with the initially presented positions and sizes: on average, participants made the target larger and moved it further away. Familiar objects influenced adjusted position from size but not adjusted size from position. These results suggest that in a 2D scene, (1) people cannot reliably use an object's visual size and position relative to the horizon to infer distance, and (2) familiar objects in the scene affect perceived size and distance differently. The differences found demonstrate that size and distance perception processes may be independent.

15.
R Soc Open Sci ; 9(4): 210722, 2022 Apr.
Article in English | MEDLINE | ID: mdl-35462776

ABSTRACT

Inaccurate perceptions of body size, such as under- or over-estimation, are often found in clinical eating disorder populations but have recently been shown in healthy people as well. However, it is not yet clear how body size perception may be affected when the internal body representation is manipulated. In this study, visual adaptation was used to investigate whether exposure to distorted visual feedback alters the representation of body size and how long any such effects might last. Participants were exposed for five minutes to a distorted life-size image of themselves that was either 20% wider or 20% narrower than their normal size. Accuracy was measured using our novel psychophysical method that taps into the implicit body representation. The accuracy of the representation was assessed at 6, 12, and 18 min following exposure to adaptation. Altered visual feedback caused changes in participants' judgements of their body size: adapting to a wider body resulted in size overestimation, whereas underestimations occurred after adapting to a narrower body. These distortions lasted throughout testing and did not fully return to normal within 18 min. The results are discussed in terms of the emerging literature indicating that the internal representation of the body is dynamic and flexible.

16.
Sci Rep ; 12(1): 6426, 2022 04 19.
Article in English | MEDLINE | ID: mdl-35440744

ABSTRACT

Falls are a common cause of injury in older adults (OAs), and age-related declines across the sensory systems are associated with increased falls risk. The vestibular system is particularly important for maintaining balance and supporting safe mobility, and aging has been associated with declines in vestibular end-organ functioning. However, few studies have examined potential age-related differences in vestibular perceptual sensitivities or their association with postural stability. Here we used an adaptive-staircase procedure to measure detection and discrimination thresholds in 19 healthy OAs and 18 healthy younger adults (YAs), by presenting participants with passive heave (linear up-and-down translations) and pitch (forward-backward tilt rotations) movements on a motion-platform in the dark. We also examined participants' postural stability under various standing-balance conditions. Associations among these postural measures and vestibular perceptual thresholds were further examined. Ultimately, OAs showed larger heave and pitch detection thresholds compared to YAs, and larger perceptual thresholds were associated with greater postural sway, but only in OAs. Overall, these results suggest that vestibular perceptual sensitivity declines with older age and that such declines are associated with poorer postural stability. Future studies could consider the potential applicability of these results in the development of screening tools for falls prevention in OAs.
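The adaptive-staircase procedure used here adjusts stimulus intensity trial by trial, homing in on the detection threshold. The sketch below is a generic two-down/one-up rule with a deterministic simulated observer; the step size, reversal count, and convergence point are illustrative, not the study's actual parameters:

```python
def run_staircase(detects, start=2.0, step=0.25, floor=0.05, n_reversals=8):
    """Generic two-down/one-up adaptive staircase: intensity decreases after
    two consecutive detections and increases after each miss; the threshold
    is estimated from the intensities at the final reversals."""
    intensity, hits = start, 0
    reversals, last_direction = [], 0
    while len(reversals) < n_reversals:
        if detects(intensity):
            hits += 1
            if hits < 2:
                continue          # need two hits in a row before stepping down
            hits, direction = 0, -1
            intensity = max(floor, intensity - step)
        else:
            hits, direction = 0, +1
            intensity += step
        if last_direction and direction != last_direction:
            reversals.append(intensity)  # direction change: record a reversal
        last_direction = direction
    return sum(reversals[-6:]) / len(reversals[-6:])

# Deterministic simulated observer who detects motion at or above 1.0 (arbitrary units).
threshold = run_staircase(lambda i: i >= 1.0)
print(threshold)  # 0.875 -- the staircase oscillates between 0.75 and 1.0
```

Real observers respond probabilistically rather than deterministically, so this rule converges on the intensity detected on roughly 70.7% of trials rather than on a hard cutoff.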


Subjects
Vestibular Labyrinth, Aged, Humans, Movement, Pitch Perception, Postural Balance
17.
J Vestib Res ; 32(4): 325-340, 2022.
Article in English | MEDLINE | ID: mdl-34719448

ABSTRACT

BACKGROUND: Humans demonstrate many physiological changes in microgravity for which long-duration head down bed rest (HDBR) is a reliable analog. However, information on how HDBR affects sensory processing is lacking. OBJECTIVE: We previously showed [25] that microgravity alters the weighting applied to visual cues in determining the perceptual upright (PU), an effect that lasts long after return. Does long-duration HDBR have comparable effects? METHODS: We assessed static spatial orientation using the luminous line test (subjective visual vertical, SVV) and the oriented character recognition test (PU) before, during and after 21 days of 6° HDBR in 10 participants. Methods were essentially identical to those previously used in orbit [25]. RESULTS: Overall, HDBR had no effect on the reliance on visual relative to body cues in determining the PU. However, when considering the three critical time points (pre-bed rest, end of bed rest, and 14 days post-bed rest) there was a significant decrease in reliance on visual relative to body cues, as found in microgravity. The ratio had an average time constant of 7.28 days and returned to pre-bed-rest levels within 14 days. The SVV was unaffected. CONCLUSIONS: We conclude that bed rest can be a useful analog for the study of the perception of static self-orientation during long-term exposure to microgravity. More detailed work on the precise time course of our effects is needed in both bed rest and microgravity conditions.


Subjects
Bed Rest, Weightlessness, Head-Down Tilt/physiology, Humans, Perception, Time Factors, Weightlessness Simulation
18.
Atten Percept Psychophys ; 84(1): 25-46, 2022 Jan.
Article in English | MEDLINE | ID: mdl-34704212

ABSTRACT

Judging object speed during observer self-motion requires disambiguating retinal stimulation from two sources: self-motion and object motion. According to the Flow Parsing hypothesis, observers estimate their own motion, then subtract the corresponding retinal motion from the total retinal stimulation and interpret the remaining stimulation as pertaining to object motion. Subtracting noisier self-motion information from retinal input should lead to a decrease in precision. Furthermore, when self-motion is only simulated visually, self-motion is likely to be underestimated, yielding an overestimation of target speed when target and observer move in opposite directions and an underestimation when they move in the same direction. We tested this hypothesis with a two-alternative forced-choice task in which participants judged which of two motions, presented in an immersive 3D environment, was faster. One motion interval contained a ball cloud whose speed was selected dynamically according to a PEST staircase, while the other contained one big target travelling laterally at a fixed speed. While viewing the big target, participants were either static or experienced visually simulated lateral self-motion in the same or opposite direction of the target. Participants were not significantly biased in either motion profile, and precision was only significantly lower when participants moved visually in the direction opposite to the target. We conclude that, when immersed in an ecologically valid 3D environment with rich self-motion cues, participants perceive an object's speed accurately at a small precision cost, even when self-motion is simulated only visually.


Subjects
Motion Perception, Cues, Humans, Motion, Photic Stimulation, Retina, Visual Perception
19.
Ear Hear; 43(2): 420-435, 2022.
Article in English | MEDLINE | ID: mdl-34534156

ABSTRACT

OBJECTIVES: Older adults with age-related hearing loss (ARHL) are at greater risk of falling and have greater mobility problems than older adults with normal hearing (NH). The underlying cause of these associations remains unclear. One possible reason is that age-related declines in the vestibular system could parallel those observed in the auditory system within the same individuals. Here, we compare the sensitivity of vestibular perceptual abilities (psychophysics), vestibular end-organ functioning (vestibular evoked myogenic potentials and video head impulse tests), and standing balance (posturography) in healthy older adults with and without ARHL. DESIGN: A total of 46 community-dwelling older adults, 23 with ARHL and 23 with NH, were passively translated in heave (up and down) and rotated in pitch (tilted forward and backward) in the dark using a motion platform. Using an adaptive staircase psychophysical procedure, participants' heave and pitch detection and discrimination thresholds were determined. In a posturography task, participants' center of pressure (COP) path length was measured as they stood on a forceplate with eyes open and closed, on firm and compliant surfaces, with and without sound suppression. Baseline motor, cognitive, and sensory functioning, including vestibular end-organ function, were measured. RESULTS: Individuals with ARHL were less sensitive at discriminating pitch movements compared to older adults with NH. Poorer self-reported hearing abilities were also associated with poorer pitch discrimination. In addition to pitch discrimination thresholds, lower pitch detection thresholds were significantly associated with hearing loss in the low-frequency range. Less stable standing balance was significantly associated with poorer vestibular perceptual sensitivity. DISCUSSION: These findings provide evidence for an association between ARHL and reduced vestibular perceptual sensitivity.
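The thresholds above were obtained with an adaptive staircase. As a minimal sketch of how such a procedure converges on a threshold, here is a 2-down/1-up staircase with a deterministic simulated observer (the study's exact rule, step sizes, and stopping criterion are not specified in the abstract; these values are assumptions):

```python
def two_down_one_up(threshold, start=4.0, step=0.5, n_reversals=6):
    """Minimal 2-down/1-up adaptive staircase (illustrative sketch).

    The simulated observer responds correctly whenever the stimulus
    level is at or above `threshold`, a deterministic stand-in for a
    real psychometric function. Returns the mean of the reversal
    levels as the threshold estimate.
    """
    level, streak, direction, reversals = start, 0, 0, []
    while len(reversals) < n_reversals:
        correct = level >= threshold
        if correct:
            streak += 1
            if streak == 2:                  # two in a row -> step down
                streak = 0
                if direction == +1:
                    reversals.append(level)  # up-to-down reversal
                direction = -1
                level -= step
        else:                                # any miss -> step up
            streak = 0
            if direction == -1:
                reversals.append(level)      # down-to-up reversal
            direction = +1
            level += step
    return sum(reversals) / len(reversals)

print(two_down_one_up(threshold=2.0))        # -> 1.75
```

With a real observer, the 2-down/1-up rule converges near the 70.7%-correct point of the psychometric function; here the deterministic observer makes the staircase oscillate within one step of the true threshold.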


Subjects
Presbycusis, Vestibular Evoked Myogenic Potentials, Vestibular Labyrinth, Aged, Hearing, Humans, Postural Balance/physiology, Vestibular Evoked Myogenic Potentials/physiology, Vestibular Labyrinth/physiology
20.
Perception; 51(1): 25-36, 2022 Jan.
Article in English | MEDLINE | ID: mdl-34913755

ABSTRACT

Here, we investigate how body orientation relative to gravity affects the perceived size of visual targets. In virtual reality, participants judged the size of a visual target projected at simulated distances between 2 and 10 m, comparing it to a physical reference length held in their hands while they were standing, lying prone, or lying supine. To perceive the target as matching the physical reference length, participants needed to make its visual size 5.4% larger when supine and 10.1% larger when prone, compared to when they were upright. The need to enlarge the target when lying down suggests several possibilities that are not mutually exclusive. Participants may have perceived the targets as smaller while tilted than when upright; they may have perceived the targets as closer while tilted; or they may have perceived the physical reference length as longer while tilted. Misperceiving objects as larger and/or closer when lying down may provide a survival benefit in such a vulnerable position.
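The reported 5.4% and 10.1% adjustments are changes in the target's linear size at a fixed simulated distance. A small sketch (with an assumed reference length and one assumed distance from the study's 2 to 10 m range) shows that in this small-angle regime the matched retinal (angular) size scales almost one-to-one with the physical adjustment:

```python
import math

def visual_angle_deg(size_m, distance_m):
    # Angle (degrees) subtended at the eye by an object of linear size size_m.
    return math.degrees(2 * math.atan(size_m / (2 * distance_m)))

reference = 0.30                # hypothetical hand-held reference length (m)
distance = 5.0                  # an assumed simulated distance (study used 2-10 m)

upright = visual_angle_deg(reference, distance)
supine = visual_angle_deg(reference * 1.054, distance)   # +5.4% setting
prone = visual_angle_deg(reference * 1.101, distance)    # +10.1% setting

print(f"upright {upright:.3f} deg, supine {supine:.3f} deg, prone {prone:.3f} deg")
```

Because the subtended angles are only a few degrees, the angular-size ratio supine/upright is ~1.054, so the percentage adjustments can be read directly as retinal-size differences between postures.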


Subjects
Gravitation, Orientation, Hand, Humans