Results 1 - 20 of 39
1.
Cereb Cortex ; 34(6)2024 Jun 04.
Article in English | MEDLINE | ID: mdl-38897817

ABSTRACT

Recent work suggests that the adult human brain is highly adaptable in its sensory processing. In this context, it has also been suggested that structural "blueprints" may fundamentally constrain neuroplastic change, e.g. in response to sensory deprivation. Here, we trained 12 blind participants and 14 sighted participants in echolocation over a 10-week period, and used MRI in a pre-post design to measure functional and structural brain changes. We found that blind participants and sighted participants together showed a training-induced increase in activation in left and right V1 in response to echoes, a finding difficult to reconcile with the view that sensory cortex is strictly organized by modality. Further, blind participants and sighted participants showed a training-induced increase in activation in right A1 in response to sounds per se (i.e. not echo-specific), and this was accompanied by an increase in gray matter density in right A1 in blind participants and in adjacent acoustic areas in sighted participants. The similarity in functional results between sighted participants and blind participants is consistent with the idea that reorganization may be governed by similar principles in the two groups, yet our structural analyses also showed differences between the groups, suggesting that a more nuanced view may be required.


Subject(s)
Auditory Cortex, Blindness, Magnetic Resonance Imaging, Visual Cortex, Humans, Blindness/physiopathology, Blindness/diagnostic imaging, Male, Adult, Female, Auditory Cortex/diagnostic imaging, Auditory Cortex/physiology, Auditory Cortex/physiopathology, Visual Cortex/diagnostic imaging, Visual Cortex/physiology, Young Adult, Neuronal Plasticity/physiology, Acoustic Stimulation, Brain Mapping, Middle Aged, Auditory Perception/physiology, Echolocation/physiology
2.
J Neurosci ; 43(24): 4470-4486, 2023 06 14.
Article in English | MEDLINE | ID: mdl-37127360

ABSTRACT

In the investigation of the brain areas involved in human spatial navigation, the traditional focus has been on visually guided navigation in sighted people. Consequently, it is unclear whether the involved areas also support navigational abilities in other modalities. We explored this possibility by testing whether the occipital place area (OPA), a region associated with visual boundary-based navigation in sighted people, has a similar role in echo-acoustically guided navigation in blind human echolocators. We used fMRI to measure brain activity in 6 blind echolocation experts (EEs; five males, one female), 12 blind controls (BCs; six males, six females), and 14 sighted controls (SCs; eight males, six females) as they listened to prerecorded echolocation sounds that conveyed either a route taken through one of three maze environments, a scrambled (i.e., spatiotemporally incoherent) control sound, or a no-echo control sound. We found significantly greater activity in the OPA of EEs, but not the control groups, when they listened to the coherent route sounds relative to the scrambled sounds. This provides evidence that the OPA of the human navigation brain network is not strictly tied to the visual modality but can be recruited for nonvisual navigation. We also found that EEs, but not BCs or SCs, recruited early visual cortex for processing of echo acoustic information. This is consistent with the recent notion that the human brain is organized flexibly by task rather than by specific modalities.

SIGNIFICANCE STATEMENT: There has been much research on the brain areas involved in visually guided navigation, but we do not know whether the same or different brain regions are involved when blind people use a sense other than vision to navigate. In this study, we show that one part of the brain (occipital place area) known to play a specific role in visually guided navigation is also active in blind human echolocators when they use reflected sound to navigate their environment. This finding opens up new ways of understanding how people navigate, and informs our ability to provide rehabilitative support to people with vision loss.


Subject(s)
Blindness, Echolocation, Male, Animals, Humans, Female, Ocular Vision, Auditory Perception, Occipital Lobe, Magnetic Resonance Imaging
3.
Psychol Sci ; 33(7): 1143-1153, 2022 07.
Article in English | MEDLINE | ID: mdl-35699555

ABSTRACT

Here, we report novel empirical results from a psychophysical experiment in which we tested the echolocation abilities of nine blind adult human experts in click-based echolocation. We found that they had better acuity in localizing a target and used lower intensity emissions (i.e., mouth clicks) when a target was placed 45° off to the side compared with when it was placed at 0° (straight ahead). We provide a possible explanation of the behavioral result in terms of binaural-intensity signals, which appear to change more rapidly around 45°. The finding that echolocators have better echo-localization off axis is surprising, because for human source localization (i.e., regular spatial hearing), it is well known that performance is best when targets are straight ahead (0°) and decreases as targets move farther to the side. This may suggest that human echolocation and source hearing rely on different acoustic cues and that human spatial hearing has more facets than previously thought.
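The proposed explanation rests on how quickly the binaural-intensity cue (interaural level difference, ILD) changes with azimuth. The sketch below illustrates that reasoning only; the ILD-versus-azimuth values are purely hypothetical, not data from the study, and the point is simply that the relevant quantity is the local slope of the cue, i.e. how much it changes per degree of target displacement.

```python
# Hedged illustration (hypothetical numbers, not the study's data): given an
# ILD-versus-azimuth curve, compute its local slope. A steeper slope near
# 45 degrees would mean a small angular displacement changes the binaural
# cue more there than straight ahead.
import numpy as np

azimuth_deg = np.array([0.0, 15.0, 30.0, 45.0, 60.0, 75.0, 90.0])
ild_db = np.array([0.0, 1.0, 3.0, 6.5, 9.0, 10.0, 10.5])  # made-up ILD values (dB)

slope_db_per_deg = np.gradient(ild_db, azimuth_deg)  # local rate of change of the cue
for az, slope in zip(azimuth_deg, slope_db_per_deg):
    print(f"{az:4.0f} deg azimuth: {slope:.3f} dB per degree")
```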


Subject(s)
Echolocation, Sound Localization, Adult, Animals, Cues (Psychology), Hearing, Humans, Mouth
4.
Exp Brain Res ; 239(12): 3625-3633, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34609546

ABSTRACT

What factors are important in the calibration of mental representations of auditory space? A substantial body of research investigating the audiospatial abilities of people who are blind has shown that visual experience might be an important factor for accurate performance in some audiospatial tasks. Yet, it has also been shown that long-term experience using click-based echolocation might play a similar role, with blind expert echolocators demonstrating auditory localization abilities superior to those of blind people who do not use click-based echolocation (Vercillo et al., Neuropsychologia 67: 35-40, 2015). Based on this, we might predict that training in click-based echolocation improves performance on auditory localization tasks in people who are blind. Here we investigated this hypothesis in a sample of 12 adults who have been blind from birth. We did not find evidence for an improvement in auditory localization after 10 weeks of training, despite significant improvement in echolocation ability. It is possible that longer-term experience with click-based echolocation is required for effects to develop, or that other factors explain the association between echolocation expertise and superior auditory localization. Given the practical relevance of click-based echolocation for people who are visually impaired, future research should address these questions.


Subject(s)
Echolocation, Sound Localization, Adult, Animals, Blindness, Humans
5.
Proc Biol Sci ; 286(1912): 20191910, 2019 10 09.
Article in English | MEDLINE | ID: mdl-31575359

ABSTRACT

The functional specializations of cortical sensory areas were traditionally viewed as being tied to specific modalities. A radically different emerging view is that the brain is organized by task rather than sensory modality, but it has not yet been shown that this applies to primary sensory cortices. Here, we report such evidence by showing that primary 'visual' cortex can be adapted to map spatial locations of sound in blind humans who regularly perceive space through sound echoes. Specifically, we objectively quantify the similarity between measured stimulus maps for sound eccentricity and predicted stimulus maps for visual eccentricity in primary 'visual' cortex (using a probabilistic atlas based on cortical anatomy) to find that stimulus maps for sound in expert echolocators are directly comparable to those for vision in sighted people. Furthermore, the degree of this similarity is positively related to echolocation ability. We also rule out explanations based on top-down modulation of brain activity, e.g. through imagery. This result is clear evidence that task-specific organization can extend even to primary sensory cortices, and is therefore pivotal to reinterpreting the functional organization of the human brain.
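One generic way to quantify the similarity between a measured sound-eccentricity map and an atlas-predicted visual-eccentricity map is a voxel-wise correlation across V1 voxels. The sketch below illustrates only that general idea on synthetic values; it is not the paper's exact procedure, and the array names and numbers are placeholders.

```python
# Minimal sketch with synthetic values (not the study's data or exact method):
# correlate atlas-predicted visual eccentricity with measured preferred sound
# eccentricity across V1 voxels as a simple map-similarity index.
import numpy as np

rng = np.random.default_rng(1)
n_voxels = 500
predicted_ecc = rng.uniform(0.0, 10.0, n_voxels)               # atlas-predicted visual eccentricity (deg)
measured_ecc = predicted_ecc + rng.normal(0.0, 2.0, n_voxels)  # measured preferred sound eccentricity

map_similarity = np.corrcoef(predicted_ecc, measured_ecc)[0, 1]
print(f"map similarity (Pearson r across voxels): {map_similarity:.2f}")
```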


Subject(s)
Blindness, Brain Mapping, Sound Localization, Animals, Echolocation, Humans, Parietal Lobe, Sound, Ocular Vision, Visual Cortex, Visually Impaired Persons
6.
PLoS Comput Biol ; 13(8): e1005670, 2017 Aug.
Article in English | MEDLINE | ID: mdl-28859082

ABSTRACT

Echolocation is the ability to use sound echoes to infer spatial information about the environment. Some blind people have developed extraordinary proficiency in echolocation using mouth-clicks. The first step of human biosonar is the transmission (mouth click) and subsequent reception of the resultant sound through the ear. Existing head-related transfer function (HRTF) databases provide descriptions of reception of the resultant sound. For the current report, we collected a large database of click emissions with three blind people expertly trained in echolocation, which allowed us to perform unprecedented analyses. Specifically, the current report provides the first ever description of the spatial distribution (i.e. beam pattern) of human expert echolocation transmissions, as well as spectro-temporal descriptions at a level of detail not available before. Our data show that transmission levels are fairly constant within a 60° cone emanating from the mouth, but levels drop gradually at further angles, more than for speech. In terms of spectro-temporal features, our data show that emissions are consistently very brief (~3 ms duration) with peak frequencies at 2-4 kHz, but with energy also at 10 kHz. This differs from previous reports of durations of 3-15 ms and peak frequencies of 2-8 kHz, which were based on less detailed measurements. Based on our measurements, we propose to model transmissions as a sum of monotones modulated by a decaying exponential, with angular attenuation by a modified cardioid. We provide model parameters for each echolocator. These results are a step towards developing computational models of human biosonar. For example, in bats, spatial and spectro-temporal features of emissions have been used to derive and test model-based hypotheses about behaviour. The data we present here suggest similar research opportunities within the context of human echolocation. Relatedly, the data are a basis to develop synthetic models of human echolocation that could be virtual (i.e. simulated) or real (i.e. loudspeakers and microphones), and which will help in understanding the link between physical principles and human behaviour.
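The proposed emission model (a sum of monotones modulated by a decaying exponential, with angular attenuation by a modified cardioid) can be sketched compactly. The sketch below is illustrative only: the component frequencies, amplitudes, decay constant and cardioid parameter are placeholders, not the per-echolocator parameters reported in the paper.

```python
# Illustrative sketch of the emission model described in the abstract: a sum of
# monotones (sine components) modulated by a decaying exponential, with angular
# attenuation given by a modified cardioid. All parameter values are
# placeholders, not fitted values from the paper.
import numpy as np

def synthetic_click(t, freqs_hz=(2500.0, 4000.0, 10000.0),
                    amps=(1.0, 0.8, 0.3), decay_s=0.0008):
    """Sum of monotones, each weighted by amps, times a decaying exponential."""
    click = sum(a * np.sin(2.0 * np.pi * f * t) for f, a in zip(freqs_hz, amps))
    return click * np.exp(-t / decay_s)

def angular_attenuation(theta_rad, k=0.7):
    """Modified cardioid: nearly flat within a cone around 0 deg, dropping off beyond."""
    return k + (1.0 - k) * np.cos(theta_rad)

fs = 96_000                              # sampling rate (Hz)
t = np.arange(0.0, 0.003, 1.0 / fs)      # ~3 ms click, as reported in the abstract
click_at_60deg = synthetic_click(t) * angular_attenuation(np.deg2rad(60.0))
print(f"{click_at_60deg.size} samples, relative level at 60 deg: "
      f"{angular_attenuation(np.deg2rad(60.0)):.2f}")
```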


Subject(s)
Blindness/rehabilitation, Echolocation/physiology, Biological Models, Sound Localization/physiology, Adult, Animals, Factual Databases, Humans, Male, Middle Aged, Mouth/physiology, Computer-Assisted Signal Processing, Sound Spectrography
7.
Neurocase ; 21(4): 465-70, 2015.
Article in English | MEDLINE | ID: mdl-24874426

ABSTRACT

Some blind humans make clicking noises with their mouth and use the reflected echoes to perceive objects and surfaces. This technique can operate as a crude substitute for vision, allowing human echolocators to perceive silent, distal objects. Here, we tested if echolocation would, like vision, show size constancy. To investigate this, we asked a blind expert echolocator (EE) to echolocate objects of different physical sizes presented at different distances. The EE consistently identified the true physical size of the objects independent of distance. In contrast, blind and blindfolded sighted controls did not show size constancy, even when encouraged to use mouth clicks, claps, or other signals. These findings suggest that size constancy is not a purely visual phenomenon, but that it can operate via an auditory-based substitute for vision, such as human echolocation.


Subject(s)
Blindness/psychology, Sound Localization, Space Perception, Acoustic Stimulation, Adult, Animals, Female, Humans, Male, Middle Aged
8.
Exp Brain Res ; 232(6): 1915-25, 2014 Jun.
Article in English | MEDLINE | ID: mdl-24584899

ABSTRACT

The ability of humans to echolocate has been recognized since the 1940s. Little is known about what determines individual differences in echolocation ability, however. Although hearing ability has been suggested as an important factor in blind people and sighted-trained echolocators, there is evidence to suggest that this may not be the case for sighted novices. Therefore, non-auditory aspects of human cognition might be relevant. Previous brain imaging studies have shown activation of the early 'visual', i.e. calcarine, cortex during echolocation in blind echolocation experts, and also during visual imagery in blind and sighted people. Here, we therefore investigated the relationship between echolocation ability and vividness of visual imagery (VVI). Twenty-four sighted echolocation novices completed Marks' VVI questionnaire (Br J Psychol 1:17-24, 1973) and also performed an echolocation size-discrimination task. Furthermore, they participated in a battery of auditory tests that determined their ability to detect fluctuations in sound frequency and intensity, as well as differences in hearing between the right and left ears. A correlational analysis revealed a significant relationship between participants' VVI and echolocation ability, i.e. participants with stronger VVI also had higher echolocation ability, even when differences in auditory abilities were taken into account. In terms of underlying mechanisms, we suggest that either the use of visual imagery is a strategy for echolocation, or that visual imagery and echolocation both depend on the ability to recruit calcarine cortex for cognitive tasks that do not rely on retinal input.
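The key analysis is a correlation between VVI and echolocation ability with auditory abilities taken into account. One standard way to do this is a partial correlation computed from regression residuals; the sketch below illustrates the idea on synthetic data and is not the study's analysis code.

```python
# Minimal sketch with synthetic data (not the study's): a partial correlation
# between vividness of visual imagery (VVI) and echolocation ability,
# controlling for auditory ability, computed from linear-regression residuals.
import numpy as np

rng = np.random.default_rng(0)
n = 24                                        # matches the reported sample size
auditory = rng.normal(size=n)                 # covariate: composite auditory score
vvi = 0.3 * auditory + rng.normal(size=n)     # hypothetical VVI scores
echo = 0.3 * auditory + 0.5 * vvi + rng.normal(size=n)  # hypothetical echolocation scores

def residuals(y, x):
    """Residuals of y after regressing out x (simple linear regression)."""
    slope, intercept = np.polyfit(x, y, 1)
    return y - (slope * x + intercept)

r_partial = np.corrcoef(residuals(vvi, auditory), residuals(echo, auditory))[0, 1]
print(f"partial correlation r(VVI, echolocation | auditory) = {r_partial:.2f}")
```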


Subject(s)
Auditory Threshold/physiology, Imagery (Psychotherapy), Psychoacoustics, Sound Localization/physiology, Ocular Vision/physiology, Acoustic Stimulation, Adolescent, Adult, Analysis of Variance, Animals, Female, Humans, Male, Statistics as Topic, Young Adult
9.
Front Rehabil Sci ; 4: 1098624, 2023.
Article in English | MEDLINE | ID: mdl-37284336

ABSTRACT

Click-based echolocation can support mobility and orientation in people with vision impairments (VI) when used alongside other mobility methods, yet only a small number of people with VI use it. Previous research on echolocation has addressed the skill itself, i.e. how echolocation works and its brain basis. Our report is the first to address the very different question of professional practice for people with VI. VI professionals are well placed to affect how a person with VI might learn about, experience or use click-based echolocation. Here, we therefore investigated whether training in click-based echolocation for VI professionals leads to a change in their professional practice. The training was delivered via 6-h workshops throughout the UK. It was free to attend, and people signed up via a publicly available website. We received follow-up feedback in the form of yes/no answers and free-text comments. Yes/no answers showed that 98% of participants had changed their professional practice as a consequence of the training. Free-text responses were analysed using content analysis, and we found that 32%, 11.7% and 46.6% of responses indicated a change in information processing, verbal influencing, or instruction and practice, respectively. This attests to the potential of VI professionals to act as multipliers of training in click-based echolocation, and thereby to improve the lives of people with VI. The training we evaluated here could feasibly be integrated into VI Rehabilitation or VI Habilitation training as implemented at higher education institutions (HEIs) or through continuing professional development (CPD).

10.
J Exp Psychol Hum Percept Perform ; 49(5): 600-622, 2023 May.
Article in English | MEDLINE | ID: mdl-37261769

ABSTRACT

It is clear that people can learn a new sensory skill-a new way of mapping sensory inputs onto world states. It remains unclear how flexibly a new sensory skill can become embedded in multisensory perception and decision-making. To address this, we trained typically sighted participants (N = 12) to use a new echo-like auditory cue to distance in a virtual world, together with a noisy visual cue. Using model-based analyses, we tested for key markers of efficient multisensory perception and decision-making with the new skill. We found that 12 of 14 participants learned to judge distance using the novel auditory cue. Their use of this new sensory skill showed three key features: (a) It enhanced the speed of timed decisions; (b) it largely resisted interference from a simultaneous digit span task; and (c) it integrated with vision in a Bayes-like manner to improve precision. We also show some limits following this relatively short training: Precision benefits were lower than the Bayes-optimal prediction, and there was no forced fusion of signals. We conclude that people already embed new sensory skills in flexible multisensory perception and decision-making after a short training period. A key application of these insights is to the development of sensory augmentation systems that can enhance human perceptual abilities in novel ways. The limitations we reveal (sub-optimality, lack of fusion) provide a foundation for further investigations of the limits of these abilities and their brain basis. (PsycInfo Database Record (c) 2023 APA, all rights reserved).
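The "Bayes-like" integration referred to above can be made concrete with the standard inverse-variance weighting rule, which predicts a combined estimate more precise than either single cue. The sketch below uses made-up cue reliabilities and estimates purely for illustration; it is not the study's model-based analysis.

```python
# Illustrative sketch (hypothetical numbers, not the study's data): the
# inverse-variance ("Bayes-optimal") benchmark for combining a noisy visual
# cue with the new echo-like auditory distance cue.
sigma_vision = 2.0   # hypothetical SD of the visual distance estimate
sigma_audio = 3.0    # hypothetical SD of the auditory distance estimate

w_vision = (1 / sigma_vision**2) / (1 / sigma_vision**2 + 1 / sigma_audio**2)
w_audio = 1.0 - w_vision

est_vision, est_audio = 10.0, 11.5   # hypothetical single-cue distance estimates
est_combined = w_vision * est_vision + w_audio * est_audio

sd_combined = (1 / (1 / sigma_vision**2 + 1 / sigma_audio**2)) ** 0.5
print(f"weights: vision {w_vision:.2f}, audio {w_audio:.2f}")
print(f"combined estimate {est_combined:.2f}, predicted SD {sd_combined:.2f} "
      f"(lower than either single-cue SD)")
```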


Subject(s)
Learning, Visual Perception, Humans, Bayes Theorem, Auditory Perception, Photic Stimulation
11.
J Neurophysiol ; 105(2): 846-59, 2011 Feb.
Article in English | MEDLINE | ID: mdl-21160005

ABSTRACT

Studies that have investigated how sensory feedback about the moving hand is used to control hand movements have relied on paradigms such as pointing or reaching that require subjects to acquire target locations. In the context of these target-directed tasks, it has been found repeatedly that the human sensory-motor system relies heavily on visual feedback to control the ongoing movement. This finding has been formalized within the framework of statistical optimality, according to which different sources of sensory feedback are combined so as to minimize variance in sensory information during movement control. Importantly, however, many hand movements that people perform every day are not target-directed, but based on allocentric (object-centered) visual information. Examples of allocentric movements are gesture imitation, drawing, or copying. Here we tested whether visual feedback about the moving hand is used in the same way to control target-directed and allocentric hand movements. The results show that visual feedback is used significantly more to reduce movement scatter in the target-directed as compared with the allocentric movement task. Furthermore, we found that differences in the use of visual feedback between target-directed and allocentric hand movements cannot be explained based on differences in uncertainty about the movement goal. We conclude that the role played by visual feedback in movement control is fundamentally different for target-directed and allocentric movements. The results therefore cast doubt on the idea that models of sensorimotor control developed exclusively from data obtained in target-directed paradigms are also valid in the context of allocentric tasks, such as drawing, copying, or imitative gesturing, that characterize much of human behavior, and suggest that current computational and neural models of sensorimotor control have to be modified to accommodate performance in such tasks.


Subject(s)
Biofeedback (Psychology)/physiology, Hand/physiology, Motion Perception/physiology, Movement/physiology, Task Performance and Analysis, Adult, Female, Humans, Male, Online Systems
12.
Exp Brain Res ; 211(2): 313-28, 2011 Jun.
Article in English | MEDLINE | ID: mdl-21516448

ABSTRACT

Many movements that people perform every day are directed at visual targets, e.g., when we press an elevator button. However, many other movements are not target-directed, but are based on allocentric (object-centered) visual information. Examples of allocentric movements are gesture imitation, drawing or copying. Here, we show a reaction time difference between these two types of movements in four separate experiments. In Exp. 1, subjects moved their eyes freely and used direct hand movements. In Exp. 2, subjects moved their eyes freely and their movements were tool-mediated (computer mouse). In Exp. 3, subjects fixated a central target, and we manipulated the visual field in which visual information was presented. Exp. 4 was identical to Exp. 3 except that visual information about targets disappeared before movement onset. In all four experiments, reaction times in the allocentric task were approximately 35 ms slower than they were in the target-directed task. We suggest that this difference in reaction time between the two tasks reflects the fact that allocentric, but not target-directed, movements recruit the ventral stream, in particular lateral occipital cortex, which increases processing time. We also observed an advantage for movements made in the lower visual field as measured by movement variability, whether those movements were allocentric or target-directed. This latter result, we argue, reflects the role of the dorsal visual stream in the online control of movements in both kinds of tasks.


Subject(s)
Movement/physiology, Photic Stimulation/methods, Psychomotor Performance/physiology, Reaction Time/physiology, Visual Perception/physiology, Humans, Time Factors
13.
J Exp Psychol Hum Percept Perform ; 47(2): 269-281, 2021 Feb.
Article in English | MEDLINE | ID: mdl-33271045

ABSTRACT

Making sense of the world requires perceptual constancy-the stable perception of an object across changes in one's sensation of it. To investigate whether constancy is intrinsic to perception, we tested whether humans can learn a form of constancy that is unique to a novel sensory skill (here, the perception of objects through click-based echolocation). Participants judged whether two echoes were different either because: (a) the clicks were different, or (b) the objects were different. For differences carried through spectral changes (but not level changes), blind expert echolocators spontaneously showed a high constancy ability (mean d' = 1.91) compared to sighted and blind people new to echolocation (mean d' = 0.69). Crucially, sighted controls improved rapidly in this ability through training, suggesting that constancy emerges in a domain with which the perceiver has no prior experience. This provides strong evidence that constancy is intrinsic to human perception. (PsycInfo Database Record (c) 2021 APA, all rights reserved).
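The reported constancy scores are d' (d-prime) sensitivity indices. As a minimal illustration of how such an index is commonly computed, the sketch below applies the simple yes/no signal-detection formula to made-up trial counts; a full same-different design is often analysed with a more specific signal-detection model, so this is not the study's exact analysis.

```python
# Minimal sketch with made-up trial counts: computing a d' sensitivity index
# from hit and false-alarm rates using the basic yes/no formula.
from statistics import NormalDist

hits, misses = 45, 15                       # hypothetical "objects differed" trials
false_alarms, correct_rejections = 20, 40   # hypothetical "same object" trials

hit_rate = hits / (hits + misses)
fa_rate = false_alarms / (false_alarms + correct_rejections)

z = NormalDist().inv_cdf                    # inverse of the standard normal CDF
d_prime = z(hit_rate) - z(fa_rate)
print(f"d' = {d_prime:.2f}")
```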


Subject(s)
Echolocation, Sound Localization, Animals, Blindness, Humans, Perception, Sensation
14.
PLoS One ; 16(6): e0252330, 2021.
Article in English | MEDLINE | ID: mdl-34077457

ABSTRACT

Understanding the factors that determine if a person can successfully learn a novel sensory skill is essential for understanding how the brain adapts to change, and for providing rehabilitative support for people with sensory loss. We report a training study investigating the effects of blindness and age on the learning of a complex auditory skill: click-based echolocation. Blind and sighted participants of various ages (21-79 yrs; median blind: 45 yrs; median sighted: 26 yrs) trained in 20 sessions over the course of 10 weeks in various practical and virtual navigation tasks. Blind participants also took part in a 3-month follow-up survey assessing the effects of the training on their daily life. We found that both sighted and blind people improved considerably on all measures, and in some cases performed comparably to expert echolocators at the end of training. Somewhat surprisingly, sighted people performed better than those who were blind in some cases, although our analyses suggest that this might be better explained by the younger age (or superior binaural hearing) of the sighted group. Importantly, however, neither age nor blindness was a limiting factor in participants' rate of learning (i.e. their difference in performance from the first to the final session) or in their ability to apply their echolocation skills to novel, untrained tasks. Furthermore, in the follow-up survey, all participants who were blind reported improved mobility, and 83% reported better independence and wellbeing. Overall, our results suggest that the ability to learn click-based echolocation is not strongly limited by age or level of vision. This has positive implications for the rehabilitation of people with vision loss or in the early stages of progressive vision loss.


Subject(s)
Acoustic Stimulation, Physiological Adaptation, Blindness/physiopathology, Learning, Sound Localization/physiology, Visually Impaired Persons/psychology, Adult, Age Factors, Aged, Animals, Biomechanical Phenomena, Blindness/psychology, Female, Humans, Male, Middle Aged, Time Factors, Young Adult
15.
J Vis ; 10(5): 17, 2010 May 01.
Article in English | MEDLINE | ID: mdl-20616135

ABSTRACT

A new computational analysis is described that is capable of estimating the 3D shapes of continuously curved surfaces with anisotropic textures that are viewed with negligible perspective. This analysis assumes that the surface texture is homogeneous, and it makes specific predictions about how the apparent shape of a surface should be distorted in cases where that assumption is violated. Two psychophysical experiments are reported in an effort to test those predictions, and the results confirm that observers' ordinal shape judgments are consistent with what would be expected based on the model. The limitations of this analysis are also considered, and a complementary model is discussed that is only appropriate for surfaces viewed with large amounts of perspective.


Subject(s)
Depth Perception/physiology, Form Perception/physiology, Three-Dimensional Imaging, Visual Pattern Recognition/physiology, Humans, Psychological Models, Photic Stimulation/methods, Psychophysics, Surface Properties
16.
J Vis ; 10(3): 3.1-27, 2010 Mar 23.
Article in English | MEDLINE | ID: mdl-20377280

ABSTRACT

In their day-to-day activities, human beings constantly generate behavior, such as pointing, grasping or verbal reports, on the basis of visible target locations. This raises the question of how the brain represents target locations. One possibility is that the brain represents them metrically, i.e. in terms of distance and direction. Another equally plausible possibility is that the brain represents locations non-metrically, using for example ordered geometry or topology. Here we report two experiments that were designed to test whether the brain represents locations metrically or non-metrically. We measured accuracy and variability of visually guided reach-to-point movements (Experiment 1) and probe-stimulus adjustments (Experiment 2). The specific procedure of informing subjects about the relevant response on each trial enabled us to dissociate the use of non-metric target location from the use of metric distance and direction in head/eye-centered, hand-centered and externally defined (allocentric) coordinates. The behavioral data show that subjects' responses are least variable when they can direct their response at a visible target location, the only condition that permitted the use of non-metric information about target location in our experiments. Data from Experiments 1 and 2 correspond well quantitatively. Response variability in non-metric conditions cannot be predicted based on response variability in metric conditions. We conclude that the brain uses non-metric geometrical structure to represent locations.


Subject(s)
Neurological Models, Movement/physiology, Psychomotor Performance/physiology, Space Perception/physiology, Biomechanical Phenomena/physiology, Female, Hand/physiology, Humans, Male, Photic Stimulation
17.
J Exp Psychol Gen ; 149(12): 2314-2331, 2020 Dec.
Article in English | MEDLINE | ID: mdl-32324025

ABSTRACT

The human brain may use recent sensory experience to create sensory templates that are then compared to incoming sensory input, that is, "knowing what to listen for." This can lead to greater perceptual sensitivity, as long as the relevant properties of the target stimulus can be reliably estimated from past sensory experiences. Echolocation is an auditory skill probably best understood in bats, but humans can also echolocate. Here we investigated for the first time whether echolocation in humans involves the use of sensory templates derived from recent sensory experiences. Our results showed that when there was certainty in the acoustic properties of the echo relative to the emission, either in temporal onset, spectral content or level, people detected the echo more accurately than when there was uncertainty. In addition, we found that people were more accurate when the emission's spectral content was certain but, surprisingly, not when either its level or temporal onset was certain. Importantly, the lack of an effect of temporal onset of the emission is counter to that found previously for tasks using nonecholocation sounds, suggesting that the underlying mechanisms might be different for echolocation and nonecholocation sounds. Furthermore, the effects of stimulus certainty were no different for people with and without experience in echolocation, suggesting that stimulus-specific sensory templates can be used in a skill that people have never used before. From an applied perspective our results suggest that echolocation instruction should encourage users to make clicks that are similar to one another in their spectral content. (PsycInfo Database Record (c) 2020 APA, all rights reserved).


Subject(s)
Acoustic Stimulation/methods, Auditory Perception/physiology, Echolocation/physiology, Uncertainty, Adult, Aged, Animals, Female, Humans, Male, Middle Aged, Young Adult
18.
J Exp Psychol Hum Percept Perform ; 46(1): 21-35, 2020 Jan.
Article in English | MEDLINE | ID: mdl-31556685

ABSTRACT

People use sensory, in particular visual, information to guide actions such as walking around obstacles, grasping or reaching. However, it is presently unclear how malleable the sensorimotor system is. The present study investigated this by measuring how click-based echolocation may be used to avoid obstacles while walking. We tested 7 blind echolocation experts, 14 sighted echolocation beginners, and 10 blind echolocation beginners. For comparison, we also tested 10 sighted participants, who used vision. To maximize the relevance of our research for people with vision impairments, we also included a condition where the long cane was used and considered obstacles at different elevations. Motion capture and sound data were acquired simultaneously. We found that echolocation experts walked just as fast as sighted participants using vision, and faster than either sighted or blind echolocation beginners. Walking paths of echolocation experts indicated early and smooth adjustments, similar to those shown by sighted people using vision and different from the later and more abrupt adjustments of beginners. Further, for all participants, the use of echolocation significantly decreased collision frequency with obstacles at head, but not ground, level. Further analyses showed that participants who made clicks with higher spectral frequency content walked faster, and that for experts higher clicking rates were associated with faster walking. The results highlight that people can use novel sensory information (here, echolocation) to guide actions, demonstrating the action system's ability to adapt to changes in sensory input. They also highlight that regular use of echolocation enhances sensory-motor coordination for walking in blind people. (PsycINFO Database Record (c) 2019 APA, all rights reserved).


Subject(s)
Blindness/physiopathology, Blindness/psychology, Sound Localization, Vision Disorders/psychology, Walking, Adolescent, Adult, Biomechanical Phenomena, Canes, Female, Humans, Male, Vision Disorders/physiopathology, Walking/physiology, Walking/psychology, Young Adult
19.
Neuropsychologia ; 47(5): 1227-44, 2009 Apr.
Article in English | MEDLINE | ID: mdl-19428386

ABSTRACT

Two experiments are reported that were designed to measure the accuracy and reliability of both visually guided hand movements (Exp. 1) and perceptual matching judgments (Exp. 2). The specific procedure for informing subjects of the required response on each trial was manipulated so that some tasks could only be performed using an allocentric representation of the visual target; others could be performed using either an allocentric or hand-centered representation; still others could be performed based on an allocentric, hand-centered or head/eye-centered representation. Both head/eye-centered and hand-centered representations are egocentric because they specify visual coordinates with respect to the subject. The results reveal that accuracy and reliability of both motor and perceptual responses are highest when subjects direct their response towards a visible target location, which allows them to rely on a representation of the target in head/eye-centered coordinates. Systematic changes in averages and standard deviations of responses are observed when subjects cannot direct their response towards a visible target location, but have to represent target distance and direction in either hand-centered or allocentric visual coordinates instead. Subjects' motor and perceptual performance agree well quantitatively. These results strongly suggest that subjects process head/eye-centered representations differently from hand-centered or allocentric representations, but that they process visual information for motor actions and perceptual judgments together.


Subject(s)
Hand/physiology, Head/physiology, Judgment/physiology, Movement/physiology, Psychomotor Performance/physiology, Female, Humans, Male, Photic Stimulation
20.
Cognition ; 193: 104014, 2019 12.
Article in English | MEDLINE | ID: mdl-31302529

ABSTRACT

Cue combination occurs when two independent noisy perceptual estimates are merged together as a weighted average, creating a unified estimate that is more precise than either single estimate alone. Surprisingly, this effect has not been demonstrated compellingly in children under the age of 10 years, in contrast with the array of other multisensory skills that children show even in infancy. Instead, across a wide variety of studies, precision with both cues is no better than with the best single cue, and sometimes worse. Here we provide the first consistent evidence of cue combination in children from 7 to 10 years old. Across three experiments, participants showed evidence of a bimodal precision advantage (Experiments 1a and 1b) and the majority were best fit by a combining model (Experiment 2). The task was to localize a target horizontally with a binaural audio cue and a noisy visual cue in immersive virtual reality. Feedback was given as well, which could both (a) help participants judge how reliable each cue is and (b) help correct between-cue biases that might prevent cue combination. Crucially, our results show cue combination when feedback is only given on single cues; therefore, combination itself was not a strategy learned via feedback. We suggest that children at 7-10 years old are capable of cue combination in principle, but must have sufficient representations of reliabilities and biases in their own perceptual estimates as relevant to the task, which can be facilitated through task-specific feedback.


Subject(s)
Psychological Adaptation/physiology, Auditory Perception/physiology, Cues (Psychology), Psychological Feedback/physiology, Visual Perception/physiology, Child, Female, Humans, Male, Virtual Reality