1.
J Imaging ; 10(1)2024 Jan 18.
Article in English | MEDLINE | ID: mdl-38249011

ABSTRACT

The lack of accessible information conveyed by descriptions of art images presents significant barriers for people with blindness and low vision (BLV) to engage with visual artwork. Most museums cannot easily provide accessible image descriptions that would let BLV visitors build a mental representation of artwork, due to the vastness of collections, limitations of curator training, and current measures for what constitutes effective automated captions. This paper reports the results of two studies investigating the types of information that should be included in high-quality accessible artwork descriptions, based on input from BLV description evaluators. We report on: (1) a qualitative study asking BLV participants for their preferences for layered description characteristics; and (2) an evaluation of several current image-captioning models as applied to an artwork image dataset. We then provide recommendations for researchers working on accessible image captioning and museum engagement applications, with a focus on spatial information access strategies.
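For context on the captioning evaluation in (2), here is a minimal sketch of how one might run current off-the-shelf captioning models over a folder of artwork images. The model choices, file layout, and names below are illustrative assumptions, not the study's actual setup:

```python
# Minimal sketch: generate candidate descriptions for artwork images with
# off-the-shelf captioning models (illustrative, not the study's exact setup).
from pathlib import Path
from transformers import pipeline

# Hypothetical local folder of artwork images; adjust to your own dataset.
IMAGE_DIR = Path("artwork_images")

# Two publicly available captioning models (assumed choices for illustration).
models = {
    "blip-base": "Salesforce/blip-image-captioning-base",
    "vit-gpt2": "nlpconnect/vit-gpt2-image-captioning",
}

for name, checkpoint in models.items():
    captioner = pipeline("image-to-text", model=checkpoint)
    for image_path in sorted(IMAGE_DIR.glob("*.jpg")):
        result = captioner(str(image_path))
        # The pipeline returns a list of dicts with a "generated_text" key.
        print(f"{name}\t{image_path.name}\t{result[0]['generated_text']}")
```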

2.
ACM Trans Access Comput ; 16(2): 1-26, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37427355

ABSTRACT

In this article, we introduce Semantic Interior Mapology (SIM), a web app that allows anyone to quickly trace the floor plan of a building, generating a vectorized representation that can be automatically converted into a tactile map at the desired scale. The design of SIM was informed by a focus group with seven blind participants. Maps generated by SIM at two different scales were tested in a user study with 10 participants, who performed a number of tasks designed to ascertain the spatial knowledge acquired through map exploration. These tasks included cross-map pointing, path finding, and determination of turn direction and walker orientation during imagined path traversal. By and large, participants were able to complete the tasks successfully, suggesting that these types of maps could be useful for pre-journey spatial learning.
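As a rough illustration of the scale conversion such a pipeline needs, here is a minimal sketch mapping traced pixel coordinates to tactile-map coordinates. The function, parameter names, and the 1:100 example are hypothetical, not SIM's actual code:

```python
# Minimal sketch: convert a traced floor-plan polygon (pixel coordinates)
# into tactile-map coordinates at a desired scale. All names and the
# 1:100 example scale are illustrative assumptions.

def to_tactile_coords(polygon_px, px_per_meter, target_scale=100.0,
                      dots_per_mm=8.0):
    """Map pixel-space vertices to embosser dot coordinates.

    polygon_px   -- list of (x, y) vertices from the traced floor plan
    px_per_meter -- pixels per real-world meter in the source drawing
    target_scale -- map scale denominator (e.g., 100 for a 1:100 map)
    dots_per_mm  -- resolution of the tactile embosser
    """
    scaled = []
    for x, y in polygon_px:
        # pixels -> meters in the real building
        mx, my = x / px_per_meter, y / px_per_meter
        # meters -> millimeters on the map at the chosen scale
        mm_x, mm_y = mx * 1000.0 / target_scale, my * 1000.0 / target_scale
        # millimeters -> embosser dots
        scaled.append((mm_x * dots_per_mm, mm_y * dots_per_mm))
    return scaled

# Example: a 10 m x 5 m room traced at 20 px per meter, rendered at 1:100.
room = [(0, 0), (200, 0), (200, 100), (0, 100)]
print(to_tactile_coords(room, px_per_meter=20))
```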

4.
Front Hum Neurosci ; 14: 87, 2020.
Article in English | MEDLINE | ID: mdl-32256329

ABSTRACT

This article starts by discussing the state of the art in accessible interactive maps for use by blind and visually impaired (BVI) people. It then describes a behavioral experiment investigating the efficacy of a new type of low-cost, touchscreen-based multimodal interface, called a vibro-audio map (VAM), for supporting environmental learning, cognitive map development, and wayfinding behavior on the basis of nonvisual sensing. In the study, eight BVI participants learned two floor maps of university buildings, one using the VAM and the other using an analogous hardcopy tactile map (HTM) overlaid on the touchscreen. They were asked to freely explore each map, with the task of learning the entire layout and finding three hidden target locations. After meeting a learning criterion, participants performed an environmental transfer test, where they were brought to the corresponding physical layout and were asked to plan/navigate routes between learned target locations from memory, i.e., without access to the map used at learning. Results from Bayesian analyses aimed at assessing equivalence showed highly similar target localization accuracy and route efficiency performance between conditions, suggesting that the VAM supports the same level of environmental learning, cognitive map development, and wayfinding performance as is possible from interactive displays using traditional tactile map overlays. These results demonstrate the efficacy of the VAM for supporting complex spatial tasks without vision using a commercially available, low-cost interface and open the door to a new era of mobile interactive maps for spatial learning and wayfinding by BVI navigators.
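A vibro-audio map of this kind hinges on a simple interaction loop: vibrate while the finger is over a rendered map line, and speak a label when it enters a target region. A minimal sketch with hypothetical layout data and stubbed platform effectors (not the VAM's actual implementation):

```python
# Minimal sketch of a vibro-audio map event loop: vibrate while the finger
# is on a rendered map line, speak a label when it enters a named region.
# vibrate() and speak() stand in for platform APIs (hypothetical here).
import math

WALL_SEGMENTS = [((0, 0), (10, 0)), ((10, 0), (10, 6))]   # example layout
TARGETS = {"elevator": (10, 3)}                           # example landmark
TOLERANCE = 0.4                                           # finger-width slack

def dist_to_segment(p, a, b):
    """Euclidean distance from point p to segment a-b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == dy == 0:
        return math.hypot(px - ax, py - ay)
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx*dx + dy*dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def on_touch(point, vibrate, speak):
    """Called for each touch sample; fires vibration/audio feedback."""
    if any(dist_to_segment(point, a, b) < TOLERANCE for a, b in WALL_SEGMENTS):
        vibrate()                         # finger is tracing a wall line
    for label, loc in TARGETS.items():
        if math.hypot(point[0] - loc[0], point[1] - loc[1]) < TOLERANCE:
            speak(label)                  # finger entered a target region

# Example with stub effectors:
on_touch((5, 0.1), vibrate=lambda: print("bzz"), speak=print)
```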

5.
Optom Vis Sci ; 95(9): 720-726, 2018 09.
Article in English | MEDLINE | ID: mdl-30169351

ABSTRACT

Touchscreen-based, multimodal graphics represent an area of increasing research in digital access for individuals with blindness or visual impairments; yet little empirical research exists on the effects of screen size on graphical exploration. This work probes whether and when more screen area is necessary to support a pattern-matching task. PURPOSE: Larger touchscreens are thought to have a distinct benefit over smaller touchscreens in the amount of space available to convey graphical information nonvisually. The current study investigates two questions: (1) Do screen size and grid density impact a user's accuracy on pattern-matching tasks? (2) Do screen size and grid density impact a user's time on task? METHODS: Fourteen blind and visually impaired individuals were given a pattern-matching task to complete on either a 10.5-in tablet or a 5.1-in phone. The patterns consisted of five vibrating targets imposed on sonified grids that varied in density (higher density = more grid squares). At test, participants compared the touchscreen pattern with a group of physical, embossed patterns and selected the matching pattern. Participants were evaluated on time spent exploring the pattern on the device and on their pattern-matching accuracy. Multiple and logistic regressions were performed on the data. RESULTS: Device size, grid density, and age had no statistically significant effects on the model of pattern-matching accuracy. However, device size, grid density, and age had significant effects on the model for grid exploration: using the phone, exploring low-density grids, and being older were indicative of faster exploration times. CONCLUSIONS: A time-accuracy trade-off exists between devices that appears to be task dependent. Users may find a tablet most useful in situations where accuracy of graphic interpretation is important and time is not constrained. Smaller screens afforded accuracy comparable to tablets and were faster to explore overall.
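For readers wanting to reproduce the analysis step, here is a minimal sketch of the two reported analyses (logistic regression for accuracy, multiple regression for exploration time), assuming a trial-level data frame; the file and column names are hypothetical:

```python
# Minimal sketch of the two analyses named above, assuming a pandas frame
# with one row per trial; file and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("pattern_matching_trials.csv")  # hypothetical file
# Expected columns: correct (0/1), explore_s (seconds),
# device ("phone"/"tablet"), density ("low"/"high"), age (years)

# Logistic regression: does accuracy depend on device, density, and age?
acc_model = smf.logit("correct ~ C(device) + C(density) + age", data=df).fit()
print(acc_model.summary())

# Multiple regression: does exploration time depend on the same predictors?
time_model = smf.ols("explore_s ~ C(device) + C(density) + age", data=df).fit()
print(time_model.summary())
```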


Subject(s)
Blindness/physiopathology; Computers, Handheld; Data Display; Pattern Recognition, Visual/physiology; Self-Help Devices; Smartphone; Vision, Low/physiopathology; Adult; Aged; Blindness/rehabilitation; Female; Humans; Male; Middle Aged; Vision, Low/rehabilitation; Visually Impaired Persons/rehabilitation; Young Adult
6.
Mem Cognit ; 45(7): 1240-1251, 2017 Oct.
Article in English | MEDLINE | ID: mdl-28653274

ABSTRACT

When walking without vision, people mentally keep track of the directions and distances of previously viewed objects, a process called spatial updating. The current experiment indicates that while people across a large age range are able to update multiple targets in memory without perceptual support, aging negatively affects accuracy, precision, and decision time. Participants (20 to 80 years of age) viewed one, three, or six targets (colored lights) on the floor of a dimly lit room. Then, without vision, they walked to a target designated by color, either directly or indirectly (via a forward turning point). The younger adults' final stopping points were both accurate (near the target) and precise (narrowly dispersed), although updating performance did degrade slightly with the number of targets. Older adults' performance was consistently worse than that of the younger group, but the lack of interaction between age and memory load indicates that the effect of age on performance was not further exacerbated by a greater number of targets. The number of targets also significantly increased the latency required to turn toward the designated target for both age groups. Taken together, the results extend previous work showing impressive updating performance by younger adults, with novel findings showing that older adults manifest small but consistent degradation in updating performance for multitarget arrays.
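The accuracy/precision distinction used here is easy to operationalize: accuracy is the mean distance of stopping points from the target, precision their dispersion around their own centroid. A minimal sketch with invented stopping points:

```python
# Minimal sketch distinguishing accuracy from precision for stopping points,
# as used above. All data values are invented for illustration.
import numpy as np

target = np.array([3.0, 4.0])                            # target location (m)
stops = np.array([[3.1, 4.2], [2.8, 3.9], [3.2, 4.1]])   # one stop per trial

accuracy = np.linalg.norm(stops - target, axis=1).mean()  # mean error
centroid = stops.mean(axis=0)
precision = np.linalg.norm(stops - centroid, axis=1).mean()  # dispersion
print(f"accuracy (mean error): {accuracy:.2f} m, "
      f"precision (dispersion): {precision:.2f} m")
```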


Subject(s)
Aging/physiology; Psychomotor Performance/physiology; Space Perception/physiology; Spatial Memory/physiology; Visual Perception/physiology; Adult; Aged; Aged, 80 and over; Female; Humans; Male; Middle Aged; Young Adult
7.
Exp Aging Res ; 43(3): 274-290, 2017.
Article in English | MEDLINE | ID: mdl-28358297

ABSTRACT

Background/Study Context: Aging research addressing spatial learning, representation, and action is almost exclusively based on vision as the input source. Much less is known about how spatial abilities based on nonvisual inputs, particularly haptic information, may change during life-span spatial development. This research studied whether learning and updating of haptic target configurations differ as a function of age. METHODS: Three groups of participants, ranging from 20 to 80 years old, felt four-target table-top circular arrays and then performed several tasks to assess life-span haptic spatial cognition. Measures included egocentric pointing, allocentric pointing, and array reconstruction after physical or imagined spatial updating. RESULTS: All measures revealed reliable differences between the oldest and youngest participant groups. The age effect for egocentric pointing contrasts with previous findings showing preserved egocentric spatial abilities. Error performance on the allocentric pointing and map reconstruction tasks showed a clear age effect, with the oldest participants exhibiting the greatest error, in line with other studies in the visual domain. Postupdating performance declined sharply with age but did not reliably differ between physical and imagined updating. CONCLUSION: Results suggest a general trend of age-related degradation of spatial abilities after haptic learning, with the greatest declines manifesting across all measures in people over 60 years of age. Results are interpreted in terms of a spatial aging effect on mental transformations of three-dimensional representations of space in working memory.


Subject(s)
Aging/psychology; Space Perception; Adult; Aged; Female; Humans; Learning; Male; Middle Aged; Young Adult
8.
Assist Technol ; 28(1): 1-6, 2016.
Article in English | MEDLINE | ID: mdl-26953681

ABSTRACT

Four different platforms were compared in a task of exploring an angular stimulus and reporting its value. The angle was explored visually, tangibly as raised fine-grit sandpaper, or on a touch-screen with a frictional or vibratory signal. All platforms produced highly accurate angle judgments. Differences were found, however, in exploration time, with vision fastest as expected, followed by tangible, vibration, and friction. Relative to the tangible display, touch-screens evidenced greater noise in the perceived angular value, with a particular disadvantage for friction. The latter must be interpreted in the context of a first-generation display and a rapidly advancing technology. On the whole, the results point both to promise and barriers in the use of refreshable graphical displays for blind users.
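A platform comparison of this kind typically comes down to an analysis of variance on exploration time across conditions. A minimal sketch with invented placeholder data (not the study's values):

```python
# Minimal sketch: one-way ANOVA on exploration time across the four display
# platforms. All data values are invented placeholders.
from scipy import stats

# Exploration times (s) per platform, one value per participant (invented):
vision    = [2.1, 1.8, 2.4, 2.0]
tangible  = [4.5, 5.1, 4.8, 4.2]
vibration = [7.9, 8.4, 7.1, 8.8]
friction  = [11.2, 10.5, 12.0, 11.6]

f_stat, p_value = stats.f_oneway(vision, tangible, vibration, friction)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```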


Subject(s)
Computer Graphics; Self-Help Devices; Touch; User-Computer Interface; Adult; Analysis of Variance; Female; Friction; Humans; Male; Vibration; Young Adult
9.
IEEE Trans Haptics ; 8(3): 248-57, 2015.
Article in English | MEDLINE | ID: mdl-26276998

ABSTRACT

This paper discusses issues of importance to designers of media for visually impaired users. The paper considers the influence of human factors on the effectiveness of presentation as well as the strengths and weaknesses of tactile, vibrotactile, haptic, and multimodal methods of rendering maps, graphs, and models. The authors, all of whom are visually impaired researchers in this domain, present findings from their own work and work of many others who have contributed to the current understanding of how to prepare and render images for both hard-copy and technology-mediated presentation of Braille and tangible graphics.


Subject(s)
Data Display; Equipment Design; Sensory Aids; Touch; Visually Impaired Persons/rehabilitation; Blindness/rehabilitation; Communications Media; Humans; Macular Degeneration; Therapeutic Touch; User-Computer Interface
10.
ASSETS ; 2015: 405-406, 2015 Oct.
Article in English | MEDLINE | ID: mdl-26824081

ABSTRACT

People who are blind or visually impaired face difficulties using a growing array of everyday appliances because these devices are equipped with inaccessible electronic displays. To address this problem, we report developments on our "Display Reader" smartphone app, which uses computer vision to help a user acquire a usable image of a display and have its contents read aloud. Drawing on feedback from past and new studies with visually impaired volunteer participants, as well as from blind accessibility experts, we have improved and simplified our user interface and added the ability to read seven-segment digit displays. Our system works fully automatically and in real time, and we compare it with general-purpose assistive apps such as Be My Eyes, which recruit remote sighted assistants (RSAs) to answer questions about video captured by the user. Our discussions and preliminary experiment highlight the advantages and disadvantages of fully automatic approaches compared with RSAs and suggest possible hybrid approaches to investigate in the future.
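The final step of seven-segment reading, once a vision pipeline has determined which segments are lit, is a fixed lookup. A minimal sketch (the segment ordering follows the usual a-g convention; the upstream detection step is omitted, and this is not the app's actual code):

```python
# Minimal sketch of the last step in seven-segment reading: mapping which
# segments are lit to a digit. Segment order is (a, b, c, d, e, f, g):
# top, top-right, bottom-right, bottom, bottom-left, top-left, middle.
SEGMENTS_TO_DIGIT = {
    (1, 1, 1, 1, 1, 1, 0): "0",
    (0, 1, 1, 0, 0, 0, 0): "1",
    (1, 1, 0, 1, 1, 0, 1): "2",
    (1, 1, 1, 1, 0, 0, 1): "3",
    (0, 1, 1, 0, 0, 1, 1): "4",
    (1, 0, 1, 1, 0, 1, 1): "5",
    (1, 0, 1, 1, 1, 1, 1): "6",
    (1, 1, 1, 0, 0, 0, 0): "7",
    (1, 1, 1, 1, 1, 1, 1): "8",
    (1, 1, 1, 1, 0, 1, 1): "9",
}

def decode_digit(lit):
    """lit: sequence of 7 booleans/ints, one per segment a..g."""
    return SEGMENTS_TO_DIGIT.get(tuple(int(s) for s in lit), "?")

print(decode_digit((1, 1, 0, 1, 1, 0, 1)))  # -> "2"
```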

11.
Multisens Res ; 27(5-6): 359-78, 2014.
Article in English | MEDLINE | ID: mdl-25693301

ABSTRACT

Many developers wish to capitalize on touch-screen technology for developing aids for the blind, particularly by incorporating vibrotactile stimulation to convey patterns on their surfaces, which otherwise are featureless. Our belief is that they will need to take into account basic research on haptic perception in designing these graphics interfaces. We point out constraints and limitations in haptic processing that affect the use of these devices. We also suggest ways to use sound to augment basic information from touch, and we include evaluation data from users of a touch-screen device with vibrotactile and auditory feedback that we have been developing, called a vibro-audio interface.


Subject(s)
Blindness/rehabilitation; Self-Help Devices; Space Perception/physiology; Touch/physiology; User-Computer Interface; Adolescent; Adult; Blindness/physiopathology; Female; Humans; Male; Sensory Deprivation/physiology; Young Adult
12.
Behav Brain Sci ; 36(5): 554-5; discussion 571-87, 2013 Oct.
Article in English | MEDLINE | ID: mdl-24103608

ABSTRACT

Humans' spatial representations enable navigation and reaching to targets above the ground plane, even without direct perceptual support. Such abilities are inconsistent with an impoverished representation of the third dimension. Features that differentiate humans from most terrestrial animals, including raised eye height and arms dedicated to manipulation rather than locomotion, have led to robust metric representations of volumetric space.


Subject(s)
Cognition/physiology; Models, Neurological; Space Perception/physiology; Spatial Behavior; Animals; Humans
13.
Article in English | MEDLINE | ID: mdl-24110904

ABSTRACT

Indoor navigation technology is needed to support seamless mobility for the visually impaired. This paper describes the construction and evaluation of an inertial dead-reckoning navigation system that provides real-time auditory guidance along mapped routes. Inertial dead reckoning is a navigation technique that couples step counting with heading estimation to compute changes in position at each step. The research described here outlines the development and evaluation of a novel navigation system that uses information from the mapped route to limit the problematic error accumulation inherent in traditional dead-reckoning approaches. The prototype system consists of a wireless inertial sensor unit, worn at the user's hip, which streams readings to a smartphone running the navigation algorithm. Pilot human trials assessed system efficacy by studying route-following performance of blind and sighted subjects using the navigation system with real-time guidance versus offline verbal directions.
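The core dead-reckoning update is compact: each detected step advances the position estimate along the current heading. A minimal sketch, with the map-aided correction reduced to snapping headings onto mapped route legs (an illustrative simplification under assumed values, not the authors' actual algorithm):

```python
# Minimal sketch of inertial dead reckoning as described: each detected
# step advances position along the estimated heading. The map-aided error
# correction is simplified here to snapping the heading to the nearest
# mapped route direction.
import math

def dead_reckon(step_headings, step_length_m=0.7,
                route_headings_deg=(0, 90, 180, 270)):
    """step_headings: heading estimate (degrees) for each detected step."""
    x = y = 0.0
    track = [(x, y)]
    for heading in step_headings:
        # Map-aided correction: constrain heading to the mapped route legs.
        snapped = min(route_headings_deg,
                      key=lambda h: abs((heading - h + 180) % 360 - 180))
        rad = math.radians(snapped)
        x += step_length_m * math.sin(rad)   # east component
        y += step_length_m * math.cos(rad)   # north component
        track.append((x, y))
    return track

# Ten steps roughly north, then five roughly east:
print(dead_reckon([2, -3, 1, 0, 4, -2, 1, 3, -1, 0, 88, 92, 91, 89, 90])[-1])
```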


Subject(s)
Blindness/rehabilitation; Cell Phone; Self-Help Devices; Visually Impaired Persons; Adolescent; Adult; Algorithms; Computers; Humans; Middle Aged; Walking; Wireless Technology; Young Adult
14.
Exp Brain Res ; 224(1): 141-53, 2013 Jan.
Article in English | MEDLINE | ID: mdl-23070234

ABSTRACT

Perception of the near environment gives rise to spatial images in working memory that continue to represent the spatial layout even after cessation of sensory input. As the observer moves, these spatial images are continuously updated. This research is concerned with (1) whether spatial images of targets are formed when they are sensed using extended touch (i.e., using a probe to extend the reach of the arm) and (2) the accuracy with which such targets are perceived. In Experiment 1, participants perceived the 3-D locations of individual targets from a fixed origin and were then tested with an updating task involving blindfolded walking followed by placement of the hand at the remembered target location. Twenty-four target locations, representing all combinations of two distances, two heights, and six azimuths, were perceived by vision or by blindfolded exploration with the bare hand, a 1-m probe, or a 2-m probe. Systematic errors in azimuth were observed for all targets, reflecting errors in representing the target locations and updating. Overall, updating after visual perception was best, but the quantitative differences between conditions were small. Experiment 2 demonstrated that auditory information signifying contact with the target was not a factor. Overall, the results indicate that 3-D spatial images can be formed of targets sensed by extended touch and that perception by extended touch, even out to 1.75 m, is surprisingly accurate.


Subject(s)
Depth Perception/physiology; Spatial Behavior/physiology; Touch/physiology; Vision, Ocular/physiology; Adult; Female; Humans; Male; Physical Stimulation; Young Adult
15.
Atten Percept Psychophys ; 74(6): 1260-7, 2012 Aug.
Article in English | MEDLINE | ID: mdl-22552825

ABSTRACT

Spatial working memory can maintain representations from vision, hearing, and touch, representations referred to here as spatial images. The present experiment addressed whether spatial images from vision and hearing that are simultaneously present within working memory retain modality-specific tags or are amodal. Observers were presented with short sequences of targets varying in angular direction, with the targets in a given sequence being all auditory, all visual, or a sequential mixture of the two. On two thirds of the trials, one of the locations was repeated, and observers had to respond as quickly as possible when detecting this repetition. Ancillary detection and localization tasks confirmed that the visual and auditory targets were perceptually comparable. Response latencies in the working memory task showed small but reliable costs in performance on trials involving a sequential mixture of auditory and visual targets, as compared with trials of pure vision or pure audition. These deficits were statistically reliable only for trials on which the modalities of the matching location switched from the penultimate to the final target in the sequence, indicating a switching cost. The switching cost for the pair in immediate succession means that the spatial images representing the target locations retain features of the visual or auditory representations from which they were derived. However, there was no reliable evidence of a performance cost for mixed modalities in the matching pair when the second of the two did not immediately follow the first, suggesting that more enduring spatial images in working memory may be amodal.


Subject(s)
Association Learning; Depth Perception; Distance Perception; Memory, Short-Term; Orientation; Pattern Recognition, Visual; Sound Localization; Space Perception; Female; Humans; Male; Reaction Time; Students/psychology
16.
Article in English | MEDLINE | ID: mdl-23366303

ABSTRACT

Indoor navigation technology is needed to support seamless mobility for the visually impaired. This paper describes the construction and evaluation of a navigation system that infers the user's location using only magnetic sensing. It is well known that environments within steel-frame structures are subject to significant magnetic distortions. Many of these distortions are persistent and have sufficient strength and spatial characteristics to allow their use as the basis for a location technology. This paper describes the development and evaluation of a prototype magnetic navigation system consisting of a wireless magnetometer worn at the user's hip, streaming magnetic readings to a smartphone running the location algorithms. Human trials assessed the efficacy of the system by studying route-following performance of blind and sighted subjects using the navigation system for real-time guidance.
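One common way to exploit such persistent distortions is fingerprint matching: slide a window of live magnetometer readings along a pre-surveyed route profile and take the best-fitting offset as the position estimate. A minimal sketch with synthetic data (the sum-of-squares matcher and all values are assumptions, not the paper's method):

```python
# Minimal sketch of magnetic positioning along a surveyed route: slide the
# live magnetometer window over a pre-recorded fingerprint and take the
# best-matching offset as the position estimate.
import numpy as np

def locate(fingerprint, live_window):
    """fingerprint: field magnitudes sampled along the route; live_window:
    the most recent magnitudes from the walker's sensor. Returns the index
    into the fingerprint where the window fits best."""
    n = len(live_window)
    errors = [np.sum((fingerprint[i:i + n] - live_window) ** 2)
              for i in range(len(fingerprint) - n + 1)]
    return int(np.argmin(errors))

# Synthetic route with a distinctive distortion near a steel door frame:
route = np.concatenate([np.full(40, 50.0),
                        [50, 58, 72, 61, 52],   # distortion signature (uT)
                        np.full(40, 50.0)])
window = np.array([58.0, 72.0, 61.0]) + np.random.normal(0, 0.5, 3)
print("estimated index:", locate(route, window))  # ~41
```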


Subject(s)
Magnetic Phenomena; Sensory Aids; Visually Impaired Persons; Accelerometry; Adult; Algorithms; Case-Control Studies; Female; Humans; Male; Middle Aged; Pilot Projects; Wireless Technology/instrumentation; Young Adult
17.
Psychon Bull Rev ; 18(6): 1119-25, 2011 Dec.
Article in English | MEDLINE | ID: mdl-21956382

ABSTRACT

In two experiments, we investigated whether reference frames acquired through touch could influence memories for locations learned through vision. Participants learned two objects through touch, and haptic egocentric (Experiment 1) and environmental (Experiment 2) cues encouraged selection of a specific reference frame. Participants later learned eight new objects through vision. Haptic cues were manipulated, whereas visual learning was held constant in order to observe any potential influence of the haptically experienced reference frame on memories for visually learned locations. When the haptically experienced reference frame was defined primarily by egocentric cues, cue manipulation had no effect on memories for objects learned through vision. Instead, visually learned locations were remembered using a reference frame selected from the visual study perspective. When the haptically experienced reference frame was defined by both egocentric and environmental cues, visually learned objects were remembered in the context of the haptically experienced reference frame. These findings support the common reference frame hypothesis, which proposes that locations learned through different sensory modalities are represented within a common reference frame.


Subject(s)
Learning; Mental Recall; Space Perception; Touch Perception; Cues; Female; Humans; Male; Visual Perception
18.
Curr Biol ; 21(11): 984-9, 2011 Jun 07.
Article in English | MEDLINE | ID: mdl-21620708

ABSTRACT

In many nonhuman species, neural computations of navigational information such as position and orientation are not tied to a specific sensory modality [1, 2]. Rather, spatial signals are integrated from multiple input sources, likely leading to abstract representations of space. In contrast, the potential for abstract spatial representations in humans is not known, because most neuroscientific experiments on human navigation have focused exclusively on visual cues. Here, we tested the modality independence hypothesis with two functional magnetic resonance imaging (fMRI) experiments that characterized computations in regions implicated in processing spatial layout [3]. According to the hypothesis, such regions should be recruited for spatial computation of 3D geometric configuration, independent of a specific sensory modality. In support of this view, sighted participants showed strong activation of the parahippocampal place area (PPA) and the retrosplenial cortex (RSC) for visual and haptic exploration of information-matched scenes but not objects. Functional connectivity analyses suggested that these effects were not related to visual recoding, which was further supported by a similar preference for haptic scenes found with blind participants. Taken together, these findings establish the PPA/RSC network as critical in modality-independent spatial computations and provide important evidence for a theory of high-level abstract spatial information processing in the human brain.


Subject(s)
Cerebral Cortex/physiology; Form Perception/physiology; Space Perception/physiology; Visual Perception/physiology; Adult; Aged; Blindness/physiopathology; Brain Mapping; Cues; Feedback, Sensory; Female; Humans; Magnetic Resonance Imaging; Male; Middle Aged; Photic Stimulation; Visually Impaired Persons/psychology
19.
J Exp Psychol Learn Mem Cogn ; 37(3): 621-34, 2011 May.
Article in English | MEDLINE | ID: mdl-21299331

ABSTRACT

This research examined whether visual and haptic map learning yield functionally equivalent spatial images in working memory, as evidenced by similar encoding bias and updating performance. In 3 experiments, participants learned 4-point routes either by seeing or feeling the maps. At test, blindfolded participants made spatial judgments about the maps from imagined perspectives that were either aligned or misaligned with the maps as represented in working memory. Results from Experiments 1 and 2 revealed a highly similar pattern of latencies and errors between visual and haptic conditions. These findings extend the well-known alignment biases for visual map learning to haptic map learning, provide further evidence of haptic updating, and most important, show that learning from the 2 modalities yields very similar performance across all conditions. Experiment 3 found the same encoding biases and updating performance with blind individuals, demonstrating that functional equivalence cannot be due to visual recoding and is consistent with an amodal hypothesis of spatial images.


Subject(s)
Blindness/physiopathology; Imagination/physiology; Learning/physiology; Space Perception/physiology; Touch/physiology; Vision, Ocular/physiology; Adolescent; Adult; Analysis of Variance; Female; Humans; Linear Models; Male; Memory, Short-Term/physiology; Neuropsychological Tests; Physical Stimulation/methods; Reaction Time/physiology; Young Adult
20.
Neuroimage ; 56(2): 681-7, 2011 May 15.
Article in English | MEDLINE | ID: mdl-20451630

ABSTRACT

Accurate processing of nonvisual stimuli is fundamental to humans with visual impairments. In this population, moving sounds activate an occipito-temporal region thought to encompass the equivalent of monkey area MT+, but it remains unclear whether the signal carries information beyond the mere presence of motion. To address this important question, we tested whether the processing in this region retains functional properties that are critical for accurate motion processing and that are well established in the visual modality. Specifically, we focused on the property of 'directional selectivity', because MT+ neurons in non-human primates fire preferentially to specific directions of visual motion. Recent neuroimaging studies have revealed similar properties in sighted humans by successfully decoding different directions of visual motion from fMRI activation patterns. Here we used fMRI and multivariate pattern classification to demonstrate that the direction in which a sound is moving can be reliably decoded from dorsal occipito-temporal activation in the blind. We also show that classification performance is at chance (i) in a control region in posterior parietal cortex and (ii) when motion information is removed and subjects only hear a sequence of static sounds presented at the same start and end positions. These findings reveal that information about the direction of auditory motion is present in dorsal occipito-temporal responses of blind humans. As such, this area, which appears consistent with the hMT+ complex in the sighted, provides crucial information for the generation of a veridical percept of moving non-visual stimuli.
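Multivariate pattern classification of this kind reduces to cross-validated decoding of condition labels from voxel activation patterns, with chance-level scores indicating no decodable information in a region. A minimal sketch on synthetic data (scikit-learn assumed; not the study's actual pipeline):

```python
# Minimal sketch of multivariate pattern classification: a cross-validated
# linear classifier tries to decode motion direction from (here, synthetic)
# voxel activation patterns. Chance-level accuracy would indicate no
# direction information in the region.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_trials, n_voxels = 80, 200
directions = np.repeat([0, 1], n_trials // 2)   # e.g., leftward/rightward

# Synthetic patterns: a weak direction-dependent signal plus noise.
signal = rng.normal(size=n_voxels)
patterns = rng.normal(size=(n_trials, n_voxels))
patterns[directions == 1] += 0.3 * signal

scores = cross_val_score(LinearSVC(dual=False), patterns, directions, cv=8)
print(f"decoding accuracy: {scores.mean():.2f} (chance = 0.50)")
```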


Subject(s)
Auditory Perception/physiology; Brain Mapping/methods; Cerebral Cortex/physiology; Image Processing, Computer-Assisted/methods; Visually Impaired Persons; Adult; Blindness; Humans; Magnetic Resonance Imaging; Male; Middle Aged; Movement/physiology