Results 1 - 8 of 8
1.
ASSETS ; 2015: 405-406, 2015 Oct.
Article in English | MEDLINE | ID: mdl-26824081

ABSTRACT

People who are blind or visually impaired face difficulties using a growing array of everyday appliances because these appliances are equipped with inaccessible electronic displays. To address this problem, we report developments on our "Display Reader" smartphone app, which uses computer vision to help a user acquire a usable image of a display and have its contents read aloud. Drawing on feedback from past and new studies with visually impaired volunteer participants, as well as from blind accessibility experts, we have improved and simplified our user interface and added the ability to read seven-segment digit displays. Our system works fully automatically and in real time, and we compare it with general-purpose assistive apps such as Be My Eyes, which recruit remote sighted assistants (RSAs) to answer questions about video captured by the user. Our discussions and preliminary experiment highlight the advantages and disadvantages of fully automatic approaches compared with RSAs and suggest possible hybrid approaches to investigate in the future.
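As a rough illustration of the seven-segment reading capability mentioned above, the sketch below maps estimated on/off segment states to digits. It is a generic decoding table under an assumed segment ordering, not the Display Reader implementation; the function and names are hypothetical.

```python
# Hypothetical sketch: map seven-segment on/off patterns to digits.
# Assumed segment order: (top, top-left, top-right, middle,
# bottom-left, bottom-right, bottom). Illustration only.

SEGMENT_PATTERNS = {
    (1, 1, 1, 0, 1, 1, 1): "0",
    (0, 0, 1, 0, 0, 1, 0): "1",
    (1, 0, 1, 1, 1, 0, 1): "2",
    (1, 0, 1, 1, 0, 1, 1): "3",
    (0, 1, 1, 1, 0, 1, 0): "4",
    (1, 1, 0, 1, 0, 1, 1): "5",
    (1, 1, 0, 1, 1, 1, 1): "6",
    (1, 0, 1, 0, 0, 1, 0): "7",
    (1, 1, 1, 1, 1, 1, 1): "8",
    (1, 1, 1, 1, 0, 1, 1): "9",
}

def decode_digit(segment_states):
    """Map a tuple of seven on/off segment states to a digit, or None."""
    return SEGMENT_PATTERNS.get(tuple(int(s) for s in segment_states))
```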

2.
ASSETS ; 2014: 281-282, 2014.
Article in English | MEDLINE | ID: mdl-25531011

ABSTRACT

People who are blind or visually impaired face difficulties accessing a growing array of everyday appliances, needed to perform a variety of daily activities, because these appliances are equipped with inaccessible electronic displays. To address this problem, we are developing a "Display Reader" smartphone app, which uses computer vision to help a user acquire a usable image of a display. The current prototype analyzes video from the smartphone's camera and provides real-time feedback, based on automatic estimates of image blur and glare, to guide the user until a satisfactory image is acquired. Formative studies were conducted with several blind and visually impaired participants, whose feedback is guiding the development of the user interface. The prototype software has been released as a Free and Open Source Software (FOSS) project.
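The abstract mentions real-time feedback driven by automatic estimates of image blur and glare. The sketch below shows one common way such frame-quality checks can be done (variance of the Laplacian for blur, fraction of near-saturated pixels for glare); the estimators, thresholds, and names are assumptions for illustration, not the Display Reader code.

```python
import cv2
import numpy as np

def frame_is_usable(frame_bgr, blur_thresh=100.0, glare_frac=0.02):
    """Hypothetical frame-quality check: reject blurry or glare-heavy frames.
    Thresholds are illustrative assumptions."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)

    # Blur estimate: low Laplacian variance suggests a blurry frame.
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()

    # Glare estimate: fraction of near-saturated pixels (specular highlights).
    saturated = np.count_nonzero(gray >= 250) / gray.size

    return sharpness >= blur_thresh and saturated <= glare_frac
```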

3.
Comput Help People Spec Needs ; 8547: 437-440, 2014 Jul.
Article in English | MEDLINE | ID: mdl-26029746

ABSTRACT

The number of persons with hearing and vision loss is on the rise as lifespans increase. Vision plays an important role in communication, especially in the presence of background noise or for persons with hearing loss. However, persons with vision loss cannot make use of this extra modality to overcome their hearing deficits. We propose automatically incorporating visual information into hearing aids through the addition of a small wearable camera. Our initial results show potentially significant benefits from incorporating low-level, robust visual cues when background noise is high. This technique can potentially benefit all persons with hearing loss, with substantial improvements possible in the speech perception performance of persons with dual sensory loss.
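As a loose illustration only: one way a low-level visual cue from a wearable camera could influence hearing-aid processing when background noise is high is to gate the gain on a visual speech-activity estimate. The sketch below is a hypothetical, heavily simplified example; the function, thresholds, and cue are assumptions and do not reflect the processing evaluated in the paper.

```python
import numpy as np

def apply_visual_gate(audio_frame, lip_motion_energy, noise_level,
                      noise_thresh=0.5, motion_thresh=0.1, floor_gain=0.25):
    """Hypothetical gating rule: attenuate frames where the noise estimate is
    high but the visual cue suggests the target talker is not speaking."""
    frame = np.asarray(audio_frame, dtype=float)
    if noise_level > noise_thresh and lip_motion_energy < motion_thresh:
        return floor_gain * frame   # likely noise-only: apply a gain floor
    return frame                    # otherwise pass the frame through
```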

4.
J Technol Pers Disabil ; 28: 230-239, 2013.
Article in English | MEDLINE | ID: mdl-25602592

ABSTRACT

We describe a new smartphone app called BLaDE (Barcode Localization and Decoding Engine), designed to enable a blind or visually impaired user to find and read product barcodes. Developed at The Smith-Kettlewell Eye Research Institute, the BLaDE Android app has been released as open source software, which can be used for free or modified for commercial or non-commercial use. Unlike popular commercial smartphone apps, BLaDE provides real-time audio feedback to help visually impaired users locate a barcode, which is a prerequisite to being able to read it. We describe experiments performed with five blind or visually impaired volunteer participants demonstrating that BLaDE is usable and that the audio feedback is key to its usability.
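To give a sense of how real-time audio feedback for barcode localization can work, the sketch below turns a detected barcode bounding box into a spoken guidance cue. BLaDE's actual feedback design is not reproduced here; the function name, thresholds, and cue wording are illustrative assumptions.

```python
def guidance_cue(bbox, frame_width, frame_height, min_width_px=120):
    """Return a spoken cue from a barcode bounding box (x, y, w, h), or a
    scanning prompt if no barcode has been detected yet. Thresholds are
    illustrative assumptions."""
    if bbox is None:
        return "scanning"
    x, y, w, h = bbox
    cx, cy = x + w / 2.0, y + h / 2.0
    if cx < 0.35 * frame_width:
        return "left"        # barcode left of center: pan the camera left
    if cx > 0.65 * frame_width:
        return "right"       # barcode right of center: pan the camera right
    if cy < 0.35 * frame_height:
        return "up"          # barcode near top of frame: tilt the camera up
    if cy > 0.65 * frame_height:
        return "down"
    if w < min_width_px:
        return "closer"      # centered but too small to decode reliably
    return "hold still"
```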

5.
Proc IEEE Workshop Appl Comput Vis ; : 491-496, 2011 Jan 05.
Article in English | MEDLINE | ID: mdl-21804957

ABSTRACT

Modern household appliances, such as microwave ovens and DVD players, increasingly require users to read an LED or LCD display to operate them, posing a severe obstacle for persons with blindness or visual impairment. While OCR-enabled devices are emerging to address the related problem of reading text in printed documents, they are not designed to tackle the challenge of finding and reading characters in appliance displays. Any system for reading these characters must first locate them among substantial amounts of background clutter; moreover, poor contrast and the abundance of specular highlights on the display surface, which degrade the image unpredictably as the camera is moved, motivate a system that processes images at a few frames per second, rather than forcing the user to take several photos, each of which can take seconds to acquire and process, until one is readable. We describe a novel system that acquires video, detects and reads LED/LCD characters in real time, and reads them aloud to the user with synthesized speech. The system has been implemented on both a desktop computer and a cell phone. Experimental results are reported on videos of display images, demonstrating the feasibility of the system.
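As context for the detection challenge described above, the sketch below shows one standard way to propose bright, character-sized candidate regions in a cluttered frame (Otsu thresholding plus size-filtered connected components). It is an assumed illustration, not the detection method of the paper.

```python
import cv2

def candidate_character_boxes(frame_bgr, min_area=40, max_area=5000):
    """Hypothetical candidate proposal for lit LED/LCD characters; the size
    limits are illustrative assumptions."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    # Lit segments are typically much brighter than their background.
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # OpenCV 4.x return convention: (contours, hierarchy).
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    boxes = []
    for contour in contours:
        x, y, w, h = cv2.boundingRect(contour)
        if min_area <= w * h <= max_area:
            boxes.append((x, y, w, h))
    return boxes
```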

6.
Comput Help People Spec Needs ; 6180: 290-295, 2010 Jul.
Article in English | MEDLINE | ID: mdl-26949757

ABSTRACT

While there are many barcode readers available for identifying products in a supermarket or at home on mobile phones (e.g., the Red Laser iPhone app), such readers are inaccessible to blind or visually impaired persons because they rely on visual feedback from the user to center the barcode in the camera's field of view. We describe a mobile phone application that guides a visually impaired user to the barcode on a package in real time using the phone's built-in video camera. Once the system locates the barcode, the user is prompted with audio signals to bring the camera closer until the barcode can be resolved; the barcode is then decoded and the corresponding product information is read aloud using text-to-speech. Experiments with a blind volunteer demonstrate proof of concept of our system: the volunteer was able to locate barcodes, which were then translated to product information announced to the user. We successfully tested a series of common products, as well as user-generated barcodes labeling household items that may not come with barcodes.
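The "bring the camera closer until the barcode can be resolved" step can be illustrated by checking whether the narrowest bar along a scanline spans enough pixels to decode. The sketch below is a hypothetical check with assumed names and thresholds, not the paper's criterion.

```python
def barcode_resolvable(scanline_binary, min_module_px=2):
    """Hypothetical resolvability check: scanline_binary is a sequence of
    0/1 values sampled across the barcode."""
    runs, count = [], 1
    for prev, cur in zip(scanline_binary, scanline_binary[1:]):
        if cur == prev:
            count += 1
        else:
            runs.append(count)
            count = 1
    runs.append(count)
    # A UPC-A symbol is 95 modules wide, so the shortest run approximates a
    # single module; require at least a couple of pixels per module.
    return len(runs) > 1 and min(runs) >= min_module_px
```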

7.
Proc Can Conf Comput Robot Vis ; 2009: 61-67, 2009 May 25.
Article in English | MEDLINE | ID: mdl-20428491

ABSTRACT

The 1D barcode is a ubiquitous labeling technology, with symbologies such as UPC used to label approximately 99% of all packaged goods in the US. It would be very convenient for consumers to be able to read these barcodes using portable cameras (e.g., mobile phones), but the limited quality and resolution of the images these cameras capture often make it difficult to read the barcodes accurately. We propose a Bayesian framework for reading 1D barcodes that models the shape and appearance of barcodes, allowing for geometric distortions and image noise, and exploiting the redundant information contained in the parity digit. An important feature of our framework is that it does not require every barcode edge to be detected in the image. Experiments on a publicly available dataset of barcode images explore the range of images that are readable, and comparisons with two commercial readers demonstrate the superior performance of our algorithm.
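For reference, the digit-level redundancy in a UPC-A symbol comes from its final check digit, which is determined by the other eleven digits. The sketch below shows only the standard UPC-A check-digit arithmetic; it is not the Bayesian decoder described in the paper, and the function names are illustrative.

```python
def upca_check_digit(first_11_digits):
    """Standard UPC-A check digit computed from the first eleven digits."""
    odd = sum(first_11_digits[0::2])    # digit positions 1, 3, ..., 11
    even = sum(first_11_digits[1::2])   # digit positions 2, 4, ..., 10
    return (10 - (3 * odd + even) % 10) % 10

def upca_is_valid(digits12):
    """True if a decoded 12-digit UPC-A sequence satisfies its check digit."""
    return len(digits12) == 12 and upca_check_digit(digits12[:11]) == digits12[11]
```

For example, the eleven digits 0 3 6 0 0 0 2 9 1 4 5 yield a check digit of 2 (3 × 14 + 16 = 58, and 60 − 58 = 2), so a decode whose final digit disagrees can be rejected.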

8.
Proc IEEE Workshop Appl Comput Vis ; 2009: 1-8, 2009 Dec 07.
Article in English | MEDLINE | ID: mdl-20617114

ABSTRACT

Most camera-based systems for finding and reading barcodes are designed to be used by sighted users (e.g., the Red Laser iPhone app) and assume the user carefully centers the barcode in the image before the barcode is read. Blind individuals could benefit greatly from such systems to identify packaged goods (such as canned goods in a supermarket), but unfortunately in their current form these systems are completely inaccessible because of their reliance on visual feedback from the user. To remedy this problem, we propose a computer vision algorithm that processes several frames of video per second to detect barcodes from a distance of several inches; the algorithm issues directional information with audio feedback (e.g., "left," "right") and thereby guides a blind user holding a webcam or other portable camera to locate and home in on a barcode. Once the barcode is detected at sufficiently close range, a barcode reading algorithm previously developed by the authors scans and reads aloud the barcode and the corresponding product information. We demonstrate encouraging experimental results of our proposed system implemented on a desktop computer with a webcam held by a blindfolded user; ultimately the system will be ported to a camera phone for use by visually impaired users.
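One common way to flag barcode-like regions at a distance, in the spirit of the detection step described above, is to look for areas where gradient energy in one direction dominates. The sketch below is an assumed illustration using standard OpenCV operations, not the authors' algorithm; the thresholds and names are hypothetical. A bounding box found this way is the kind of detection that directional audio feedback would then act on.

```python
import cv2
import numpy as np

def detect_barcode_region(frame_bgr, energy_thresh=60):
    """Hypothetical detector: 1D barcodes produce strong gradients in one
    direction, so keep areas where horizontal gradient energy dominates."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    grad_x = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    grad_y = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    energy = cv2.GaussianBlur(np.abs(grad_x) - np.abs(grad_y), (9, 9), 0)
    _, mask = cv2.threshold(energy, energy_thresh, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask.astype(np.uint8), cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None                      # no barcode-like region found
    # Return the bounding box (x, y, w, h) of the largest candidate region.
    return cv2.boundingRect(max(contours, key=cv2.contourArea))
```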
