Results 1 - 20 of 53
1.
J Blind Innov Res ; 14(1)2024.
Article in English | MEDLINE | ID: mdl-38650844

ABSTRACT

A group co-design was held in March 2021 with six blind and low-vision individuals (BLIs) from the United States. Participants were asked to discuss problems related to travel during the COVID-19 pandemic and make recommendations for possible solutions. Two probes (prototypes) were shown to participants for inspiration: a non-visual neighborhood travel map and a non-visual choropleth map (a map using colors or sounds over each state to represent different values) of COVID-19 state data. The participants expressed that COVID-19 had significantly increased their apprehension and discomfort associated with activities such as venturing outside, traveling, engaging with strangers, communicating, adapting to changes in familiar environments, and wearing masks. Participants affirmed the need for the information the probes provided, and made a number of observations and recommendations for improvement. They wanted more detailed geo-referenced COVID-19 data (including by county), information related to voting, a mobile app, and more detailed building information, such as doors on the travel map.
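The non-visual choropleth idea above can be sketched in a few lines: each state's data value is mapped to an audio pitch instead of a color. This is an illustrative sketch only; the value range and frequency bounds are assumptions, not details from the actual probe.

```python
# Map a choropleth data value (e.g., cases per 100k) to a tone frequency,
# the way a sonified choropleth might render each state non-visually.
def value_to_pitch(value, vmin, vmax, f_low=220.0, f_high=880.0):
    """Linearly map a value in [vmin, vmax] to a frequency in Hz."""
    if vmax == vmin:
        return f_low
    t = (value - vmin) / (vmax - vmin)
    t = min(max(t, 0.0), 1.0)  # clamp out-of-range values to the scale
    return f_low + t * (f_high - f_low)
```

A state at the bottom of the scale would sound at 220 Hz and one at the top at 880 Hz, giving a two-octave range that is easy to discriminate by ear.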

2.
Sensors (Basel) ; 23(5)2023 Mar 01.
Article in English | MEDLINE | ID: mdl-36904904

ABSTRACT

Independent wayfinding is a major challenge for blind and visually impaired (BVI) travelers. Although GPS-based localization approaches enable the use of navigation smartphone apps that provide accessible turn-by-turn directions in outdoor settings, such approaches are ineffective in indoor and other GPS-deprived settings. We build on our previous work on a localization algorithm based on computer vision and inertial sensing; the algorithm is lightweight in that it requires only a 2D floor plan of the environment, annotated with the locations of visual landmarks and points of interest, instead of a detailed 3D model (used in many computer vision localization algorithms), and requires no new physical infrastructure (such as Bluetooth beacons). The algorithm can serve as the foundation for a wayfinding app that runs on a smartphone; crucially, the approach is fully accessible because it does not require the user to aim the camera at specific visual targets, which would be problematic for BVI users who may not be able to see these targets. In this work, we improve upon the existing algorithm to incorporate recognition of multiple classes of visual landmarks to facilitate effective localization, and demonstrate empirically how localization performance improves as the number of these classes increases, showing that the time to correct localization can be decreased by 51-59%. The source code for our algorithm and associated data used for our analyses have been made available in a free repository.
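The benefit of multiple landmark classes can be illustrated with a toy histogram filter, in the spirit (not the code) of the algorithm described above: a single landmark class can leave several map positions equally likely, while a second class disambiguates them. The corridor layout, sensor model, and probabilities below are invented for illustration.

```python
# 1D histogram filter over corridor cells: dead reckoning shifts the belief,
# landmark observations reweight it by the annotated map.
def normalize(p):
    s = sum(p)
    return [x / s for x in p]

def predict(belief, step=1):
    """Dead-reckoning step: shift belief forward by `step` cells."""
    n = len(belief)
    return [belief[(i - step) % n] for i in range(n)]

def update(belief, landmark_map, observed_class, p_hit=0.9, p_miss=0.1):
    """Weight each cell by whether the observed landmark class is there."""
    return normalize([
        b * (p_hit if landmark_map[i] == observed_class else p_miss)
        for i, b in enumerate(belief)
    ])

# Corridor map: None = blank wall, strings = annotated landmark classes.
corridor = [None, "exit_sign", None, "door", None, "restroom_sign", None, "door"]
belief = normalize([1.0] * len(corridor))
belief = update(belief, corridor, "door")          # two doors: ambiguous
belief = predict(belief, 2)                        # walked two cells forward
belief = update(belief, corridor, "restroom_sign") # second class resolves it
```

After the first observation the belief has two equal peaks (both doors); the second landmark class collapses it to a single cell, which is exactly why recognizing more classes speeds up correct localization.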


Subject(s)
Mobile Applications , Visually Impaired Persons , Humans , Blindness , Algorithms , Computers
3.
J Technol Pers Disabil ; 11: 192-208, 2023 May.
Article in English | MEDLINE | ID: mdl-38516032

ABSTRACT

The You Described, We Archived dataset (YuWA) is a collaboration between San Francisco State University and The Smith-Kettlewell Eye Research Institute. It includes audio description (AD) data collected worldwide from 2013 to 2022 through YouDescribe, an accessibility tool for adding audio descriptions to YouTube videos. YouDescribe, a web-based audio description tool with a companion iOS viewing app, averages 12,000+ annual visitors, has approximately 3,000 volunteer describers, and has created over 5,500 audio-described YouTube videos. Blind and visually impaired (BVI) viewers request videos, which are then saved to a wish list; volunteer audio describers select a video, write a script, record audio clips, and edit clip placement to create an audio description. The AD tracks are stored separately, posted for public view at https://youdescribe.org/, and played together with the YouTube video. The YuWA audio description data, paired with describer and viewer metadata and the collection timeline, has a large number of research applications, including artificial intelligence, machine learning, sociolinguistics, audio description, video understanding, video retrieval, and video-language grounding tasks.

4.
J Technol Pers Disabil ; 11: 245-259, 2023.
Article in English | MEDLINE | ID: mdl-38528939

ABSTRACT

Smartphone-based navigation apps allow blind and visually impaired (BVI) people to take images or videos to complete various tasks such as determining a user's location, recognizing objects, and detecting obstacles. The quality of the images and videos significantly affects the performance of these systems, but manipulating a camera to capture clear images with proper framing is a challenging task for BVI users. This research explores the interactions between a camera and BVI users in assistive navigation systems through interviews with BVI participants. We identified the form factors, applications, and challenges in using camera-based navigation systems and designed an interactive training app to improve BVI users' skills in using a camera for navigation. In this paper, we describe a novel virtual environment of the training app and report the preliminary results of a user study with BVI participants.

5.
Comput Help People Spec Needs ; 13341: 253-260, 2022 Jul.
Article in English | MEDLINE | ID: mdl-36108327

ABSTRACT

Maps are indispensable for helping people learn about unfamiliar environments and plan trips. While tactile (2D) and 3D maps offer non-visual map access to people who are blind or visually impaired (BVI), this access is greatly enhanced by adding interactivity to the maps: when the user points at a feature of interest on the map, the name and other information about the feature are read aloud in audio. We explore how the use of an interactive 3D map of a playground, containing over seventy play structures and other features, affects spatial learning and cognition. Specifically, we perform experiments in which four blind participants answer questions about the map to evaluate their grasp of three types of spatial knowledge: landmark, route and survey. The results of these experiments demonstrate that participants are able to acquire this knowledge, most of which would be inaccessible without the interactivity of the map.
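The "point at a feature, hear its name" interaction described above reduces to a hit test over map coordinates. Below is a minimal sketch under assumed conventions: features are (name, x, y) tuples, and a tolerance radius keeps the map silent when the finger is between features; the feature list and radius are illustrative, not from the actual playground map.

```python
# Nearest-feature hit test: return the closest feature within `radius`
# of the touched point, or None if nothing is near enough to announce.
def feature_at(x, y, features, radius=15.0):
    best, best_d2 = None, radius * radius
    for name, fx, fy in features:
        d2 = (fx - x) ** 2 + (fy - y) ** 2
        if d2 <= best_d2:
            best, best_d2 = name, d2
    return best

playground = [("swing set", 30.0, 40.0), ("slide", 80.0, 25.0)]
hit = feature_at(32.0, 44.0, playground)   # finger lands near the swing set
miss = feature_at(55.0, 90.0, playground)  # open space: nothing announced
```

In a real interactive map the returned name would be passed to a text-to-speech engine rather than returned to the caller.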

7.
Proc Int Conf Audit Disp ; 2022: 82-90, 2022 Jun.
Article in English | MEDLINE | ID: mdl-36919036

ABSTRACT

The auditory virtual reality interface of Audiom, a web-based map viewer, was evaluated by thirteen blind participants. In Audiom, the user is an avatar that navigates, using the arrow keys, through geographic data, as if playing a first-person, egocentric game. The research questions were: What will make blind users want to use Audiom maps? Can participants demonstrate basic acquisition of spatial knowledge after viewing an auditory map? A dynamic choropleth map of state-level US COVID-19 data and a detailed OpenStreetMap-powered travel map were evaluated. All participants agreed that they wanted more maps of all kinds, in particular county-level COVID data, and that they would use Audiom once some bugs were fixed and their few recommended features were added. Everyone wanted to see Audiom embedded in their existing travel and mapping applications. All participants were able to answer a question evaluating spatial knowledge. Participants also agreed this spatial information was not available in existing applications.
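The arrow-key avatar mechanic described above can be sketched as movement over a tile grid where each step announces the tile the avatar lands on. This is an assumption about the mechanics for illustration, not Audiom's code; the grid contents are invented.

```python
# Egocentric auditory map navigation: the avatar moves one tile per
# keypress and the interface speaks the tile it lands on (or a bump
# message at the map edge).
GRID = [
    ["grass", "road", "grass"],
    ["grass", "road", "shop"],
    ["water", "road", "grass"],
]
MOVES = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}

def step(pos, key):
    """Return the new (row, col) position and the text to announce."""
    r, c = pos
    dr, dc = MOVES[key]
    nr, nc = r + dr, c + dc
    if not (0 <= nr < len(GRID) and 0 <= nc < len(GRID[0])):
        return pos, "edge of map"  # blocked: stay put, report the edge
    return (nr, nc), GRID[nr][nc]

pos = (1, 1)                    # start on the road
pos, said = step(pos, "right")  # one step east
```

A screen reader or synthesized voice would speak `said` after each keypress, giving the first-person game feel the abstract describes.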

8.
J Technol Pers Disabil ; 10: 135-153, 2022 Mar.
Article in English | MEDLINE | ID: mdl-37008596

ABSTRACT

This study evaluated the impact the Tactile Maps Automated Production (TMAP) system has had on its blind and visually impaired (BVI) and Orientation and Mobility (O&M) users and obtained suggestions for improvement. A semi-structured interview was performed with six BVI and seven O&M TMAP users who had printed or ordered two or more TMAPs in the last year. The number of maps downloaded from the online TMAP generation platform was also reviewed for each participant. The most significant finding is that access to TMAPs increased BVI users' map usage from less than one map a year to at least two maps from the order system; those with easy access to an embosser generated on average 18.33 TMAPs from the online system and reported embossing 42 maps on average at home or work. O&M specialists appreciated the quick, high-quality, scaled maps they could create and send home with their students, and they frequently used TMAPs with their braille-reading students. To improve TMAPs, users requested the following features: interactivity, greater customizability, viewing of transit stops, lower cost of ordered TMAPs, and nonvisual viewing of the digital TMAP on the online platform.

9.
Eur J Dent Educ ; 26(3): 599-607, 2022 Aug.
Article in English | MEDLINE | ID: mdl-34882932

ABSTRACT

INTRODUCTION: The COVID-19 pandemic impacted dental students and postgraduate residents worldwide, forcing them to rapidly adapt to new forms of teaching and learning. Dental school leadership needed to ensure academic continuity; therefore, the majority of in-person activities were transitioned into a virtual setting. The aim of this study was to identify students' perceptions of the measures taken by different dental schools in the European Region to adapt during the pandemic. MATERIALS AND METHODS: This cross-sectional study utilised a validated 37-question survey. Ethical approval was obtained from Trinity College Dublin, Ireland. Using this instrument, the perceptions of European dental students regarding the impact of COVID-19 on their education and mental health were identified. The questions were divided into categories: standard demographic information; models of education during the COVID-19 pandemic (types of teaching, examination and other educational activities); and support received. The survey was administered through electronic online tools, and all responses remained confidential. The data were processed through quantitative and qualitative analysis. RESULTS: A total of 879 student responses to the survey from 34 countries in the European Region were included in this study. When asked about the time spent on their education, 50% of the participants (n = 435) reported spending less time on their education and 30% (n = 265) reported spending more time. The types of teaching reported showed a heterogeneous approach, varying from online simulations to problem solving for the didactic setting, or a hybrid model with group activities for the clinical setting. There were broad splits in satisfaction with the education delivered, with 44% (n = 382) being either satisfied or very satisfied and 31% (n = 279) being either unsatisfied or very unsatisfied. Students were most concerned with their clinical experience and skills.
CONCLUSIONS: The qualitative and quantitative data compiled in this cross-sectional study enable a direct comparison between different approaches to adapting dental education during the COVID-19 pandemic in the European Region. Future studies are recommended that also compile perceptions of the transition from staff, faculty and administrators.


Subject(s)
COVID-19 , Education, Distance , Cross-Sectional Studies , Education, Dental , Humans , Pandemics , Students
10.
J Technol Pers Disabil ; 9: 125-139, 2021.
Article in English | MEDLINE | ID: mdl-34350305

ABSTRACT

Indoor navigation is a major challenge for people with visual impairments, who often lack access to visual cues such as informational signs, landmarks and structural features that people with normal vision rely on for wayfinding. We describe a new approach to recognizing and analyzing informational signs, such as Exit and restroom signs, in a building. This approach will be incorporated in iNavigate, a smartphone app we are developing, that provides accessible indoor navigation assistance. The app combines a digital map of the environment with computer vision and inertial sensing to estimate the user's location on the map in real time. Our new approach can recognize and analyze any sign from a small number of training images, and multiple types of signs can be processed simultaneously in each video frame. Moreover, in addition to estimating the distance to each detected sign, we can also estimate the approximate sign orientation (indicating if the sign is viewed head-on or obliquely), which improves the localization performance in challenging conditions. We evaluate the performance of our approach on four sign types distributed among multiple floors of an office building.

12.
Comput Help People Spec Needs ; 12376: 485-494, 2020 Sep.
Article in English | MEDLINE | ID: mdl-33263114

ABSTRACT

Indoor navigation is a major challenge for people with visual impairments, who often lack access to visual cues such as informational signs, landmarks and structural features that people with normal vision rely on for wayfinding. Building on our recent work on a computer vision-based localization approach that runs in real time on a smartphone, we describe an accessible wayfinding iOS app we have created that provides turn-by-turn directions to a desired destination. The localization approach combines dead reckoning obtained using visual-inertial odometry (VIO) with information about the user's location in the environment from informational sign detections and map constraints. We explain how we estimate the user's distance from Exit signs appearing in the image, describe new improvements in the sign detection and range estimation algorithms, and outline our algorithm for determining appropriate turn-by-turn directions.
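The distance-from-sign estimate mentioned above is commonly done with a pinhole camera model: range is proportional to the sign's real height divided by its apparent height in pixels. The sketch below uses that standard relation; the focal length and sign height are placeholder values, not the paper's calibration.

```python
# Pinhole-model range estimation: distance = f_px * H_real / h_pixels,
# where f_px is the focal length in pixels, H_real the sign's physical
# height, and h_pixels its detected height in the image.
def estimate_range(focal_px, sign_height_m, pixel_height):
    """Distance to a detected sign, in meters."""
    if pixel_height <= 0:
        raise ValueError("sign not visible")
    return focal_px * sign_height_m / pixel_height

# Example: a 0.20 m tall Exit sign that spans 50 px, with a 1500 px
# focal length, is estimated at 6.0 m away.
d = estimate_range(focal_px=1500.0, sign_height_m=0.20, pixel_height=50.0)
```

Each such range fix can then correct the drift that accumulates in the VIO dead-reckoning track between sign detections.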

13.
Comput Help People Spec Needs ; 12376: 475-484, 2020 Sep.
Article in English | MEDLINE | ID: mdl-33225323

ABSTRACT

Augmented reality (AR) has great potential for blind users because it enables a range of applications that provide audio information about specific locations or directions in the user's environment. For instance, the CamIO ("Camera Input-Output") AR app makes physical objects (such as documents, maps, devices and 3D models) accessible to blind and visually impaired persons by providing real-time audio feedback in response to the location on an object that the user is touching (using an inexpensive stylus). An important feature needed by blind users of AR apps such as CamIO is a 3D spatial guidance feature that provides real-time audio feedback to help the user find a desired location on an object. We have devised a simple audio interface to provide verbal guidance towards a target of interest in 3D. The experiment we report with blind participants using this guidance interface demonstrates the feasibility of the approach and its benefit for helping users find locations of interest.
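One plausible form for the verbal 3D guidance described above is to speak the dominant axis of the stylus-to-target offset. The sketch below is illustrative and not CamIO's actual interface; the camera-aligned axis convention (+x right, +y up, +z away from the user) and the 10 mm arrival threshold are assumptions.

```python
import math

# Turn a stylus-tip-to-target offset (millimeters) into a short verbal cue,
# speaking the dominant axis first, the way a human guide might.
def guidance_cue(stylus, target, close_mm=10.0):
    dx, dy, dz = (t - s for s, t in zip(stylus, target))
    dist = math.sqrt(dx * dx + dy * dy + dz * dz)
    if dist < close_mm:
        return "on target"
    axis, value = max(
        (("right" if dx > 0 else "left", abs(dx)),
         ("up" if dy > 0 else "down", abs(dy)),
         ("away" if dz > 0 else "toward you", abs(dz))),
        key=lambda p: p[1],
    )
    return f"move {axis} {round(value)} millimeters"

cue = guidance_cue(stylus=(0.0, 0.0, 0.0), target=(40.0, -5.0, 10.0))
```

Repeating the cue as the stylus moves produces a closed feedback loop that converges on the hotspot, which is the behavior the experiment evaluates.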

14.
Br Dent J ; 229(9): 622-626, 2020 11.
Article in English | MEDLINE | ID: mdl-33188346

ABSTRACT

Introduction: The United Kingdom (UK) left the European Union (EU) on 31 January 2020. Brexit will impact many sectors of the economy, including the dental sector. Methods and analysis: This policy analysis evaluates UK and EU legislation and planning documents, as well as the published literature, to analyse the impact on the dental sector of two scenarios relating to the UK's exit from the EU: a free trade agreement based on the jointly agreed Political Declaration (the 'FTA') and a 'no-deal' scenario. Conclusion: An FTA could cause price increases for medicines and medical devices and disrupt the work of regulating authorities in this area, while a 'no-deal' scenario would additionally risk shortages of medicines and medical devices as well as more dramatic price increases. In both the FTA and 'no-deal' scenarios, with EU law no longer applicable to the UK, more innovative policy in the area of tobacco control could be developed. An FTA could exacerbate existing workforce shortages and would likely cause a reduction in EU research funding, as well as posing issues with data transfers, with all of these likely to be more severe under a no-deal scenario.


Subject(s)
Oral Health , Policy Making , European Union , United Kingdom , Workforce
15.
Article in English | MEDLINE | ID: mdl-33163996

ABSTRACT

Wayfinding is a major challenge for visually impaired travelers, who generally lack access to visual cues such as landmarks and informational signs that many travelers rely on for navigation. Indoor wayfinding is particularly challenging since the most commonly used source of location information for wayfinding, GPS, is inaccurate indoors. We describe a computer vision approach to indoor localization that runs as a real-time app on a conventional smartphone, which is intended to support a full-featured wayfinding app in the future that will include turn-by-turn directions. Our approach combines computer vision, existing informational signs such as Exit signs, inertial sensors and a 2D map to estimate and track the user's location in the environment. An important feature of our approach is that it requires no new physical infrastructure. While our approach requires the user to either hold the smartphone or wear it (e.g., on a lanyard) with the camera facing forward while walking, it has the advantage of not forcing the user to aim the camera towards specific signs, which would be challenging for people with low or no vision. We demonstrate the feasibility of our approach with five blind travelers navigating an indoor space, with localization accuracy of roughly 1 meter once the localization algorithm has converged.

16.
J Technol Pers Disabil ; 8: 210-222, 2020.
Article in English | MEDLINE | ID: mdl-32802916

ABSTRACT

We describe a new approach to audio labeling of 3D objects such as appliances, 3D models and maps that enables a visually impaired person to audio label objects. Our approach to audio labeling is called CamIO, a smartphone app that issues audio labels when the user points to a hotspot (a location of interest on an object) with a handheld stylus viewed by the smartphone camera. The CamIO app allows a user to create a new hotspot location by pointing at the location with a second stylus and recording a personalized audio label for the hotspot. In contrast with other audio labeling approaches that require the object of interest to be constructed of special materials, 3D printed, or equipped with special sensors, CamIO works with virtually any rigid object and requires only a smartphone, a paper barcode pattern mounted to the object of interest, and two inexpensive styluses. Moreover, our approach allows a visually impaired user to create audio labels independently. We describe a co-design performed with six blind participants exploring how they label objects in their daily lives and a study with the participants demonstrating the feasibility of CamIO for providing accessible audio labeling.

17.
Proc Int Conf Audit Disp ; 2019: 20-27, 2019 Jun.
Article in English | MEDLINE | ID: mdl-32051791

ABSTRACT

This study evaluated a web-based auditory map prototype built utilizing conventions found in audio games and presents findings from a set of tasks participants performed with the prototype. The prototype allowed participants to use their own computer and screen reader, contrary to most studies, which restrict use to a single platform with a self-voicing feature (a voice that speaks by default). There were three major findings from the tasks: the interface was extremely easy to learn and navigate; participants all had unique navigational styles and preferred using their own screen reader; and participants needed user interface features that made it easier to understand and answer questions about spatial properties and relationships. Participants gave an average task load score of 39 on the NASA Task Load Index and a confidence level of 46/100 for actually using the prototype to physically navigate.

18.
Comput Help People Spec Needs ; 10897: 86-93, 2018 Jul.
Article in English | MEDLINE | ID: mdl-31058269

ABSTRACT

Indoor wayfinding is a major challenge for people with visual impairments, who are often unable to see visual cues such as informational signs, landmarks and structural features that people with normal vision rely on for wayfinding. We describe a novel indoor localization approach to facilitate wayfinding that uses a smartphone to combine computer vision and a dead reckoning technique known as visual-inertial odometry (VIO). The approach uses sign recognition to estimate the user's location on the map whenever a known sign is recognized, and VIO to track the user's movements when no sign is visible. The advantages of our approach are (a) that it runs on a standard smartphone and requires no new physical infrastructure, just a digital 2D map of the indoor environment that includes the locations of signs in it; and (b) that it allows the user to walk freely without having to actively search for signs with the smartphone (which is challenging for people with severe visual impairments). We report a formative study with four blind users demonstrating the feasibility of the approach and suggesting areas for future improvement.

19.
ASSETS ; 2017: 369-370, 2017.
Article in English | MEDLINE | ID: mdl-29218332

ABSTRACT

We describe three usability studies involving a prototype system for creation and haptic exploration of labeled locations on 3D objects. The system uses a computer, webcam, and fiducial markers to associate a physical 3D object in the camera's view with a predefined digital map of labeled locations ("hotspots"), and to do real-time finger tracking, allowing a blind or visually impaired user to explore the object and hear individual labels spoken as each hotspot is touched. This paper describes: (a) a formative study with blind users exploring pre-annotated objects to assess system usability and accuracy; (b) a focus group of blind participants who used the system and, through structured and unstructured discussion, provided feedback on its practicality, possible applications, and real-world potential; and (c) a formative study in which a sighted adult used the system to add labels to on-screen images of objects, demonstrating the practicality of remote annotation of 3D models. These studies and related literature suggest potential for future iterations of the system to benefit blind and visually impaired users in educational, professional, and recreational contexts.

20.
ASSETS ; 2017: 329-330, 2017.
Article in English | MEDLINE | ID: mdl-29218331

ABSTRACT

People with severe visual impairments usually have no way of identifying the colors of objects in their environment. While existing smartphone apps can recognize colors and speak them aloud, they require the user to center the object of interest in the camera's field of view, which is challenging for many users. We developed a smartphone app to address this problem that reads aloud the color of the object pointed to by the user's fingertip, without confusion from background colors. We evaluated the app with nine people who are blind, demonstrating the app's effectiveness and suggesting directions for improvements in the future.
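The core of the fingertip color-reading idea above is sampling the pixel under the detected fingertip and mapping it to a spoken color name. A minimal sketch is nearest-neighbor matching against a small named palette; the palette and distance metric here are assumptions, and the real app presumably uses a finer-grained color model and robust fingertip detection.

```python
# Name the color of an (r, g, b) pixel by nearest palette entry
# (squared RGB distance), suitable for passing to text-to-speech.
PALETTE = {
    "black": (0, 0, 0), "white": (255, 255, 255),
    "red": (255, 0, 0), "green": (0, 128, 0), "blue": (0, 0, 255),
    "yellow": (255, 255, 0), "gray": (128, 128, 128),
}

def name_color(rgb):
    """Return the palette name closest to the sampled pixel."""
    return min(
        PALETTE,
        key=lambda n: sum((a - b) ** 2 for a, b in zip(PALETTE[n], rgb)),
    )

spoken = name_color((250, 10, 5))  # pixel sampled at the fingertip
```

Restricting the sample to the fingertip location is what avoids the background-color confusion the abstract mentions.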
