Results 1 - 20 of 25

1.
Article in English | MEDLINE | ID: mdl-39302771

ABSTRACT

The importance of data charts is self-evident, given their ability to express complex data in a simple format that facilitates quick and easy comparisons, analysis, and consumption. However, the inherent visual nature of the charts creates barriers for people with visual impairments to reap the associated benefits to the same extent as their sighted peers. While extant research has predominantly focused on understanding and addressing these barriers for blind screen reader users, the needs of low-vision screen magnifier users have been largely overlooked. In an interview study, almost all low-vision participants stated that it was challenging to interact with data charts on small screen devices such as smartphones and tablets, even though they could technically "see" the chart content. They ascribed these challenges mainly to the magnification-induced loss of visual context that connected data points with each other and also with chart annotations, e.g., axis values. In this paper, we present a method that addresses this problem by automatically transforming charts that are typically non-interactive images into personalizable interactive charts which allow selective viewing of desired data points and preserve visual context as much as possible under screen enlargement. We evaluated our method in a usability study with 26 low-vision participants, who all performed a set of representative chart-related tasks under different study conditions. In the study, we observed that our method significantly improved the usability of charts over both the status quo screen magnifier and a state-of-the-art space compaction-based solution.

2.
J Imaging; 9(11). 2023 Nov 06.
Article in English | MEDLINE | ID: mdl-37998086

ABSTRACT

Advertisements have become commonplace on modern websites. While ads are typically designed for visual consumption, it is unclear how they affect blind users who interact with the ads using a screen reader. Existing research studies on non-visual web interaction predominantly focus on general web browsing; the specific impact of extraneous ad content on blind users' experience remains largely unexplored. To fill this gap, we conducted an interview study with 18 blind participants; we found that blind users are often deceived by ads that contextually blend in with the surrounding web page content. While ad blockers can address this problem via a blanket filtering operation, many websites are increasingly denying access if an ad blocker is active. Moreover, ad blockers often do not filter out internal ads injected by the websites themselves. Therefore, we devised an algorithm to automatically identify contextually deceptive ads on a web page. Specifically, we built a detection model that leverages a multi-modal combination of handcrafted and automatically extracted features to determine if a particular ad is contextually deceptive. Evaluations of the model on a representative test dataset and 'in-the-wild' random websites yielded F1 scores of 0.86 and 0.88, respectively.
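The abstract reports only the aggregate F1 scores, not the pipeline details. A minimal sketch of the general recipe it describes, combining handcrafted and automatically extracted features in a binary classifier, is below; the feature names, model choice, and data are illustrative assumptions, not the paper's actual design.

```python
# Illustrative sketch only: feature names, model choice, and data are assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

def ad_feature_vector(handcrafted, content_embedding):
    """Concatenate handcrafted cues (e.g., topical overlap with the page, presence of
    an 'Ad' disclosure) with an automatically extracted embedding of the ad content."""
    return np.concatenate([handcrafted, content_embedding])

# Dummy stand-ins for a labeled dataset of ad candidates (1 = contextually deceptive).
rng = np.random.default_rng(0)
X = np.vstack([ad_feature_vector(rng.random(4), rng.random(16)) for _ in range(200)])
y = rng.integers(0, 2, size=200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
print("F1 on held-out ads:", round(f1_score(y_te, clf.predict(X_te)), 2))
```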

3.
HT ACM Conf Hypertext Soc Media ; 2021: 231-236, 2021 Aug.
Article in English | MEDLINE | ID: mdl-35265946

ABSTRACT

Blind users interact with smartphone applications using a screen reader, an assistive technology that enables them to navigate and listen to application content using touch gestures. Since blind users rely on screen reader audio, interacting with online videos can be challenging due to the screen reader audio interfering with the video sounds. Existing solutions to address this interference problem are predominantly designed for desktop scenarios, where special keyboard or mouse actions are supported to facilitate 'silent' and direct access to various video controls such as play, pause, and progress bar. As these solutions are not transferable to smartphones, suitable alternatives are desired. In this regard, we explore the potential of motion gestures in smartphones as an effective and convenient method for blind screen reader users to interact with online videos. Specifically, we designed and developed YouTilt, an Android application that enables screen reader users to exploit an assortment of motion gestures to access and manipulate various video controls. We then conducted a user study with 10 blind participants to investigate whether blind users can leverage YouTilt to properly execute motion gestures for video-interaction tasks while simultaneously listening to video sounds. Analysis of the study data showed a significant improvement in usability by as much as 43.3% (avg.) with YouTilt compared to that with the default screen reader, and overall a positive attitude towards and acceptance of motion gesture-based video interaction.
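The specific gesture vocabulary is defined in the paper; the sketch below only illustrates the general idea of mapping device-tilt readings to video controls, with made-up thresholds and action names.

```python
# Hypothetical tilt-to-control mapping; YouTilt's actual gesture set differs.
def tilt_to_action(pitch_deg: float, roll_deg: float, threshold: float = 25.0) -> str:
    """Map a tilt (degrees away from the neutral holding pose) to a video control."""
    if pitch_deg > threshold:
        return "volume_up"
    if pitch_deg < -threshold:
        return "volume_down"
    if roll_deg > threshold:
        return "seek_forward"
    if roll_deg < -threshold:
        return "seek_backward"
    return "no_action"

print(tilt_to_action(32.0, 4.0))   # -> volume_up
```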

4.
Proc Symp Appl Comput ; 2021: 1941-1949, 2021 Mar.
Article in English | MEDLINE | ID: mdl-35265951

ABSTRACT

Navigating back-and-forth between segments in webpages is well-known to be an arduous endeavor for blind screen-reader users, due to the serial nature of content navigation coupled with the inconsistent usage of accessibility-enhancing features such as WAI-ARIA landmarks and skip navigation links by web developers. Without these supporting features, navigating modern webpages that typically contain thousands of HTML elements in their DOMs is both tedious and cumbersome for blind screen-reader users. Existing approaches to improve non-visual navigation efficiency typically propose 'one-size-fits-all' solutions that do not accommodate the personal needs and preferences of screen-reader users. To fill this void, in this paper, we present sTag, a browser extension embodying a semi-automatic method that enables users to easily create their own Table Of Contents (TOC) for any webpage by simply 'tagging' their preferred 'semantically-meaningful' segments (e.g., search results, filter options, forms, menus, etc.) while navigating the webpage. This way, all subsequent accesses to these segments can be made via the generated TOC that is made instantly accessible via a special shortcut or a repurposed mouse/touchpad action. As tags in sTag are attached to abstract semantic segments instead of actual DOM nodes in the webpage, sTag can automatically generate equivalent TOCs for other similar webpages, without requiring the users to duplicate their tagging efforts from scratch in these webpages. An evaluation with 15 blind screen-reader users revealed that sTag significantly reduced the content-navigation time and effort compared to those with a state-of-the-art solution.
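sTag's segment model and matching algorithm are described in the paper; the sketch below only illustrates the underlying idea of storing a tag as an abstract, semantics-level descriptor and re-locating the matching segment on a similar page. The descriptor fields and the matching heuristic are assumptions made for illustration.

```python
# Sketch: tags attach to abstract segment descriptors, not concrete DOM nodes.
from dataclasses import dataclass
from bs4 import BeautifulSoup  # pip install beautifulsoup4

@dataclass
class SegmentTag:
    label: str            # TOC entry chosen by the user, e.g. "Search results"
    role_hint: str        # ARIA role or tag name the segment tends to have
    keywords: tuple       # words that tend to appear inside the segment

def locate(tag: SegmentTag, html: str):
    """Return the first element on a (possibly different) page matching the tag."""
    soup = BeautifulSoup(html, "html.parser")
    candidates = soup.find_all(attrs={"role": tag.role_hint}) + soup.find_all(tag.role_hint)
    for el in candidates:
        text = el.get_text(" ", strip=True).lower()
        if all(k in text for k in tag.keywords):
            return el
    return None

toc = [SegmentTag("Search results", "ul", ("result",)),
       SegmentTag("Filter options", "form", ("filter",))]
page = '<form>Filter by price</form><ul><li>Result 1</li></ul>'
print([(t.label, locate(t, page) is not None) for t in toc])
```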

5.
Article in English | MEDLINE | ID: mdl-35224455

ABSTRACT

Many people with low vision rely on screen-magnifier assistive technology to interact with productivity applications such as word processors, spreadsheets, and presentation software. Despite the importance of these applications, little is known about their usability with respect to low-vision screen-magnifier users. To fill this knowledge gap, we conducted a usability study with 10 low-vision participants having different eye conditions. In this study, we observed that the usability issues were predominantly due to the high spatial separation between the main edit area and the command ribbons on the screen, as well as the wide-span grid layout of the command ribbons; these two GUI aspects did not gel with the screen-magnifier interface due to the lack of instantaneous WYSIWYG (What You See Is What You Get) feedback after applying commands, given that the participants could only view a portion of the screen at any time. Informed by the study findings, we developed MagPro, an augmentation to productivity applications, which significantly improves usability by not only bringing application commands as close as possible to the user's current viewport focus, but also enabling easy and straightforward exploration of these commands using simple mouse actions. A user study with nine participants revealed that MagPro significantly reduced the time and workload to do routine command-access tasks, compared to using the state-of-the-art screen magnifier.

6.
Proc ACM Int Conf Inf Knowl Manag ; 2021: 58-67, 2021 Oct.
Article in English | MEDLINE | ID: mdl-35173995

ABSTRACT

Video accessibility is crucial for blind screen-reader users as online videos are increasingly playing an essential role in education, employment, and entertainment. While there exist quite a few techniques and guidelines that focus on creating accessible videos, there is a dearth of research that attempts to characterize the accessibility of existing videos. Therefore, in this paper, we define and investigate a diverse set of video and audio-based accessibility features in an effort to characterize accessible and inaccessible videos. As a ground truth for our investigation, we built a custom dataset of 600 videos, in which each video was assigned an accessibility score based on the number of its wins in a Swiss-system tournament, where human annotators performed pairwise accessibility comparisons of videos. In contrast to existing accessibility research where the assessments are typically done by blind users, we recruited sighted users for our effort, since videos comprise a special case where sight could be required to better judge if any particular scene in a video is presently accessible or not. Subsequently, by examining the extent of association between the accessibility features and the accessibility scores, we could determine the features that significantly (positively or negatively) impact video accessibility and therefore serve as good indicators for assessing the accessibility of videos. Using the custom dataset, we also trained machine learning models that leveraged our handcrafted features to either classify an arbitrary video as accessible/inaccessible or predict an accessibility score for the video. Evaluation of our models yielded an F1 score of 0.675 for binary classification and a mean absolute error of 0.53 for score prediction, thereby demonstrating their potential in video accessibility assessment while also illuminating their current limitations and the need for further research in this area.
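The paper's feature set and models are not detailed in the abstract; below is a minimal, hedged sketch of the overall setup, per-video accessibility features feeding both a classifier and a score regressor, with invented feature names and dummy data.

```python
# Feature names and data here are placeholders, not the paper's feature set.
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

def video_features(speech_ratio, onscreen_text_density, scene_change_rate, caption_coverage):
    """Bundle per-video audio/visual measurements into one feature vector."""
    return [speech_ratio, onscreen_text_density, scene_change_rate, caption_coverage]

X = np.array([video_features(0.9, 0.1, 0.2, 1.0),    # dummy "accessible" video
              video_features(0.1, 0.7, 0.9, 0.0),    # dummy "inaccessible" video
              video_features(0.6, 0.3, 0.4, 0.5)])
y_class = np.array([1, 0, 1])                         # accessible / inaccessible labels
y_score = np.array([0.8, 0.2, 0.6])                   # tournament-derived accessibility scores

classifier = LogisticRegression().fit(X, y_class)     # paper reports F1 = 0.675 for this task
regressor = LinearRegression().fit(X, y_score)        # paper reports MAE = 0.53 for this task
print(classifier.predict(X[:1]), regressor.predict(X[:1]))
```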

7.
MobileHCI; 2021. 2021 Sep.
Article in English | MEDLINE | ID: mdl-37547542

ABSTRACT

Gliding a finger on a touchscreen to reach a target, that is, touch exploration, is a common selection method of blind screen-reader users. This paper investigates their gliding behavior and presents a model for their motor performance. We discovered that the gliding trajectories of blind people are a mixture of two strategies: 1) ballistic movements with iterative corrections relying on non-visual feedback, and 2) multiple sub-movements separated by stops, and concatenated until the target is reached. Based on this finding, we propose the mixture pointing model, a model that relates movement time to the distance and width of the target. The model outperforms extant models, improving R² from 0.65 for Fitts' law to 0.76, and is superior in cross-validation and information criteria. The model advances understanding of gliding-based target selection and serves as a tool for designing interface layouts for screen-reader-based touch exploration.
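The abstract gives the R² comparison but not the model's closed form; that is in the paper. For reference, the Fitts' law baseline it is compared against predicts movement time T from target distance D and width W (Shannon formulation):

```latex
% Fitts' law (Shannon formulation): the baseline against which the mixture
% pointing model's fit (R^2 = 0.65 vs. 0.76) is reported.
T = a + b \log_2\!\left(\frac{D}{W} + 1\right)
```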

8.
ASSETS; 2020. 2020 Oct.
Article in English | MEDLINE | ID: mdl-33681868

ABSTRACT

People with visual impairments typically rely on screen-magnifier assistive technology to interact with webpages. As screen-magnifier users can only view a portion of the webpage content in an enlarged form at any given time, they have to endure an inconvenient and arduous process of repeatedly moving the magnifier focus back-and-forth over different portions of the webpage in order to make comparisons between data records, e.g., comparing the available flights in a travel website based on their prices, durations, etc. To address this issue, we designed and developed TableView, a browser extension that leverages a state-of-the-art information extraction method to automatically identify and extract data records and their attributes in a webpage, and subsequently presents them to a user in a compactly arranged tabular format that needs significantly less screen space compared to that currently occupied by these items in the page. This way, TableView is able to pack more items within the magnifier focus, thereby reducing the overall content area for panning, and hence making it easy for screen-magnifier users to compare different items before making their selections. A user study with 16 low-vision participants showed that with TableView, the time spent on panning the data records in webpages was significantly reduced by 72.9% (avg.) compared to that with just a screen magnifier, and 66.5% compared to that with a screen magnifier using a space compaction method.
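TableView's extractor is a state-of-the-art information-extraction method; the sketch below only illustrates a common heuristic for the same subproblem, spotting repeated sibling structures that usually correspond to data records, and is not the paper's algorithm.

```python
# Heuristic sketch: repeated sibling elements with the same structural signature
# are likely data records (search results, flight listings, product cards, ...).
from collections import defaultdict
from bs4 import BeautifulSoup

def record_groups(html: str, min_repeats: int = 3):
    soup = BeautifulSoup(html, "html.parser")
    groups = []
    for parent in soup.find_all(True):
        by_sig = defaultdict(list)
        for child in parent.find_all(recursive=False):
            signature = (child.name, tuple(sorted(child.get("class") or [])))
            by_sig[signature].append(child)
        groups += [g for g in by_sig.values() if len(g) >= min_repeats]
    return groups

html = '<ul><li class="flight">A</li><li class="flight">B</li><li class="flight">C</li></ul>'
print([len(g) for g in record_groups(html)])   # -> [3]
```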

9.
HCI Int 2020 Late Break Posters (2020) ; 12426: 291-305, 2020 Jul.
Article in English | MEDLINE | ID: mdl-33659964

ABSTRACT

Most computer applications manifest visually rich and dense graphical user interfaces (GUIs) that are primarily tailored for easy and efficient sighted interaction using a combination of two default input modalities, namely the keyboard and the mouse/touchpad. However, blind screen-reader users predominantly rely only on the keyboard, and therefore struggle to interact with these applications, since it is both arduous and tedious to perform visual 'point-and-click' tasks such as accessing the various application commands/features using just the keyboard shortcuts supported by screen readers. In this paper, we investigate the suitability of a 'rotate-and-press' input modality as an effective non-visual substitute for the visual mouse to easily interact with computer applications, with specific focus on word processing applications serving as the representative case study. In this regard, we designed and developed bTunes, an add-on for Microsoft Word that customizes an off-the-shelf Dial input device such that it serves as a surrogate mouse for blind screen-reader users to quickly access various application commands and features using a set of simple rotate and press gestures supported by the Dial. Therefore, with bTunes, blind users too can now enjoy the benefits of two input modalities, just as their sighted counterparts do. A user study with 15 blind participants revealed that bTunes significantly reduced both the time and number of user actions for doing representative tasks in a word processing application, by as much as 65.1% and 36.09%, respectively. The participants also stated that they did not face any issues switching between the keyboard and the Dial, and furthermore gave a high usability rating (84.66 avg. SUS score) for bTunes.
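The command hierarchy and gesture grammar of bTunes are defined in the paper; a minimal sketch of the rotate-and-press idea, a hierarchical menu traversed with just two gestures, is below. The menu contents and narration strings are invented.

```python
# Sketch of a rotate-and-press menu; labels and commands are placeholders.
class DialMenu:
    def __init__(self, items):
        self.items, self.index = items, 0     # items: list of (label, submenu-or-callable)

    def rotate(self, step):
        """Turn the Dial one notch: move focus and return the label to narrate."""
        self.index = (self.index + step) % len(self.items)
        return self.items[self.index][0]

    def press(self):
        """Press the Dial: open the focused submenu or run the focused command."""
        label, payload = self.items[self.index]
        if isinstance(payload, list):
            self.items, self.index = payload, 0
            return f"opened {label}"
        return payload()

menu = DialMenu([("Home", [("Bold", lambda: "bold applied"),
                           ("Italic", lambda: "italic applied")]),
                 ("Review", [("Spell check", lambda: "spell check started")])])
print(menu.rotate(+1))   # "Review"
print(menu.press())      # "opened Review"
print(menu.press())      # "spell check started"
```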

10.
Conf Proc IEEE Int Conf Syst Man Cybern ; 2020: 3799-3806, 2020 Oct.
Article in English | MEDLINE | ID: mdl-33679118

ABSTRACT

Interacting with long web documents such as wiktionaries, manuals, tutorials, blogs, novels, etc., is easy for sighted users, as they can leverage convenient pointing devices such as a mouse/touchpad to quickly access the desired content either via scrolling with visual scanning or clicking hyperlinks in the available Table of Contents (TOC). Blind users, on the other hand, are unable to use these pointing devices, and therefore can only rely on keyboard-based screen reader assistive technology that lets them serially navigate and listen to the page content using keyboard shortcuts. As a consequence, interacting with long web documents with just screen readers is often an arduous and tedious experience for blind users. To bridge the usability divide between how sighted and blind users interact with web documents, in this paper, we present iTOC, a browser extension that automatically identifies and extracts TOC hyperlinks from web documents, and then facilitates on-demand instant screen-reader access to the TOC from anywhere in the website. This way, blind users need not manually search for the desired content by moving the screen-reader focus sequentially all over the webpage; instead, they can simply access the TOC from anywhere using iTOC, and then select the desired hyperlink, which will automatically move the focus to the corresponding content in the document. A user study with 15 blind participants showed that with iTOC, both the access time and user effort (number of user input actions) were significantly lowered by as much as 42.73% and 57.9%, respectively, compared to that with another state-of-the-art solution for improving web usability.
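One concrete signal such an extractor can use, offered here purely as an illustration rather than as iTOC's actual detection logic, is that TOC hyperlinks usually point to in-page fragments.

```python
# Illustrative heuristic only; iTOC's real TOC detector is more sophisticated.
from bs4 import BeautifulSoup

def fragment_links(html: str):
    """Return (link text, target id) pairs for hyperlinks pointing within the same page."""
    soup = BeautifulSoup(html, "html.parser")
    return [(a.get_text(strip=True), a["href"][1:])
            for a in soup.find_all("a", href=True)
            if a["href"].startswith("#") and len(a["href"]) > 1]

html = '<a href="#usage">Usage</a><a href="#faq">FAQ</a><a href="https://example.com">External</a>'
print(fragment_links(html))   # -> [('Usage', 'usage'), ('FAQ', 'faq')]
```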

11.
ASSETS; 2020. 2020 Oct.
Article in English | MEDLINE | ID: mdl-33569549

ABSTRACT

People with low vision use screen magnifiers to interact with computers. They usually need to zoom and pan with the screen magnifier using predefined keyboard and mouse actions. When using office productivity applications (e.g., word processors and spreadsheet applications), the spatially distributed arrangement of UI elements makes interaction a challenging proposition for low-vision users, as they can only view a fragment of the screen at any moment. They expend significant chunks of time panning back-and-forth between application ribbons containing various commands (e.g., formatting, design, review, references, etc.) and the main edit area containing user content. In this demo, we will demonstrate MagPro, an interface augmentation to office productivity tools that not only reduces the interaction effort of low-vision screen-magnifier users by bringing the application commands as close as possible to the users' current focus in the edit area, but also lets them easily explore these commands using simple mouse actions. Moreover, MagPro automatically synchronizes the magnifier viewport with the keyboard cursor, so that users can always see what they are typing, without having to manually adjust the magnifier focus every time the keyboard cursor goes off screen during text entry.

12.
Conf Proc IEEE Int Conf Syst Man Cybern ; 2020: 2714-2721, 2020 Oct.
Article in English | MEDLINE | ID: mdl-33568891

ABSTRACT

Visual 'point-and-click' interaction artifacts such as the mouse and touchpad are tangible input modalities, which are essential for sighted users to conveniently interact with computer applications. In contrast, blind users are unable to leverage these visual input modalities and are thus limited while interacting with computers using a sequentially narrating screen-reader assistive technology that is coupled to keyboards. As a consequence, blind users generally require significantly more time and effort to do even simple application tasks (e.g., applying a style to text in a word processor) using only the keyboard, compared to their sighted peers who can effortlessly accomplish the same tasks using a point-and-click mouse. This paper explores the idea of repurposing visual input modalities for non-visual interaction so that blind users too can draw the benefits of simple and efficient access from these modalities. Specifically, with word processing applications as the representative case study, we designed and developed NVMouse as a concrete manifestation of this repurposing idea, in which the spatially distributed word-processor controls are mapped to a virtual hierarchical 'Feature Menu' that is easily traversable non-visually using simple scroll and click input actions. Furthermore, NVMouse enhances the efficiency of accessing frequently-used application commands by leveraging a data-driven prediction model that can determine what commands the user will most likely access next, given the current 'local' screen-reader context in the document. A user study with 14 blind participants comparing keyboard-based screen readers with NVMouse showed that the latter significantly reduced both the task-completion times and user effort (i.e., number of user actions) for different word-processing activities.
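NVMouse's prediction model also conditions on the local screen-reader context; the toy sketch below shows only the simpler, history-only part of the idea, ranking the commands that most often followed the last command issued.

```python
# Toy bigram (Markov) predictor over a command history; NVMouse's actual model
# additionally uses the local screen-reader context, which is omitted here.
from collections import Counter, defaultdict

class NextCommandModel:
    def __init__(self):
        self.counts = defaultdict(Counter)

    def observe(self, history):
        for prev, nxt in zip(history, history[1:]):
            self.counts[prev][nxt] += 1

    def predict(self, last_command, k=3):
        """Return the k commands most often issued right after `last_command`."""
        return [cmd for cmd, _ in self.counts[last_command].most_common(k)]

model = NextCommandModel()
model.observe(["select_text", "copy", "move_cursor", "paste", "select_text", "copy", "bold"])
print(model.predict("copy"))   # e.g. ['move_cursor', 'bold']
```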

13.
Article in English | MEDLINE | ID: mdl-34337341

ABSTRACT

PDF forms are ubiquitous. Businesses big and small, government agencies, health and educational institutions, and many others have all embraced PDF forms. People use PDF forms for providing information to these entities. But people who are blind frequently find it very difficult to fill out PDF forms with screen readers, the standard assistive software that they use for interacting with computer applications. Firstly, many of them are not even accessible, as they are non-interactive and hence not editable on a computer. Secondly, even if they are interactive, it is not always easy to associate the correct labels with the form fields, either because the labels are not meaningful or because the sequential reading order of the screen reader misses the visual cues that associate the correct labels with the fields. In this paper we present a solution to the accessibility problem of PDF forms. We leverage the fact that many people with visual impairments are familiar with web browsing and are proficient at filling out web forms. Thus, we create a web form layer over the PDF form via a high-fidelity transformation process that attempts to preserve all the spatial relationships of the PDF elements, including forms, their labels, and the textual content. Blind people only interact with the web forms, and the filled-out web form fields are transparently transferred to the corresponding fields in the PDF form. An optimization algorithm automatically adjusts the length and width of the PDF fields to accommodate arbitrary-size field data. This ensures that the filled-out PDF document does not have any truncated form-field values and, additionally, that it is readable. A user study with fourteen users with visual impairments revealed that they were able to populate more form fields than with the status quo, and the self-reported user experience with the proposed interface was superior compared to the status quo.
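A minimal sketch of just the write-back step, copying values collected in the web-form layer into an already-interactive PDF's fields, is below, using the pypdf library (recent versions). The paper's system additionally converts non-interactive PDFs and resizes fields to fit the data, which is not shown, and the field names used here are hypothetical.

```python
# Sketch of the value write-back only; assumes an interactive (AcroForm) PDF and pypdf >= 3.
from pypdf import PdfReader, PdfWriter

def transfer_values(pdf_in: str, pdf_out: str, web_form_values: dict) -> None:
    """Copy {field name: value} pairs gathered from the HTML layer into the PDF's fields."""
    writer = PdfWriter()
    writer.append(PdfReader(pdf_in))                   # clone pages and their form fields
    for page in writer.pages:
        if page.get("/Annots"):                        # only pages that carry form widgets
            writer.update_page_form_field_values(page, web_form_values)
    with open(pdf_out, "wb") as f:
        writer.write(f)

# Field names must match those defined in the source PDF; these are placeholders.
transfer_values("form.pdf", "form_filled.pdf", {"Name": "Jane Doe", "Date": "2021-03-01"})
```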

14.
Article in English | MEDLINE | ID: mdl-34337615

ABSTRACT

Consuming video content poses significant challenges for many users of screen magnifiers, the "go-to" assistive technology for people with low vision. While screen-magnifier software could be used to achieve a zoom factor that would make the content of the video visible to low-vision users, it is oftentimes a major challenge for these users to navigate through videos. Towards making videos more accessible for low-vision users, we have developed the SViM video magnifier system [6]. Specifically, SViM consists of three different magnifier interfaces with easy-to-use means of interaction. All three interfaces are driven by visual saliency as a guiding signal, which provides a quantification of interestingness at the pixel level. Saliency information, which is provided as a heatmap, is then processed to obtain distinct regions of interest. These regions of interest are tracked over time and displayed using an easy-to-use interface. We present a description of our overall design and interfaces.
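The saliency model and tracking are described in the paper; the sketch below illustrates only the intermediate heatmap-to-regions step, thresholding the saliency map and keeping large connected components as candidate regions of interest. The thresholds are arbitrary.

```python
# Sketch of turning a saliency heatmap into candidate ROIs; not SViM's actual pipeline.
import cv2
import numpy as np

def saliency_rois(saliency: np.ndarray, thresh: float = 0.6, min_area: int = 200):
    """saliency: float heatmap in [0, 1]; returns (x, y, w, h) boxes of salient blobs."""
    mask = (saliency >= thresh).astype(np.uint8)
    n, _, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    boxes = []
    for i in range(1, n):                      # label 0 is the background
        x, y, w, h, area = stats[i]
        if area >= min_area:
            boxes.append((int(x), int(y), int(w), int(h)))
    return boxes

heatmap = np.zeros((480, 640), dtype=np.float32)
heatmap[100:160, 200:300] = 0.9                # dummy salient region
print(saliency_rois(heatmap))                  # -> [(200, 100, 100, 60)]
```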

15.
ASSETS; 2020. 2020 Oct.
Article in English | MEDLINE | ID: mdl-33569550

ABSTRACT

Filling out PDF forms with screen readers has always been a challenge for people who are blind. Many of these forms are not interactive and hence are not accessible; even if they are interactive, the serial reading order of the screen reader makes it difficult to associate the correct labels with the form fields. This demo will present TransPAc [5], an assistive technology that enables blind people to fill out PDF forms. Since blind people are familiar with web browsing, TransPAc leverages this fact by faithfully transforming a PDF document with forms into an HTML page. The blind user fills out the form fields in the HTML page with their screen reader, and these filled-in data values are transparently transferred onto the corresponding form fields in the PDF document. TransPAc thus addresses a long-standing problem in PDF form accessibility.

16.
IUI ; 2020: 10-21, 2020 Mar.
Article in English | MEDLINE | ID: mdl-33569551

ABSTRACT

People with low vision who use screen magnifiers to interact with computing devices find it very challenging to interact with dynamically changing digital content such as videos, since they do not have the luxury of time to manually move, i.e., pan the magnifier lens to different regions of interest (ROIs) or zoom into these ROIs before the content changes across frames. In this paper, we present SViM, a first-of-its-kind screen-magnifier interface for such users that leverages advances in computer vision, particularly video saliency models, to identify salient ROIs in videos. SViM's interface allows users to zoom in/out of any point of interest, switch between ROIs via mouse clicks, and provides assistive panning with the added flexibility that lets the user explore other regions of the video besides the ROIs identified by SViM. Subjective and objective evaluation of a user study with 13 low-vision screen-magnifier users revealed that overall the participants had a better user experience with SViM over extant screen magnifiers, indicative of the former's promise and potential for making videos accessible to low-vision screen-magnifier users.

17.
IUI ; 2020: 111-115, 2020 Mar.
Article in English | MEDLINE | ID: mdl-33585839

ABSTRACT

Navigating webpages with screen readers is a challenge even with recent improvements in screen reader technologies and the increased adoption of web standards for accessibility, namely ARIA. ARIA landmarks, an important aspect of ARIA, let screen reader users access different sections of the webpage quickly, by enabling them to skip over blocks of irrelevant or redundant content. However, these landmarks are sporadically and inconsistently used by web developers, and in many cases are absent altogether from numerous web pages. Therefore, we propose SaIL, a scalable approach that automatically detects the important sections of a web page and then injects ARIA landmarks into the corresponding HTML markup to facilitate quick access to these sections. The central concept underlying SaIL is visual saliency, which is determined using a state-of-the-art deep learning model that was trained on gaze-tracking data collected from sighted users in the context of web browsing. We present the findings of a pilot study that demonstrated the potential of SaIL in reducing both the time and effort spent in navigating webpages with screen readers.
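The landmark-injection step, as distinct from the deep saliency model that decides which sections matter, can be sketched as below; the selector list stands in for SaIL's saliency-driven section detection and is purely illustrative.

```python
# Sketch of the injection step only; section selection is a placeholder here.
from bs4 import BeautifulSoup

def inject_landmarks(html: str, important_selectors):
    soup = BeautifulSoup(html, "html.parser")
    for selector in important_selectors:
        for el in soup.select(selector):
            el["role"] = "region"                       # generic ARIA landmark role
            if not el.get("aria-label"):
                el["aria-label"] = el.get_text(" ", strip=True)[:40] or "Important section"
    return str(soup)

print(inject_landmarks('<div id="results"><p>Top stories</p></div>', ["#results"]))
# The <div> gains role="region" and aria-label="Top stories".
```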

18.
Article in English | MEDLINE | ID: mdl-34337621

ABSTRACT

Web browsing has never been easy for blind people, primarily due to the serial press-and-listen interaction mode of screen readers - their "go-to" assistive technology. Even simple navigational browsing actions on a page require a multitude of shortcuts. Auto-suggesting the next browsing action has the potential to assist blind users in swiftly completing various tasks with minimal effort. The extant auto-suggest feature in web pages is limited to filling out form fields; in this paper, we generalize it to any web screen-reading browsing action, e.g., navigation, selection, etc. Towards that, we introduce SuggestOmatic, a personalized and scalable unsupervised approach for predicting the most likely next browsing action of the user, and proactively suggesting it to the user so that the user can avoid pressing a lot of shortcuts to complete that action. SuggestOmatic rests on two key ideas. First, it exploits the user's Action History to identify and suggest a small set of browsing actions that will, with high likelihood, contain an action which the user will want to do next, and the chosen action is executed automatically. Second, the Action History is represented as an abstract temporal sequence of operations over semantic web entities called Logical Segments - a collection of related HTML elements, e.g., widgets, search results, menus, forms, etc.; this semantics-based abstract representation of browsing actions in the Action History makes SuggestOmatic scalable across websites, i.e., actions recorded in one website can be used to make suggestions for other similar websites. We also describe an interface that uses an off-the-shelf physical Dial as an input device that enables SuggestOmatic to work with any screen reader. The results of a user study with 12 blind participants indicate that SuggestOmatic can significantly reduce browsing task times by as much as 29% when compared with a hand-crafted macro-based web automation solution.

19.
Article in English | MEDLINE | ID: mdl-33585840

ABSTRACT

Gesture typing, entering a word by gliding the finger sequentially from letter to letter, has been widely supported on smartphones for sighted users. However, this input paradigm is currently inaccessible to blind users: it is difficult to draw shape gestures on a virtual keyboard without access to key visuals. This paper describes the design of accessible gesture typing, to bring this input paradigm to blind users. To help blind users figure out key locations, the design incorporates the familiar screen-reader-supported touch exploration that narrates the keys as the user drags the finger across the keyboard. The design allows users to seamlessly switch between exploration and gesture typing mode by simply lifting the finger. Continuous, touch-exploration-like audio feedback is provided during word-shape construction, helping the user glide toward the key locations constituting the word. Exploration mode resumes once the word shape is completed. Distinct earcons help distinguish gesture typing mode from touch exploration mode, and thereby avoid unintended mix-ups. A user study with 14 blind people shows a 35% increase in typing speed, indicative of the promise and potential of gesture typing technology for non-visual text entry.
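A toy reading of the mode-switching behavior described above, where lifting the finger toggles between exploration and gesture typing, is sketched below; the earcon hooks and the exact interaction grammar are assumptions based only on this abstract.

```python
# Toy state machine for lift-to-switch mode handling; details are assumed, not from the paper.
class TypingModeMachine:
    EXPLORE, GESTURE = "explore", "gesture"

    def __init__(self):
        self.mode = self.EXPLORE

    def on_finger_lift(self):
        if self.mode == self.EXPLORE:
            self.mode = self.GESTURE      # play "gesture typing" earcon; next stroke draws the word shape
            return "armed gesture typing"
        self.mode = self.EXPLORE          # play "exploration" earcon; the drawn word shape is committed
        return "word shape committed"

machine = TypingModeMachine()
print(machine.on_finger_lift())   # -> armed gesture typing
print(machine.on_finger_lift())   # -> word shape committed
```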

20.
IUI ; 2018: 427-431, 2018 Mar.
Artigo em Inglês | MEDLINE | ID: mdl-30027159

RESUMO

Working with non-digital, standard printed materials has always been a challenge for blind people, especially writing. Blind people very often depend on others to fill out printed forms, write checks, and sign receipts and documents. Extant assistive technologies for working with printed material have exclusively focused on reading, with little to no support for writing. Also, these technologies employ special-purpose hardware that is usually worn on the fingers, making it unsuitable for writing. In this paper, we explore the idea of using off-the-shelf smartwatches (paired with smartphones) to assist blind people in both reading and writing paper forms, including checks and receipts. Towards this, we performed a Wizard-of-Oz evaluation of different smartwatch-based interfaces that provide user-customized audio-haptic feedback in real time to guide blind users to different form fields, narrate the field labels, and help them write straight while filling out these fields. Finally, we report the findings of this study, including the technical challenges and user expectations that can potentially inform the design of Write-it-Yourself aids based on smartwatches.
