Results 1 - 6 of 6
1.
Stud Health Technol Inform ; 306: 371-378, 2023 Aug 23.
Article in English | MEDLINE | ID: mdl-37638938

ABSTRACT

This research analyzed the accessibility of current metaverse platforms from the perspective of screen reader and switch scanning interface users, using a mixture of quantitative and qualitative assessments. To this end, two representative metaverse platforms, ZEPETO and Roblox, were targeted. The results showed that current metaverse platforms are not carefully designed with accessibility in mind. Many content elements and controls in the metaverse environment lack the alternative text descriptions and appropriate markup that are essential to make them perceivable and recognizable by assistive technology. People with severe disabilities are very likely to find it difficult or impossible to independently navigate the current metaverse environment, because the platforms provide no viable means of orientation and mobility in the 3D virtual space. The UI/UX of the current metaverse platforms also fails to provide adequate feedback to help people with limited sensory/motor functions understand its purpose and function. Overall, the current metaverse environment is not robust enough to work reliably with a wide range of assistive technologies.


Subject(s)
Self-Help Devices , Humans
2.
Disabil Rehabil Assist Technol ; 13(2): 140-145, 2018 02.
Article in English | MEDLINE | ID: mdl-28326859

ABSTRACT

We developed a 3D vision-based semi-autonomous control interface for assistive robotic manipulators. It was implemented on one of the most popular commercially available assistive robotic manipulators, combined with a low-cost depth-sensing camera mounted on the robot base. To perform a manipulation task with the 3D vision-based semi-autonomous control interface, the user starts operating with a manual control method available to him or her. When the control interface detects objects within a set range, it automatically stops the robot and presents the user with possible manipulation options through audible text output, based on the detected object characteristics. The system then waits until the user issues a voice command. Once the command is given, the control interface drives the robot autonomously until the command is completed. Empirical evaluations conducted with human subjects from two different groups showed that semi-autonomous control can serve as an alternative control method that enables individuals with impaired motor control to operate robot arms more efficiently by facilitating fine motion control. The advantage of semi-autonomous control was not obvious for simple tasks; for relatively complex real-life tasks, however, the 3D vision-based semi-autonomous control showed significantly faster performance. Implications for Rehabilitation A 3D vision-based semi-autonomous control interface will improve clinical practice by providing an alternative control method that is less demanding both physically and cognitively. A 3D vision-based semi-autonomous control provides the user with task-specific, intelligent semi-autonomous manipulation assistance. A 3D vision-based semi-autonomous control gives the user the feeling of remaining in control at every moment. A 3D vision-based semi-autonomous control is compatible with different types of new and existing manual control methods for ARMs.


Subject(s)
Robotics , Self-Help Devices , Upper Extremity , Adult , Aged , Equipment Design , Female , Humans , Male , Middle Aged , User-Computer Interface , Young Adult
3.
Disabil Rehabil Assist Technol ; 12(3): 227-235, 2017 04.
Article in English | MEDLINE | ID: mdl-26776719

ABSTRACT

PURPOSE: To investigate a new alternative interaction method, called the circling interface, for manipulating on-screen objects. To specify a target, the user makes a circling motion around it. To specify a desired pointing command with the circling interface, the edges of the screen are used: the user selects a command before circling the target. METHOD: To evaluate the circling interface, we conducted an experiment with 16 participants, comparing performance on pointing tasks across different combinations of selection method (circling interface, physical mouse and dwelling interface) and input device (standard computer mouse, head pointer and joystick mouse emulator). RESULTS: The circling interface is compatible with many types of pointing devices, does not require physical activation of mouse buttons, and is more efficient than dwell-clicking. Across all common pointing operations, the circling interface tended to produce faster performance with a head-mounted mouse emulator than with a joystick mouse. The circling interface was also more accurate than the dwelling interface. CONCLUSIONS: The circling interface has potential as an alternative pointing method for selecting and manipulating objects in a graphical user interface. Implications for Rehabilitation A circling interface will improve clinical practice by providing an alternative pointing method that does not require physically activating mouse buttons and is more efficient than dwell-clicking. The circling interface can also work with AAC devices.


Subject(s)
Computer Peripherals , Disabled Persons/rehabilitation , Movement , Self-Help Devices , User-Computer Interface , Adult , Equipment Design , Female , Humans , Male , Middle Aged , Time Factors , Young Adult
4.
Disabil Rehabil Assist Technol ; 12(5): 469-479, 2017 07.
Article in English | MEDLINE | ID: mdl-27292928

ABSTRACT

PURPOSE: To evaluate the performance of the circling interface, an alternative interaction method for selecting and manipulating on-screen objects by circling the target rather than pointing and clicking. METHOD: We conducted empirical evaluations with actual head-mounted mouse emulator users from two different groups, individuals with spinal cord injury (SCI) and individuals with cerebral palsy (CP), comparing each group's performance and satisfaction on pointing tasks with the circling interface against performance on the same tasks with dwell-clicking software. RESULTS: Taken across all operations, for both subjects with SCI and subjects with CP, the circling interface was faster than the dwell-clicking interface. For the single-click operation, the circling interface was slower than dwell selection, but for both the double-click and drag-and-drop operations it was faster. Subjects with CP required much longer to complete the tasks than subjects with SCI. If the circling interface automatically corrected errors caused by circling an area with no target, as well as unintentional circling caused by jerky movements or abnormally tiny circles, its accuracy would outperform existing solutions without a steep learning curve. CONCLUSIONS: The circling interface can be used in conjunction with existing techniques, and this combined approach achieves more effective mouse use for some individuals with pointing problems. It is also expected to be useful for both computer access and augmentative communication software. Implications for Rehabilitation A circling interface will improve clinical practice by providing an alternative pointing method that does not require physically activating mouse buttons and is more efficient than dwell-clicking. Used in conjunction with existing techniques, some head mouse users can achieve more effective mouse use. The circling interface can also work with AAC devices.


Subject(s)
Cerebral Palsy/rehabilitation , Communication Aids for Disabled , Spinal Cord Injuries/rehabilitation , User-Computer Interface , Adult , Equipment Design , Female , Humans , Male , Middle Aged , Patient Satisfaction , Time Factors
5.
Top Spinal Cord Inj Rehabil ; 23(2): 131-139, 2017.
Article in English | MEDLINE | ID: mdl-29339889

ABSTRACT

Background: Assistive robotic manipulators (ARMs) have been developed to provide enhanced assistance and independence in the performance of daily activities among people with spinal cord injury when a caregiver is not on site. However, the current commercial ARM user interfaces (UIs) may be difficult to learn and control. A touchscreen mobile UI was developed to overcome these challenges. Objective: The objective of this study was to compare the performance of 2 ARM UIs, the touchscreen and the original joystick, using an ARM evaluation tool (ARMET). Methods: This is a pilot study of people with upper extremity impairments (N = 8). Participants were trained on the 2 UIs and then chose one to use when performing 3 tasks on the ARMET: flipping a toggle switch, pushing down a door handle, and turning a knob. Task completion time, mean velocity, and open interviews were the main outcome measurements. Results: Of the 8 novice participants, 7 chose the touchscreen UI and 1 chose the joystick UI. All participants could complete the ARMET tasks independently. Use of the touchscreen UI resulted in enhanced ARMET performance (higher mean moving speed and faster task completion). Conclusions: The mobile ARM UI demonstrated an easier learning experience, less physical effort, and better ARMET performance. The improved performance, accessibility, and lower physical effort suggest that the touchscreen UI may be an efficient tool for ARM users.


Subject(s)
Activities of Daily Living , Self-Help Devices , Spinal Cord Injuries , User-Computer Interface , Wheelchairs , Adult , Aged , Disabled Persons , Female , Humans , Male , Middle Aged , Pilot Projects , Robotics , Young Adult
6.
Disabil Rehabil Assist Technol ; 7(6): 501-6, 2012 Nov.
Article in English | MEDLINE | ID: mdl-22356240

ABSTRACT

We developed an intelligent single switch scanning interface and wheelchair navigation assistance system, called intelligent single switch wheelchair navigation (ISSWN), to improve driving safety, comfort and efficiency for individuals who rely on single switch scanning as a control method. ISSWN combines a standard powered wheelchair with a laser rangefinder, a single switch scanning interface and a computer. It provides the user with context-sensitive, task-specific scanning options that reduce driving effort, based on an interpretation of sensor data together with user input. Trials performed by 9 able-bodied participants showed that the system significantly improved driving safety and efficiency in a navigation task, reducing the number of switch presses to 43.5% of that required by traditional single switch wheelchair navigation (p < 0.001). All participants also made a significant improvement (39.1%; p < 0.001) in completion time after only two trials.


Subject(s)
Activities of Daily Living , Artificial Intelligence , Disabled Persons/rehabilitation , Robotics , User-Computer Interface , Wheelchairs , Adult , Aged , Aged, 80 and over , Algorithms , Analysis of Variance , Chi-Square Distribution , Female , Humans , Male , Middle Aged , Pilot Projects , Young Adult