1.
Sci Rep ; 14(1): 1313, 2024 01 15.
Article in English | MEDLINE | ID: mdl-38225344

ABSTRACT

Visual prostheses such as the Argus II provide partial vision for individuals with limited or no light perception. However, their effectiveness in daily life is limited by scene complexity and variability. We investigated whether additional image processing techniques could improve mobility performance in everyday indoor environments. A mobile system connected to the Argus II provided thermal or distance-filtered video stimulation. Four participants used the thermal camera to locate a person and the distance filter to navigate a hallway with obstacles. The thermal camera allowed participants to find a target person in 99% of trials, while unfiltered video led to confusion with other objects and a success rate of only 55% ([Formula: see text]). Similarly, the distance filter enabled participants to detect and avoid 88% of obstacles by removing background clutter, whereas unfiltered video resulted in a detection rate of only 10% ([Formula: see text]). For any given elapsed time, the success rate with filtered video was higher than with unfiltered video. After 90 s, participants' success rate exceeded 50% with filtered video, versus 24% and 3% with the unfiltered camera in the first and second tasks, respectively. Despite individual variations, all participants showed significant improvement when using the thermal and distance filters compared with unfiltered video. Adding thermal and distance filters to a visual prosthesis system can enhance performance in mobility activities by removing background clutter and highlighting people and warm objects (thermal camera) or nearby obstacles (distance filter).
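The thermal filtering described above reduces a cluttered scene to a high-contrast image of warm targets before it reaches the implant. A minimal sketch of that preprocessing step, assuming a radiometric thermal frame of per-pixel temperatures; the temperature band is a plausible human-skin range, not a value reported in the paper.

```python
import numpy as np

def thermal_filter(thermal_frame, temp_min_c=30.0, temp_max_c=40.0):
    """Reduce a thermal frame to a high-contrast mask of warm targets.

    thermal_frame: (H, W) array of per-pixel temperatures in deg C.
    The 30-40 deg C band is an assumed human-skin range, not a value
    reported in the study.
    """
    mask = (thermal_frame >= temp_min_c) & (thermal_frame <= temp_max_c)
    return mask.astype(np.uint8) * 255  # warm targets white, clutter black
```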


Subject(s)
Visual Prosthesis , Humans , Prosthesis Implantation , Vision Disorders , Image Processing, Computer-Assisted , Diagnostic Imaging
2.
Transl Vis Sci Technol ; 12(10): 14, 2023 10 03.
Article in English | MEDLINE | ID: mdl-37847202

ABSTRACT

Purpose: Visual functioning questionnaires are commonly used as patient-reported outcome measures to estimate visual ability. Performance measures, on the other hand, provide a direct measure of visual ability. For individuals with ultra-low vision (ULV; visual acuity [VA] <20/1600), the Ultra-Low Vision Visual Functioning Questionnaire (ULV-VFQ) and the Wilmer VRI, a virtual reality-based performance test, estimate self-reported and actual visual ability, respectively, for activities of daily living. But how well do self-reports from the ULV-VFQ predict actual task performance in the Wilmer VRI? Methods: We administered a subset of 10 matching items from the ULV-VFQ and Wilmer VRI to 27 individuals with ULV. We estimated item measures (task difficulty) and person measures (visual ability) using Rasch analysis for the ULV-VFQ and latent variable signal detection theory for the Wilmer VRI. We then used regression analysis to compare person and item measure estimates from self-reports and task performance. Results: Item and person measures were modestly correlated between the two instruments, with r2 = 0.47 (P = 0.02) and r2 = 0.36 (P = 0.001), respectively, demonstrating that self-reports are an imperfect predictor of task difficulty and performance. Conclusions: While self-reports impose a lower demand for equipment and personnel, actual task performance should be measured to assess visual ability in ULV. Translational Relevance: Visual performance measures should be the preferred outcome measure in clinical trials recruiting individuals with ULV. Virtual reality can be used to standardize tasks.
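The regression step in the Methods is an ordinary least-squares comparison of two sets of measure estimates. A minimal sketch of that analysis pattern, using simulated person measures because the abstract does not report the underlying values.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical person measures for 27 participants from the two instruments;
# the study's actual values are not given in the abstract.
vfq_measures = rng.normal(size=27)                                   # self-report (ULV-VFQ)
vri_measures = 0.6 * vfq_measures + rng.normal(scale=0.8, size=27)   # performance (Wilmer VRI)

fit = stats.linregress(vfq_measures, vri_measures)
print(f"r^2 = {fit.rvalue**2:.2f}, p = {fit.pvalue:.3g}")
```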


Subject(s)
Activities of Daily Living , Vision, Low , Humans , Self Report , Vision, Low/diagnosis , Task Performance and Analysis , Visual Acuity
3.
J Vis ; 23(11): 55, 2023 09 01.
Article in English | MEDLINE | ID: mdl-37733523

ABSTRACT

Ultra-low vision (ULV) refers to a level of vision that is ≤20/1600. There is a growing number of vision restoration treatments that recruit people with ULV or restore vision to the ULV level. At present, limited standardized outcome measures are available to assess visual potential before and after such vision restoration treatments. The ULV toolkit was developed as a standardized outcome measure for people with ULV. Three virtual reality (VR)-based modules were developed to assess visual information gathering, hand-eye coordination, and wayfinding in people with ULV. Each module consisted of a range of visually guided tasks related to activities of daily life (e.g., judging the direction of motion of cars, flipping a light switch, boarding a train). Each response was scored as 1 (correct) or 0 (incorrect). These raw scores were then analyzed to estimate item difficulty (item measures) and person ability (person measures). Item measures showed a wide range of difficulty levels that can be used to evaluate visual performance in people with ULV. Person measures were correlated with estimated logMAR visual acuity as well as completion rates, number of collisions, and reaction times. This study bridges a significant gap in the field of ULV, where little is known about visual potential and the usefulness of vision in activities of daily life. VR provides portability and consistency for testing across participants with ULV, thereby allowing for standardization of measurements across vision restoration studies.
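The scoring step above turns a 0/1 response matrix into difficulty and ability estimates on a logit scale. A minimal sketch of the simplest form of that conversion, assuming a dichotomous Rasch-style framing; the study's actual estimation procedure is more involved than this centered log-odds starting point.

```python
import numpy as np

def item_difficulties(scores):
    """Rough item-difficulty estimates (logits) from a 0/1 score matrix.

    scores: (n_persons, n_items) array, 1 = correct, 0 = incorrect.
    This is only the centered log-odds starting point of a Rasch-style
    fit, not the full joint estimation a calibration study would use.
    """
    p = scores.mean(axis=0).clip(1e-3, 1 - 1e-3)  # proportion correct per item
    b = np.log((1 - p) / p)                       # harder items -> larger logits
    return b - b.mean()                           # center the scale at 0
```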


Subject(s)
Virtual Reality , Vision, Low , Humans , Vision Disorders , Visual Acuity , Automobiles
4.
Sci Rep ; 13(1): 3143, 2023 02 23.
Article in English | MEDLINE | ID: mdl-36823360

ABSTRACT

People with ULV (visual acuity ≤ 20/1600, or 1.9 logMAR) lack form vision but have rudimentary levels of vision that can be used for a range of activities in daily life. However, current clinical tests are designed to assess form vision and do not provide information about the range of visually guided activities that can be performed in daily life using ULV. This is important to know given the growing number of clinical trials that recruit individuals with ULV (e.g., gene therapy, stem cell therapy) or restore vision to the ULV range in the blind (visual prostheses). In this study, we developed a set of 19 activities (items) in virtual reality involving spatial localization/detection, motion detection, and direction of motion that can be used to assess visual performance in people with ULV. We estimated measures of item difficulty and person ability on a relative d prime (d') axis using a signal detection theory-based analysis for latent variables. The items represented a range of difficulty levels (-1.09 to 0.39 in relative d'), and person measures in a heterogeneous group of individuals with ULV spanned -0.74 to 2.2 in relative d', showing the instrument's utility as an outcome measure in clinical trials.
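The measures above sit on a d' axis from signal detection theory. As a building block, a minimal sketch of the textbook d' computation for a single detection condition; the paper's latent-variable extension jointly estimates persons and items, which this one-condition formula does not attempt.

```python
from scipy.stats import norm

def d_prime(hit_rate, false_alarm_rate):
    """Textbook signal-detection sensitivity: d' = z(H) - z(F).

    The study places items and persons on a *relative* d' axis via a
    latent-variable analysis; this is only the underlying formula.
    """
    return norm.ppf(hit_rate) - norm.ppf(false_alarm_rate)

print(round(d_prime(0.85, 0.20), 2))  # 1.88
```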


Subject(s)
Virtual Reality , Vision, Low , Humans , Vision, Low/diagnosis , Surveys and Questionnaires , Vision Disorders , Blindness
5.
Front Neurosci ; 17: 1251935, 2023.
Article in English | MEDLINE | ID: mdl-38178831

ABSTRACT

Introduction: Ultra-low vision (ULV) refers to profound visual impairment in which an individual cannot read even the top line of letters on an ETDRS chart from a distance of 0.5 m. There are limited tools available to assess visual ability in ULV. The aim of this study was to develop and calibrate a new performance test, the Wilmer VRH, to assess hand-eye coordination in individuals with ULV. Methods: A set of 55 activities was developed for presentation in a virtual reality (VR) headset. Activities were grouped into 2-step and 5-step items. Participants performed a range of tasks involving reaching and grasping, stacking, sorting, pointing, throwing, and cutting. Data were collected from 20 healthy volunteers under normal vision (NV) and simulated ULV (sULV) conditions, and from 33 participants with ULV. Data were analyzed using the method of successive dichotomizations (MSD), a polytomous Rasch model, to estimate item (difficulty) and person (ability) measures. MSD was applied separately to the 2-step and 5-step performance data, which were then merged onto a single equal-interval scale. Results: The mean ± SD completion rates were 98.6 ± 1.8%, 78.2 ± 12.5%, and 61.1 ± 34.2% for NV, sULV, and ULV, respectively. Item measures ranged from -1.09 to 5.7 logits and -4.3 to 4.08 logits, and person measures ranged from -0.03 to 4.2 logits and -3.5 to 5.2 logits, in the sULV and ULV groups, respectively. Ninety percent of item infits were within the desired range of [0.5, 1.5], and 97% of person infits were within that range. Together with item and person reliabilities of 0.94 and 0.91, respectively, this demonstrates the unidimensionality of the Wilmer VRH. A person-item map showed that the items were well targeted to the sample of individuals with ULV in the study. Discussion: We present the development of a calibrated set of activities in VR that can be used to assess hand-eye coordination in individuals with ULV. This helps bridge a gap in the field by providing a validated outcome measure that can be used in vision restoration trials that recruit people with ULV, and to assess rehabilitation outcomes in people with ULV.
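The infit statistics quoted in the Results are information-weighted mean-squares of model residuals; values near 1 indicate responses consistent with the Rasch model. A minimal sketch for the dichotomous case; the study's MSD model is polytomous, so this is a deliberate simplification for illustration.

```python
import numpy as np

def infit_mnsq(observed, expected):
    """Information-weighted (infit) mean-square fit statistic for one item.

    observed: 0/1 responses across persons; expected: model-predicted
    probabilities for those same responses. Dichotomous simplification of
    the polytomous (MSD) case used in the study. Values in roughly
    [0.5, 1.5] are the range the paper treats as acceptable fit.
    """
    variance = expected * (1 - expected)   # Bernoulli variance per response
    residual_sq = (observed - expected) ** 2
    return residual_sq.sum() / variance.sum()
```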

6.
Front Neurosci ; 16: 901337, 2022.
Article in English | MEDLINE | ID: mdl-36090266

ABSTRACT

Two of the main obstacles to the development of epiretinal prosthesis technology are electrodes that require current amplitudes above safety limits to reliably elicit percepts, and a failure to consistently elicit pattern vision. Here, we explored the causes of high current amplitude thresholds and poor spatial resolution within the Argus II epiretinal implant. We measured current amplitude thresholds and two-point discrimination (the ability to determine whether one or two electrodes had been stimulated) in three blind participants implanted with Argus II devices. Our data and simulations show that axonal stimulation, electrode lift, and retinal damage all play a role in reducing performance in the Argus II, by limiting sensitivity, reducing spatial resolution, or both. Understanding the relative roles of these factors will be critical for developing and surgically implanting devices that can successfully subserve pattern vision.

7.
J Neural Eng ; 19(3)2022 06 09.
Article in English | MEDLINE | ID: mdl-35613043

ABSTRACT

Objective. Electrical stimulation of the retina can elicit flashes of light called phosphenes, which can be used to restore rudimentary vision for people with blindness. Functional sight requires stimulation of multiple electrodes to create patterned vision, but phosphenes tend to merge together in an uninterpretable way. Sequentially stimulating electrodes in human visual cortex has recently been shown to 'draw' shapes with better perceptual resolution than simultaneous stimulation. The goal of this study was to evaluate whether sequential stimulation would also form clearer shapes when the retina is the neural target. Approach. Two human participants with retinitis pigmentosa who had Argus® II epiretinal prostheses participated in this study. We evaluated different temporal parameters for sequential stimulation and performed phosphene shape mapping and forced-choice discrimination tasks. For the discrimination tasks, performance was compared between stimulating electrodes simultaneously and sequentially. Main results. Phosphenes elicited by different electrodes were reported as vastly different shapes. For sequential stimulation, the optimal pulse train duration was 200 ms when stimulating at 20 Hz, and gap intervals of 0 and 50 ms performed equally well. Sequential electrode stimulation outperformed simultaneous stimulation in simple discrimination tasks, in which shapes were created by stimulating 3-4 electrodes, but not in more complex discrimination tasks involving ≥5 electrodes. The efficacy of sequential stimulation depended strongly on selecting electrodes that elicited phosphenes with similar shapes and sizes. Significance. An epiretinal prosthesis can produce coherent simple shapes with a sequential stimulation paradigm, which can be used as rudimentary visual feedback. However, success in creating more complex shapes, such as letters of the alphabet, is still limited. Sequential stimulation may be most beneficial for epiretinal prostheses in simple tasks, such as basic navigation, rather than complex tasks such as novel object identification.
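The timing parameters reported above (200 ms trains at 20 Hz, with a 0-50 ms gap between electrodes) fully determine a sequential stimulation schedule. A minimal sketch that computes pulse-onset times for such a sequence; this is illustrative timing arithmetic, not the device's actual stimulation interface.

```python
def sequential_schedule(n_electrodes, train_ms=200.0, rate_hz=20.0, gap_ms=50.0):
    """Pulse-onset times (ms) for electrodes stimulated one after another.

    Defaults mirror the abstract's best-performing parameters (200 ms
    trains at 20 Hz, 0-50 ms gap). Illustrative only; not the Argus II API.
    """
    period_ms = 1000.0 / rate_hz                  # 50 ms between pulses at 20 Hz
    pulses_per_train = int(train_ms / period_ms)  # 4 pulses per 200 ms train
    schedule, t = [], 0.0
    for electrode in range(n_electrodes):
        onsets = [t + k * period_ms for k in range(pulses_per_train)]
        schedule.append((electrode, onsets))
        t += train_ms + gap_ms                    # next electrode after the gap
    return schedule

# e.g., electrode 0 pulses at 0, 50, 100, 150 ms; electrode 1 starts at 250 ms
print(sequential_schedule(3))
```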


Subject(s)
Retinitis Pigmentosa , Visual Prosthesis , Blindness , Electric Stimulation , Electrodes, Implanted , Humans , Phosphenes , Retina , Retinitis Pigmentosa/therapy , Vision Disorders
8.
Vision Res ; 184: 23-29, 2021 07.
Article in English | MEDLINE | ID: mdl-33780753

ABSTRACT

To date, retinal implants are the only available treatment for blind individuals with retinal degenerations such as retinitis pigmentosa. Argus II is the only visual implant with FDA approval, with more than 300 users worldwide. Argus II stimulation is based on a grayscale image coming from a head-mounted visible-light camera. Normally, the 11° × 19° field of view of the Argus II user is full of objects that may elicit similar phosphenes. The prosthesis cannot meaningfully convey so much visual information, and the percept is reduced to an ambiguous impression of light. This study investigated the efficacy of simplifying the video input in real time using a heat-sensitive camera. Data were acquired from four Argus II users in five stationary tasks with either hot objects or human targets as stimuli. All tasks used an m-alternative forced choice design, in which precisely one of the m ≥ 2 response alternatives was defined as "correct" by the experimenter. To compare performance with the heat-sensitive and normal cameras across all tasks, regardless of m, we used an extension of signal detection theory to latent variables, estimating person ability and item difficulty in d' units. Results demonstrate that subject performance was significantly better across all tasks with the thermal camera compared with the regular Argus II camera. The future addition of thermal imaging to devices with very poor spatial resolution may have significant real-life benefits for orientation, personal safety, and social interactions, thereby improving quality of life.
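Comparing tasks with different numbers of alternatives on one d' axis relies on the m-AFC psychometric relation Pc(d', m). A minimal sketch that inverts that relation numerically for a single observed proportion correct; the paper's latent-variable method fits persons and items jointly, which this single-task conversion does not attempt.

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad
from scipy.optimize import brentq

def mafc_pc(d, m):
    """P(correct) in an m-AFC task for an unbiased observer with sensitivity d'."""
    integrand = lambda x: norm.pdf(x - d) * norm.cdf(x) ** (m - 1)
    return quad(integrand, -np.inf, np.inf)[0]

def mafc_dprime(pc, m):
    """Numerically invert Pc(d', m) to recover d' from a proportion correct."""
    return brentq(lambda d: mafc_pc(d, m) - pc, -5.0, 10.0)

print(round(mafc_dprime(0.75, 2), 2))  # ~0.95 for 75% correct in a 2-AFC task
```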


Subject(s)
Retinitis Pigmentosa , Visual Prosthesis , Hot Temperature , Humans , Quality of Life , Vision, Ocular
9.
Transl Vis Sci Technol ; 9(12): 27, 2020 11.
Article in English | MEDLINE | ID: mdl-33244447

ABSTRACT

Purpose: At present, Argus II is the only retinal prosthesis approved by the US Food and Drug Administration that induces visual percepts in people who are blind from end-stage outer retinal degenerations such as retinitis pigmentosa. It has been shown to work well in sparse, high-contrast settings, but in daily practice visual performance with the device is likely to be hampered by the cognitive load presented by a cluttered real-world environment. In this study, we investigated the effect of a stereo-disparity-based distance-filtering system on four experienced Argus II users for a range of tasks: object localization, depth discrimination, orientation and size discrimination, and people detection and direction of motion. Methods: Functional vision was assessed in a semicontrolled setup using unfiltered (normal camera) and distance-filtered (stereo camera) imagery. All tasks were forced-choice designs, and an extension of signal detection theory to latent (unobservable) variables was used to analyze the data, allowing estimation of person ability (person measures) and task difficulty (item measures) on the same axis. Results: All subjects performed better with the distance filter compared with the unfiltered image (P < 0.001 on all tasks except localization). Conclusions: Our results show that depth filtering using a disparity-based algorithm has significant benefits for people with Argus II implants. Translational Relevance: The improvement in functional vision with the distance filter found in this study may have an important impact on vision rehabilitation and quality of life for people with visual prostheses and ultra low vision.
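A stereo-disparity distance filter rests on the relation Z = fB/d between depth Z, focal length f, stereo baseline B, and disparity d: nearby objects have large disparity, so the far background can be suppressed with a single disparity threshold. A minimal sketch assuming a precomputed disparity map; all parameter values are hypothetical, not those of the study.

```python
import numpy as np

def disparity_distance_filter(frame, disparity_px, focal_px, baseline_m,
                              max_depth_m=1.0):
    """Suppress pixels farther than max_depth_m using stereo disparity.

    From Z = f * B / d, objects nearer than the cutoff have disparity
    above f * B / max_depth_m. Parameter values are illustrative, not
    those of the Argus II system described in the paper.
    """
    min_disparity = focal_px * baseline_m / max_depth_m
    filtered = frame.copy()
    filtered[disparity_px < min_disparity] = 0   # blank out the far background
    return filtered
```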


Subject(s)
Retinitis Pigmentosa , Vision, Low , Visual Prosthesis , Humans , Quality of Life , United States , Vision, Ocular
10.
Annu Int Conf IEEE Eng Med Biol Soc ; 2020: 3323-3326, 2020 07.
Article in English | MEDLINE | ID: mdl-33018715

ABSTRACT

Tracking the eye of a blind patient can enhance the usability of an artificial vision system. In systems where the sensing element, i.e., the scene camera that captures the visual information, is mounted on the patient's head, the user must rely on head scanning to steer the line of sight of the implant to the region of interest. Integrating an eye tracker into the prosthesis will enable scanning using eye movements: the eye position will set the region of interest within the wide field of view of the scene camera. An essential requirement of an eye tracker is the need to calibrate it. Obviously, off-the-shelf calibration methods that require looking at known points in space cannot be used with blind users. Here we tested the feasibility of calibrating the eye tracker based on pupil position and the location of the percept reported by the implant recipient using a handheld marker. Pupil positions were extracted using custom image processing in a field-programmable gate array built into a glasses-mounted eye tracker. In the calibration process, electrodes were directly stimulated and the subject reported the location of the percept using a handheld marker. Linear regression was used to extract the transfer function from pupil position to gaze direction in the coordinates of the scene camera. Using the eye tracker with the proposed calibration method, patients demonstrated improved precision on a localization task, with a corresponding reduction in head movements.
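The calibration step described above is a linear regression from pupil coordinates to percept locations in the scene camera's frame. A minimal least-squares sketch assuming a 2D affine mapping; the affine form and array shapes are assumptions, since the abstract states only that linear regression was used.

```python
import numpy as np

def fit_pupil_to_gaze(pupil_xy, percept_xy):
    """Least-squares affine map from pupil position to scene-camera gaze.

    pupil_xy, percept_xy: (N, 2) arrays of matched samples from the
    marker-based procedure (stimulate an electrode, record where the
    recipient places the handheld marker). The affine model is an
    assumption; the abstract says only 'linear regression'.
    """
    design = np.hstack([pupil_xy, np.ones((len(pupil_xy), 1))])   # add bias column
    coeffs, *_ = np.linalg.lstsq(design, percept_xy, rcond=None)  # (3, 2) matrix
    return coeffs

def pupil_to_gaze(pupil_xy, coeffs):
    """Apply the fitted calibration to new pupil samples."""
    design = np.hstack([pupil_xy, np.ones((len(pupil_xy), 1))])
    return design @ coeffs
```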


Subject(s)
Eye Movements , Visually Impaired Persons , Blindness , Head Movements , Humans , Image Processing, Computer-Assisted