1 - 20 of 777
1.
Nat Commun ; 15(1): 4481, 2024 May 27.
Article En | MEDLINE | ID: mdl-38802397

Retinal degeneration, a leading cause of irreversible low vision and blindness globally, can be partially addressed by retinal prostheses that stimulate the remaining neurons in the retina. However, existing electrode-based treatments are invasive, posing substantial risks to patients and healthcare providers. Here, we introduce a completely noninvasive ultrasonic retina prosthesis, featuring a customized two-dimensional ultrasound array that allows simultaneous imaging and stimulation. With synchronous three-dimensional imaging guidance and auto-alignment technology, the ultrasonic retina prosthesis can generate programmed ultrasound waves to dynamically and precisely form arbitrary wave patterns on the retina. Neural responses in the brain's visual center mirrored these patterns, evidencing successful creation of artificial vision, which was further corroborated in behavioral experiments. Quantitative analysis of the spatial-temporal resolution and field of view demonstrated the advanced performance of the ultrasonic retina prosthesis and elucidated the biophysical mechanism of retinal stimulation. As a noninvasive blindness prosthesis, the ultrasonic retina prosthesis could lead to a more effective, widely acceptable treatment for blind patients. Its real-time imaging-guided stimulation strategy with a single ultrasound array could also benefit ultrasound neurostimulation in other diseases.


Blindness , Retina , Visual Prosthesis , Retina/diagnostic imaging , Retina/physiology , Animals , Blindness/therapy , Blindness/physiopathology , Retinal Degeneration/therapy , Retinal Degeneration/diagnostic imaging , Ultrasonic Waves , Humans , Neurons/physiology , Ultrasonography/methods , Vision, Ocular/physiology
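
The pattern-forming capability described in this record rests on phased-array focusing: each element of the 2D array fires with a delay that compensates for its time of flight to a chosen focal point. The sketch below (Python/NumPy; element pitch, speed of sound, array size, and focus coordinates are illustrative assumptions, not values from the paper) computes such per-element delays for one focal spot.

    import numpy as np

    # Illustrative parameters (not from the paper): 16 x 16 array, 0.3 mm pitch,
    # speed of sound in tissue ~1540 m/s, focal point 5 mm above the array centre.
    pitch = 0.3e-3          # element spacing [m]
    c = 1540.0              # speed of sound [m/s]
    n = 16                  # elements per side
    focus = np.array([0.0, 0.0, 5e-3])   # focal point [m]

    # Element positions on a regular grid in the z = 0 plane, centred at the origin.
    coords = (np.arange(n) - (n - 1) / 2) * pitch
    xx, yy = np.meshgrid(coords, coords)
    elements = np.stack([xx, yy, np.zeros_like(xx)], axis=-1)   # shape (16, 16, 3)

    # Time of flight from each element to the focus, and the firing delay that
    # makes all wavefronts arrive simultaneously (delay-and-sum focusing).
    tof = np.linalg.norm(elements - focus, axis=-1) / c
    delays = tof.max() - tof            # earliest-arriving elements fire last

    print(f"delay range: 0 to {delays.max()*1e6:.2f} microseconds")

Arbitrary retinal patterns would then be approximated by superimposing or time-multiplexing several such focal spots; the paper's auto-alignment and imaging guidance are not modelled here.
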
2.
Sensors (Basel) ; 24(9), 2024 Apr 23.
Article En | MEDLINE | ID: mdl-38732784

Artificial retinas have transformed the lives of many blind people by enabling them to perceive vision via an implanted chip. Despite significant advances, some limitations cannot be ignored. Because the artificial retina can use only a very limited number of pixels to represent visual information, presenting every object captured in a scene makes identification difficult. In multi-object scenarios, this problem can be mitigated by enhancing images so that only the major objects are shown. Simple techniques such as edge detection are commonly used, but they fall short of producing identifiable objects in complex scenes, motivating the idea of integrating only primary-object edges. To support this idea, the proposed classification model identifies the primary objects based on a suggested set of selective features; it can then be integrated into the artificial retina system to filter multiple primary objects and enhance vision. Handling multiple objects enables the system to cope with complex real-world scenarios. The classification model is a multi-label deep neural network designed specifically to leverage the selective feature set. First, the enhanced images proposed in this research are compared with those produced by an edge detection technique for single-, dual-, and multi-object images, and the enhancements are verified through intensity profile analysis. The classification model's performance is then evaluated to show the value of the suggested features, including its ability to correctly classify the top five, four, three, two, and one object(s), with respective accuracies of up to 84.8%, 85.2%, 86.8%, 91.8%, and 96.4%. Comparisons of training/validation loss and accuracy, precision, recall, specificity, and area under the curve indicate reliable results. Overall, using the suggested set of selective features not only improves the classification model's performance but also directly addresses the challenge of correctly identifying objects in multi-object scenarios, making the model a useful tool for optimizing image enhancement.


Artificial Intelligence , Neural Networks, Computer , Retina , Retina/diagnostic imaging , Humans , Image Enhancement/methods , Algorithms , Image Processing, Computer-Assisted/methods , Visual Prosthesis
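
The classification stage described in this record is a multi-label problem: an image may contain several primary objects at once, so the network needs independent sigmoid outputs and a binary cross-entropy loss rather than a softmax. A minimal PyTorch sketch is shown below; the feature dimension, layer sizes, and label count are placeholders, not the selective features defined in the paper.

    import torch
    import torch.nn as nn

    NUM_FEATURES = 32   # placeholder for the paper's selective feature vector length
    NUM_LABELS = 5      # e.g. the top-five candidate primary objects

    class MultiLabelClassifier(nn.Module):
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(NUM_FEATURES, 64), nn.ReLU(),
                nn.Linear(64, 64), nn.ReLU(),
                nn.Linear(64, NUM_LABELS),   # one logit per object label
            )

        def forward(self, x):
            return self.net(x)

    model = MultiLabelClassifier()
    criterion = nn.BCEWithLogitsLoss()       # independent per-label probabilities
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    # One illustrative training step on random data.
    features = torch.randn(8, NUM_FEATURES)                 # batch of feature vectors
    labels = torch.randint(0, 2, (8, NUM_LABELS)).float()   # multi-hot targets
    loss = criterion(model(features), labels)
    loss.backward()
    optimizer.step()

    # At inference, labels with sigmoid probability above a threshold are kept.
    predicted = (torch.sigmoid(model(features)) > 0.5).int()
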
3.
Nat Commun ; 15(1): 3086, 2024 Apr 10.
Article En | MEDLINE | ID: mdl-38600063

Bioinspired bionic eyes should be self-driving, repairable and conformal to arbitrary geometries. Such an eye would enable wide-field detection and efficient visual signal processing without requiring external energy, along with retinal transplantation that replaces dysfunctional photoreceptors with healthy ones for vision restoration. A variety of artificial eyes have been constructed with hemispherical silicon, perovskite and heterostructure photoreceptors, but creating a zero-powered retinomorphic system with transplantable, conformal features remains elusive. By combining neuromorphic principles with retinal and ionoelastomer engineering, we demonstrate a self-driven hemispherical retinomorphic eye with an elastomeric retina made of ionogel heterojunctions as photoreceptors. The receptor, driven by the photothermoelectric effect, shows photoperception with broadband light detection (365 to 970 nm), a wide field of view (180°) and photosynaptic behavior (paired-pulse facilitation index, 153%) for biosimilar visual learning. The retinal photoreceptors are transplantable and conformal to any complex surface, enabling visual restoration for dynamic optical imaging and motion tracking.


Visual Prosthesis , Bionics , Retina , Vision, Ocular , Visual Perception
4.
J Neural Eng ; 21(2), 2024 Mar 19.
Article En | MEDLINE | ID: mdl-38452381

Objective. Retinal prostheses evoke visual percepts by electrically stimulating functioning cells in the retina. Despite high variance in perceptual thresholds across subjects, among electrodes within a subject, and over time, retinal prosthesis users must undergo 'system fitting', a process performed to calibrate stimulation parameters according to the subject's perceptual thresholds. Although previous work has identified electrode-retina distance and impedance as key factors affecting thresholds, an accurate predictive model is still lacking. Approach. To address these challenges, we (1) fitted machine learning models to a large longitudinal dataset with the goal of predicting individual electrode thresholds and deactivation as a function of stimulus, electrode, and clinical parameters ('predictors') and (2) leveraged explainable artificial intelligence (XAI) to reveal which of these predictors were most important. Main results. Our models accounted for up to 76% of the perceptual threshold response variance and enabled predictions of whether an electrode was deactivated in a given trial, with F1 and area under the ROC curve scores of up to 0.732 and 0.911, respectively. Our models identified novel predictors of perceptual sensitivity, including subject age, time since blindness onset, and electrode-fovea distance. Significance. Our results demonstrate that routinely collected clinical measures and a single session of system fitting might be sufficient to inform an XAI-based threshold prediction strategy, which has the potential to transform clinical practice in predicting visual outcomes.


Visual Prosthesis , Humans , Artificial Intelligence , Electrodes, Implanted , Retina/physiology , Machine Learning , Electric Stimulation/methods
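
The workflow described in this record (fit a regressor on stimulus, electrode, and clinical predictors, then ask which predictors matter most) can be sketched with a generic model and permutation importance, a common model-agnostic XAI tool. The predictor names, model family, and data below are placeholders; the paper's actual dataset, models, and XAI method may differ.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Synthetic stand-in data: rows are electrode measurements, columns are predictors.
    predictors = ["electrode_retina_dist", "impedance", "subject_age",
                  "years_blind", "electrode_fovea_dist"]
    X = rng.normal(size=(500, len(predictors)))
    # Synthetic threshold that depends on a few predictors plus noise.
    y = 2.0 * X[:, 0] + 1.0 * X[:, 2] + 0.5 * X[:, 4] + rng.normal(scale=0.5, size=500)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)
    print("explained variance (R^2):", model.score(X_test, y_test))

    # Permutation importance: how much does shuffling each predictor hurt the model?
    result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)
    for name, importance in sorted(zip(predictors, result.importances_mean),
                                   key=lambda p: -p[1]):
        print(f"{name:>22s}: {importance:.3f}")
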
5.
J Neural Eng ; 21(2), 2024 Apr 08.
Article En | MEDLINE | ID: mdl-38457841

Objective. Retinal implants use electrical stimulation to elicit perceived flashes of light ('phosphenes'). Single-electrode phosphene shape has been shown to vary systematically with stimulus parameters and the retinal location of the stimulating electrode, due to incidental activation of passing nerve fiber bundles. However, this knowledge has yet to be extended to paired-electrode stimulation. Approach. We retrospectively analyzed 3548 phosphene drawings made by three blind participants implanted with an Argus II Retinal Prosthesis. Phosphene shape (characterized by area, perimeter, and major and minor axis length) and the number of perceived phosphenes were averaged across trials and correlated with the corresponding single-electrode parameters. In addition, the number of phosphenes was correlated with stimulus amplitude and neuroanatomical parameters: electrode-retina and electrode-fovea distance, as well as the electrode-electrode distance measured between axon bundles ('between-axon') and along axon bundles ('along-axon'). Statistical analyses were conducted using linear regression and partial correlation analysis. Main results. Simple regression revealed that each paired-electrode shape descriptor could be predicted by the sum of the two corresponding single-electrode shape descriptors (p < .001). Multiple regression revealed that paired-electrode phosphene shape was primarily predicted by stimulus amplitude and electrode-fovea distance (p < .05). Interestingly, the number of elicited phosphenes tended to increase with between-axon distance (p < .05), but not with along-axon distance, in two out of three participants. Significance. The shape of phosphenes elicited by paired-electrode stimulation was well predicted by the shape of the corresponding single-electrode phosphenes, suggesting that two-point perception can be expressed as the linear summation of single-point perception. The impact of the between-axon distance on the perceived number of phosphenes provides further evidence in support of the axon map model for epiretinal stimulation. These findings contribute to the growing literature on phosphene perception and have important implications for the design of future retinal prostheses.


Retina , Visual Prosthesis , Humans , Retrospective Studies , Retina/physiology , Phosphenes , Axons , Electric Stimulation , Perception
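
The central result in this record, that each paired-electrode shape descriptor is well predicted by the sum of the two corresponding single-electrode descriptors, amounts to a simple linear regression of the paired value on that sum. The sketch below only illustrates the form of the analysis; the data are synthetic, not the study's drawings.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)

    # Synthetic single-electrode phosphene areas for the two electrodes in each pair.
    area_e1 = rng.uniform(0.5, 3.0, size=200)
    area_e2 = rng.uniform(0.5, 3.0, size=200)

    # Synthetic paired-electrode area: roughly the sum of the two, plus noise.
    area_pair = area_e1 + area_e2 + rng.normal(scale=0.3, size=200)

    # Simple regression of the paired descriptor on the summed single-electrode descriptors.
    summed = area_e1 + area_e2
    slope, intercept, r, p, stderr = stats.linregress(summed, area_pair)
    print(f"slope = {slope:.2f}, intercept = {intercept:.2f}, r^2 = {r**2:.2f}, p = {p:.1e}")
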
6.
J Neural Eng ; 21(2), 2024 Apr 10.
Article En | MEDLINE | ID: mdl-38547529

Objective. Neuromodulation, particularly electrical stimulation, necessitates high spatial resolution to achieve artificial vision with high acuity. In epiretinal implants, this is hindered by the undesired activation of distal axons. Here, we investigate focal and axonal activation of retinal ganglion cells (RGCs) in the epiretinal configuration for different sinusoidal stimulation frequencies. Approach. RGC responses to epiretinal sinusoidal stimulation at frequencies between 40 and 100 Hz were tested in ex vivo photoreceptor-degenerated (rd10) isolated retinae. Experiments were conducted using a high-density CMOS-based microelectrode array, which allows RGC cell bodies and axons to be localized at high spatial resolution. Main results. We report current and charge density thresholds for focal and distal axon activation at stimulation frequencies of 40, 60, 80, and 100 Hz for an electrode with an effective area of 0.01 mm². Activation of distal axons is avoided up to a stimulation amplitude of 0.23 µA (corresponding to 17.3 µC cm⁻²) at 40 Hz and up to a stimulation amplitude of 0.28 µA (14.8 µC cm⁻²) at 60 Hz. The threshold ratio between focal and axonal activation increases from 1.1 at 100 Hz to 1.6 at 60 Hz, while at 40 Hz almost no axonal responses were detected in the tested intensity range. Using synaptic blockers, we demonstrate that the underlying mechanism is direct activation of the ganglion cells. Finally, using high-resolution electrical imaging and label-free electrophysiological axon tracking, we demonstrate the extent of activation in axon bundles. Significance. Our results can be exploited to define a spatially selective stimulation strategy that avoids axonal activation in future retinal implants, thereby addressing one of the major limitations of artificial vision. The results may extend to other fields of neuroprosthetics where selective focal electrical stimulation is desired.


Retina , Visual Prosthesis , Retina/physiology , Retinal Ganglion Cells/physiology , Microelectrodes , Axons/physiology , Electric Stimulation/methods
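
The charge densities quoted in this record follow from the stimulation amplitude, frequency, and electrode area: for a sinusoid of peak current I and frequency f, the charge delivered in one half-cycle is I/(πf), and dividing by the 0.01 mm² effective electrode area gives the density. The short check below reproduces values close to the reported 17.3 and 14.8 µC cm⁻²; the small differences presumably reflect rounding or the exact waveform definition used in the paper.

    import math

    AREA_CM2 = 0.01 * 1e-2   # 0.01 mm^2 expressed in cm^2

    def charge_density_uC_per_cm2(peak_current_A, freq_Hz):
        """Charge per half-cycle of a sinusoid, divided by electrode area."""
        charge_C = peak_current_A / (math.pi * freq_Hz)
        return charge_C / AREA_CM2 * 1e6   # convert C/cm^2 to uC/cm^2

    print(charge_density_uC_per_cm2(0.23e-6, 40))   # ~18.3 uC/cm^2 (reported: 17.3)
    print(charge_density_uC_per_cm2(0.28e-6, 60))   # ~14.9 uC/cm^2 (reported: 14.8)
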
7.
J Neural Eng ; 21(1), 2024 Feb 23.
Article En | MEDLINE | ID: mdl-38364290

Objective. Retinal prosthetics offer partial restoration of sight to patients blinded by retinal degenerative diseases through electrical stimulation of the remaining neurons. Decreasing the pixel size enables higher prosthetic visual acuity, as demonstrated in animal models of retinal degeneration. However, scaling down planar pixels is limited by the reduced penetration depth of the electric field in tissue. We investigated three-dimensional (3D) structures on top of photovoltaic arrays for enhanced penetration of the electric field, permitting higher-resolution implants. Approach. 3D COMSOL models of subretinal photovoltaic arrays were developed to accurately quantify the electrodynamics during stimulation and were verified through comparison with flat photovoltaic arrays. The models were applied to optimize the design of 3D electrode structures (pillars and honeycombs). Return electrodes on honeycomb walls vertically align the electric field with bipolar cells for optimal stimulation, while pillars elevate the active electrode, improving proximity to target neurons. The optimized 3D structures were electroplated onto existing flat subretinal prostheses. Main results. Simulations demonstrate that, despite exposed conductive sidewalls, charge mostly flows via the high-capacitance sputtered iridium oxide films topping the 3D structures. The 24 µm height of the honeycomb structures was optimized for integration with the inner nuclear layer cells in the rat retina, whilst 35 µm tall pillars were optimized for penetrating the debris layer in human patients. Implantation of released 3D arrays demonstrates mechanical robustness, and histology shows successful integration of the 3D structures with the rat retina in vivo. Significance. Electroplated 3D honeycomb structures produce vertically oriented electric fields, providing low stimulation thresholds, high spatial resolution, and high contrast for pixel sizes down to 20 µm. Pillar electrodes offer an alternative for extending past the debris layer. Electroplating of 3D structures is compatible with the fabrication process of flat photovoltaic arrays, enabling much more efficient retinal stimulation.


Artificial Limbs , Retinal Degeneration , Visual Prosthesis , Humans , Rats , Animals , Prostheses and Implants , Retina/physiology , Neurons/physiology , Electric Stimulation , Electrodes, Implanted
8.
Elife ; 13, 2024 Feb 22.
Article En | MEDLINE | ID: mdl-38386406

Blindness affects millions of people around the world. A promising solution for restoring a form of vision to some individuals is the cortical visual prosthesis, which bypasses part of the impaired visual pathway by converting camera input to electrical stimulation of the visual system. The artificially induced visual percept (a pattern of localized light flashes, or 'phosphenes') has limited resolution, and a great portion of the field's research is devoted to optimizing the efficacy, efficiency, and practical usefulness of the encoding of visual information. A commonly exploited method is non-invasive functional evaluation in sighted subjects or with computational models using simulated prosthetic vision (SPV) pipelines. An important challenge in this approach is to balance enhanced perceptual realism, biological plausibility, and real-time performance in the simulation of cortical prosthetic vision. We present a biologically plausible, PyTorch-based phosphene simulator that runs in real time and uses differentiable operations to allow gradient-based computational optimization of phosphene encoding models. The simulator integrates a wide range of clinical results with neurophysiological evidence from humans and non-human primates. The pipeline includes a model of the retinotopic organization and cortical magnification of the visual cortex, and it incorporates the quantitative effects of stimulation parameters and temporal dynamics on phosphene characteristics. Our results demonstrate the simulator's suitability both for computational applications, such as end-to-end deep learning-based prosthetic vision optimization, and for behavioral experiments. The modular, open-source software provides a flexible simulation framework for computational, clinical, and behavioral neuroscientists working on visual neuroprosthetics.


Phosphenes , Visual Prosthesis , Animals , Humans , Computer Simulation , Software , Blindness/therapy
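
One core ingredient named in this record is the retinotopic mapping with cortical magnification. A widely used simplification is the monopole log-polar model, w = k*log(z + a), which maps a visual-field location z (eccentricity and angle as a complex number, in degrees) to a cortical position w (in mm); inverting it places a phosphene in visual space given the stimulated cortical site. The sketch below uses this simplified model with illustrative constants; the published simulator's exact model and parameters may differ.

    import numpy as np

    # Illustrative monopole-model constants (roughly in the range used for human V1).
    K = 17.3   # cortical scaling factor [mm]
    A = 0.75   # foveal offset [deg]

    def visual_to_cortex(ecc_deg, polar_angle_rad):
        """Map a visual-field point (eccentricity, polar angle) to cortical mm."""
        z = ecc_deg * np.exp(1j * polar_angle_rad)
        w = K * np.log(z + A)
        return w.real, w.imag

    def cortex_to_visual(wx_mm, wy_mm):
        """Inverse mapping: where in the visual field does a cortical site 'see'?"""
        z = np.exp((wx_mm + 1j * wy_mm) / K) - A
        return abs(z), np.angle(z)          # eccentricity [deg], polar angle [rad]

    # A phosphene elicited 10 mm along the cortex from the foveal confluence:
    ecc, angle = cortex_to_visual(10.0, 0.0)
    print(f"predicted phosphene at ~{ecc:.1f} deg eccentricity")
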
9.
J Neural Eng ; 21(1), 2024 Feb 09.
Article En | MEDLINE | ID: mdl-38290151

Objective. Current retinal prosthetics are limited in their ability to precisely control the firing patterns of functionally distinct retinal ganglion cell (RGC) types. The aim of this study was to characterise RGC responses to continuous, kilohertz-frequency-varying stimulation to assess its utility in controlling RGC activity. Approach. We used in vitro patch-clamp experiments to assess electrically evoked ON and OFF RGC responses to frequency-varying pulse train sequences. In each sequence, the stimulation amplitude was kept constant while the stimulation frequency (0.5-10 kHz) was changed every 40 ms, in either a linearly increasing, linearly decreasing or randomised manner. The stimulation amplitude across sequences was increased from 10 to 300 µA. Main results. We found that continuous stimulation without rest periods caused complex and irreproducible stimulus-response relationships, primarily due to strong stimulus-induced response adaptation and the influence of the preceding stimulus frequency on the response to a subsequent stimulus. In addition, ON and OFF populations showed different sensitivities to continuous, frequency-varying pulse trains, with OFF cells generally exhibiting more dependency on frequency changes within a sequence. Finally, the ability of RGCs to maintain spiking in response to continuous stimulation significantly decreased over longer stimulation durations, irrespective of the frequency order. Significance. This study represents an important step in advancing and understanding the utility of continuous frequency modulation in controlling functionally distinct RGCs. Our results indicate that continuous, kHz-frequency-varying stimulation sequences provide very limited control of RGC firing patterns due to inter-dependency between adjacent frequencies, and, in general, different RGC types do not display different frequency preferences under such stimulation conditions. Future stimulation strategies using kHz frequencies must give careful consideration to the design of appropriate pauses in stimulation, the stimulation frequency order, and the length of continuous stimulation.


Retinal Ganglion Cells , Visual Prosthesis , Retinal Ganglion Cells/physiology , Action Potentials/physiology , Electric Stimulation/methods
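
The stimulus structure described in this record (constant amplitude, with the frequency stepping through 0.5-10 kHz every 40 ms in increasing, decreasing, or randomised order) can be generated as sketched below. The waveform shape, number of frequency steps, and sampling rate are assumptions for illustration; the paper's exact stimulus parameters are not reproduced.

    import numpy as np

    FS = 100_000          # sampling rate [Hz], assumed
    SEGMENT_S = 0.040     # each frequency is held for 40 ms
    FREQS_HZ = np.linspace(500, 10_000, 20)   # 0.5-10 kHz in 20 assumed steps

    def pulse_train(freqs_hz, amplitude_uA, order="increasing", seed=0):
        """Concatenate 40 ms constant-amplitude segments, one per stimulation frequency."""
        freqs = np.array(freqs_hz, dtype=float)
        if order == "decreasing":
            freqs = freqs[::-1]
        elif order == "random":
            freqs = np.random.default_rng(seed).permutation(freqs)
        segments = []
        for f in freqs:
            t = np.arange(int(FS * SEGMENT_S)) / FS
            # Symmetric biphasic square wave at frequency f, constant amplitude.
            segments.append(amplitude_uA * np.sign(np.sin(2 * np.pi * f * t)))
        return np.concatenate(segments)

    stim = pulse_train(FREQS_HZ, amplitude_uA=100, order="random")
    print(stim.shape, "samples,", stim.shape[0] / FS, "s total")
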
10.
Nat Nanotechnol ; 19(5): 688-697, 2024 May.
Article En | MEDLINE | ID: mdl-38225357

Electronic retinal prostheses that stimulate retinal neurons are promising for vision restoration. However, the rigid electrodes of conventional retinal implants can inflict damage on the soft retinal tissue, and their poor proximity to target cells in the degenerative retina limits selectivity. Here we present a soft artificial retina (thickness, 10 µm) in which flexible ultrathin photosensitive transistors are integrated with three-dimensional stimulation electrodes of eutectic gallium-indium alloy. Platinum nanoclusters coated locally on the tips of these three-dimensional liquid-metal electrodes help reduce the impedance of the stimulation electrodes. These microelectrodes can enhance proximity to the target retinal ganglion cells and provide effective charge injection (72.84 mC cm⁻²) to elicit neural responses in the retina. Their low Young's modulus (234 kPa), owing to their liquid form, can minimize damage to the retina. Furthermore, we used an unsupervised machine learning approach to identify the evoked spikes and grade neural activity within the retinal ganglion cells. Results from in vivo experiments on a retinal degeneration mouse model reveal that the spatiotemporal distribution of neural responses across the retina can be mapped under selective, localized illumination, suggesting restoration of vision.


Microelectrodes , Visual Prosthesis , Visual Prosthesis/chemistry , Animals , Mice , Retinal Ganglion Cells/physiology , Retinal Degeneration/therapy , Retinal Degeneration/pathology , Retina , Electrodes, Implanted , Platinum/chemistry
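
The unsupervised machine learning step mentioned in this record is, in spirit, a spike-sorting/clustering problem: reduce recorded spike waveforms to a few features and group them without labels. A generic sketch using PCA plus k-means is shown below; the paper's actual algorithm and recordings are not specified here, and the data are synthetic.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(2)

    # Synthetic stand-in for detected spike waveforms: 300 snippets of 40 samples,
    # drawn from two templates with additive noise.
    t = np.linspace(0, 1, 40)
    templates = np.stack([np.exp(-((t - 0.3) / 0.05) ** 2),
                          -0.7 * np.exp(-((t - 0.5) / 0.08) ** 2)])
    labels_true = rng.integers(0, 2, size=300)
    waveforms = templates[labels_true] + 0.05 * rng.normal(size=(300, 40))

    # Unsupervised pipeline: project to a few principal components, then cluster.
    features = PCA(n_components=3).fit_transform(waveforms)
    clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)

    # Grade activity per putative unit by spike count.
    for c in np.unique(clusters):
        print(f"cluster {c}: {np.sum(clusters == c)} spikes")
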
11.
Nat Commun ; 15(1): 808, 2024 Jan 27.
Article En | MEDLINE | ID: mdl-38280912

A fundamental challenge in neuroengineering is determining an appropriate artificial input to a sensory system that yields the desired perception. In neuroprosthetics, this process is known as artificial sensory encoding, and it plays a crucial role in prosthetic devices that restore sensory perception in individuals with disabilities. For example, in visual prostheses, one key aspect of artificial image encoding is to downsample images captured by a camera to a size matching the number of inputs and the resolution of the prosthesis. Here, we show that downsampling an image using the inherent computation of the retinal network yields better performance than learning-free downsampling methods. We validated a learning-based approach (the actor-model framework) that exploits the signal transformation from photoreceptors to retinal ganglion cells measured in explanted mouse retinas. The actor-model framework generates downsampled images that elicit neuronal responses in silico and ex vivo with higher neuronal reliability than those produced by a learning-free approach. During the learning process, the actor network learns to optimize contrast and the kernel's weights. This methodological approach might guide future artificial image encoding strategies for visual prostheses, and the framework could ultimately be applied to encoding strategies in other sensory prostheses, such as cochlear implants or prosthetic limbs.


Retina , Visual Prosthesis , Mice , Animals , Reproducibility of Results , Retinal Ganglion Cells/physiology , Learning/physiology , Visual Perception/physiology
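
The comparison in this record is between a learning-free downsampling (e.g., plain average pooling to the prosthesis resolution) and a learned "actor" network that produces the downsampled image so as to best drive a fixed retina model. The sketch below shows only the structure of that comparison in PyTorch, with a toy differentiable stand-in for the retina model and a toy objective; the paper's actor-model architecture, loss, and ex vivo data are not reproduced.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # Learning-free baseline: average-pool a 128x128 image down to 16x16 "electrodes".
    def learning_free_downsample(img):
        return F.avg_pool2d(img, kernel_size=8)

    # Toy differentiable stand-in for the fixed retina model (forward response model).
    retina_model = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(8, 1, 3, padding=1))
    for p in retina_model.parameters():
        p.requires_grad_(False)   # the forward model stays fixed during actor training

    # Actor: a small convolutional network that learns the downsampling.
    actor = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(8, 1, 3, padding=1), nn.AvgPool2d(8))
    optimizer = torch.optim.Adam(actor.parameters(), lr=1e-3)

    # One illustrative training step: make the response to the actor's downsampled
    # image (upsampled back to input size) match the response to the original image.
    img = torch.rand(4, 1, 128, 128)
    target_response = retina_model(img)
    downsampled = actor(img)
    response = retina_model(F.interpolate(downsampled, size=128, mode="bilinear",
                                          align_corners=False))
    loss = F.mse_loss(response, target_response)
    loss.backward()
    optimizer.step()
    print("baseline:", learning_free_downsample(img).shape, "actor:", downsampled.shape)
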
12.
IEEE Trans Biomed Circuits Syst ; 18(3): 580-591, 2024 Jun.
Article En | MEDLINE | ID: mdl-38261488

Wireless, miniaturised and distributed neural interfaces are emerging neurotechnologies. Although extensive research efforts contribute to their technological advancement, there remains a crucial need for real-time systems enabling simultaneous wireless information and power transfer to distributed neural implants. Here we present a complete wearable system comprising software for real-time image capture, processing and digital data transfer; hardware for radiofrequency carrier generation and modulation via amplitude-shift keying; and a 3-coil inductive link adapted to operate with multiple miniaturised receivers. The system operates in real time with a maximum frame rate of 20 Hz, reconstructing each frame as a matrix of 32 × 32 pixels. The device generates a carrier frequency of 433.92 MHz and transmits a maximum power of 32 dBm with a data rate of 6 Mbps and a variable modulation index as low as 8%, potentially enabling wireless communication with 1024 miniaturised and distributed intracortical microstimulators. The system is primarily conceived as an external wearable device for a distributed cortical visual prosthesis covering a visual field of 20°. At the same time, it is modular and versatile, making it suitable for multiple applications requiring simultaneous wireless information and power transfer to large-scale neural interfaces.


Visual Prosthesis , Wearable Electronic Devices , Wireless Technology , Wireless Technology/instrumentation , Humans , Signal Processing, Computer-Assisted/instrumentation , Equipment Design , Electric Power Supplies
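
Amplitude-shift keying with a small modulation index, as used in this record, keeps the carrier mostly on so that power transfer continues while data are sent: bit 1 and bit 0 are encoded as two amplitude levels whose normalized difference, (A1 - A0)/(A1 + A0), is the modulation index. The snippet below illustrates this at a reduced, simulation-friendly carrier and bit rate; the real system's 433.92 MHz carrier and 6 Mbps data rate would only change the constants.

    import numpy as np

    MOD_INDEX = 0.08        # 8% modulation index, as in the system described above
    FS = 1_000_000          # simulation sample rate [Hz] (scaled down for illustration)
    F_CARRIER = 100_000     # carrier [Hz]; the real system uses 433.92 MHz
    BIT_RATE = 10_000       # bits/s; the real system transmits 6 Mbps

    def ask_modulate(bits):
        """Map bits to two amplitude levels and multiply onto the carrier."""
        samples_per_bit = FS // BIT_RATE
        # Amplitude levels A1 (bit 1) and A0 (bit 0) with (A1-A0)/(A1+A0) = MOD_INDEX.
        a1, a0 = 1.0, (1.0 - MOD_INDEX) / (1.0 + MOD_INDEX)
        envelope = np.repeat(np.where(np.array(bits) == 1, a1, a0), samples_per_bit)
        t = np.arange(envelope.size) / FS
        return envelope * np.sin(2 * np.pi * F_CARRIER * t)

    signal = ask_modulate([1, 0, 1, 1, 0, 0, 1, 0])
    print(signal.size, "samples; min/max envelope:",
          round((1.0 - MOD_INDEX) / (1.0 + MOD_INDEX), 3), "/ 1.0")
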
13.
Sci Rep ; 14(1): 1313, 2024 Jan 15.
Article En | MEDLINE | ID: mdl-38225344

Visual prostheses such as the Argus II provide partial vision for individuals with limited or no light perception. However, their effectiveness in daily life is limited by scene complexity and variability. We investigated whether additional image processing techniques could improve mobility performance in everyday indoor environments. A mobile system connected to the Argus II provided thermal or distance-filtered video stimulation. Four participants used the thermal camera to locate a person and the distance filter to navigate a hallway with obstacles. The thermal camera allowed a target person to be found in 99% of trials, while unfiltered video led to confusion with other objects and a success rate of only 55% ([Formula: see text]). Similarly, the distance filter enabled participants to detect and avoid 88% of obstacles by removing background clutter, whereas unfiltered video resulted in a detection rate of only 10% ([Formula: see text]). For any given elapsed time, the success rate with filtered video was higher than with unfiltered video; after 90 s, the success rate exceeded 50% with filtered video but reached only 24% and 3% with the unfiltered camera in the first and second tasks, respectively. Despite individual variation, all participants showed significant improvement when using the thermal and distance filters compared with unfiltered video. Adding thermal and distance filters to a visual prosthesis system can enhance mobility performance by removing background clutter and highlighting people and warm objects (thermal camera) or nearby obstacles (distance filter).


Visual Prosthesis , Humans , Prosthesis Implantation , Vision Disorders , Image Processing, Computer-Assisted , Diagnostic Imaging
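
The distance filter described in this record can be reduced to a simple idea: keep pixels whose depth is below a cutoff and blank everything farther away, so only nearby obstacles remain in the low-resolution prosthetic image. A minimal sketch follows; the depth source, cutoff distance, occupancy threshold, and output resolution are illustrative assumptions rather than the study's configuration.

    import numpy as np

    MAX_DISTANCE_M = 1.5     # assumed cutoff: keep only obstacles closer than 1.5 m
    OUTPUT_SHAPE = (6, 10)   # assumed prosthesis resolution (e.g. a 6x10 electrode grid)

    def distance_filter(depth_map_m):
        """Return a coarse binary obstacle map: 1 where something is nearby."""
        near = (depth_map_m > 0) & (depth_map_m < MAX_DISTANCE_M)
        h, w = near.shape
        gh, gw = OUTPUT_SHAPE
        # Downsample by block-averaging to the electrode grid, then threshold.
        blocks = near[:h - h % gh, :w - w % gw].reshape(gh, h // gh, gw, w // gw)
        occupancy = blocks.mean(axis=(1, 3))
        return (occupancy > 0.2).astype(np.uint8)

    # Synthetic depth map: a far wall at 3 m with a nearby obstacle at 1 m.
    depth = np.full((120, 200), 3.0)
    depth[40:90, 60:100] = 1.0
    print(distance_filter(depth))
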
14.
J Neurosurg ; 140(4): 1169-1176, 2024 Apr 01.
Article En | MEDLINE | ID: mdl-37890180

The prospect of direct interaction between the brain and computers has been investigated in recent decades, revealing several potential applications. One of these is sight restoration in profoundly blind people, based on the ability to elicit visual percepts by directly stimulating the occipital cortex. Technological innovation has led to the development of microelectrodes implantable on the brain surface. The feasibility of implanting a microelectrode on the visual cortex has already been shown in animals, with promising results. Current research focuses on the implantation of microelectrodes into the occipital cortex of blind volunteers, a technique that raises several technical challenges. In this technical note, the authors suggest a safe and effective approach for robot-assisted implantation of microelectrodes in the occipital lobe for sight restoration.


Robotics , Visual Cortex , Visual Prosthesis , Animals , Humans , Electrodes, Implanted , Microelectrodes , Visual Cortex/surgery , Prosthesis Implantation
15.
IEEE Trans Nanobioscience ; 23(2): 262-271, 2024 Apr.
Article En | MEDLINE | ID: mdl-37747869

The main objective of the present study is to use graphene as the neural interface electrode material to design a novel microelectrode topology for retinal prostheses and to investigate device operation safety within a computational framework. The first part of the study establishes the electrode material selection based on electrochemical impedance and an equivalent circuit model. The second part models the microelectrode-tissue level to investigate the potential distribution, the resistive heat dissipated, and the thermally induced stress in the tissue due to electrical stimulation. The finite-element formulation of Joule heating and thermal expansion at the microelectrode-tissue interface is based on three coupled equations: Ohm's law, Navier's equation, and the Fourier heat equation. Electrochemical simulation results reveal that single-layer and few-layer graphene-based microelectrodes have a specific impedance in the range of 0.02-[Formula: see text], comparable to their platinum counterparts. A microelectrode of [Formula: see text] size can stimulate retinal tissue with a threshold current in the range of 8.7-[Formula: see text]. Under such stimulation, both the microelectrodes and the retinal tissue stay structurally intact, and the device is thermally and mechanically stable, functioning within safety limits. The results demonstrate the viability of high-density graphene-based microelectrodes as stimulating electrodes with an improved interface for achieving higher visual acuity. Furthermore, the novel honeycomb-patterned microelectrode configuration subjects the retinal tissue to only minimal heating and stress upon electrical stimulation, paving the path toward a graphene-based microelectrode array for retinal prostheses in further in vitro or in vivo studies.


Graphite , Visual Prosthesis , Microelectrodes , Retina/surgery , Computer Simulation , Electric Stimulation , Electric Impedance
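
For reference, the three coupled equations named in this record take the following standard forms in a Joule-heating/thermal-expansion formulation (written here generically with assumed symbols; boundary conditions and material parameters are as defined in the paper and not reproduced):

    % Ohm's law / charge conservation, giving the Joule heat source q:
    \nabla \cdot (\sigma \nabla V) = 0, \qquad q = \sigma \, |\nabla V|^{2}
    % Fourier heat equation with the Joule source:
    \rho C_{p} \, \frac{\partial T}{\partial t} = \nabla \cdot (k \nabla T) + q
    % Navier's equation (quasi-static) with thermal expansion:
    \nabla \cdot \boldsymbol{\sigma}_{\mathrm{mech}} = 0, \qquad
    \boldsymbol{\sigma}_{\mathrm{mech}} = \mathbf{C} : \bigl(\boldsymbol{\varepsilon} - \alpha\,(T - T_{\mathrm{ref}})\,\mathbf{I}\bigr)

Here V is the electric potential, sigma the electrical conductivity, rho the density, C_p the heat capacity, k the thermal conductivity, alpha the thermal expansion coefficient, and C the elasticity tensor.
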
16.
Article En | MEDLINE | ID: mdl-38082908

Cortical visual prostheses are designed to treat blindness by restoring visual perceptions through artificial electrical stimulation of the primary visual cortex (V1). Intracortical microelectrodes produce the smallest visual percepts and thus higher resolution vision - like a higher density of pixels on a monitor. However, intracortical microelectrodes must maintain a minimum spacing to preserve tissue integrity. One solution to increase the density of percepts is to implant and stimulate multiple visual areas, such as V1 and V2, although the properties of microstimulation in V2 remain largely unexplored. We provide a direct comparison of V1 and V2 microstimulation in two common marmoset monkeys. We find similarities in response trends between V1 and V2 but differences in threshold, neural activity duration, and spread of activity at the threshold current. This has implications for using multi-area stimulation to increase the resolution of cortical visual prostheses.


Visual Cortex , Visual Prosthesis , Humans , Visual Cortex/physiology , Visual Perception/physiology , Blindness , Electric Stimulation
17.
Article En | MEDLINE | ID: mdl-38083046

We investigate Self-Attention (SA) networks for directly learning visual representations for prosthetic vision. Specifically, we explore how the SA mechanism can be leveraged to produce task-specific scene representations for prosthetic vision, removing the need for explicit hand-selection of learnt features and post-processing. Further, we demonstrate how the mapping of importance to image regions can serve as an explainability tool for analysing the learnt vision processing behaviour, providing greater validation and interpretation capability than current learning-based methods for prosthetic vision. We investigate our approach in the context of an orientation and mobility (OM) task and demonstrate its feasibility for learning vision processing pipelines for prosthetic vision.


Visual Prosthesis , Image Processing, Computer-Assisted/methods , Vision, Ocular , Visual Perception , Learning
18.
Article En | MEDLINE | ID: mdl-38083330

Optimization of retinal prostheses requires preclinical animal models that mimic features of human retinal disease, have appropriate eye sizes to accommodate implantable arrays, and provide options for unilateral degeneration so as to enable a contralateral, within-animal control eye. In the absence of a suitable non-human primate model, and given the shortcomings of our previous feline model generated through intravitreal injections of adenosine triphosphate (ATP), we aimed in the present study to develop an ATP-induced degeneration model in the rabbit. Six normally sighted Dutch rabbits were monocularly blinded with this technique. Subsequent retinal degeneration was assessed with optical coherence tomography, electroretinography, and histological assays. Overall, there was a 42% reduction in a-wave amplitude and a 26% reduction in oscillatory potential amplitude in the electroretinograms, along with a global decrease in retinal thickness and increased variability. Qualitative inspection also revealed variable levels of retinal degeneration and remodeling both within and between treated eyes, mimicking the disease heterogeneity observed in retinitis pigmentosa. These findings confirm that ATP can be used to induce unilateral blindness in rabbits and potentially provides an ideal model for future cortical recording experiments aimed at optimizing vision restoration strategies. Clinical Relevance: A rapid, unilaterally induced model of retinal degeneration in an animal with low binocular overlap and large eyes will allow clinically valid recordings of downstream cortical activity following retinal stimulation. Such a model would be highly beneficial for the optimization of clinically appropriate vision restoration approaches.


Retinal Degeneration , Retinitis Pigmentosa , Visual Prosthesis , Rabbits , Animals , Cats , Retinal Degeneration/etiology , Adenosine Triphosphate/adverse effects , Retina/pathology
19.
Article En | MEDLINE | ID: mdl-38083376

Photoreceptor loss and inner retinal network remodeling severely impact the ability of retinal prosthetic devices to create artificial vision. We developed a computational model of a degenerating retina based on rodent data and tested its response to retinal electrical stimulation. The model includes detailed network connectivity and diverse intrinsic neural properties, enabling exploration of how the degenerating retina influences the performance of electrical stimulation over the course of degeneration. Our model suggests the possibility of quantitatively modulating the retinal ON and OFF pathways between phases II and III of retinal degeneration without requiring any differences between ON and OFF RGC intrinsic cellular properties. The model also provides insight into how remodeling events influence the stage-dependent differential electrical responses of the ON and OFF pathways. Clinical Relevance: This data-driven model can guide the future development of retinal prostheses and stimulation strategies that may benefit patients at different stages of retinal disease progression, particularly the early and mid stages, thus increasing their global acceptance.


Retinal Degeneration , Visual Prosthesis , Humans , Retinal Degeneration/therapy , Retinal Ganglion Cells/physiology , Retina , Electric Stimulation
20.
Article En | MEDLINE | ID: mdl-38083423

Retinal visual prosthetic devices aim to restore vision via electrical stimulation delivered to the retina. While a number of devices are commercially available, the stimulation strategies applied have not met the expectations of end-users. These strategies activate neurons based on their spatial properties, regardless of their function, which may lead to lower visual acuity. The ability to predict light-evoked neural activity thus becomes crucial for developing a retinal prosthetic device with better visual acuity. In addition to temporal nonlinearity, the spatial relationship between a two-dimensional light stimulus and the spiking activity of neuron populations is the main barrier to accurate prediction. Recent advances in deep learning offer a possible alternative for neural activity prediction tasks. With proven performance on nonlinear sequential data in fields such as natural language processing and computer vision, the emerging transformer model may be adapted to predict neural activity. In this study, we built and evaluated a transformer-based deep learning model to explore its capacity to predict light-evoked retinal spikes. Our preliminary results show that the model can achieve good performance on this task. The high versatility of deep learning models may allow retinal activity prediction in more complex physiological environments and could enhance the visual acuity of future retinal prosthetic devices by enabling the desired neural responses to electrical stimuli to be anticipated.


Retinal Ganglion Cells , Visual Prosthesis , Retinal Ganglion Cells/physiology , Retina/physiology , Visual Acuity , Electric Stimulation/methods
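
The approach described in this record, adapting a transformer to map a sequence of 2D light-stimulus frames to retinal spiking activity, can be sketched as a small encoder that attends over the frame sequence and regresses per-neuron firing rates. The layer sizes, frame resolution, loss, and output dimensionality below are placeholders; the study's actual architecture is not detailed in the abstract.

    import torch
    import torch.nn as nn

    FRAME_PIXELS = 16 * 16   # assumed stimulus frame resolution
    NUM_NEURONS = 32         # assumed number of recorded retinal ganglion cells
    D_MODEL = 64

    class SpikePredictor(nn.Module):
        """Transformer encoder over a stimulus frame sequence -> firing rates."""
        def __init__(self):
            super().__init__()
            self.embed = nn.Linear(FRAME_PIXELS, D_MODEL)   # per-frame embedding
            layer = nn.TransformerEncoderLayer(d_model=D_MODEL, nhead=4,
                                               dim_feedforward=128,
                                               batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, num_layers=2)
            self.readout = nn.Linear(D_MODEL, NUM_NEURONS)  # rates per time step

        def forward(self, frames):            # frames: (batch, time, FRAME_PIXELS)
            tokens = self.embed(frames)
            encoded = self.encoder(tokens)
            return torch.relu(self.readout(encoded))   # non-negative firing rates

    model = SpikePredictor()
    stimulus = torch.rand(4, 50, FRAME_PIXELS)   # 4 trials, 50 frames each
    rates = model(stimulus)
    print(rates.shape)                            # (4, 50, NUM_NEURONS)

    # Training could minimise e.g. a Poisson negative log-likelihood against spikes:
    spikes = torch.poisson(torch.full_like(rates, 0.5))
    loss = nn.PoissonNLLLoss(log_input=False)(rates, spikes)
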
...