PhD Student

Jaap de Ruyter van Steveninck

PhD Student - Donders Institute

Restoring some visual perception in blindness using intelligent technology? Thanks to current developments in neurotechnology and artificial intelligence, this is becoming a realistic scenario! My research aims to optimise prosthetic vision by developing clever models that translate our complex visual environment into meaningful stimulation patterns, bridging the gap between human and computer vision.

Abstract taken from Google Scholar:

Blindness affects millions of people around the world. For some individuals, cortical visual prostheses are a promising solution for restoring a form of vision: they bypass part of the impaired visual pathway by converting camera input to electrical stimulation of the visual system. The artificially induced visual percept (a pattern of localized light flashes, or ‘phosphenes’) has limited resolution, and a great portion of the field’s research is devoted to optimizing the efficacy, efficiency, and practical usefulness of the encoding of visual information. A commonly exploited method is noninvasive functional evaluation in sighted subjects or with computational models by using simulated prosthetic vision (SPV) pipelines. An important challenge in this approach is to balance enhanced perceptual realism, biological plausibility, and real-time performance in the simulation of cortical prosthetic vision. We present a biologically plausible, PyTorch-based phosphene simulator that can run in real time and uses differentiable operations to allow for gradient-based computational optimization of phosphene encoding models. The simulator integrates a wide range of clinical results with neurophysiological evidence in humans and non-human primates. The pipeline includes a model of the retinotopic organization and cortical magnification of the visual cortex. Moreover, the quantitative effects of stimulation parameters and temporal dynamics on phosphene characteristics are incorporated. Our results demonstrate the simulator’s suitability for computational applications such as end-to-end deep learning-based prosthetic vision optimization as well as behavioral …
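
To illustrate the core idea of a differentiable phosphene simulator, here is a minimal sketch, not the published simulator's API; the function name, the Gaussian rendering, and all parameter values below are illustrative assumptions. Each electrode contributes a blob of light at a fixed visual-field location, with brightness driven by its stimulation amplitude, so image-space losses can be backpropagated to the stimulation pattern.

import torch

def render_phosphenes(amplitudes, centers, sigma=0.02, size=256):
    # amplitudes: (N,) stimulation strengths in [0, 1] (may require grad)
    # centers:    (N, 2) phosphene positions in normalized [0, 1] coordinates
    ys, xs = torch.linspace(0, 1, size), torch.linspace(0, 1, size)
    yy, xx = torch.meshgrid(ys, xs, indexing="ij")
    grid = torch.stack([yy, xx], dim=-1)                       # (H, W, 2)
    d2 = ((grid[None] - centers[:, None, None]) ** 2).sum(-1)  # (N, H, W)
    blobs = torch.exp(-d2 / (2 * sigma ** 2))                  # Gaussian phosphenes
    return (amplitudes[:, None, None] * blobs).sum(0).clamp(0, 1)

Because every operation here is differentiable, the gradient of any image-space loss with respect to the amplitudes is available via loss.backward(), which is what makes gradient-based optimization of encoding models possible.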

Go to article

Abstract taken from Google Scholar:

OBJECTIVE: The enabling technology of visual prosthetics for the blind is making rapid progress. However, there are still uncertainties regarding the functional outcomes, which can depend on many design choices made during development. In visual prostheses with a head-mounted camera, a particularly challenging question is how to deal with the gaze-locked visual percept associated with spatial updating conflicts in the brain. The current study investigates a recently proposed compensation strategy based on gaze-contingent image processing with eye-tracking. Gaze-contingent processing is expected to reinforce natural-like visual scanning and re-establish spatial updating based on eye movements. The beneficial effects remain to be investigated for daily-life activities in complex visual environments. APPROACH: The current study evaluates the benefits of gaze-contingent processing versus gaze-locked and gaze …

Go to article

Abstract taken from Google Scholar:

The enabling technology of visual prosthetics for the blind is making rapid progress. However, there are still uncertainties regarding the functional outcomes, which can depend on many design choices made during development. In visual prostheses with a head-mounted camera, a particularly challenging question is how to deal with the gaze-locked visual percept associated with spatial updating conflicts in the brain. A recently proposed compensation strategy is gaze-contingent image processing with eye-tracking, which enables natural visual scanning and re-establishes spatial updating based on eye movements. The current study evaluates the benefits of gaze-contingent processing versus gaze-locked and gaze-ignored simulations in the context of mobility and orientation, using a simulated prosthetic vision paradigm with sighted subjects. Compared to gaze-locked vision, gaze-contingent processing was found to improve the speed in all experimental tasks, as well as the subjective quality of vision. Similar or further improvements were found in a control condition that ignores gaze-dependent effects, a simulation that is unattainable in clinical reality. Our results suggest that gaze-locked vision and spatial updating conflicts can be debilitating for complex visually guided activities of daily living such as mobility and orientation. Therefore, for prospective users of head-steered prostheses with an unimpaired oculomotor system, the inclusion of a compensatory eye-tracking system is strongly endorsed.
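
Computationally, gaze-contingent processing amounts to selecting the part of the head-camera frame around the tracked gaze position before encoding, so the percept shifts with the eyes rather than staying locked to the head. A minimal sketch follows; the function name, window size, and coordinate conventions are assumptions for illustration, not the study's implementation.

import numpy as np

def gaze_contingent_crop(frame, gaze_xy, window=128):
    # frame: (H, W) grayscale camera image; gaze_xy: (x, y) from the eye tracker
    h, w = frame.shape
    half = window // 2
    x = int(np.clip(gaze_xy[0], half, w - half))  # keep the window inside the frame
    y = int(np.clip(gaze_xy[1], half, h - half))
    return frame[y - half:y + half, x - half:x + half]

Under this picture, the gaze-locked condition corresponds to always encoding a fixed head-centered crop regardless of eye position, which is what the compensation strategy is meant to avoid.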

Go to article

Abstract taken from Google Scholar:

Neural prosthetics may provide a promising solution to restore visual perception in some forms of blindness. The restored prosthetic percept is rudimentary compared to normal vision and can be optimized with a variety of image preprocessing techniques to maximize relevant information transfer. Extracting the most useful features from a visual scene is a nontrivial task, and optimal preprocessing choices strongly depend on the context. Despite rapid advancements in deep learning, research currently faces a difficult challenge in finding a general and automated preprocessing strategy that can be tailored to specific tasks or user requirements. In this paper, we present a novel deep learning approach that explicitly addresses this issue by optimizing the entire process of phosphene generation in an end-to-end fashion. The proposed model is based on a deep auto-encoder architecture and includes a highly adjustable simulation module of prosthetic vision. In computational validation experiments, we show that such an approach is able to automatically find a task-specific stimulation protocol. The results of these proof-of-principle experiments illustrate the potential of end-to-end optimization for prosthetic vision. The presented approach is highly modular and could be extended to automated dynamic optimization of prosthetic vision for everyday tasks, given any specific constraints, accommodating individual requirements of the end user.
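
A condensed sketch of such an end-to-end pipeline is given below; the class name, layer sizes, 64x64 input assumption, and the simulator interface are illustrative, not the paper's architecture. A convolutional encoder maps the input image to electrode amplitudes, a fixed differentiable simulator renders the predicted percept, and a decoder reconstructs the input, so the reconstruction loss shapes the stimulation protocol.

import torch
import torch.nn as nn

class EndToEndPipeline(nn.Module):
    def __init__(self, n_electrodes, simulator):
        super().__init__()
        # image -> electrode amplitudes in [0, 1]
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(), nn.LazyLinear(n_electrodes), nn.Sigmoid())
        self.simulator = simulator  # frozen, differentiable, batched renderer
        # simulated percept -> reconstruction of the 64x64 input
        self.decoder = nn.Sequential(nn.LazyLinear(64 * 64), nn.Sigmoid())

    def forward(self, img):                   # img: (B, 1, 64, 64)
        amplitudes = self.encoder(img)        # (B, n_electrodes)
        percept = self.simulator(amplitudes)  # (B, H, W)
        recon = self.decoder(percept.flatten(1)).view_as(img)
        return percept, recon

Training would then minimize, for example, torch.nn.functional.mse_loss(recon, img); because the simulator is differentiable, its gradients propagate back into the encoder, which is the crux of end-to-end optimization.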

Go to article

Abstract taken from Google Scholar:

Visual neuroprostheses are a promising approach to restore basic sight in visually impaired people. A major challenge is to condense the sensory information contained in a complex environment into meaningful stimulation patterns at low spatial and temporal resolution. Previous approaches considered task-agnostic feature extractors such as edge detectors or semantic segmentation, which are likely suboptimal for specific tasks in complex dynamic environments. As an alternative approach, we propose to optimize stimulation patterns by end-to-end training of a feature extractor using deep reinforcement learning agents in virtual environments. We present a task-oriented evaluation framework to compare different stimulus generation mechanisms, such as static edge-based and adaptive end-to-end approaches like the one introduced here. Our experiments in Atari games show that stimulation patterns obtained …
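
One way to picture this evaluation setup, as a hedged sketch only: wrap the game environment so the agent observes a coarse, contour-like stimulation pattern instead of raw frames, then train and evaluate a standard RL agent on the wrapped environment. The wrapper name, the Canny stand-in for the static edge-based baseline, and the grid size are all assumptions here.

import cv2
import numpy as np
import gymnasium as gym

class PhospheneObservation(gym.ObservationWrapper):
    # replaces raw RGB frames with a coarse edge-based stimulation pattern
    def __init__(self, env, grid=32):
        super().__init__(env)
        self.grid = grid
        self.observation_space = gym.spaces.Box(0, 255, (grid, grid), np.uint8)

    def observation(self, obs):
        gray = cv2.cvtColor(obs, cv2.COLOR_RGB2GRAY)
        edges = cv2.Canny(gray, 100, 200)  # task-agnostic feature extractor
        return cv2.resize(edges, (self.grid, self.grid),
                          interpolation=cv2.INTER_AREA)

The adaptive end-to-end variant described in the abstract would replace the fixed Canny step with a trainable extractor optimized jointly with the agent's reward.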

Go to article

Abstract taken from Google Scholar:

Neuroprosthetic implants are a promising technology for restoring some form of vision in people with visual impairments via electrical neurostimulation in the visual pathway. Although an artificially generated prosthetic percept is relatively limited compared with normal vision, it may provide some elementary perception of the surroundings, re-enabling daily living functionality. For mobility in particular, various studies have investigated the benefits of visual neuroprosthetics in a simulated prosthetic vision paradigm, with varying outcomes. The previous literature suggests that scene simplification via image processing, and particularly contour extraction, may potentially improve mobility performance in a virtual environment. In the current simulation study with sighted participants, we explore both the theoretically attainable benefits of strict scene simplification in an indoor environment, by controlling the environmental complexity, and the improvement achieved in practice with a deep learning-based surface boundary detection implementation compared with traditional edge detection. A simulated electrode resolution of 26 × 26 was found to provide sufficient information for mobility in a simple environment. Our results suggest that, for a lower number of implanted electrodes, the removal of background textures and within-surface gradients may be beneficial in theory. However, the deep learning-based implementation for surface boundary detection did not improve mobility performance in the current study. Furthermore, our findings indicate that, for a greater number of electrodes, the removal of within-surface gradients and background textures …
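
As a rough illustration of the contour-to-electrode mapping, consider the sketch below. The Canny detector, the block-coverage rule, and the threshold are assumed details; the deep learning alternative evaluated in the study would substitute a trained surface boundary network's output for the edge map.

import cv2
import numpy as np

def edges_to_electrodes(image, grid=26, thresh=0.05):
    # image: (H, W) grayscale; returns a 26 x 26 on/off stimulation pattern
    edges = cv2.Canny(image, 50, 150) / 255.0
    h, w = edges.shape
    bh, bw = h // grid, w // grid
    blocks = edges[:bh * grid, :bw * grid].reshape(grid, bh, grid, bw)
    coverage = blocks.mean(axis=(1, 3))  # edge-pixel fraction per electrode
    return (coverage > thresh).astype(np.float32)

Each electrode covers one image block and is switched on when enough contour pixels fall inside it, which is the sense in which a 26 × 26 grid can carry enough scene structure for mobility.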

Go to article

Abstract taken from Google Scholar:

Blindness affects millions of people around the world and is expected to become increasingly prevalent in the years to come. For some blind individuals, cortical visual prostheses are a promising solution for restoring a form of vision: they convert camera input to electrical stimulation of the cortex to bypass part of the impaired visual system. Due to the constrained number of electrodes that can be implanted, the artificially induced visual percept (a pattern of localized light flashes, or ‘phosphenes’) is of limited resolution, and a great portion of the field’s research attention is devoted to optimizing the efficacy, efficiency, and practical usefulness of the encoding of visual information. A commonly exploited method is non-invasive functional evaluation in sighted subjects or with computational models, by making use of simulated prosthetic vision (SPV) pipelines. Although the SPV literature has provided us with some fundamental insights, an important drawback that researchers and clinicians may encounter is the lack of realism in the simulation of cortical prosthetic vision, which limits the validity for real-life applications. Moreover, none of the existing simulators address the specific practical requirements for the electrical stimulation parameters. In this study, we developed a PyTorch-based, fast and fully differentiable phosphene simulator. Our simulator transforms specific electrode stimulation patterns into biologically plausible representations of the artificial visual percepts that the prosthesis wearer is expected to see. The simulator integrates a wide range of both classical and recent clinical results with neurophysiological evidence in humans and non …
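
One biologically grounded ingredient in this line of work is the retinotopic mapping between cortex and visual field. As a hedged illustration only, the monopole map below and its parameter values are common textbook choices, not necessarily those used in the published simulator: an electrode's cortical position determines where its phosphene appears, and cortical magnification determines how large it looks.

import numpy as np

def electrode_to_phosphene(z_mm, current_spread_mm=1.0, k=17.3, a=0.75):
    # inverse of the monopole retinotopic map z = k * log((w + a) / a),
    # with z the cortical position (complex, mm) and w the visual-field
    # position (complex, degrees); k and a are illustrative constants
    w = a * (np.exp(z_mm / k) - 1.0)
    eccentricity = abs(w)
    M = k / (eccentricity + a)        # cortical magnification (mm/deg)
    size_deg = current_spread_mm / M  # spread on cortex -> phosphene size
    return w, size_deg

Under this model, electrodes near the foveal representation (small eccentricity, large magnification) yield small, densely packed phosphenes, while peripheral electrodes yield larger, sparser ones.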

Go to article

Abstract taken from Google Scholar:

Background: Turning in place is particularly bothersome for patients with Parkinson's disease (PD) experiencing freezing of gait (FOG). Cues designed to enforce goal-directed turning are not yet available. Objectives: Assess whether augmented reality (AR) visual cues improve FOG and turning in place in PD patients with FOG. Methods: Sixteen PD patients with FOG performed a series of 180° turns under an experimental condition with AR visual cues displayed through a HoloLens and two control conditions (one consisting of auditory cues and one without any cues). FOG episodes were annotated by two independent raters from video recordings. Motion data were measured with 17 inertial measurement units for calculating axial kinematics, scaling, and timing of turning. Results: AR visual cues did not reduce the percent time frozen (p = 0.73) or the number (p = 0.73) and duration (p = 0.78) of FOG episodes compared to the control condition without cues. All FOG parameters were higher with AR visual cues than with auditory cues [percent time frozen (p = 0.01), number (p = 0.02), and duration (p = 0.007) of FOG episodes]. The AR visual cues did reduce the peak angular velocity (visual vs. uncued p = 0.03; visual vs. auditory p = 0.02) and step height (visual vs. uncued p = 0.02; visual vs. auditory p = 0.007), and increased the step height coefficient of variation (visual vs. uncued p = 0.04; visual vs. auditory p = 0.01) and time to maximum head–pelvis separation (visual vs. uncued p = 0.02; visual vs. auditory p = 0.005), compared to both control conditions. Conclusions: The AR visual cues in this study did not reduce FOG, and worsened some …

Go to article

Abstract taken from Google Scholar:

Wearing smart glasses may be distracting and could thus annihilate the beneficial effects of cues on freezing of gait (FOG) in Parkinson’s disease. Furthermore, augmented reality (AR) cues might be effective in reducing FOG specifically in cueing-responsive patients. We present a single-patient study in which a patient with Parkinson’s disease traversed a doorway under different cueing conditions. Wearing AR glasses did not worsen FOG, nor did it affect the beneficial effects of cues. The AR visual cues themselves did not improve FOG. This single-patient study implies that the current design of AR glasses does not stand in the way of the development of AR visual cues. However, the effectiveness of AR visual cues remains to be proven.

Go to article
