PhD Student

Burcu Kucukoglu

PhD Candidate, Donders Institute for Brain, Cognition and Behaviour

My interests lie in augmenting human capabilities with intelligent artificial systems to empower people to reach their full potential. I am especially curious about reinforcement learning, biologically plausible neural network models, and Bayesian methods for machine learning. During my PhD, I will focus on developing reinforcement learning algorithms for efficient closed-loop control of neural systems, specifically in the context of generating optimal phosphene vision to restore actionable perception for the visually impaired through neurotechnology.

Abstract taken from Google Scholar:

With the recent progress in developing large-scale micro-electrodes, cortical neuroprostheses supporting hundreds of electrodes will be viable in the near future. We describe work in building a visual stimulation system that receives camera input images and produces stimulation patterns for driving a large set of electrodes. The system consists of a convolutional neural network FPGA accelerator and a recording and stimulation Application-Specific Integrated Circuit (ASIC) that produces the stimulation patterns. It is aimed at restoring visual perception in visually impaired subjects. The FPGA accelerator, VPDNN, runs a visual prosthesis network that generates an output used to create stimulation patterns, which are then converted by the ASIC into current pulses to drive a multi-electrode array. The accelerator exploits spatial sparsity and the use of reduced bit precision parameters for reduced computation, memory …

Go to article
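The data flow described above (camera frame → convolutional encoder on the FPGA → quantized stimulation pattern → current pulses from the ASIC) can be illustrated with a minimal sketch. This is not the VPDNN implementation: the grid size, number of stimulation levels, current range, and the function names `encode_to_stimulation` and `to_current_pulses` are hypothetical stand-ins, and average pooling stands in for the actual convolutional network.

```python
import numpy as np

def encode_to_stimulation(frame, grid=(32, 32), levels=16):
    """Downsample a grayscale camera frame onto a coarse electrode grid and
    quantize intensities to a few stimulation levels (stand-in for the
    network running on the FPGA accelerator)."""
    h, w = frame.shape
    gh, gw = grid
    # Crop so the frame divides evenly, then average-pool onto the grid.
    pooled = frame[: h - h % gh, : w - w % gw]
    pooled = pooled.reshape(gh, h // gh, gw, w // gw).mean(axis=(1, 3))
    # Reduced bit precision: quantize to a small number of discrete levels.
    return np.round(pooled / pooled.max() * (levels - 1)).astype(np.uint8)

def to_current_pulses(pattern, max_current_ua=80.0):
    """Map quantized stimulation levels to per-electrode pulse amplitudes
    (the role played by the recording/stimulation ASIC)."""
    return pattern.astype(np.float32) / pattern.max() * max_current_ua

frame = np.random.rand(128, 128)          # stand-in for a camera image
pattern = encode_to_stimulation(frame)    # coarse stimulation pattern
pulses = to_current_pulses(pattern)       # current amplitudes per electrode
print(pattern.shape, pulses.max())
```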

Abstract taken from Google Scholar:

Visual neuroprostheses are a promising approach to restore basic sight in visually impaired people. A major challenge is to condense the sensory information contained in a complex environment into meaningful stimulation patterns at low spatial and temporal resolution. Previous approaches considered task-agnostic feature extractors such as edge detectors or semantic segmentation, which are likely suboptimal for specific tasks in complex dynamic environments. As an alternative approach, we propose to optimize stimulation patterns by end-to-end training of a feature extractor using deep reinforcement learning agents in virtual environments. We present a task-oriented evaluation framework to compare different stimulus generation mechanisms, such as static edge-based and adaptive end-to-end approaches like the one introduced here. Our experiments in Atari games show that stimulation patterns obtained …

Go to article
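As a rough illustration of the end-to-end idea (not the architecture used in the paper), the sketch below jointly defines a trainable phosphene encoder and a policy head, so that the stimulation patterns are shaped by the task reward rather than by a fixed edge detector. The class names `PhospheneEncoder` and `Agent`, the layer sizes, and the 32x32 electrode grid are assumptions; the training loop (e.g., a standard policy-gradient update on the action logits) is omitted.

```python
import torch
import torch.nn as nn

class PhospheneEncoder(nn.Module):
    """Trainable encoder: maps a camera frame to a coarse stimulation
    pattern (here 32x32), replacing a task-agnostic edge detector."""
    def __init__(self, grid=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=1, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(grid),
            nn.Conv2d(32, 1, 1), nn.Sigmoid(),  # per-electrode intensity in [0, 1]
        )

    def forward(self, frame):
        return self.net(frame)

class Agent(nn.Module):
    """Policy operating on the stimulation pattern; training encoder and
    policy jointly with the RL objective makes the patterns task-oriented."""
    def __init__(self, n_actions, grid=32):
        super().__init__()
        self.encoder = PhospheneEncoder(grid)
        self.policy = nn.Sequential(
            nn.Flatten(), nn.Linear(grid * grid, 256), nn.ReLU(),
            nn.Linear(256, n_actions),
        )

    def forward(self, frame):
        stim = self.encoder(frame)       # stimulation pattern for the implant
        return self.policy(stim), stim   # action logits + pattern for inspection

agent = Agent(n_actions=6)
frame = torch.rand(1, 1, 84, 84)         # stand-in Atari-style grayscale frame
logits, stim = agent(frame)
print(logits.shape, stim.shape)          # (1, 6) and (1, 1, 32, 32)
```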

Abstract taken from Google Scholar:

Advances in reinforcement learning (RL) often rely on massive compute resources and remain notoriously sample inefficient. In contrast, the human brain is able to efficiently learn effective control strategies using limited resources. This raises the question whether insights from neuroscience can be used to improve current RL methods. Predictive processing is a popular theoretical framework which maintains that the human brain is actively seeking to minimize surprise. We show that recurrent neural networks which predict their own sensory states can be leveraged to minimize surprise, yielding substantial gains in cumulative reward. Specifically, we present the Predictive Processing Proximal Policy Optimization (P4O) agent, an actor-critic reinforcement learning agent that applies predictive processing to a recurrent variant of the PPO algorithm by integrating a world model in its hidden state. P4O significantly outperforms a baseline recurrent variant of the PPO algorithm on multiple Atari games using a single GPU. It also outperforms other state-of-the-art agents given the same wall-clock time and exceeds human gamer performance on multiple games including Seaquest, which is a particularly challenging environment in the Atari domain. Altogether, our work underscores how insights from the field of neuroscience may support the development of more capable and efficient artificial agents.

Go to article
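A minimal sketch of the general mechanism behind P4O, assuming a GRU-based recurrent actor-critic with an auxiliary head that predicts the agent's next sensory encoding; the actual P4O architecture, loss weighting, and full PPO update (clipped policy loss, value loss, entropy bonus) are not reproduced here, and all names below are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PredictiveRecurrentAgent(nn.Module):
    """Recurrent actor-critic with a predictive (world-model) head: the GRU
    state drives the policy and value outputs and also predicts the agent's
    next sensory encoding, so prediction error ("surprise") can be minimized
    alongside the usual RL objective."""
    def __init__(self, obs_dim, n_actions, hidden=256):
        super().__init__()
        self.encoder = nn.Linear(obs_dim, hidden)
        self.gru = nn.GRUCell(hidden, hidden)
        self.policy = nn.Linear(hidden, n_actions)   # actor head
        self.value = nn.Linear(hidden, 1)            # critic head
        self.predictor = nn.Linear(hidden, hidden)   # predicts next encoding

    def forward(self, obs, h):
        z = torch.relu(self.encoder(obs))
        h = self.gru(z, h)                           # world model lives in h
        return self.policy(h), self.value(h), self.predictor(h), h

agent = PredictiveRecurrentAgent(obs_dim=128, n_actions=6)
h = torch.zeros(1, 256)
obs_t, obs_t1 = torch.rand(1, 128), torch.rand(1, 128)

logits, value, pred_next, h = agent(obs_t, h)
with torch.no_grad():
    z_next = torch.relu(agent.encoder(obs_t1))       # target: next sensory encoding

# Auxiliary "surprise" term that would be added to the standard PPO loss.
prediction_loss = F.mse_loss(pred_next, z_next)
print(logits.shape, value.shape, prediction_loss.item())
```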
