Assistant Professor

Sander Keemink

Assistant Professor - AI department, Donders Institute for Brain, Cognition and Behaviour

Abstract taken from Google Scholar:

Efficient and robust control using spiking neural networks (SNNs) is still an open problem. Whilst the behaviour of biological agents is produced through sparse and irregular spiking patterns, which provide both robust and efficient control, the activity patterns in most artificial spiking neural networks used for control are dense and regular — resulting in potentially less efficient codes. Additionally, for most existing control solutions network training or optimization is necessary, even for fully identified systems, complicating their implementation in on-chip low-power solutions. The neuroscience theory of Spike Coding Networks (SCNs) offers a fully analytical solution for implementing dynamical systems in recurrent spiking neural networks — while maintaining irregular, sparse, and robust spiking activity — but it is not clear how to directly apply it to control problems. Here, we extend SCN theory by incorporating closed-form …
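
As a rough illustration of the spike coding network recipe this abstract builds on, the sketch below derives recurrent weights analytically from a random decoder so that a small network's readout follows a chosen linear dynamical system. The dynamics matrix, decoder scaling, input pulse, and all constants are illustrative assumptions, not values from the paper.

```python
# Minimal spike coding network (SCN) sketch: recurrent weights are derived
# analytically from a random decoder D so that the readout x_hat = D r
# follows dx/dt = A x + c(t). All parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(0)
dt, lam = 1e-4, 50.0                       # time step (s), readout decay rate
N, K = 40, 2                               # neurons, latent dimensions
A = np.array([[-1.0, -8.0],                # target dynamics: damped rotation
              [ 8.0, -1.0]])
D = rng.standard_normal((K, N))
D /= np.linalg.norm(D, axis=0) * 10.0      # small decoding weights

W_fast = -D.T @ D                          # fast (instantaneous) synapses
W_slow = D.T @ (A + lam * np.eye(K)) @ D   # slow synapses
thresh = np.sum(D**2, axis=0) / 2.0        # spiking thresholds

V, r, trace = np.zeros(N), np.zeros(N), []
for step in range(20000):
    c = np.array([30.0, 0.0]) if step < 2000 else np.zeros(2)  # input pulse
    V += dt * (-lam * V + D.T @ c + W_slow @ r)
    i = np.argmax(V - thresh)              # at most one spike per step,
    if V[i] > thresh[i]:                   # a common numerical convention
        V += W_fast[:, i]                  # fast recurrent kick
        r[i] += 1.0
    r -= dt * lam * r                      # filtered spike trains decay
    trace.append(D @ r)                    # network readout x_hat
```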

Go to article

Abstract taken from Google Scholar:

Stochastic lattice models (sLMs) are computational tools for simulating spatiotemporal dynamics in physics, computational biology, chemistry, ecology, and other fields. Despite their widespread use, it is challenging to fit sLMs to data, as their likelihood function is commonly intractable and the models are non-differentiable. The adjacent field of agent-based modelling (ABM), faced with similar challenges, has recently introduced an approach to approximate gradients in network-controlled ABMs via reparameterization tricks. This approach enables efficient gradient-based optimization with automatic differentiation (AD), which allows for a directed local search of suitable parameters rather than estimation via black-box sampling. In this study, we investigate the feasibility of using similar reparameterization tricks to fit sLMs through backpropagation of approximate gradients. We consider four common scenarios: fitting to single-state transitions, fitting to trajectories, inference of lattice states, and identification of stable lattice configurations. We demonstrate that all tasks can be solved by AD using four example sLMs from sociology, biophysics, image processing, and physical chemistry. Our results show that AD via approximate gradients is a promising method to fit sLMs to data for a wide variety of models and tasks.
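
The sketch below conveys the flavour of the reparameterization idea: a toy SIS-like contact process on a lattice is made differentiable with straight-through Gumbel-softmax sampling, so a single infection parameter can be fitted by gradient descent. The model, the loss (matching a final infected density), and all constants are assumptions for illustration, not the paper's benchmark sLMs.

```python
# Toy stochastic lattice model (SIS-like contact process) fitted by
# backpropagating through straight-through Gumbel-softmax transitions.
# The model, loss, and target statistic are illustrative assumptions.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
L, steps = 32, 20
theta = torch.tensor(0.0, requires_grad=True)   # logit of infection strength
opt = torch.optim.Adam([theta], lr=0.05)
target_density = 0.4                            # hypothetical observed statistic

def neighbour_sum(x):
    # sum of the four nearest neighbours on a periodic 2-D lattice
    return (torch.roll(x, 1, 0) + torch.roll(x, -1, 0)
            + torch.roll(x, 1, 1) + torch.roll(x, -1, 1))

for _ in range(200):
    x = (torch.rand(L, L) < 0.1).float()        # random initial infections
    for _ in range(steps):
        p_infect = torch.sigmoid(theta) * neighbour_sum(x) / 4.0
        p = 0.9 * x + (1.0 - x) * p_infect      # stay infected w.p. 0.9
        logits = torch.stack([torch.log(p + 1e-8),
                              torch.log(1.0 - p + 1e-8)], dim=-1)
        # discrete sample in the forward pass, soft gradient in the backward
        x = F.gumbel_softmax(logits, tau=0.5, hard=True)[..., 0]
    loss = (x.mean() - target_density) ** 2     # match final infected density
    opt.zero_grad(); loss.backward(); opt.step()
```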

Go to article

Abstract taken from Google Scholar:

The brain efficiently performs nonlinear computations through its intricate networks of spiking neurons, but how this is done remains elusive. While nonlinear computations can be implemented successfully in spiking neural networks, this requires supervised training and the resulting connectivity can be hard to interpret. In contrast, the required connectivity for any computation in the form of a linear dynamical system can be directly derived and understood with the spike coding network (SCN) framework. These networks also have biologically realistic activity patterns and are highly robust to cell death. Here we extend the SCN framework to directly implement any polynomial dynamical system, without the need for training. This results in networks requiring a mix of synapse types (fast, slow, and multiplicative), which we term multiplicative spike coding networks (mSCNs). Using mSCNs, we demonstrate how to directly derive the required connectivity for several nonlinear dynamical systems. We also show how to carry out higher-order polynomials with coupled networks that use only pair-wise multiplicative synapses, and provide expected numbers of connections for each synapse type. Overall, our work demonstrates a novel method for implementing nonlinear computations in spiking neural networks, while keeping the attractive features of standard SCNs (robustness, realistic activity patterns, and interpretable connectivity). Finally, we discuss the biological plausibility of our approach, and how the high accuracy and robustness of the approach may be of interest for neuromorphic computing.
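
A minimal sketch of the mSCN idea, assuming a 1-D quadratic system (logistic growth): the connectivity is written down directly from the decoder rather than trained. For brevity the quadratic slow current is evaluated via the readout, which is algebraically equivalent to summing the pairwise multiplicative synapses the abstract describes. All constants are illustrative assumptions.

```python
# Multiplicative SCN sketch for the quadratic system dx/dt = x - x^2
# (logistic growth); connectivity is written down, not trained. The slow
# quadratic current D_i * x_hat^2 is evaluated via the readout here; expanding
# x_hat^2 = sum_jk D_j D_k r_j r_k recovers pairwise multiplicative synapses.
import numpy as np

rng = np.random.default_rng(1)
dt, lam, N = 1e-4, 50.0, 30
D = rng.choice([-1.0, 1.0], N) * 0.1       # 1-D decoding weights
thresh = D**2 / 2.0
W_fast = -np.outer(D, D)

V, r, xs = np.zeros(N), np.zeros(N), []
for step in range(50000):
    x_hat = D @ r
    c = 2.0 if step < 1000 else 0.0        # brief pulse to leave x = 0
    slow = D * ((1.0 + lam) * x_hat - x_hat**2 + c)
    V += dt * (-lam * V + slow)
    i = np.argmax(V - thresh)
    if V[i] > thresh[i]:
        V += W_fast[:, i]
        r[i] += 1.0
    r -= dt * lam * r
    xs.append(x_hat)                       # should settle near the fixed point x = 1
```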

Go to article

Abstract taken from Google Scholar:

Neurons mainly communicate through spikes, and much effort has been spent to understand how the dynamics of spiking neural networks (SNNs) relates to their connectivity. Meanwhile, most major advances in machine learning have been made with simpler, rate-based networks, with SNNs only recently showing competitive results, largely thanks to transferring insights from rate to spiking networks. However, it is still an open question exactly which computations SNNs perform. Recently, the time-averaged firing rates of several SNNs were shown to yield the solutions to convex optimization problems. Here we turn these findings around and show that virtually all inhibition-dominated SNNs can be understood through the lens of convex optimization, with network connectivity, timescales, and firing thresholds being intricately linked to the parameters of underlying convex optimization problems. This approach yields new, geometric insights into the computations performed by spiking networks. In particular, we establish a class of SNNs whose instantaneous output provides a solution to linear or quadratic programming problems, and we thereby reveal their input-output mapping. Using these insights, we derive local, supervised learning rules that can approximate given convex input-output functions, and we show that the resulting networks are consistent with many features from biological networks, such as low firing rates, irregular firing, E/I balance, and robustness to perturbations and synaptic delays.
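
The claimed correspondence can be previewed in miniature. Assuming an auto-encoding, inhibition-dominated network with decoder D, the filtered firing rates approximately solve a non-negative quadratic program; the sketch below simply solves that assumed QP directly, showing how connectivity, thresholds, and feed-forward input reappear as the program's quadratic form, constraints, and linear term. The cost function and all numbers are illustrative, not the paper's derivation.

```python
# Assumed rate-to-QP correspondence in miniature: rates r of an
# auto-encoding, inhibition-dominated network with decoder D as the
# solution of
#   minimize ||x - D r||^2 + mu * ||r||^2   subject to  r >= 0,
# where connectivity (-D^T D), thresholds, and input (D^T x) map onto
# the QP's quadratic form, constraints, and linear term.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(2)
K, N, mu = 2, 12, 0.01
D = rng.standard_normal((K, N))
x = np.array([1.0, -0.5])                        # signal to encode

# ridge-regularized non-negative least squares via the stacking trick
D_aug = np.vstack([D, np.sqrt(mu) * np.eye(N)])
x_aug = np.concatenate([x, np.zeros(N)])
r, _ = nnls(D_aug, x_aug)

print("readout D r:", D @ r)                     # close to x
print("active neurons:", int(np.sum(r > 1e-9)))  # the QP's active set
```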

Go to article

Abstract taken from Google Scholar:

A central tenet of neuroscience is that the brain works through large populations of interacting neurons. With recent advances in recording techniques, the inner working of these populations has come into full view. Analyzing the resulting large-scale data sets is challenging because of the often complex and ‘mixed’ dependency of neural activities on experimental parameters, such as stimuli, decisions, or motor responses. Here we review recent insights gained from analyzing these data with dimensionality reduction methods that ‘demix’ these dependencies. We …
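
A simplified sketch of the demixing step such methods share: split trial-averaged activity X[stimulus, time, neuron] into additive marginalizations and reduce each separately, so that components depend on one experimental parameter at a time. The synthetic data and single-component readout are assumptions; this is the spirit of demixed PCA, not a reimplementation.

```python
# Simplified demixing: split trial-averaged activity into additive
# marginalizations (time-only, stimulus-only, interaction) and reduce
# each with PCA. Synthetic data are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(3)
S, T, N = 4, 100, 50                        # stimuli, time points, neurons
t = np.linspace(0.0, 1.0, T)
X = (rng.standard_normal((1, 1, N)) * np.sin(2 * np.pi * t)[None, :, None]
     + rng.standard_normal((S, 1, N)) * (t**2)[None, :, None]
     + 0.1 * rng.standard_normal((S, T, N)))
X -= X.mean(axis=(0, 1), keepdims=True)     # center each neuron

X_time = np.broadcast_to(X.mean(axis=0, keepdims=True), X.shape)
X_stim = np.broadcast_to(X.mean(axis=1, keepdims=True), X.shape)
X_mix = X - X_time - X_stim                 # stimulus-time interaction

def top_pc(M):
    # first principal axis of a (samples, neurons) matrix via SVD
    _, _, Vt = np.linalg.svd(M.reshape(-1, N), full_matrices=False)
    return Vt[0]

for name, M in [("time", X_time), ("stimulus", X_stim), ("mixed", X_mix)]:
    axis = top_pc(M)
    print(name, "component variance:", float(np.var(M.reshape(-1, N) @ axis)))
```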

Go to article

Abstract taken from Google Scholar:

In vivo calcium imaging has become a method of choice to image neuronal population activity throughout the nervous system. These experiments generate large sequences of images. Their analysis is computationally intensive and typically involves motion correction, image segmentation into regions of interest (ROIs), and extraction of fluorescence traces from each ROI. Out of focus fluorescence from surrounding neuropil and other cells can strongly contaminate the signal assigned to a given ROI. In this study, we introduce the FISSA toolbox (Fast Image Signal Separation Analysis) for neuropil decontamination. Given pre-defined ROIs, the FISSA toolbox automatically extracts the surrounding local neuropil and performs blind-source separation with non-negative matrix factorization. Using both simulated and in vivo data, we show that this toolbox performs similarly or better than existing published methods. FISSA …
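
A hedged sketch of the separation strategy described here: stack the ROI trace with traces from surrounding neuropil subregions, run non-negative matrix factorization, and keep the source that loads most strongly onto the ROI. The synthetic transients and the component-selection rule are assumptions, not the FISSA toolbox's actual implementation.

```python
# FISSA-style separation sketch: NMF on stacked ROI + neuropil traces.
# Synthetic traces and the selection rule are illustrative assumptions.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(4)
T = 1000
events = np.zeros(T); events[rng.integers(0, T, 15)] = 1.0
kernel = np.exp(-np.arange(100) / 20.0)
cell = np.convolve(events, kernel)[:T]                # calcium-like transients
npil = 0.5 + 0.5 * np.sin(np.linspace(0.0, 20.0, T))  # shared neuropil signal

roi_trace = cell + 0.7 * npil                   # ROI contaminated by neuropil
ring = np.stack([w * npil + 0.05 * rng.random(T) for w in (0.9, 1.0, 1.1)])

traces = np.vstack([roi_trace, ring])           # (regions, time), non-negative
model = NMF(n_components=2, init="nndsvd", max_iter=500)
W = model.fit_transform(traces)                 # how regions load on sources
H = model.components_                           # separated source traces

cell_est = H[np.argmax(W[0])]                   # source dominating the ROI row
```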

Go to article

Abstract taken from Google Scholar:

Neurons in the primary visual cortex respond to oriented stimuli placed in the center of their receptive field, yet their response is modulated by stimuli outside the receptive field (the surround). Classically, this surround modulation is assumed to be strongest if the orientation of the surround stimulus aligns with the neuron’s preferred orientation, irrespective of the actual center stimulus. This neuron-dependent surround modulation has been used to explain a wide range of psychophysical phenomena, such as biased tilt perception and saliency of stimuli with contrasting orientation. However, several neurophysiological studies have shown that for most neurons surround modulation is instead center dependent: it is strongest if the surround orientation aligns with the center stimulus. As the impact of such center-dependent modulation on the population level is unknown, we examine this using computational models. We …
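
The distinction can be made concrete with a toy population of orientation-tuned units, as in the sketch below: neuron-dependent suppression (tied to each unit's preference) shifts the population peak, while center-dependent suppression (tied to the center stimulus) scales the population uniformly. Tuning widths, suppression strength, and the crude peak readout are illustrative assumptions, not the paper's models.

```python
# Toy comparison of neuron-dependent vs center-dependent surround
# modulation for a homogeneous population of orientation-tuned units.
import numpy as np

prefs = np.linspace(-90.0, 90.0, 180, endpoint=False)  # preferred orientations

def tuning(theta, pref, width=20.0):
    d = (theta - pref + 90.0) % 180.0 - 90.0    # circular orientation difference
    return np.exp(-d**2 / (2.0 * width**2))

center, surround, k = 0.0, 20.0, 0.5            # stimulus orientations, strength
drive = tuning(center, prefs)

# neuron-dependent: suppression peaks where surround matches each *preference*
r_neuron = drive * (1.0 - k * tuning(surround, prefs))
# center-dependent: suppression depends only on the surround-*center* match
r_center = drive * (1.0 - k * tuning(surround, np.full_like(prefs, center)))

def peak(r): return prefs[np.argmax(r)]
print("neuron-dependent population peak:", peak(r_neuron))  # repelled from surround
print("center-dependent population peak:", peak(r_center))  # unshifted
```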

Go to article

Abstract taken from Google Scholar:

Throughout the nervous system, information is commonly coded in activity distributed over populations of neurons. In idealized situations where a single, continuous stimulus is encoded in a homogeneous population code, the value of the encoded stimulus can be read out without bias. However, in many situations, multiple stimuli are simultaneously present; for example, multiple motion patterns might overlap. Here we find that when multiple stimuli that overlap in their neural representation are simultaneously encoded in the population, biases in the read-out emerge. Although the bias disappears in the absence of noise, the bias is remarkably persistent at low noise levels. The bias can be reduced by competitive encoding schemes or by employing complex decoders. To study the origin of the bias, we develop a novel general framework based on Gaussian processes that allows an accurate calculation of the …
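
A small simulation conveys the setup: two nearby directions are encoded by broadly tuned Poisson neurons and read out by maximum likelihood over a coarse grid (restricted near the true values purely to keep the sketch fast), so the decoded separation can be compared with the true one. The tuning model, noise model, and grid decoder are assumptions, not the paper's Gaussian-process framework.

```python
# Two overlapping directions encoded by broadly tuned Poisson neurons,
# decoded by grid-search maximum likelihood. All settings are assumptions.
import numpy as np

rng = np.random.default_rng(5)
prefs = np.linspace(0.0, 360.0, 60, endpoint=False)  # preferred directions

def pop(th1, th2, width=40.0):
    d = lambda th: (prefs - th + 180.0) % 360.0 - 180.0
    return (np.exp(-d(th1)**2 / (2.0 * width**2))
            + np.exp(-d(th2)**2 / (2.0 * width**2)))

true1, true2, gain = 170.0, 190.0, 50.0              # overlapping stimuli
grid = np.arange(130.0, 232.0, 2.0)                  # coarse grid, for speed
seps = []
for _ in range(50):
    counts = rng.poisson(gain * pop(true1, true2))
    best, best_ll = (0.0, 0.0), -np.inf
    for a in grid:                                   # Poisson log-likelihood
        for b in grid[grid >= a]:
            lam = gain * pop(a, b) + 1e-9
            ll = np.sum(counts * np.log(lam) - lam)
            if ll > best_ll:
                best_ll, best = ll, (a, b)
    seps.append(best[1] - best[0])
print("true separation: 20.0  mean decoded:", np.mean(seps))
```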

Go to article

Abstract taken from Google Scholar:

The primary visual cortex (V1) has long been considered the main low-level visual analysis area of the brain. The classical view is of a feedforward system functioning as an edge detector, in which each cell has a receptive field (RF) and a preferred orientation. Whilst intuitive, this view is not the whole story. Although stimuli outside a neuron’s RF do not result in an increased response by themselves, they do modulate a neuron’s response to what’s inside its RF. We will refer to such extra-RF effects as contextual modulation. Contextual modulation is thought to underlie several perceptual phenomena, such as various orientation illusions and saliency of specific features (such as a contour or differing element). This gives a view of V1 as more than a collection of edge detectors, with neurons collectively extracting information beyond their RFs. However, many of the accounts linking psychophysics and physiology …

Go to article

Abstract taken from Google Scholar:

Throughout the nervous system information is typically coded in activity distributed over large populations of neurons with broad tuning curves. In idealized situations where a single, continuous stimulus is encoded in a homogeneous population code, the value of an encoded stimulus can be read out without bias. Here we find that when multiple stimuli are simultaneously coded in the population, biases in the estimates of the stimuli and strong correlations between estimates can emerge. Although bias produced via this novel mechanism can be reduced by competitive coding and disappears in the complete absence of noise, the bias diminishes only slowly as a function of neural noise level. A Gaussian process framework allows for accurate calculation of the bias and shows that a bimodal estimate distribution underlies the bias. The results have implications for neural coding and behavioral experiments.

Go to article
