Understanding how mind emerges from matter is one of the great remaining questions in science. The Artificial Cognitive Systems lab studies the computational principles that underlie natural intelligence and uses these principles to develop more capable and efficient intelligent machines.
Our lab is located in the Spinoza building, Montessorilaan 3, Nijmegen, The Netherlands. When entering the Spinoza building, proceed to room B.00.93A on the ground floor of the B wing (the low-rise part of the building). Please contact the front desk for more specific directions once you arrive. You can reach the Spinoza building via public transport.
I am interested in the theoretical and computational principles that allow the brain to generate optimal behavior based on sparse reward signals provided by the environment. We create biologically plausible neural network models that further our understanding of natural intelligence and provide a route towards general-purpose intelligent machines. You may find my curriculum vitae here.
I am an assistant professor in AI. My areas of expertise are probabilistic machine learning and theoretical neuroscience. In my work I design probabilistic models of the human brain based on deep neural networks. I am also active in pure machine learning research, especially in the fields of variational inference and optimal transport.
I am interested in structural and functional brain connectivity. In particular, I study different (probabilistic) generative models and develop techniques to compute them efficiently. Two central themes in my research are the integration of different imaging modalities (e.g. fMRI and dMRI) and the explicit modeling of uncertainty in connectivity estimates.
My research interests concern spiking and recurrent neural networks, drawing on these models both for neuroscientific insight and for better ML methods. In particular, I think about the encoding of information in spike timing, the utility of firing-rate trajectories, and more. Beyond this, I am interested in the functional and computational benefits that emerge in systems constrained in the way biology is (e.g. in energy, physical resources, etc.).
In my research, I investigate the computational mechanisms underlying the neural encoding of sound location in naturalistic spatial hearing in normal and hearing-impaired listeners. To achieve this, I use an interdisciplinary approach combining cognitive neuroscience and computational modeling: I develop neurobiologically inspired deep neural network models of sound location processing in subcortical auditory structures and the auditory cortex. Building on the resulting insights, I aim to optimize signal processing algorithms in cochlear implants to boost neural spatial auditory processing in cochlear implant users.
My research interest is in the application of AI and ML techniques to self-driving vehicles. Applications include scenario-based safety assessment using naturalistic driving data, prediction of road user behaviour, environmental perception, and working towards controllable, explainable and responsible AI.
I am interested in the biological constraints and environmental pressures that drive organisation and information processing in the visual cortex. What aspects of the biological substrate play a key role in the formation of neural circuits and visual function? What visual representations do agents learn in a context in which they actively have to engage with their environment? In order to answer these questions, I aim to develop biologically-plausible neural network models that can solve visual problems in ecologically valid environments.
Reinforcement learning is the problem of an agent learning how to make decisions in an unknown world based only on incomplete sensory information. During my PhD project, I focus on constructing and navigating internal representations of procedurally generated environments with "dual variational control". Importantly, the uncertainty of the variational distribution serves as an intrinsic motivation for efficient exploration. That is, uncertainty about the world decreases with (smart) exploration, until the agent knows how to act to get what it wants.
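As a rough illustration of this idea (and not the actual dual variational control implementation), the sketch below uses the entropy of a diagonal-Gaussian variational posterior as an intrinsic exploration bonus; all names and the weighting parameter `beta` are hypothetical.

```python
# Illustrative sketch: uncertainty of a variational posterior as an
# intrinsic exploration bonus (hypothetical names, not the project code).
import numpy as np

def gaussian_entropy(log_var: np.ndarray) -> float:
    """Entropy of a diagonal Gaussian with the given log-variances."""
    return 0.5 * np.sum(log_var + np.log(2 * np.pi * np.e))

def shaped_reward(extrinsic: float, log_var: np.ndarray, beta: float = 0.1) -> float:
    """Extrinsic reward plus an uncertainty-driven intrinsic bonus."""
    return extrinsic + beta * gaussian_entropy(log_var)

# As the posterior variances shrink with (smart) exploration, the bonus
# vanishes and behaviour is driven by the extrinsic reward alone.
print(shaped_reward(1.0, log_var=np.array([0.0, -1.0, -2.0])))
```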
I'm an external PhD student who studies biologically plausible approximations of the backpropagation algorithm. My research focuses on activity-based learning: a form of learning where error signals are propagated using only information locally available at the synapse. Scaling these algorithms from toy problems to real-world applications is one of the central themes in my work.
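As a generic example of this principle (not the specific algorithm studied in this project), the sketch below updates weights using only presynaptic activity, postsynaptic activity and an error that is available locally at the neuron; all names are illustrative.

```python
# Generic sketch of an activity-based, local learning rule
# (illustrative only; not the specific algorithm under study).
import numpy as np

def local_update(w, pre, post_target, lr=0.01):
    """Weight update using only locally available quantities."""
    post = w @ pre                     # postsynaptic activity
    local_error = post_target - post   # error available at the neuron
    return w + lr * np.outer(local_error, pre)

w = np.zeros((2, 3))
w = local_update(w, pre=np.array([1.0, 0.5, -0.2]), post_target=np.array([0.3, -0.1]))
print(w)
```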
Can we find a generic method to recognize behavior in multi-modal online data streams, independent of domain, species and sensors? Can we tune learned models to perform reliably in specific setups where only a limited amount of training data is available? These are the questions I will address during my PhD research.
I am a physician, working partly in the Human Genetics department of the Radboudumc and partly in this lab. As part of my PhD, I am working on applying artificial intelligence in our clinical practice. In this way, I hope to improve clinical care for our patients and make doctors' lives easier. I will mostly be working on facial recognition and on applying machine learning and probabilistic programming to phenotypic data, to assist in diagnosing patients and to see whether we can predict the outcome of genetic tests.
My interests lie in augmenting human capabilities with the help of intelligent artificial systems to empower people in reaching their full potential. I am especially curious about reinforcement learning, biologically plausible neural network models, and Bayesian methods for machine learning. During my PhD, I will be focusing on developing reinforcement learning algorithms for efficient closed-loop control of neural systems, in the specific context of generating optimal phosphene vision to restore some actionable perception for the visually impaired through neurotechnology.
The visual cortex represents stimuli in a complex and nonlinear way. I want to gain a better understanding of the representations and the dynamics of complex neuronal activity in the visual cortex. I aim to do this by developing advanced and biologically plausible computer models that generate predictions of neural activity recorded in awake monkeys.
My research focuses on the neural and cognitive mechanisms that underlie the segmentation of information into meaningful events. Do the boundaries between segments reflect a change in the representation of the external environment? Do these boundaries occur in a nested temporal hierarchy? Does attention affect the segments and, in turn, memory performance? To answer these questions, I analyse fMRI and MEG data gathered while participants perceive real-life-like stimuli, using a data-driven method to establish the segmentations.
I apply deep neural networks to affective computing for use in robotics, focusing on methods for the interpretability of black-box machine learning algorithms and aiming to improve human-robot interaction.
Restoring some visual perception in blindness using intelligent technology? Thanks to current developments in neurotechnology and artificial intelligence, this is becoming a very realistic scenario! My research aims to optimise prosthetic vision by developing clever models that translate our rather complex visual environment into meaningful stimulation patterns, bridging the gap between human and computer vision.
My research focuses on the understanding and improvement of variational inference and reinforcement learning algorithms, and their combination to allow the control of complex systems. Practical applications include neurotechnology, personalized preventive health, nutrition, behaviour and agriculture.
Visual sensory information, as we receive it on our retina, mainly contains partially hidden objects. Instead of perceiving them as fragmented, we perceive them as completed objects. I am interested in how the brain achieves this amodal completion, and how it (and its underlying neural mechanism) is related to other phenomena such as modal completion and imagery.
We are interested in the mechanisms that allow humans to solve complex problems that require closing the perception-action cycle. We design cognitively challenging tasks and have human participants execute them to investigate the ensuing neurobehavioural responses. For the analysis of neurobehavioural data, we develop new computational models of human brain function and sophisticated machine learning techniques for large-scale neural data analysis.
In addition to understanding the empirical basis of complex problem solving, we are interested in the theoretical underpinnings of adaptive behaviour. Specifically, we ask whether computational models that are rooted in AI can provide an account of the learning, inference and control problems that are solved by the human brain. To address this question, we develop new learning algorithms and investigate adaptive behaviour in artificial agents.
Machine learning is a key component in the development of new assistive technology and neurotechnology. We push the state of the art by developing new algorithms that are able to monitor and interpret neurobehavioural data. We also develop new algorithms that allow the manipulation of neural processes to restore or augment cognitive function.
We have several projects available that can be theoretical, empirical or applied in flavor. Theoretical projects focus on new developments in machine learning (neural networks, Bayesian statistics, reinforcement learning) and computational modeling of human brain function. Empirical projects focus on developing experiments to study the brain at work in naturalistic settings, where data is acquired using sophisticated recording techniques. To interpret these data, we develop and use advanced models and algorithms for large-scale data analysis. Applied projects focus on the development and testing of new algorithms that drive the development of the next generation of intelligent machines. Example application domains are neurotechnology, healthcare, artificial creativity, gameplay and cognitive robotics. For more details on our work, please consult the various pages on www.artcogsys.com.
We welcome motivated students with a quantitative mindset and an interest in human cognition. If you are interested, just send us an e-mail stating your background and research interests.
During the lectures, the formal concepts underlying modern neural networks will be developed, including deep neural networks and recurrent neural networks. Various classic neural network models will also be discussed, such as the (multi-layer) perceptron, Hopfield networks and Boltzmann machines. During the practicals, students will immerse themselves in the theoretical and practical aspects of neural networks and implement various models using Python, a programming language they will learn to use during the course.
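To give a flavour of the practicals, the sketch below shows what a minimal perceptron implementation in Python might look like; it is an illustration only and not part of the actual course material.

```python
# Minimal perceptron sketch (illustration only, not a course assignment):
# learn the logical AND function with the classic perceptron learning rule.
import numpy as np

def train_perceptron(X, y, epochs=10, lr=0.1):
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for x_i, y_i in zip(X, y):
            pred = float(w @ x_i + b > 0)   # threshold activation
            w += lr * (y_i - pred) * x_i    # perceptron update rule
            b += lr * (y_i - pred)
    return w, b

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])                  # logical AND
w, b = train_perceptron(X, y)
print([float(w @ x + b > 0) for x in X])    # [0.0, 0.0, 0.0, 1.0]
```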
A main objective of artificial intelligence is to build machines whose cognitive abilities match (or surpass) those of humans. This is also referred to as strong AI. One way to achieve this goal is by developing cognitive architectures that implement the algorithms used by our own brains. The success of such an approach relies on a continuous interplay between AI and neuroscience.
In this course, we will explore how computational models, particularly neural networks, can yield new insights about the mechanisms that give rise to natural intelligence and provide us with the tools to model cognitive processes in artificial systems.
The course consists of different components: