Understanding how mind emerges from matter is one of the great remaining questions in science. How is it possible that the brain gives rise to subjective experience, allowing us to contemplate ourselves as well as the universe from which we originate?
The Artificial Cognitive Systems lab aims to uncover the theoretical principles and neural mechanisms that mediate natural intelligence. To this end, we develop advanced neural network models and study empirically how the human brain operates in naturalistic settings.
Ultimately, our goal is to create intelligent machines that think like people by combining insights from multiple scientific disciplines. From an applied perspective we are interested in using intelligent machines to address societal challenges.
Our lab is physically located in the Spinoza building, Montessorilaan 3, Nijmegen, The Netherlands. When entering the Spinoza building, proceed to room B.00.93A on the ground floor of the B wing (the low-rise part of the building). Please contact the front desk for more specific directions once you arrive. You can reach the Spinoza building via public transport.
I am interested in the computational principles that underlie adaptive behaviour. I focus on how the brain extracts information from its environment and uses this information to generate optimal actions. My main goal is to develop biologically plausible neural network models that further our understanding of natural intelligence and provide a route towards solving the strong AI problem. You may find my curriculum vitae here.
I apply computational models to (high-field) fMRI and MEG data to investigate the neural mechanisms underlying perception and memory. Specifically, I am interested in how bottom-up and top-down processes are implemented, and how they interact, in the human brain.
I am working on my Veni project together with Marcel van Gerven, Pieter Medendorp and Roy Kessels. My work focuses on the relation between functional and structural changes in the aging brain and their joint association with cognition.
My primary research interest is in developing computational models of (ultra-high-field) fMRI and MEG data to characterize the relationship between cognitive processes and brain connectivity. Besides brain connectivity, I am also interested in neural coding, unsupervised feature learning and deep learning.
I am working on the NEuronal STimulatiOn for Recovery of function (NESTOR) project with Richard van Wezel and Marcel van Gerven, developing cortical implants to restore sight in blind people. In this project, I develop computer vision models that transform camera input into meaningful phosphene patterns, build AR/VR simulations of phosphene vision, and run psychophysical experiments.
I am interested in structural and functional brain connectivity. In particular, I study different (probabilistic) generative models and develop techniques to compute them efficiently. Two central themes in my research are the integration of different imaging modalities (e.g. fMRI and dMRI) and the explicit modeling of uncertainty in connectivity estimates.
I am mainly interested in visual experience in the absence of visual input. During my PhD I will investigate to what extent visual imagery relies on the same neural mechanisms as visual perception. Besides neuroscience, I also have a strong interest in philosophy of mind and consciousness.
How does the brain extract complex features and concepts from the information entering our senses? I am particularly interested in how individual neurons, forming a complex network, can perform this task. I apply computational models to experimental data of high temporal resolution to unravel the dynamics of the mechanisms involved in this process.
I apply deep neural networks to affective computing for use in robotics, focusing on methods for the interpretability of black-box machine learning algorithms and aiming to improve human-robot interaction.
I am focusing on encoding and decoding models. One major topic is identifying biologically plausible feature transformations by investigating to what extent deep learning can predict human perceptual processing. I also work on the optimisation of this mapping and its inversion using statistical machine learning techniques.
Visual sensory information, as we receive it on our retina, mainly contains partially hidden objects. Instead of perceiving them as fragmented, we perceive them as completed objects. I am interested in how the brain achieves this amodal completion, and how it (and its underlying neural mechanism) is related to other phenomena such as modal completion and imagery.
I develop deep neural network models that run on mobile devices. My work involves researching and developing models, reducing their computational footprint, and building mobile applications that make use of them.
We are interested in the brain mechanisms that allow humans to solve complex problems that require closing of the perception-action cycle. We design cognitively challenging tasks and have human participants execute them to investigate their behavioral and/or neural responses using sophisticated analysis techniques developed in the group. These techniques are based on Bayesian, neural network and reinforcement learning approaches.
Next to understanding the empirical basis of complex problem solving, we are interested in the theoretical underpinnings of adaptive behaviour. Specifically, we ask whether computational models that are rooted in AI can provide an account of the learning, inference and control problems that are solved by the human brain. To address this question, we develop new learning algorithms and simulate adaptive behaviour in artificial agents.
Understanding how the human brain solves cognitively challenging tasks is facilitated by the development of computational models that solve these tasks. We train computational models that learn to solve the task at hand and interrogate their internal states to find out how the network accomplishes this. We can then relate these internal states to the behaviour and neural signatures that human participants produce when solving the same task. By combining computational modelling and human behaviour and neural data in this way, we can elucidate the mechanisms that underlie human cognition.
Projects focus on improving our understanding of natural intelligence through computational modeling. This can take the form of creating new intelligent algorithms or empirical studies in cognitive neuroscience.
We welcome motivated students with a background in the exact sciences and/or in neuroscience. We also support more applied projects in domains such as health, security and robotics. Python programming experience is a must. If you are interested, just send us an e-mail.
During the lectures, the formal concepts underlying modern neural networks will be developed, including deep neural networks and recurrent neural networks. Various classic neural network models will also be discussed, such as the (multi-layer) perceptron, Hopfield networks and Boltzmann machines. During the practicals, students will immerse themselves in the theoretical and practical aspects of neural networks. Students will implement various models using Python, a programming language which they will learn to use during the course.
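To give a flavour of the kind of model implemented in the practicals, here is a minimal sketch of the classic Rosenblatt perceptron in plain Python, trained on the logical AND function (an illustrative example only; the actual course assignments may differ):

```python
# Minimal Rosenblatt perceptron: learns a linearly separable binary
# function (here, logical AND) using the perceptron learning rule.
def train_perceptron(data, lr=0.1, epochs=20):
    w = [0.0, 0.0]  # weights, one per input
    b = 0.0         # bias term
    for _ in range(epochs):
        for (x1, x2), target in data:
            # Threshold activation: fire if weighted sum exceeds zero.
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            # Perceptron rule: adjust weights in proportion to the error.
            err = target - pred
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Logical AND is linearly separable, so the perceptron converges.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
preds = [1 if w[0] * x1 + w[1] * x2 + b > 0 else 0 for (x1, x2), _ in data]
# preds == [0, 0, 0, 1]
```

The same update rule fails on non-separable problems such as XOR, which is one motivation for the multi-layer models covered later in the course.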
A main objective of artificial intelligence is to build machines whose cognitive abilities match (or surpass) those of humans. This is also referred to as strong AI. One way to achieve this goal is by developing cognitive architectures that implement the algorithms used by our own brains. The success of such an approach relies on a continuous interplay between AI and neuroscience.
In this course, we will explore how computational models, particularly neural networks, can yield new insights about the mechanisms that give rise to natural intelligence and provide us with the tools to model cognitive processes in artificial systems.
The course consists of different components: