Understanding how mind emerges from matter is one of the great remaining questions in science. The Artificial Cognitive Systems lab studies the computational principles that underlie natural intelligence and uses these principles to develop more capable and efficient intelligent machines.
Our lab is located in the Spinoza building, Montessorilaan 3, Nijmegen, The Netherlands. When entering the Spinoza building, proceed to room B.00.93A on the ground floor of the B wing (the low-rise part of the building). Please contact the front desk for more specific directions once you arrive. You can reach the Spinoza building via public transport.
I am interested in the theoretical and computational principles that allow the brain to generate optimal behavior based on sparse reward signals provided by the environment. We create biologically plausible neural network models that further our understanding of natural intelligence and provide a route towards general-purpose intelligent machines. You may find my curriculum vitae here.
I am a cognitive scientist with a background in computational modelling and computational neuroscience. I am interested in human cognition in the context of interaction with different artificial cognitive systems (including robots and smart vehicles). To me, such agents currently provide some of the most exciting opportunities to further our understanding of cognition, both by studying how humans interact with them and by validating models of human cognition through implementation in artificial agents that can interact with humans.
My work thus spans a range of disciplines, from theoretical cognitive science (in particular theories of embodiment and how these relate to machine intelligence), through language and concept grounding, to (neuro)computational models of cognitive mechanisms and practical applications in, for example, autonomous vehicles and robots for therapy for children with autism spectrum disorder.
I am an assistant professor in AI. My areas of expertise are probabilistic machine learning and theoretical neuroscience. In my work I design probabilistic models of the human brain based on deep neural networks. I am also active in pure machine learning research, especially in the field of variational inference and optimal transport.
The aim of my work is to improve our understanding of the large-scale organization of brain function and its variability with advancing age and across individuals. I approach this aim from different angles in two interconnected research lines. One line is focused on identifying how different brain regions communicate within functional brain networks and how and why that functional organization is altered with age. The second line revolves around understanding the way the brain integrates and segments information over time by combining naturalistic stimuli with innovative analysis methods.
My primary research interest is in developing computational models of (ultra-high-field) fMRI and MEG data to characterize the relationship between cognitive processes and brain connectivity. Besides brain connectivity, I am also interested in neural coding, unsupervised feature learning and deep learning.
I am working on the NEuronal STimulatiOn for Recovery of function (NESTOR) project with Richard van Wezel and Marcel van Gerven, developing cortical implants to restore sight in the blind. In this project, I develop computer vision models that transform camera input into meaningful phosphene patterns, build AR/VR simulations of phosphene vision, and run psychophysical experiments.
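The core of such a simulation is mapping camera input onto a pattern of phosphenes (the dots of light evoked by cortical stimulation). As an illustration only, here is a minimal sketch of one possible approach, assuming a regular phosphene grid and Gaussian-shaped phosphenes; real implants have irregular, retinotopically mapped layouts, and the function and parameter names are hypothetical:

```python
import numpy as np

def simulate_phosphenes(frame, grid=(16, 16), size=256, sigma=4.0):
    """Render a grayscale frame as a grid of Gaussian phosphenes.

    frame: 2D array of intensities in [0, 1].
    grid:  phosphene count along each axis (simplifying assumption:
           a regular grid; actual electrode layouts are irregular).
    """
    h, w = frame.shape
    gy, gx = grid
    # Average intensity within each grid cell drives phosphene brightness.
    cells = frame[:h - h % gy, :w - w % gx]
    cells = cells.reshape(gy, h // gy, gx, w // gx).mean(axis=(1, 3))

    ys, xs = np.mgrid[0:size, 0:size]
    out = np.zeros((size, size))
    for i in range(gy):
        for j in range(gx):
            cy = (i + 0.5) * size / gy  # phosphene centre (pixels)
            cx = (j + 0.5) * size / gx
            out += cells[i, j] * np.exp(
                -((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma ** 2))
    return np.clip(out, 0.0, 1.0)
```

In practice, the "computer vision model" part of the project replaces the naive cell-averaging step: a learned model decides which scene features are worth spending the limited number of phosphenes on.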
I am interested in structural and functional brain connectivity. In particular, I study different (probabilistic) generative models and develop techniques to compute them efficiently. Two central themes in my research are the integration of different imaging modalities (e.g. fMRI and dMRI) and the explicit modeling of uncertainty in connectivity estimates.
The aim of my work is to understand the computational processes by which the brain and artificial agents can efficiently and robustly derive meaning from the world around us. To gain insight into the intricate system that enables us to see, my research advances along two interconnected lines of research: machine learning for discovery in neuroimaging data, and deep neural network modelling. This interdisciplinary work combines machine learning, computational neuroscience, and computer vision. It is therefore at the heart of the emerging field of cognitive computational neuroscience.
I am interested in neuroscience-inspired artificial intelligence and machine learning approaches for perception and action. My goal is to develop algorithms that allow robots to perceive and act with their body as humans do, and at the same time to disentangle how animals construct their self-representation through sensorimotor learning. In particular, I focus on variational learning, probabilistic deep learning, predictive coding and active inference.
My research interests relate to spiking and recurrent neural networks, and to taking inspiration from these models both for neuroscientific insights and for better ML methods. In particular, I think about the encoding of information in spike timing, the utility of firing rate trajectories, and more. Beyond this, I'm interested in the functional and computational benefits that emerge in systems that are constrained as biology is (e.g. in energy, physical resources, etc.).
Using neural network modelling and psychophysics, I seek to understand the essential computations underlying perception. What kind of network organization leads to human-like perceptual processing?
In my research, I investigate the computational mechanisms underlying the neural encoding of sound location in naturalistic spatial hearing in normal and hearing-impaired listeners. To achieve this, I use an interdisciplinary approach combining cognitive neuroscience and computational modeling: I develop neurobiologically inspired deep neural network models of sound location processing in subcortical auditory structures and the auditory cortex. Building on the resulting insights, I aim to optimize signal processing algorithms in cochlear implants to boost neural spatial auditory processing in cochlear implant users.
My research interest is in the application of AI and ML techniques to self-driving vehicles. Applications include scenario-based safety assessment using naturalistic driving data, prediction of road user behaviour, environmental perception, and working towards controllable, explainable and responsible AI.
I am interested in the biological constraints and environmental pressures that drive organisation and information processing in the visual cortex. What aspects of the biological substrate play a key role in the formation of neural circuits and visual function? What visual representations do agents learn in a context in which they actively have to engage with their environment? In order to answer these questions, I aim to develop biologically-plausible neural network models that can solve visual problems in ecologically valid environments.
Can we find a generic method to recognize behavior in multi-modal online data streams, independent of domain, species and sensors? Can we tune learned models to perform reliably in specific setups where only a limited amount of training data is available? These are the questions I will address during my PhD research.
I am a physician, working partly in the Human Genetics department of the Radboudumc and partly in this lab. As part of my PhD, I am working on applying artificial intelligence in our clinical practice. In this way, I hope to improve clinical care for our patients and make the lives of doctors easier. I will mostly be working on facial recognition and on applying machine learning and probabilistic programming to phenotypic data, to help diagnose patients and to see whether we can predict the outcome of genetic tests.
The visual cortex represents stimuli in a complex and nonlinear way. I want to gain a better understanding of the representations and the dynamics of complex neuronal activity in the visual cortex. I aim to do this by developing advanced and biologically plausible computer models that generate predictions of neural activity recorded in awake monkeys.
How does the brain extract complex features and concepts from the information entering our senses? I am particularly interested in how individual neurons, forming a complex network, can perform this task. I apply computational models to experimental data of high temporal resolution to unravel the dynamics of the mechanisms involved in this process.
I apply deep neural networks to affective computing for use in robotics, focusing on methods for the interpretability of black-box machine learning algorithms and aiming to improve human-robot interaction.
I focus on encoding and decoding models. One major topic is identifying biologically plausible feature transformations by investigating to what extent deep learning can predict human perceptual processing. I also work on the optimisation of this mapping and its inversion using statistical machine learning techniques.
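The typical shape of such an encoding model is a regularised linear mapping from stimulus features (e.g. deep network activations) to measured brain responses. As a hedged sketch of that idea, not of any specific pipeline used in the lab, a closed-form ridge regression can be written in a few lines; all names here are illustrative:

```python
import numpy as np

def fit_ridge(X, Y, lam=1.0):
    """Closed-form ridge regression: W = (X'X + lam*I)^-1 X'Y.

    X: (n_samples, n_features) stimulus features, e.g. DNN activations.
    Y: (n_samples, n_voxels) measured brain responses.
    lam: regularisation strength; in practice chosen by cross-validation.
    """
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

def predict(X, W):
    """Predicted responses for new stimuli."""
    return X @ W
```

Decoding then inverts this mapping, e.g. by Bayesian inversion of the fitted encoding model, to reconstruct stimulus features from observed brain activity.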
Visual sensory information, as it arrives on our retina, largely consists of partially hidden objects. Instead of perceiving them as fragmented, we perceive them as complete objects. I am interested in how the brain achieves this amodal completion, and how it (and its underlying neural mechanism) relates to other phenomena such as modal completion and imagery.
Restoring some visual perception in blindness, using intelligent technology? Thanks to current developments in neuro-technology and artificial intelligence, this is becoming a very realistic scenario! My research aims to optimise prosthetic vision by developing clever models that can capture our rather complex visual environment into meaningful stimulation patterns, bridging the gap between human and computer vision.
We are interested in the mechanisms that allow humans to solve complex problems that require closing of the perception-action cycle. We design cognitively challenging tasks and have human participants execute them to investigate ensuing neurobehavioural responses. For the analysis of neurobehavioural data, we develop new computational models of human brain function and sophisticated machine learning techniques for large-scale neural data analysis.
Next to understanding the empirical basis of complex problem solving, we are interested in the theoretical underpinnings of adaptive behaviour. Specifically, we ask whether computational models that are rooted in AI can provide an account of the learning, inference and control problems that are solved by the human brain. To address this question, we develop new learning algorithms and investigate adaptive behaviour in artificial agents.
Machine learning is a key component in the development of new assistive technology and neurotechnology. We push the state of the art by developing new algorithms that are able to monitor and interpret neurobehavioural data. We also develop new algorithms that allow the manipulation of neural processes to restore or augment cognitive function.
We have several projects available that can be theoretical, empirical or applied in flavor. Theoretical projects focus on new developments in machine learning (neural networks, Bayesian statistics, reinforcement learning) and computational modeling of human brain function. Empirical projects focus on developing experiments to study the brain at work in naturalistic settings, where data is acquired using sophisticated recording techniques. To interpret these data, we develop and make use of advanced models and algorithms for large-scale data analysis. Applied projects focus on the development and testing of new algorithms that drive the development of the next generation of intelligent machines. Example application domains are neurotechnology, healthcare, artificial creativity, gameplay and cognitive robotics. For more details on our work, please consult the various pages on www.artcogsys.com.
We welcome motivated students with a quantitative mindset and an interest in human cognition. If you are interested, just send us an e-mail stating your background and research interests.
During the lectures, the formal concepts underlying modern neural networks will be developed, including deep neural networks and recurrent neural networks. Various classic neural network models will also be discussed, such as the (multi-layer) perceptron, Hopfield networks and Boltzmann machines. During the practicals, students will immerse themselves in the theoretical and practical aspects of neural networks. Students will implement various models using Python, a programming language that they will learn to use during the course.
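To give a flavour of the kind of model students implement, here is a minimal sketch of the classic Rosenblatt perceptron learning rule, one of the models named above. This is an illustrative example, not course material:

```python
import numpy as np

def train_perceptron(X, y, epochs=20, lr=0.1):
    """Rosenblatt perceptron: w <- w + lr * (y - yhat) * x."""
    w = np.zeros(X.shape[1] + 1)                      # last entry is the bias
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])     # append bias input
    for _ in range(epochs):
        for xi, yi in zip(Xb, y):
            yhat = 1 if xi @ w > 0 else 0             # threshold activation
            w += lr * (yi - yhat) * xi                # update only on errors
    return w

# Learn logical AND, which is linearly separable.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])
w = train_perceptron(X, y)
```

A single perceptron can only learn linearly separable functions (AND, but not XOR); overcoming this limitation is precisely what motivates the multi-layer networks treated later in the course.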
A main objective of artificial intelligence is to build machines whose cognitive abilities match (or surpass) those of humans. This is also referred to as strong AI. One way to achieve this goal is by developing cognitive architectures that implement the algorithms used by our own brains. The success of such an approach relies on a continuous interplay between AI and neuroscience.
In this course, we will explore how computational models, particularly neural networks, can yield new insights about the mechanisms that give rise to natural intelligence and provide us with the tools to model cognitive processes in artificial systems.
The course consists of different components: