Two decades into the 21st century, how close are we to a unified mathematical model of the brain? How close are we to building an artificial intelligence that can surpass it?
In this exploratory symposium, we invite submissions presenting mathematical models of brain function or computational ideas about intelligence. We give priority to models that account for neural or behavioural data, or that demonstrate their claims in simulation.
One of the major scientific projects of the 20th century was the study of computation: we learned to build devices that could carry out operations previously possible only in the human mind. This analogy has proven extremely productive, with neural and cognitive theories inspiring powerful algorithms, and vice versa, throughout the computational study of the brain and mind.
In this convention we aim to identify and develop novel computational frameworks for the study of the brain and mind, and to carry those findings back into the creation of novel algorithms for solving difficult problems and simulating intelligence.
Our content comes from four main fields: biocomputation, neural theory, cognitive science, and machine learning/artificial intelligence (AI). Each of these fields has developed its own computational language and set of concepts, yet all grapple with overlapping underlying principles.
By bringing leading researchers from these fields together in online and offline settings, we aim to build bridges between them, so that novel findings, insights and frameworks can emerge.
Biocomputation
The prevailing modern scientific paradigm of the brain is a computational one. But if the brain is a computer—which is an 'if'—it must have operating principles, abilities and limitations that are radically different to those of artificial computers. In this session, talks will explore diverse topics within quantitative neuroscience that consider the brain as a device for computation, broadly conceived.
- Professor Dan V. Nicolau Jr (King's College London)
- Yasmine Ayman (Harvard University)
- Professor Wolfgang Maass (Technische Universität Graz): Local Prediction-Learning in High-Dimensional Spaces Enables Neural Networks to Plan
- Professor Sophie Deneve (Ecole Normale Supérieure, Paris)
- Professor Christine Grienberger (Brandeis): Dendritic Computations Underlying Experience-Dependent Hippocampal Representation
- Professor Dan V. Nicolau Jr (King's College London): A Rose by Any Other Name: Towards a Mathematical Theory of the Neuroimmune System
- Dr James Whittington (Oxford/Stanford/Zyphra): Unifying the Mechanisms of the Hippocampal and Prefrontal Cognitive Maps
- Paul Haider (University of Bern): Backpropagation Through Space, Time and the Brain
- Deng Pan (Oxford): Structure Learning in the Human Hippocampus and Orbitofrontal Cortex
- Francesca Mignacco (CUNY Graduate Center & Princeton University): Nonlinear Manifold Capacity Theory with Contextual Information
- Angus Chadwick (University of Edinburgh): Rotational Dynamics Enables Noise-Robust Working Memory
- Carla Griffiths (Sainsbury Wellcome Centre): Neural Mechanisms of Auditory Perceptual Constancy Emerge in Trained Animals
- Harsha Gurnani (University of Washington): Feedback Controllability Constrains Learning Timescales of Motor Adaptation
- Arash Golmohammadi (Department for Neuro and Sensory Physiology, University Medical Center Göttingen): Heterogeneity as an Algorithmic Feature of Neural Networks
- Sacha Sokoloski (University of Tuebingen): Analytically-Tractable Hierarchical Models for Neural Data Analysis and Normative Modelling
- Alejandro Chinea Manrique de Lara (UNED): Cetacean's Brain Evolution: The Intriguing Loss of Cortical Layer IV and the Thermodynamics of Heat Dissipation in the Brain
Neural Theory
While neuroscientists have increasingly powerful deep learning models that predict neural responses, it is not clear that these models are correspondingly increasing our understanding of what neurons are actually doing. In this session, we will take a more mechanistic approach to understanding how networks of neurons afford complex computations, both by considering mechanistic neural models and by examining mathematical theories that say how neurons should behave and, crucially, why they behave that way.
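To give a flavour of what "mechanistic" means here, the sketch below (an illustration of ours with assumed parameters, not any speaker's model) simulates a small ring-attractor rate network: a transient cue creates a bump of activity that persists after the cue is removed, a classic mechanistic hypothesis for working memory.

```python
import numpy as np

# Minimal ring-attractor sketch: N rate neurons on a ring, with local
# excitation and broad inhibition. All parameters are assumed for the demo.
N = 128
theta = np.linspace(0, 2 * np.pi, N, endpoint=False)

# Translation-invariant connectivity: nearby neurons excite, distant inhibit.
dtheta = theta[:, None] - theta[None, :]
W = (-2.0 + 6.0 * np.cos(dtheta)) / N

dt, tau = 1.0, 10.0          # time step and membrane time constant (ms)
r = np.zeros(N)              # firing rates

for step in range(600):
    # Brief cue centred at theta = pi, switched off after 100 steps.
    cue = 2.0 * np.exp((np.cos(theta - np.pi) - 1.0) / 0.2) if step < 100 else 0.0
    h = W @ r + cue
    r += (dt / tau) * (-r + np.clip(h, 0.0, 10.0))   # saturating rate dynamics

# After the cue ends, the bump persists near theta = pi: a working memory.
print("bump peak (radians):", theta[np.argmax(r)])
```

A model like this makes a claim about *how* the circuit computes (recurrent connectivity sustaining an attractor state), which is the kind of hypothesis the talks below probe theoretically and experimentally.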
- Dr James Whittington (University of Oxford; Stanford University)
- Dr Francesca Mastrogiuseppe (Champalimaud Center for the Unknown)
- Professor Peter Dayan (Max Planck Institute, Tübingen): Controlling the Controller: Instrumental Manipulations of Pavlovian Influences via Dopamine
- Professor Mackenzie Mathis (EPFL): Learnable Neural Dynamics
- Professor Athena Akrami (UCL): Circuits and Computations for Learning and Exploiting Sensory Statistics
- Professor Nicolas Brunel (Duke): Roles of Inhibition in Shaping the Response of Cortical Networks
- Dr Sophia Sanborn (Science): Symmetry and Universality
- Dr Lea Duncker (Stanford): Evaluating Dynamical Systems Hypotheses Using Direct Neural Perturbations
- Dr Kris Jensen (UCL): An Attractor Model of Planning in Frontal Cortex
- Cristiano Capone (ISS): Online Network Reconfiguration: Non-Synaptic Learning in RNNs
- Sam Hall-McMaster (Harvard University): Neural Prioritization of Past Solutions Supports Generalization
- Alexander Mathis (EPFL): Modeling Sensorimotor Circuits with Machine Learning: Hypotheses, Inductive Biases, Latent Noise and Curricula
- Stefano Diomedi (NRC Italy): Neural Subspaces in Three Parietal Areas During Reaching Planning and Execution
- Sofia Raglio (Sapienza): Clones of Biological Agents Solving Cognitive Tasks: Hints on Brain Computation Paradigms
- Arno Granier (Bern): Confidence Estimation and Second-Order Errors in Cortical Circuits
- Erik Hermansen (NTNU): The Ontogeny of the Grid Cell Network – Uncovering the Topology of Neural Representations
- Steeve Laquitaine (EPFL): Cell Types and Layers Differently Shape the Geometry of Neural Representations in a Biophysically Detailed Model of the Neocortical Microcircuit
- Subhadra Mokashe (Brandeis University): Competition Between Memories for Reactivation as a Mechanism for Long-Delay Credit Assignment
- Brendan A. Bicknell (UCL): Fast and Slow Synaptic Plasticity Enables Concurrent Control and Learning
- Vezha Boboeva (Sainsbury Wellcome Centre, UCL): Computational Principles Underlying the Learning of Sequential Regularities in Recurrent Networks
Cognitive Science
How should an intelligent agent behave in order to best realize its goals? What inferences should it draw, and what actions should it take, to solve an important computational task? Cognitive science aims to answer these questions at an abstract computational level, using tools from probability theory, statistical inference and elsewhere.
In this session we will discuss how such optimal behavior should change under different conditions of uncertainty, background knowledge, multiple agents, or constraints on resources. These analyses can be used to understand human behavior in the real world or the lab, as well as to build artificial agents that learn robust and generalizable world models from small amounts of data.
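As a toy illustration of this computational-level recipe (our sketch with assumed numbers, not any speaker's model), the snippet below implements an agent that updates its beliefs about the world with Bayes' rule after observing a noisy cue, then takes the action with the highest expected utility:

```python
import numpy as np

# Two possible world states (reward at site 0 or site 1), a noisy cue, and
# two actions (go to site 0 or site 1). All numbers are assumed for the demo.
prior = np.array([0.5, 0.5])           # P(state) before observing anything

likelihood = np.array([[0.8, 0.2],     # P(cue | state); rows index the cue,
                       [0.2, 0.8]])    # columns index the true state

utility = np.array([[1.0, -0.2],       # U(action, state); reward for visiting
                    [-0.2, 1.0]])      # the right site, small cost otherwise

def act(cue: int) -> int:
    """Bayes' rule for the posterior, then maximize expected utility."""
    posterior = likelihood[cue] * prior
    posterior /= posterior.sum()
    return int(np.argmax(utility @ posterior))

for cue in (0, 1):
    print(f"cue={cue} -> choose site {act(cue)}")
```

Richer versions of the same recipe, with structured priors, sequential decisions, and constraints on resources, connect this abstract level of analysis to the behavioral findings discussed in this session.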
- Dr Ruairidh Battleday (Harvard/MIT)
- Dr Antonella Maselli (NRC Italy)
- Professor Anne Collins (UC Berkeley): Pitfalls and Advances in Computational Cognitive Modeling
- Dr Giovanni Pezzulo (National Research Council of Italy, Rome): Embodied Decision-Making and Planning
- Professor Bill Thompson (University of California, Berkeley): Interactive Discovery of Program-like Social Norms
- Professor Dagmar Sternad (Northeastern): Human Control of Dynamically Complex Objects: Predictability, Stability and Embodiment
- Professor Samuel McDougle (Yale): Abstractions in Motor Memory and Planning
- Dr Fred Callaway (NYU / Harvard): Cultural Evolution of Compositional Problem Solving
- Dr Maria Eckstein (DeepMind): Understanding Human Learning and Abstraction Using Cognitive Models and Artificial Neural Networks
- Nora Harhen (UC Irvine): Developmental Differences in Exploration Reveal Differences in Structure Inference
- Simone D'Ambrogio (Oxford): Discovery of Cognitive Strategies for Information Sampling with Deep Cognitive Modelling and Investigation of Their Neural Basis
- Gaia Molinaro (UC Berkeley): Latent Learning Progress Guides Hierarchical Goal Selection in Humans
- Lucy Lai (Harvard): Policy Regularization in the Brain Enables Robustness and Flexibility
- Roey Schurr (Harvard): Dynamic Computational Phenotyping of Human Cognition
- Yulin Dong (Peking): Optimal Mental Representation of Social Networks Explains Biases in Social Learning and Perception
- Antonino Visalli (Padova): Extensions of the Hierarchical Gaussian Filter to Wiener Diffusion Processes
- Frank Tong (Vanderbilt): Improved Modeling of Human Vision by Incorporating Robustness to Blur in Convolutional Neural Networks
- Lance Ying (Harvard): Grounding Language about Belief in a Bayesian Theory-of-Mind
- Jorge Eduardo Ramírez-Ruiz (Universitat Pompeu Fabra): The Maximum Occupancy Principle (MOP) as a Generative Model of Realistic Behavior
- Rory John Bufacchi (Chinese Academy of Sciences): Egocentric Value Maps of the Near-Body Environment
- Matteo Alleman (Columbia): Modeling Behavioral Imprecision From Neural Representations
- Colin Conwell (Johns Hopkins): Is Visual Cortex Really "Language-Aligned"? Perspectives from Model-to-Brain Comparisons in Humans and Monkeys on the Natural Scenes Dataset
- Ryan Low (UCL): A Normative Account of the Psychometric Function and How It Changes with Stimulus and Reward Distributions
Artificial Intelligence
Machine learning and artificial intelligence (AI) aim to create algorithms that solve difficult problems and simulate complex intelligent behavior. Many of these algorithms are based on findings and theory from the study of the brain and mind.
Recent rapid advances in these fields have seen the creation of algorithms and agents that can—finally—solve complex real-world problems across a wide range of domains. What are these advances and how can we take them further? What remains beyond their capacity and how can we overcome that? What might forever lie beyond their capabilities—or will anything?
In this session we will hear from some of the world's leading experts in academia and tech, from proponents of structure and proponents of scale, and from speakers with radical suggestions for reframing many fundamental problems of intelligence.
- Dr Ishita Dasgupta (Google DeepMind)
- Dr Ilia Sucholutsky (Princeton University)
- Dr Feryal Behbahani (Google DeepMind)
- Professor Kevin Ellis (Cornell): Doing Experiments and Acquiring Concepts Using Language and Code
- Professor Najoung Kim (BU, Google): Comparing Human and Machine Inductive Biases for Compositional Linguistic Generalization Using Semantic Parsing: Results and Methodological Challenges
- Professor Rafal Bogacz (Oxford): Modelling Diverse Learning Tasks with Predictive Coding
- Dr André Barreto (DeepMind): Generalised Policy Updates and Neuroscience
- Dr Wilka Carvalho (Harvard): Predictive Representations: Building Blocks of Intelligence
- Quentin Ferry (MIT): Emergence and Function of Abstract Representations in Self-Supervised Transformers
- Michael Spratling (University of Luxembourg): A Margin-Based Replacement for Cross-Entropy Loss that Improves the Robustness of Deep Neural Networks on Image Classification Tasks
- Luke Eilers (University of Bern): A Generalized Neural Tangent Kernel for Surrogate Gradient Learning
- Samuel Lippl (Columbia University): The Impact of Task Structure, Representational Geometry and Learning Mechanism on Compositional Generalization
- Anita Keshmirian (Ludwig Maximilian University of Munich): Investigating Causal Judgments in Humans and Large Language Models
- Sunayana Rane (Princeton): Can Generative Multimodal Models Count to Ten?
- Michael Lepori (Brown): A Mechanistic Analysis of Same-Different Relations in ViTs
- Paul Riechers (Beyond Institute for Theoretical Science; BITS): Computational Mechanics Predicts Internal Representations of Transformers
- Aly Lidayan (UC Berkeley): RL Algorithms Are BAMDP Policies: Understanding Exploration, Intrinsic Motivation and Optimality
- Nasir Ahmad (Donders Institute for Brain, Cognition and Behaviour): Correlations are Ruining your Gradient Descent
- Motahareh Pourrahimi (McGill; Mila): Human-Like Behavior and Neural Representations Emerge in a Neural Network Trained to Search for Natural Objects from Pixels
- Pablo Lanillos (Spanish National Research Council): Object-Centric Reasoning and Control from Pixels
- Chiara Mastrogiuseppe (Universitat Pompeu Fabra): Controlled Maximal Variability Leads to Reliable Performance in Recurrent Neural Networks
Tues 28th May 2024 (UTC+1)
Weds 29th May 2024 (UTC+1)
Thurs 30th May 2024 (UTC+1)
Fri 31st May 2024 (UTC+1)