Are world models necessary to achieve human-level agents, or is there a model-free short-cut?
Our new #ICML2025 paper tackles this question from first principles, and finds a surprising answer: agents _are_ world models… 🧵
arxiv.org/abs/2506.01622
04.06.2025 15:48 · 42 likes · 15 reposts · 2 replies · 4 quotes
Learning Dynamics of RNNs in Closed-Loop Environments
Recurrent neural networks (RNNs) trained on neuroscience-inspired tasks offer powerful models of brain computation. However, typical training paradigms rely on open-loop, supervised settings, whereas ...
All our motor control modelling efforts focus on closed-loop systems for this reason:
"...closed-loop and open-loop training produce fundamentally different learning dynamics, even when using identical architectures and converging to the same final solution."
arxiv.org/abs/2505.13567
29.05.2025 17:28 · 23 likes · 5 reposts · 0 replies · 0 quotes
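To make the open-loop/closed-loop distinction above concrete, here is a minimal, hypothetical sketch in Python/JAX (not code or tasks from the paper): the same vanilla RNN is trained once on teacher-forced, externally supplied inputs (open loop) and once with its output driving a toy first-order plant whose tracking error is fed back as the next input (closed loop). The 1-D tracking task, the plant, and all names (rnn_step, train, etc.) are made up for illustration; only the setup distinction mirrors the quoted point.

import jax
import jax.numpy as jnp

N, T, LR, STEPS = 32, 50, 1e-2, 500        # hidden units, timesteps, learning rate, training steps
k1, k2, k3 = jax.random.split(jax.random.PRNGKey(0), 3)
params = {
    "W_in":  0.5 * jax.random.normal(k1, (N, 1)),
    "W_rec": jax.random.normal(k2, (N, N)) / jnp.sqrt(N),
    "W_out": 0.5 * jax.random.normal(k3, (1, N)),
}
target = jnp.sin(jnp.linspace(0.0, 2.0 * jnp.pi, T))   # toy 1-D trajectory to track

def rnn_step(params, h, x):
    # One step of a vanilla RNN; returns new hidden state and scalar readout.
    h = jnp.tanh(params["W_rec"] @ h + params["W_in"] @ x)
    return h, (params["W_out"] @ h)[0]

def open_loop_loss(params):
    # Open loop: the input at each step is an externally supplied teacher
    # signal; the network's own output never influences what it sees next.
    def body(h, x_t):
        h, y = rnn_step(params, h, x_t[None])
        return h, y
    _, ys = jax.lax.scan(body, jnp.zeros(N), target)
    return jnp.mean((ys - target) ** 2)

def closed_loop_loss(params):
    # Closed loop: the output drives a toy first-order plant, and the plant's
    # tracking error is fed back as the next input, so gradients also flow
    # through the environment.
    def body(carry, x_t):
        h, pos = carry
        h, y = rnn_step(params, h, (x_t - pos)[None])
        pos = pos + 0.1 * y
        return (h, pos), pos
    _, trajectory = jax.lax.scan(body, (jnp.zeros(N), jnp.zeros(())), target)
    return jnp.mean((trajectory - target) ** 2)

def train(loss_fn, params):
    # Plain gradient descent; identical architecture and optimizer in both settings.
    grad_fn = jax.jit(jax.value_and_grad(loss_fn))
    for _ in range(STEPS):
        loss, grads = grad_fn(params)
        params = jax.tree_util.tree_map(lambda p, g: p - LR * g, params, grads)
    return loss

print("final open-loop loss:  ", float(train(open_loop_loss, params)))
print("final closed-loop loss:", float(train(closed_loop_loss, params)))

Both objectives can reach low error, but the gradients (and hence the learning dynamics) differ because the closed-loop loss is backpropagated through the feedback path as well as the network, which is the distinction the quoted paper studies.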
ΧΧΧΧΧ§
31.12.2024 09:54 · 1 like · 0 reposts · 0 replies · 0 quotes
AI researcher at Google DeepMind -- views on here are my own.
Interested in cognition & AI, consciousness, ethics, figuring out the future.
Theory of Neural Networks (how the heck do they work?) | Postdoc @EPFL, prev. Swartz Fellow @Harvard and PhD @FZJuelich | https://alexvanmeegen.github.io | Background art by https://bettina-hachmann.de
Mathematician at UCLA. My primary social media account is https://mathstodon.xyz/@tao . I also have a blog at https://terrytao.wordpress.com/ and a home page at https://www.math.ucla.edu/~tao/
neuro and AI (they/she)
Allen Institute for Neural Dynamics, theory lead | UW affiliate asst prof
Dreaming of a cloudy sky | Associate Prof. | Computer Science & BioFrontiers, CUBoulder | Ext. Faculty, Santa Fe Institute | www.peleglab.com
Recently a principal scientist at Google DeepMind. Joining Anthropic. Most (in)famous for inventing diffusion models. AI + physics + neuroscience + dynamical systems.
https://mega002.github.io
AI researcher @GoogleDeepMind. PhD @Caltech. Interested in autonomous exploration and self-improvement, both in humans and embodied AI agents. Views my own.
Professor at NYU; Scientific Director at Ctr for Computational Neuroscience, Flatiron Institute. Research in Computational Vision (neurons, perception, machines). Opinions my own.
Theoretical neuroscientist interested in brain-body interactions and evolution of adaptive behavior. Associate Professor at Scripps Research Institute in San Diego.
Assistant professor of computer science at Technion; visiting scholar at @KempnerInst 2025-2026
https://belinkov.com/
Associate Prof at U Penn. Learning, memory, sleep, neural network modeling...
CTO of Technology & Society at Google, working on fundamental AI research and exploring the nature and origins of intelligence.
Comp Neuro PhD Candidate @ UPenn | efficient and approximate inference in the brain | NSF Graduate Research Fellow | NYU ’17
https://jacobaparker.github.io/
Interpretable Deep Networks. http://baulab.info/ @davidbau
Reverse engineering neural networks at Anthropic. Previously Distill, OpenAI, Google Brain. Personal account.
Computational neuroscientist @princetonneuro.bsky.social deciphering natural and advancing artificial intelligence.
Scot abroad and post-doc at KCL in computational psychiatry. Interested in natural and artificial thinking and learning
https://ingrdmrtn.github.io/