I wish you were right about that, but there are still surprising numbers of radical enactive and ecological psychology types out there, in this year of our Lord 2026, who seem to strongly believe this...
01.03.2026 13:03
Likes 1 · Reposts 0 · Replies 0 · Quotes 0
What is the brain for? Active inference is widely discussed as a unifying framework for understanding brain function, yet its empirical status remains debated. Our review identifies core predictions across the action-perception cycle and evaluates their empirical support: osf.io/preprints/ps...
29.01.2026 08:28
Likes 98 · Reposts 39 · Replies 2 · Quotes 1
This is fire!!!
12.01.2026 12:09
Likes 0 · Reposts 0 · Replies 0 · Quotes 0
I really think at its heart philosophy is one giant battle, taking place over many eras and nations, between people who are basically pleasant bureaucrats and people who are sexy murder poets, and it's both super important and super boring that the pleasant bureaucrats must win.
10.11.2024 21:57
Likes 3696 · Reposts 559 · Replies 128 · Quotes 137
I am frustrated by the anti-AI obsession on this place. I understand people are annoyed by AI being imposed on us for trivial things and by the AI uber alles discourse but it really feels like older people complaining about a new technology.
19.12.2025 02:47
Likes 72 · Reposts 9 · Replies 40 · Quotes 8
Awesome work!
22.12.2025 12:20
Likes 0 · Reposts 0 · Replies 1 · Quotes 0
Self-orthogonalizing attractor neural networks emerging from the free energy principle
Attractor dynamics are a hallmark of many complex systems, including the brain. Understanding how such self-organizing dynamics emerge from first principles is crucial for advancing our understanding ...
As many of you know, I've been fascinated by brain attractor dynamics lately.
Thrilled to share a new preprint on their link to orthogonal neural representations, co-authored with Karl Friston:
arxiv.org/abs/2505.22749
- with implications for both neuroscience & AI!
First in a series - stay tuned!
30.05.2025 07:14
Likes 27 · Reposts 9 · Replies 3 · Quotes 0
YouTube video by The Dissenter
#1000 Karl Friston: The Free Energy Principle and Active Inference: From Physics to Mind
In episode 1000, I talk with Dr. Karl Friston about the Free Energy Principle and active inference, from #Physics to mind. #CognitiveScience #Science
youtu.be/2BzmKnDtCCI
27.11.2025 12:05
Likes 4 · Reposts 2 · Replies 1 · Quotes 0
Honored to speak at Ottawa about how Canada can lead in #NeuroAI. With world-class talent, trusted institutions, & sustainable infrastructure, we can build a federated approach to AI that protects mental health & strengthens our society. Thanks @braincanada.bsky.social for the invitation!
20.11.2025 21:09
Likes 21 · Reposts 2 · Replies 1 · Quotes 0
OSF
New preprint with super @manuelbaltieri.bsky.social !
Mathematical approaches to the study of agents
osf.io/preprints/ps...
21.11.2025 12:44
Likes 8 · Reposts 4 · Replies 0 · Quotes 0
YouTube video by Machine Learning Street Talk
Why Intelligence Can't Get Too Large (Karl Friston)
Karl Friston in #mlst
Philosophy done right! So many references, obviously @drmichaellevin.bsky.social mentioned #academicsky #philosophy #neuroscience #strangeloop
youtu.be/PNYWi996Beg
11.09.2025 15:05
Likes 6 · Reposts 2 · Replies 0 · Quotes 0
Karl Friston & Mark Solms: Is it Possible to Engineer Artificial Consciousness?
Spotify video
Super interesting, thought-provoking conversation between Mark Solms and Karl Friston open.spotify.com/episode/151a...
12.09.2025 06:36
Likes 26 · Reposts 4 · Replies 3 · Quotes 1
What drives behavior in living organisms? And how can we design artificial agents that learn interactively?
📢 To address these, the Sensorimotor AI Journal Club is launching the "RL Debate Series"!
w/ @elisennesh.bsky.social, @noreward4u.bsky.social, @tommasosalvatori.bsky.social
🧵 [1/5]
17.09.2025 16:31
Likes 36 · Reposts 10 · Replies 2 · Quotes 5
Sorry to hear about your negative experience! My pleasure, don't hesitate to write me if you have any questions or want to discuss specific points :)
15.09.2025 18:22
Likes 1 · Reposts 0 · Replies 0 · Quotes 0
OSF
Yes! While Warren and I have our disagreements, I like his work on PCT. IMO all these approaches are complementary and play together nicely. Along with friends (namely @adw.bsky.social, who bravely led the project), we penned this integrative review. Hope it's of interest:
osf.io/preprints/ps...
15.09.2025 18:16
Likes 1 · Reposts 0 · Replies 1 · Quotes 0
3. Your point about top-down causation is key. IMO one of the most interesting aspects of multi-scale formulations of active inference is precisely how it handles multi-scale system dynamics, cashing out top-down influence in terms of constraints on system dynamics in a non-reductionist way
15.09.2025 17:58
Likes 1 · Reposts 0 · Replies 1 · Quotes 0
Dynamic Markov Blanket Detection for Macroscopic Physics Discovery
The free energy principle (FEP), along with the associated constructs of Markov blankets and ontological potentials, have recently been presented as the core components of a generalized modeling metho...
2. Not much work has been done on active inference and the neural code. The key departure from RL is that active inference uses an alternative objective function (the free energy functional), which you can read as an "ontological potential function" specifying object type (arxiv.org/abs/2502.21217)
15.09.2025 17:54
Likes 1 · Reposts 0 · Replies 1 · Quotes 0
Great questions!
1. IMO active inference falls under the rubric of NeuroAI (although I'd describe myself as a non-realist about these types of physics-inspired models, and as such I'd say the FEP isn't a literal description of the brain, so it depends on how you define the scope of NeuroAI)
15.09.2025 17:52
Likes 1 · Reposts 0 · Replies 1 · Quotes 0
Filling the gaps in active inference
Here we discuss key gaps in SOTA applications of active inference in AI - and how Noumenal Labs is working to fill them.
Love a good Feyerabendian sandbox. I'd argue that they're very closely related (and indeed, that the difference is often overblown by both proponents and critics), but they're also importantly distinct. We wrote a post on this that I hope you'll find interesting: www.noumenal.ai/post/filling...
15.09.2025 10:08
Likes 0 · Reposts 0 · Replies 1 · Quotes 0
How can we study #consciousness between people, at the social level? 🧠✨ New #preprint co-led by Anne Monnier & Lena Adel: "Now is the Time: Operationalizing Generative Neurophenomenology through Interpersonal Methods" 🧵 (1/3)
08.08.2025 15:16
Likes 36 · Reposts 14 · Replies 2 · Quotes 0
Currently, using active inference at scale involves trade-offs between explainability and the ability to learn models from data. Not using overparameterized models increases model explainability and auditability, but makes learning in high dimensional and volatile environments more challenging
02.07.2025 10:35
Likes 1 · Reposts 0 · Replies 1 · Quotes 0
Filling the gaps in active inference
Here we discuss key gaps in SOTA applications of active inference in AI - and how Noumenal Labs is working to fill them.
It provides an alternative objective function that has useful properties, in particular enabling agents to balance the value of exploration and exploitation in policy selection. But IMO the differences between RL and active inference have been exaggerated a bit. See: www.noumenal.ai/post/filling...
02.07.2025 10:30
Likes 1 · Reposts 0 · Replies 1 · Quotes 0
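A minimal sketch of the exploration/exploitation balance the post above describes (my own toy example, not Noumenal's code): expected free energy scores a one-step policy as the negative sum of pragmatic value (expected log-preference over outcomes) and epistemic value (expected information gain about hidden states), so information-seeking behavior falls out of the same objective as goal-seeking.

```python
import math

def expected_free_energy(q_s, A, log_pref):
    """G = -(pragmatic + epistemic) for one policy step.
    q_s: predicted state distribution under the policy,
    A[o][s]: likelihood p(o|s), log_pref[o]: log-preference over outcomes."""
    n_o, n_s = len(A), len(q_s)
    q_o = [sum(A[o][s] * q_s[s] for s in range(n_s)) for o in range(n_o)]
    # Pragmatic value: expected log-preference of predicted outcomes.
    pragmatic = sum(q_o[o] * log_pref[o] for o in range(n_o))
    # Epistemic value: E_o[ KL( q(s|o) || q(s) ) ], expected info gain.
    epistemic = 0.0
    for o in range(n_o):
        if q_o[o] == 0.0:
            continue
        post = [A[o][s] * q_s[s] / q_o[o] for s in range(n_s)]
        epistemic += q_o[o] * sum(
            p * math.log(p / q_s[s]) for s, p in enumerate(post) if p > 0.0
        )
    return -(pragmatic + epistemic)  # lower G = better policy

q_s = [0.5, 0.5]       # uncertain about the hidden state
log_pref = [0.0, 0.0]  # no outcome preference: the purely epistemic case
informative = [[0.9, 0.1], [0.1, 0.9]]    # observations reveal the state
uninformative = [[0.5, 0.5], [0.5, 0.5]]  # observations say nothing

G_info = expected_free_energy(q_s, informative, log_pref)
G_flat = expected_free_energy(q_s, uninformative, log_pref)
```

With equal preferences, G_info comes out lower than G_flat, so the agent favors the information-seeking policy without any ad-hoc exploration bonus; once log_pref is non-uniform, the same functional trades that curiosity off against goal pursuit.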
Computers used to scream every time they connected to the Internet. They knew. They tried to warn us. We did not listen.
22.06.2025 22:15
Likes 11013 · Reposts 3721 · Replies 56 · Quotes 77
Delighted to see "A Trick of the Mind" reviewed in @theguardian.com as Book of the Day!
Also in the print edition tomorrow
www.theguardian.com/books/2025/j...
13.06.2025 11:04
Likes 60 · Reposts 17 · Replies 2 · Quotes 1
Luca M. Possati: Markov Blanket Density and Free Energy Minimization https://arxiv.org/abs/2506.05794 https://arxiv.org/pdf/2506.05794 https://arxiv.org/html/2506.05794
09.06.2025 06:16
Likes 1 · Reposts 2 · Replies 1 · Quotes 0
Elegant theoretical derivations are exclusive to physics. Right?? Wrong!
In a new preprint, we:
✅ "Derive" a spiking recurrent network from variational principles
✅ Show it does amazing things like out-of-distribution generalization
🧵 [1/n]
w/ co-lead Dekel Galor & PI @jcbyts.bsky.social
19.05.2025 06:34
Likes 36 · Reposts 13 · Replies 1 · Quotes 3
Shannon invariants: A scalable approach to information decomposition
Distributed systems, such as biological and artificial neural networks, process information via complex interactions engaging multiple subsystems, resulting in high-order patterns with distinct proper...
Preprint time:
"Shannon invariants: A scalable approach to information decomposition"
arxiv.org/abs/2504.15779
Studying information in complex systems is challenging due to difficulties in defining multivariate metrics and ensuring their scalability. This framework addresses both challenges!
23.04.2025 11:22
Likes 31 · Reposts 8 · Replies 1 · Quotes 0
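For context on the scalability problem that preprint tackles (a toy of my own using the standard Shannon quantities such decompositions build on, not the paper's new invariants): subset entropies are easy to compute from a joint distribution, but the number of subset terms grows exponentially with system size.

```python
import math
from itertools import combinations

def entropy(joint, subset):
    """Shannon entropy (in bits) of the variables in `subset`, marginalized
    from a joint distribution given as {(x1, ..., xn): probability}."""
    marg = {}
    for outcome, p in joint.items():
        key = tuple(outcome[i] for i in subset)
        marg[key] = marg.get(key, 0.0) + p
    return -sum(p * math.log2(p) for p in marg.values() if p > 0.0)

# XOR triplet: pairwise independent bits that are jointly deterministic,
# a classic example of a purely high-order interaction.
joint = {(a, b, a ^ b): 0.25 for a in (0, 1) for b in (0, 1)}

n = 3
subset_entropies = {
    s: entropy(joint, s)
    for k in range(1, n + 1)
    for s in combinations(range(n), k)
}
# Every single variable (1 bit) and every pair (2 bits) looks maximally
# random, yet the full triple also has only 2 bits: the structure lives
# entirely in the third-order term.
```

Already here there are 2^n − 1 subsets to evaluate; the preprint's pitch is precisely to avoid enumerating all of them.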