We discovered that language models leave a natural "signature" on their API outputs that's extremely hard to fake. Here's how it works.
arxiv.org/abs/2510.14086 1/
@nicolaslegrand.bsky.social
Senior Researcher in computational cognitive science @ Center for Humanities Computing, Aarhus University. Active inference - LLM - Reinforcement learning - Bayesian modelling | Creating a neural network library for predictive coding.
Online Now: Cognitive modeling of real-world behavior for understanding mental health
26.09.2025 12:40
Introducing hMFC: A Bayesian hierarchical model of trial-to-trial fluctuations in decision criterion! Now out in @plos.org Comp Bio.
led by Robin Vloeberghs with @anne-urai.bsky.social Scott Linderman
Paper: desenderlab.com/wp-content/u... Thread below.
#PsychSciSky #Neuroscience #Neuroskyence
Happy to announce our paper got accepted to #NeurIPS!
@akjagadish.bsky.social @marvinmathony.bsky.social @ericschulz.bsky.social & Tobi Ludwig
arxiv.org/abs/2502.00879
If you are concerned with performance, I also recommend checking out SleepECG (Systole uses their version of the Pan-Tompkins algorithm under the hood) : sleepecg.readthedocs.io/en/stable/
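For context on what Systole borrows from SleepECG: the classic Pan-Tompkins pipeline differentiates the ECG, squares it, integrates over a moving window, then thresholds with a refractory period. A minimal NumPy sketch of that idea (my illustration, not SleepECG's actual code; the function name, window sizes, and synthetic signal are all assumptions):

```python
import numpy as np

def detect_r_peaks(ecg, fs, win=0.15, refractory=0.2):
    """Toy Pan-Tompkins-style R-peak detector.

    Real implementations (e.g. SleepECG's) band-pass filter first;
    here we go straight to differentiate -> square -> integrate -> threshold.
    """
    diff = np.diff(ecg)                          # emphasize steep QRS slopes
    squared = diff ** 2                          # rectify and amplify
    n = int(win * fs)                            # moving-window integration
    mwi = np.convolve(squared, np.ones(n) / n, mode="same")
    thresh = 0.5 * mwi.max()                     # crude fixed threshold
    peaks, last = [], -np.inf
    for i in np.flatnonzero(mwi > thresh):       # group crossings, keep one
        if i - last > refractory * fs:           # peak per refractory window
            peaks.append(i)
        last = i
    return np.array(peaks)

# synthetic "ECG": one spike per second at 250 Hz, plus a little noise
fs = 250
sig = np.zeros(10 * fs)
beats = np.arange(fs // 2, len(sig), fs)
sig[beats] = 1.0
sig += 0.01 * np.random.default_rng(0).standard_normal(len(sig))
print(len(detect_r_peaks(sig, fs)))              # 10
```

Production detectors add adaptive thresholds and search-back; for real data, use SleepECG itself.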
16.09.2025 11:52
If you want to report bugs or ask questions, you can reach out here: github.com/LegrandNico/...
16.09.2025 11:52
Hi @koeniglab.bsky.social! Thanks for the shout-out. I created Systole while I was a postdoc in the ECG lab, but since I left a few years ago, I am no longer actively maintaining it at the moment.
16.09.2025 11:52
does someone good at coding & analysis want to work remotely w/ us in the coming few months (before end of 2025), as a paid consultant? project will be on neurofeedback (fMRI, ECoG, calcium imaging). we'll work towards developing the experiments & analysis pipelines together. if so pls DM me ur CV
01.09.2025 13:06
A Gaussian process showing that the allowed time series are forced to be compatible with data
I'm especially proud of this article I wrote about Gaussian Processes for the Recast blog!
GPs are super interesting, but it's not easy to wrap your head around them at first.
This is a medium-level (more intuition than math) introduction to GPs for time series.
getrecast.com/gaussian-pro...
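The "forced to be compatible with data" intuition is just GP conditioning: the posterior mean interpolates the observations, the posterior variance collapses toward zero at observed time points, and samples stay free elsewhere. A minimal NumPy sketch with an RBF kernel (my example, not from the blog post; all values illustrative):

```python
import numpy as np

def rbf(a, b, ls=1.0):
    """Squared-exponential (RBF) kernel between two sets of time points."""
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)

x = np.array([0.0, 1.0, 2.5, 4.0])     # observed time points
y = np.array([0.0, 1.0, -0.5, 0.3])    # observed values
xs = np.linspace(-1, 5, 200)           # prediction grid

K = rbf(x, x) + 1e-6 * np.eye(len(x))  # jitter for numerical stability
alpha = np.linalg.solve(K, y)
Ks = rbf(xs, x)

mean = Ks @ alpha                                  # posterior mean
cov = rbf(xs, xs) - Ks @ np.linalg.solve(K, Ks.T)  # posterior covariance
std = np.sqrt(np.clip(np.diag(cov), 0.0, None))

# mean passes through the data; std is ~0 near observed points and
# returns to the prior std (1.0) far away from them
```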
An illustration of a man falling out of a piece of paper, with text that says: How an academic betrayal led me to change my authorship practices.
"The day the paper was published should have been a moment of pride. Instead, it felt like a quiet erasure." #ScienceWorkingLife https://scim.ag/4p3eH5g
25.08.2025 13:24
I made this Computational Psychiatry Starter Pack a while ago and was wondering if I may be missing anyone who has joined Bluesky since?
I will add anyone who uses computational models to address questions in psychiatry research. :)
go.bsky.app/5PTy9Zj
My first first-author paper, comparing the properties of memory-augmented large language models and human episodic memory, out in @cp-trendscognsci.bsky.social!
authors.elsevier.com/a/1lV174sIRv...
Here's a quick 🧵 (1/n)
After five years of confused staring at Greek letters, it is my absolute pleasure to finally share our (with @smfleming.bsky.social) computational model of mental imagery and reality monitoring: Perceptual Reality Monitoring as Higher-Order Inference on Sensory Precision
osf.io/preprints/ps...
Our new paper is out in PNAS: "Evolving general cooperation with a Bayesian theory of mind"!
Humans are the ultimate cooperators. We coordinate on a scale and scope no other species (nor AI) can match. What makes this possible? 🧵
www.pnas.org/doi/10.1073/...
memo is a new probabilistic programming language for modeling social inferences quickly. Looks like a real advance over previous approaches: fast, python-based, easily integrated into data analysis. Super cool!
pypi.org/project/memo...
and
osf.io/preprints/ps...
Thrilled to see our TinyRNN paper in @nature! We show how tiny RNNs predict choices of individual subjects accurately while staying fully interpretable. This approach can transform how we model cognitive processes in both healthy and disordered decisions. doi.org/10.1038/s415...
02.07.2025 19:03
Interoception vs. Exteroception: Cardiac interoception competes with tactile perception, yet also facilitates self-relevance encoding https://www.biorxiv.org/content/10.1101/2025.06.25.660685v1
28.06.2025 00:15
Also in @cp-trendscognsci.bsky.social this month: a perspective and a new computational model of paranoia and persecutory delusions by @philcorlett.bsky.social www.cell.com/trends/cogni...
28.06.2025 10:21
Impressive and much-needed review on reinforcement learning models of interoception by @lilweb.bsky.social this month, out in @cp-trendscognsci.bsky.social. Will definitely have a look at this one www.cell.com/trends/cogni...
28.06.2025 10:16
We need your help!!! 🧠🧪
If you are human, you fall asleep at least once a day! What happens in your mind then?
Scientists actually know very little about this private moment.
We propose a 20-min survey to get as much data as possible!
Here is the link:
redcap.link/DriftingMinds
The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity
Parshin Shojaee, Iman Mirzadeh, Keivan Alizadeh, Maxwell Horton, Samy Bengio, Mehrdad Farajtabar (Apple)
Abstract: Recent generations of frontier language models have introduced Large Reasoning Models (LRMs) that generate detailed thinking processes before providing answers. While these models demonstrate improved performance on reasoning benchmarks, their fundamental capabilities, scaling properties, and limitations remain insufficiently understood. Current evaluations primarily focus on established mathematical and coding benchmarks, emphasizing final answer accuracy. However, this evaluation paradigm often suffers from data contamination and does not provide insights into the reasoning traces' structure and quality. In this work, we systematically investigate these gaps with the help of controllable puzzle environments that allow precise manipulation of compositional complexity while maintaining consistent logical structures. This setup enables the analysis of not only final answers but also the internal reasoning traces, offering insights into how LRMs "think". Through extensive experimentation across diverse puzzles, we show that frontier LRMs face a complete accuracy collapse beyond certain complexities. Moreover, they exhibit a counterintuitive scaling limit: their reasoning effort increases with problem complexity up to a point, then declines despite having an adequate token budget. By comparing LRMs with their standard LLM counterparts under equivalent inference compute, we identify three performance regimes: (1) low-complexity tasks where standard models surprisingly outperform LRMs, (2) medium-complexity tasks where additional thinking in LRMs demonstrates advantage, and (3) high-complexity tasks where both models experience complete collapse.
We found that LRMs have limitations in exact computation: they fail to use explicit …
If I have time I'll put together a more detailed thread tomorrow, but for now, I think this new paper about limitations of Chain-of-Thought models could be quite important. Worth a look if you're interested in these sorts of things.
ml-site.cdn-apple.com/papers/the-i...
Led by postdoc Doyeon Lee and grad student Joseph Pruitt, our lab has a new Perspectives piece in PNAS Nexus:
"Metacognitive sensitivity: The key to calibrating trust and optimal decision-making with AI"
academic.oup.com/pnasnexus/ar...
With co-authors Tianyu Zhou and Eric Du 1/
A schematic of our method. On the left are shown Bayesian inference (visualized using Bayes' rule and a portrait of the Reverend Bayes) and neural networks (visualized as a weight matrix). Then, an arrow labeled "meta-learning" combines Bayesian inference and neural networks into a "prior-trained neural network", described as a neural network that has the priors of a Bayesian model, visualized as the same portrait of Reverend Bayes but made out of numbers. Finally, an arrow labeled "learning" goes from the prior-trained neural network to two examples of what it can learn: formal languages (visualized with a finite-state automaton) and aspects of English syntax (visualized with a parse tree for the sentence "colorless green ideas sleep furiously").
Paper out in Nature Communications!
Bayesian models can learn rapidly. Neural networks can handle messy, naturalistic data. How can we combine these strengths?
Our answer: Use meta-learning to distill Bayesian priors into a neural network!
www.nature.com/articles/s41...
1/n
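A toy version of the distillation idea (my illustration, not the paper's setup): sample tasks from a Bayesian prior, then train a model to predict each task's latent from its data. Under squared loss the optimal predictor is the posterior mean, so the trained model effectively inherits the prior. A Beta-Bernoulli case where the posterior mean is linear in the data, so plain least squares stands in for the network:

```python
import numpy as np

rng = np.random.default_rng(0)
a, b, n = 2.0, 2.0, 10                # Beta(a, b) prior over a coin's bias,
                                      # n flips observed per task

# "meta-training" set: many tasks sampled from the prior predictive
theta = rng.beta(a, b, size=200_000)  # latent bias of each task's coin
heads = rng.binomial(n, theta)        # observed head count per task

# fit a predictor of theta from heads; under squared loss the optimum is
# E[theta | heads] -- exactly the Bayesian posterior mean, which for
# Beta-Bernoulli is (heads + a) / (n + a + b), linear in heads
X = np.stack([heads, np.ones_like(heads)], axis=1).astype(float)
w, c = np.linalg.lstsq(X, theta, rcond=None)[0]

for h in (0, 5, 10):
    learned = w * h + c
    bayes = (h + a) / (n + a + b)     # closed-form posterior mean
    print(f"h={h:2d}  learned={learned:.3f}  bayes={bayes:.3f}")
```

The paper's contribution is doing this with real neural networks and rich priors (formal languages, syntax); the mechanism sketched here, regression on prior-sampled tasks, is the common core.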
How does the brain stop thoughts? Find out in my article in @natrevneuro.nature.com with Subbu Subbulakshmi & Maite Crespo-Garcia www.nature.com/articles/s41... that integrates 25 yrs of psychology and neuroscience on this vital function. @mrccbu.bsky.social #neuroskyence #neuroscience
20.05.2025 09:36
Elegant theoretical derivations are exclusive to physics. Right?? Wrong!
In a new preprint, we:
✅ "Derive" a spiking recurrent network from variational principles
✅ Show it does amazing things like out-of-distribution generalization
[1/n] 🧵
w/ co-lead Dekel Galor & PI @jcbyts.bsky.social
Redefining respiratory sinus arrhythmia as respiratory heart rate variability: an international Expert Recommendation for terminological clarity
#interoception #neuroskyence
rdcu.be/elzfV
First draft online version of The RLHF Book is DONE. Recently I've been creating the advanced discussion chapters on everything from Constitutional AI to evaluation and character training, but I also sneak in consistent improvements to the RL specific chapter.
rlhfbook.com
New preprint! How do we know what is real? so...
"Unreal? A Behavioral, Physiological & Computational Model of the Sense of Reality" is out!
The result of 4 years of incredible teamwork!
www.biorxiv.org/content/10.1...