Excited to be with my team at #ccn2025 this week! I'll be presenting part of the workshop on Thursday. Come say hi!
11.08.2025 20:11
@lucyzmf.bsky.social
PhD in brain decoding @ENS
🔥 "Brain-to-Text Decoding" is now out on arXiv: arxiv.org/abs/2502.17480
Our paper from AI at Meta and @bcbl.bsky.social presents Brain2Qwerty, an AI model that decodes text from non-invasive recordings of the brain.
A detailed thread below 🧵 1/7
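To make the setup concrete: the decoder's input is a short window of non-invasive brain recordings (e.g. MEG) around typing, and its output is a distribution over candidate characters. The toy PyTorch module below only illustrates this input/output framing; its layers, sizes, and character-set size are made-up placeholders, not the Brain2Qwerty architecture reported in the paper.

import torch
import torch.nn as nn

class ToyKeystrokeDecoder(nn.Module):
    # Toy framing only: a window of sensor signal in, character logits out.
    # Layers and sizes are illustrative placeholders, NOT the Brain2Qwerty model.
    def __init__(self, n_channels=306, n_chars=29):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_channels, 64, kernel_size=7, padding=3),  # temporal convolution over sensors
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # average over time
            nn.Flatten(),
            nn.Linear(64, n_chars),   # logits over a hypothetical character set
        )
    def forward(self, x):  # x: (batch, n_channels, n_times)
        return self.net(x)

model = ToyKeystrokeDecoder()
window = torch.randn(8, 306, 100)  # a batch of hypothetical MEG windows around keystrokes
print(model(window).shape)         # torch.Size([8, 29]): one score per candidate character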
This research was made possible by our great team @jarodlevy.bsky.social, Stéphane d'Ascoli, Jérémy Rapin, F.-Xavier Alario, Pierre Bourdillon, Svetlana Pinet, @jeanremiking.bsky.social at AI at Meta, @bcbl.bsky.social, @cnrs.fr, @psl-univ.bsky.social, and Hôpital Fondation Rothschild!
8/8
Interested in efficiently decoding these brain signals? Go check out our companion AI paper:
ai.meta.com/research/pub...
7/8
Result 4: This dynamic code is observed for all levels of the language hierarchy. Critically, it is level-dependent: context representations "move" more slowly in brain activity than letter representations, allowing a seamless unfolding of language representations.
6/8
Result 3: How does the brain avoid the interference induced by such overlapping representations?
Thanks to a dynamic code! The representations of successive letters continuously move across different neural subspaces.
5/8
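A standard way to expose such a moving code is temporal-generalization analysis: train a decoder at one time sample and test it at every other. A stable code generalizes far from the diagonal; a dynamic code only near it. Below is a minimal sketch using scikit-learn and MNE-Python on random placeholder data; the shapes, labels, and scoring are assumptions for illustration, not the paper's analysis.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from mne.decoding import GeneralizingEstimator, cross_val_multiscore

rng = np.random.default_rng(0)
X = rng.standard_normal((120, 60, 50))  # hypothetical epochs x sensors x time samples
y = rng.integers(0, 2, 120)             # e.g. a binary feature of the current letter
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
gen = GeneralizingEstimator(clf, scoring="roc_auc", n_jobs=1)
# scores[i, j]: decoder trained at time i, tested at time j, averaged over folds.
# A band hugging the diagonal (train time ~ test time) is the signature of a dynamic code;
# a broad square of high scores would indicate a stable, non-moving code.
scores = cross_val_multiscore(gen, X, y, cv=5).mean(axis=0)
print(scores.shape)  # (50, 50)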
Result 2: Paradoxically, the representations of letters last much longer than their corresponding actions, resulting in a representational overlap of successive letters in brain activity.
4/8
Result 1: We find that, before typing each word, brain activity is marked by a top-down sequence of representations: context-level representations can be decoded before those of words, syllables, and letters.
3/8
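One simple way to quantify such an ordering is to estimate, for each linguistic level, the first time its decoding score stays above chance, and then compare the onsets. The sketch below runs on synthetic, idealized score curves; the onset times, threshold rule, and level names are placeholders for illustration, not the paper's results.

import numpy as np

times = np.linspace(-1.0, 0.5, 151)  # seconds relative to a hypothetical word onset

def onset_latency(scores, times, chance=0.5, min_samples=5):
    # First time at which the decoding score stays above chance for min_samples in a row.
    above = scores > chance
    for i in range(len(times) - min_samples + 1):
        if above[i:i + min_samples].all():
            return times[i]
    return np.nan

# Synthetic, idealized score curves: each level becomes decodable at a made-up time t0.
levels = {"context": -0.8, "word": -0.5, "syllable": -0.3, "letter": -0.1}
curves = {name: 0.5 + 0.1 * (times > t0) for name, t0 in levels.items()}
for name, scores in curves.items():
    print(f"{name:9s} decodable from ~{onset_latency(scores, times):+.2f} s")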
Method: We used MEG to record the brain activity of participants while they typed sentences. 
Using linear decoding, we then evaluated whether the brain represents a hierarchy of linguistic features before each word is typed.
2/8
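For readers who want the flavour of this analysis, here is a minimal time-resolved linear-decoding sketch with scikit-learn and MNE-Python. The data are random placeholders and the feature is an arbitrary binary label; the actual paper's preprocessing, features, and scoring are not reproduced here.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from mne.decoding import SlidingEstimator, cross_val_multiscore

rng = np.random.default_rng(0)
X = rng.standard_normal((120, 60, 50))  # hypothetical MEG epochs x sensors x time samples
y = rng.integers(0, 2, 120)             # e.g. one binary linguistic feature per epoch
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
decoder = SlidingEstimator(clf, scoring="roc_auc", n_jobs=1)
# One linear classifier fitted independently at every time sample,
# giving a cross-validated decoding score as a function of time.
scores = cross_val_multiscore(decoder, X, y, cv=5).mean(axis=0)
print(scores.shape)  # (50,): one AUC per time sample relative to word onset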
Our paper from AI at Meta and @bcbl.bsky.social is out on arXiv 🔥
"From Thought to Action: How a Hierarchy of Neural Dynamics Supports Language Production"
arxiv.org/abs/2502.07429
How does the brain transform a thought into a sequence of motor actions?
Results summarized in 🧵 1/8