
πŸŒ™ Lune Bellec

@lune-bellec.bsky.social

πŸ³οΈβ€πŸŒˆ πŸ³οΈβ€βš§οΈ πŸŒˆβ€β™ΎοΈ Prof in psychology at UniversitΓ© de MontrΓ©al. Founder of the https://cneuromod.ca project: breeding individual πŸ€– to mimic individual human 🧠. Delegate for digital health at the Montreal Geriatrics Institute https://criugm.qc.ca/

3,380 Followers  |  1,917 Following  |  193 Posts  |  Joined: 13.11.2024

Latest posts by lune-bellec.bsky.social on Bluesky

The Biological Psychiatry family of journals is now officially on Bluesky!

Follow us for the latest research in psychiatric neuroscience, cognitive neuroimaging, and global open science from our three leading journals.

25.09.2025 09:36 β€” πŸ‘ 65    πŸ” 22    πŸ’¬ 0    πŸ“Œ 0

And to match their spirit of openness, we’ve released the code, containers, and data. Anyone can rerun the entire analysis.

Co-lead authors: @clarken.bsky.social and @surchs.bsky.social
Paper: doi.org/10.1093/giga...
Github: github.com/SIMEXP/autis...
Zenodo archive: doi.org/10.5281/zeno...

End/🧡

08.09.2025 14:04 β€” πŸ‘ 4    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

The signature was discovered in a balanced cohort of ~1,000 individuals and replicated in an independent sample (thanks to the wonderful ABIDE I & II participants and the researchers who shared their data πŸ’œπŸ’œπŸ’œ). 5/🧡

08.09.2025 14:03 β€” πŸ‘ 2    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
Scatterplot showing individual risk (positive predictive value) versus prevalence in the general population for different autism risk markers. Rare monogenic syndromes (green diamonds) confer very high risk but are extremely rare; common genetic variants (yellow triangles) are widespread but confer very low risk; copy number variants (pink triangles) sit in between. Previous imaging-based models (red dots) achieve modest risk. The new High-Risk Signature (orange circle) replicates across datasets, confers a sevenfold increased risk of autism, and is present in about 1 in 200 people.

A positive result means someone is about seven times more likely to actually have an autism diagnosis. This rivals the best imaging markers, while still being found in about 1 in 200 people in the general population. 4/🧡

08.09.2025 14:03 β€” πŸ‘ 4    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
Diagram comparing how different autism risk markers identify individuals. Each circle represents the overlap between people labeled by a marker (grey), people with autism (purple), and those labeled who actually have autism (blue). Monogenic syndromes label very few people but with high accuracy; existing imaging models label many people but with low accuracy; the High-Risk Signature (HRS) approach identifies a small subset with a higher proportion of true autism cases.

We turned the problem on its head. Instead of trying to classify everyone, we built a brain signature that only makes predictions when it’s confident. 3/🧡

08.09.2025 14:01 β€” πŸ‘ 2    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

Real life isn’t balanced. Autism affects about 1% of the population. In that setting, a biomarker with 80% balanced accuracy would catch one true case for every twenty false alarms. 2/🧡
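The base-rate arithmetic behind that claim can be sketched in a few lines. This is an illustrative back-of-envelope calculation, assuming sensitivity and specificity are both 80% (one of many ways to get 80% balanced accuracy), not the thread's actual analysis:

```python
# Back-of-envelope check of the base-rate effect described above.
# Assumed values (illustrative): sensitivity = specificity = 0.80,
# i.e. 80% balanced accuracy, applied at a 1% population prevalence.
prevalence = 0.01
sensitivity = 0.80
specificity = 0.80

true_positives = prevalence * sensitivity               # 0.008 of the population
false_positives = (1 - prevalence) * (1 - specificity)  # 0.198 of the population
ppv = true_positives / (true_positives + false_positives)

print(f"PPV: {ppv:.1%}")  # only ~4% of flagged individuals actually have a diagnosis
print(f"False alarms per true case: {false_positives / true_positives:.0f}")
```

With these symmetric assumptions the ratio comes out near 25 false alarms per true case; the exact figure of "twenty" in the post depends on how sensitivity and specificity trade off at the same balanced accuracy, but the order of magnitude is the same either way.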

08.09.2025 14:00 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

Many brain imaging β€œbiomarkers” for autism have been proposed. Most aim for balanced accuracy (the average of sensitivity and specificity) on datasets where cases and controls are split 50/50. 1/🧡

08.09.2025 13:59 β€” πŸ‘ 12    πŸ” 8    πŸ’¬ 1    πŸ“Œ 0

"A murder of butterflies" has a nice ring to it.

24.08.2025 12:03 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

This year at #CCN25 we showed the importance of OOD evaluation to adjudicate between brain models. Our results demonstrate these trivial but key facts:
- high encoding accuracy β‰  functional convergence
- human brain β‰  NES console β‰  4-layer CNN
- videogames are cool

w/ @lune-bellec.bsky.social πŸ™Œ

13.08.2025 15:51 β€” πŸ‘ 7    πŸ” 3    πŸ’¬ 0    πŸ“Œ 1

Mapping cerebral blood perfusion and its links to multi-scale brain organization across the human lifespan | doi.org/10.1371/jour...

How does blood perfusion map onto canonical features of brain structure and function? @asafarahani.bsky.social investigates @plosbiology.org ‡️

08.08.2025 14:24 β€” πŸ‘ 55    πŸ” 16    πŸ’¬ 1    πŸ“Œ 2
Poster titled "Neuromod: The Courtois Project on Neuronal Modelling" with logos from UniversitΓ© de MontrΓ©al and the Centre de recherche de l'Institut universitaire de gΓ©riatrie de MontrΓ©al.

Large bold text reads:
6 BRAINS – 987H-fMRI – 18 TASKS
Followed by the subtitle:
Naturalistic & Controlled – Multimodal / Perception + Action
Each letter in "18 TASKS" contains thumbnails from various visual tasks.

The central table summarizes 32 datasets grouped by primary domain (Vision, Audition, Language, Memory, Action, Other). For each dataset, the table indicates which stimulus modalities were used (Vision, Speech, Audio, Motion), what responses were collected (Physiology, Eye tracking, Explanations, Actions), and how many sessions and subjects were scanned. The overall visual style is playful and bold, with rainbow colors for modality types and rich iconography indicating data types.

In 2019, the CNeuroMod team and 6 participants began a massive data collection journey: twice-weekly MRI scans for most of 5 years. Data collection is now complete! 1/🧡

07.08.2025 20:30 β€” πŸ‘ 13    πŸ” 9    πŸ’¬ 1    πŸ“Œ 0
Automated testing with GitHub Actions Better Code, Better Science: Chapter 4, Part 7

Automated testing with GitHub Actions - the latest in my Better Code, Better Science series russpoldrack.substack.com/p/automated-...

05.08.2025 15:29 β€” πŸ‘ 10    πŸ” 1    πŸ’¬ 0    πŸ“Œ 0

I find AI coding most useful for commenting on and suggesting changes to what I do. Your disastrous experience with code generation matches mine. But as a sidekick it's incredibly positive IMO.

01.08.2025 21:10 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

πŸ₯... we are SO happy to officially announce that registration is now OPEN for our OHBM Virtual Satellite Meeting, taking place September 10-12!

This has been a major goal of the SEA-SIG for a while now and we're so excited to show you what we've been working on!

🌱🌎✨🧠

01.08.2025 15:51 β€” πŸ‘ 6    πŸ” 2    πŸ’¬ 0    πŸ“Œ 0
four brain maps showing noise ceiling estimates in response to image presentation

New CNeuroMod-THINGS open-access fMRI dataset: 4 participants Β· ~4,000 images (720 categories), each shown 3Γ— (12k trials per subject) Β· individual functional localizers & NSD-inspired QC. Preprint: arxiv.org/abs/2507.09024 Congrats Marie St-Laurent and @martinhebart.bsky.social !!

30.07.2025 01:57 β€” πŸ‘ 35    πŸ” 17    πŸ’¬ 1    πŸ“Œ 0
Precision functional mapping reveals less inter-individual variability in the child vs. adult human brain Human brain organization shares a common underlying structure, though recent studies have shown that features of this organization also differ significantly across individual adults. Understanding the...

1/11 Very excited to say that our preprint, Precision functional mapping reveals less inter-individual variability in the child vs. adult human brain, is up on biorxiv!
www.biorxiv.org/content/10.1...

28.07.2025 21:53 β€” πŸ‘ 32    πŸ” 10    πŸ’¬ 1    πŸ“Œ 2

Google’s Gemini 2.5 paper has 3295 authors

arxiv.org/abs/2507.06261

13.07.2025 13:21 β€” πŸ‘ 58    πŸ” 6    πŸ’¬ 7    πŸ“Œ 6
Advancing neural decoding with deep learning - Nature Computational Science A recent study introduces a neural code conversion method that aligns brain activity across individuals without shared stimuli, using deep neural network-derived features to match stimulus content.

Excited to share our News&Views on Kamitani Lab's NatComputSci paper! Their neural code converter enables transformation of brain activity patterns across individuals, and it doesn't need shared stimuli or connectivity information!
www.nature.com/articles/s43...

11.07.2025 14:03 β€” πŸ‘ 16    πŸ” 6    πŸ’¬ 0    πŸ“Œ 0

Excited to co-organize our NeurIPS 2025 workshop on Foundation Models for the Brain and Body!
We welcome work across ML, neuroscience, and biosignals β€” from new approaches to large-scale models. Submit your paper or demo! 🧠 πŸ§ͺ 🦾

11.07.2025 19:51 β€” πŸ‘ 7    πŸ” 1    πŸ’¬ 0    πŸ“Œ 0

This is why I think the platonic rep hypothesis doesn’t apply to brain-ANN alignment, since most existing (functional?) models are implicitly or explicitly trained to mimic humans.

The assumption of PRH is that the networks are trained independently which doesn’t hold in brain-ANN comparisons.

10.07.2025 15:02 β€” πŸ‘ 4    πŸ” 1    πŸ’¬ 1    πŸ“Œ 0
Brain2Model Transfer: Training sensory and decision models with human neural activity as a teacher Transfer learning enhances the training of novel sensory and decision models by employing rich feature representations from large, pre-trained teacher models. Cognitive neuroscience shows that the hum...

Using intracranial brain recordings to guide representations in a vision-action AI model leads to faster and better training arxiv.org/abs/2506.208...

27.06.2025 16:21 β€” πŸ‘ 4    πŸ” 1    πŸ’¬ 0    πŸ“Œ 0
The Many Faces of Fear: Univariate, Predictive and Representational … Brainhack School

Just wrapped up my first real foray into analyzing brain data at Brainhack School 2025 a couple weeks ago πŸ§ πŸ’»

I focused on comparing fMRI techniques on a single subject, using fear as a case study.

school-brainhack.github.io/project/many...

25.06.2025 01:55 β€” πŸ‘ 2    πŸ” 1    πŸ’¬ 1    πŸ“Œ 0

Benchmarking methods for mapping functional connectivity in the brain | doi.org/10.1038/s415...

What is the best measure of functional connectivity (FC)?

led by @zhenqi.bsky.social in @natmethods.nature.com ‡️

20.06.2025 19:56 β€” πŸ‘ 89    πŸ” 44    πŸ’¬ 4    πŸ“Œ 3

Research assistant job posting is live! Come join us in Calgary and be part of a supportive, interdisciplinary team.

careers.ucalgary.ca/jobs/1631661...

20.06.2025 20:53 β€” πŸ‘ 10    πŸ” 10    πŸ’¬ 1    πŸ“Œ 1

Few originals available
Anyone interested can DM me.

19.06.2025 14:09 β€” πŸ‘ 51    πŸ” 9    πŸ’¬ 2    πŸ“Œ 0

an absolute gain on a relative measure. Confusing indeed.

14.06.2025 19:29 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0
DeepMReye DeepMReye is a software package for magnetic resonance-based eye tracking for (f)MRI experiments. Contact: m.nau[at]vu.nl & markus.frey1[at]gmail.com - DeepMReye

#OSCAwards2025 | open access, data, materials & software: OpenMReye: camera-free magnetic resonance-based eye tracking for research and clinical applications πŸ‘€πŸ©» by @matthiasnau.bsky.social
Find out more: github.com/DeepMReye/

12.06.2025 21:40 β€” πŸ‘ 11    πŸ” 6    πŸ’¬ 0    πŸ“Œ 1

Hi friends, I have news!

Giga connectome, a BIDS app for post-fMRIPrep connectome extraction, is now in @joss-openjournals.bsky.social πŸŽ‰

Thanks to @remigau.bsky.social, @clarken.bsky.social, Quentin Dessain, and @lune-bellec.bsky.social for working on this together

12.06.2025 22:02 β€” πŸ‘ 10    πŸ” 3    πŸ’¬ 1    πŸ“Œ 0

Text-to-LoRA: Instant Transformer Adaption

arxiv.org/abs/2506.06105

Generative models can now produce text, image, video. They should also be able to generate models! We trained a Hypernetwork to generate new task-specific LoRA models by simply giving it a description of the task as a text prompt.

12.06.2025 01:50 β€” πŸ‘ 45    πŸ” 9    πŸ’¬ 1    πŸ“Œ 0
Language Models in Plato's Cave Why language models succeeded where video models failed, and what that teaches us about AI

AI may still need some neuroscience:

"AI systems will not acquire the flexibility and adaptability of human intelligence until they can actually learn like humans do, shining brightly with their own light rather than observing a shadow from ours."

πŸ§ πŸ€–

sergeylevine.substack.com/p/language-m...

12.06.2025 02:15 β€” πŸ‘ 32    πŸ” 7    πŸ’¬ 1    πŸ“Œ 1
