
Viet Anh Khoa Tran

@ktran.de.bsky.social

PhD student working on Dendritic Learning/NeuroAI with Willem Wybo, in Emre Neftci's lab (@fz-juelich.de). ktran.de

119 Followers  |  893 Following  |  7 Posts  |  Joined: 22.11.2024

Latest posts by ktran.de on Bluesky

Preview: Contrastive Consolidation of Top-Down Modulations Achieves Sparsely Supervised Continual Learning
Biological brains learn continually from a stream of unlabeled data, while integrating specialized information from sparsely labeled examples without compromising their ability to generalize. Meanwhil...

This is research from the new Dendritic Learning Group at PGI-15 (@fz-juelich.de).
A huge thanks to my supervisor Willem Wybo and our institute head Emre Neftci!
📄 Preprint: arxiv.org/abs/2505.14125
🚀 Project page: ktran.de/papers/tmcl/

Supported by (@fzj-jsc.bsky.social) and WestAI.
(6/6)

10.06.2025 13:17 | 👍 3  🔁 0  💬 0  📌 0

This research opens up an exciting possibility: predictive coding as a fundamental cortical learning mechanism, guided by area-specific modulations that act as high-level control over the learning process. (5/6)

10.06.2025 13:17 | 👍 1  🔁 0  💬 1  📌 0

Furthermore, we can dynamically adjust the stability-plasticity trade-off by adapting the strength of the modulation invariance term. (4/6)

10.06.2025 13:17 | 👍 1  🔁 0  💬 1  📌 0
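To make the trade-off in (4/6) concrete, here is a minimal, hypothetical sketch of how the strength of the modulation invariance term could act as a single stability-plasticity knob. The names (`combined_loss`, `lambda_inv`) and the MSE formulation are my own illustrative assumptions, not the paper's actual objective.

```python
# Hypothetical sketch: the weight of the modulation-invariance term acts as a
# stability-plasticity knob. Names and loss choices are illustrative only.
import torch.nn.functional as F

def combined_loss(z_view1, z_view2, z_plain, z_modulated, lambda_inv=1.0):
    # Self-supervised view-invariance term (plasticity: keeps learning from
    # the unlabeled stream).
    ssl_loss = F.mse_loss(z_view1, z_view2)
    # Modulation-invariance term (stability: consolidates label-informed
    # top-down modulations into the feedforward pathway).
    invariance_loss = F.mse_loss(z_plain, z_modulated.detach())
    # Increasing lambda_inv favors consolidation (stability); decreasing it
    # favors ongoing self-supervised adaptation (plasticity).
    return ssl_loss + lambda_inv * invariance_loss
```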

Key finding: With only 1% of labels, our method outperforms comparable continual learning algorithms, both on the continual task itself and when its representations are transferred to other tasks.
In other words, it continually learns generalizable representations, unlike conventional, class-collapsing objectives (e.g. cross-entropy). (3/6)

10.06.2025 13:17 | 👍 1  🔁 0  💬 1  📌 0
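As an illustration of the 1%-labels setting in (3/6), here is a hypothetical sketch of how such sparse supervision might be simulated on a label tensor. `sparsify_labels` and the unlabeled sentinel value are illustrative choices, not the paper's code.

```python
# Hypothetical sketch: keep labels for only ~1% of examples and mark the rest
# as unlabeled, mimicking a sparsely supervised data stream.
import torch

def sparsify_labels(labels, keep_frac=0.01, unlabeled=-1, generator=None):
    """Return a copy of `labels` where all but a random keep_frac fraction
    are replaced by the `unlabeled` sentinel."""
    keep = torch.rand(labels.shape, generator=generator) < keep_frac
    sparse = torch.full_like(labels, unlabeled)
    sparse[keep] = labels[keep]
    return sparse

# Example: in a batch of 512 examples, roughly 5 keep their class label and
# could drive the top-down modulations; the rest are treated as unlabeled.
labels = torch.randint(0, 100, (512,))
sparse_labels = sparsify_labels(labels, keep_frac=0.01)
```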

Feedforward weights learn via view-invariant self-supervised learning, mimicking predictive coding. Top-down class modulations, informed by new labels, orthogonalize same-class representations. These are then consolidated into the feedforward pathway through modulation invariance. (2/6)

10.06.2025 13:17 | 👍 4  🔁 0  💬 1  📌 0
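A minimal sketch of the mechanism described in (2/6), under my own simplifying assumptions: the encoder, the per-class multiplicative modulations, and the MSE-based view-invariance and modulation-invariance losses are illustrative stand-ins, not the paper's architecture or objectives, and the orthogonalization of same-class representations is omitted.

```python
# Hypothetical sketch of (2/6): a feedforward encoder trained with a
# view-invariance (self-supervised) loss, plus label-informed top-down class
# modulations consolidated into the feedforward pathway via a
# modulation-invariance loss. All names and choices are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ModulatedEncoder(nn.Module):
    def __init__(self, in_dim=784, hidden=256, out_dim=128, num_classes=100):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                      nn.Linear(hidden, out_dim))
        # One multiplicative top-down gain vector per class (the "modulation").
        self.modulations = nn.Embedding(num_classes, out_dim)
        nn.init.ones_(self.modulations.weight)

    def forward(self, x, y=None):
        z = self.backbone(x)
        if y is None:                     # unlabeled input: plain feedforward pass
            return z
        return z * self.modulations(y)    # labeled input: top-down modulated pass

def training_step(model, x1, x2, y=None, lambda_inv=1.0):
    # View-invariant self-supervised term on two augmented views of the input.
    z1, z2 = model(x1), model(x2)
    loss = F.mse_loss(z1, z2)
    if y is not None:  # y assumed to hold valid class indices for labeled inputs
        # Consolidation: pull the plain feedforward output toward the
        # label-modulated output (modulation invariance).
        z_mod = model(x1, y)
        loss = loss + lambda_inv * F.mse_loss(z1, z_mod.detach())
    return loss
```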

New #NeuroAI preprint on #ContinualLearning!

Continual learning methods struggle in mostly unsupervised environments with only sparse labels (e.g. a parent occasionally telling their child that an object is an 'apple').
We propose that in the cortex, predictive coding of high-level top-down modulations solves this! (1/6)

10.06.2025 13:17 | 👍 8  🔁 2  💬 1  📌 0
