
Gido van de Ven

@gmvandeven.bsky.social

Researcher on continual learning, taking a deep learning as well as cognitive science perspective. At KU Leuven, Belgium.

561 Followers  |  370 Following  |  7 Posts  |  Joined: 07.11.2024

Latest posts by gmvandeven.bsky.social on Bluesky

[Flyer: PhD position in Deep Learning for Optogenetic Sensory Restoration; how to apply / whom to contact: ekfz@sinzlab.org]

🚨 We're hiring! 🚨
Together with Marcus Jeschke and Emilie Mace, we are looking for a PhD student to join us in developing AI tools for optogenetic sensory restoration.
Apply now: sinzlab.org/positions/20...
#PhDposition #AI #Neuroprosthetics #ML #NeuroAI #Hiring

12.05.2025 08:37 — 👍 4    🔁 2    💬 0    📌 0

It has been claimed that how the Fisher is computed doesn't really matter for performance. While this holds to some extent for Split MNIST, significant differences in performance already emerge with Split CIFAR-10.

At ICLR? Come hear more at poster #483 on Saturday morning!

25.04.2025 17:13 — 👍 1    🔁 0    💬 0    📌 0
On the Computation of the Fisher Information in Continual Learning One of the most popular methods for continual learning with deep neural networks is Elastic Weight Consolidation (EWC), which involves computing the Fisher Information. The exact way in which the Fish...

How do you compute the Fisher when using EWC?

Different ways can be found in the continual learning literature, and the most used one makes rather crude approximations.

This has bothered me (and others!) for a long time, and I finally take this on in an ICLR blogpost: arxiv.org/abs/2502.11756
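To make the distinction concrete, here is a minimal sketch (my own, not the blog post's code) contrasting the two common estimators on a toy logistic-regression model. The names `empirical_fisher` and `expected_fisher` are hypothetical: the empirical variant averages outer products of gradients at the *observed* labels, while the expected variant takes the expectation over the model's own predictive distribution.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def empirical_fisher(w, X, y):
    # Empirical Fisher: average outer product of per-example
    # log-likelihood gradients, evaluated at the observed labels y.
    P = sigmoid(X @ w)
    G = (y - P)[:, None] * X          # per-example gradients (y - p) * x
    return G.T @ G / len(X)

def expected_fisher(w, X):
    # Expected ("exact") Fisher: expectation over the model's own
    # predictive distribution; for logistic regression this reduces
    # to the closed form p * (1 - p) * x x^T, averaged over the data.
    P = sigmoid(X @ w)
    return (X * (P * (1 - P))[:, None]).T @ X / len(X)
```

At `w = 0` the two coincide (both give 0.25 · XᵀX / n), but for a trained model they generally differ, which is exactly where the choice of estimator starts to matter.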

25.04.2025 17:13 — 👍 4    🔁 1    💬 1    📌 0

Why has continual ML not had its breakthrough yet?

In our new collaborative paper w/ many amazing authors, we argue that “Continual Learning Should Move Beyond Incremental Classification”!

We highlight 5 examples to show where CL algos can fail & pinpoint 3 key challenges

arxiv.org/abs/2502.11927

18.02.2025 13:33 — 👍 10    🔁 3    💬 0    📌 0
GitHub - ContinualAI/clvision-challenge-2023: Development kit for the CLVISION @ CVPR 2023 Challenge

Try out the class-incremental learning with repetition benchmarks of the challenge yourself! github.com/ContinualAI/...

02.12.2024 13:02 — 👍 2    🔁 0    💬 0    📌 0
Results without repetition

Results with repetition

These winning strategies clearly outperform experience replay on data streams *with* repetition, but on a “standard” task-based continual learning stream *without* repetition, experience replay performs better.
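For readers unfamiliar with the baseline: experience replay keeps a small buffer of past examples and mixes them into training on new data. A minimal sketch of such a buffer (my own illustration, assuming reservoir sampling as the storage policy, which is one common choice):

```python
import random

class ReplayBuffer:
    # Minimal reservoir-sampling replay buffer: keeps a bounded,
    # approximately uniform sample of the whole stream seen so far.
    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.buffer = []
        self.seen = 0
        self.rng = random.Random(seed)

    def add(self, example):
        # Reservoir sampling: the i-th example replaces a stored one
        # with probability capacity / i, keeping the sample uniform.
        self.seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(example)
        else:
            j = self.rng.randrange(self.seen)
            if j < self.capacity:
                self.buffer[j] = example

    def sample(self, k):
        # Draw a mini-batch of stored examples to interleave
        # with the current task's data during training.
        return self.rng.sample(self.buffer, min(k, len(self.buffer)))
```

During training one would typically concatenate each new-data mini-batch with `buffer.sample(k)` before the gradient step.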

02.12.2024 13:02 — 👍 2    🔁 0    💬 1    📌 0
Schematic of DWGRNet

Schematic of Horde

Schematic of HAT-CIR

A striking outcome of the challenge was that all winning teams used some kind of ensemble-based approach, in which separate sub-networks per task/experience are learned and later combined for making predictions.
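As a rough illustration of the combination step (my own sketch, not any team's actual method): each per-task sub-network produces logits for the shared label space, and a simple way to combine them is to average their softmax probabilities before taking the argmax.

```python
import numpy as np

def ensemble_predict(subnet_logits):
    # subnet_logits: list of 1-D logit arrays, one per sub-network.
    # Average the softmax probabilities across sub-networks,
    # then predict the class with the highest mean probability.
    probs = []
    for logits in subnet_logits:
        e = np.exp(logits - logits.max())  # stable softmax
        probs.append(e / e.sum())
    return int(np.argmax(np.mean(probs, axis=0)))
```

The winning entries differ in how sub-networks are built and selected, but this averaging step captures the shared ensemble idea.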

02.12.2024 13:02 — 👍 1    🔁 0    💬 1    📌 0
Continual learning in the presence of repetition Continual learning (CL) provides a framework for training models in ever-evolving environments. Although re-occurrence of previously seen objects or t…

Does continual learning change when there is repetition in the data stream?

The report of the #CVPR2023 CLVision challenge on **Continual learning in the presence of repetition** is out in Neural Networks. #OpenAccess

www.sciencedirect.com/science/arti...

02.12.2024 13:02 — 👍 6    🔁 1    💬 1    📌 0
Aligning Generalisation Between Humans and Machines Recent advances in AI -- including generative approaches -- have resulted in technology that can support humans in scientific discovery and decision support but may also disrupt democracies and target...

This is one of the most important arguments in the AI discourse, I think: with a large group of experts, we explain why generalization works very differently in humans and machines. This is a fundamental point with crucial implications for language models as well: arxiv.org/abs/2411.15626

27.11.2024 12:46 — 👍 9    🔁 2    💬 2    📌 0

Is generalisation a process, an operation, or a product? 🤨

Read about the different ways generalisation is defined, parallels between humans & machines, methods & evaluation in our new paper: arxiv.org/abs/2411.15626

co-authored with many smart minds as a product of Dagstuhl 🙏🎉

27.11.2024 13:30 — 👍 9    🔁 2    💬 0    📌 0

Thanks Simon! I'd be keen to be added as well ☺️

13.11.2024 20:11 — 👍 1    🔁 0    💬 0    📌 0
