
sqIRL Lab

@sqirllab.bsky.social

We are "squIRreL", the Interpretable Representation Learning Lab based at IDLab - University of Antwerp & imec. Research Areas: #RepresentationLearning, Model #Interpretability, #explainability, #DeepLearning #ML #AI #XAI #mechinterp

118 Followers  |  334 Following  |  14 Posts  |  Joined: 19.11.2024

Latest posts by sqirllab.bsky.social on Bluesky

The deadline for the #AIMLAI workshop held jointly with #ECMLPKDD2025 has been extended until June 21st.
Looking forward to last-minute submissions on work around #interpretability and #explainability of #AI / #ML

project.inria.fr/aimla
#mechinterp #xai

17.06.2025 08:08 | 👍 1    🔁 1    💬 0    📌 0

Part of the #sqIRL lab at the IDLab day 2025 #uantwerp

10.06.2025 14:14 | 👍 1    🔁 1    💬 0    📌 0

Our lab got two papers accepted at #ECMLPKDD2025, on the topics of #Interpretability for Spiking NNs and self-supervised representation learning with embedded interpretability.
Congrats to Jasper, Hamed, Fabian and our collaborators.

#SNN #SIM #AI #ML #neuromorphic #xai #interpretableML

02.06.2025 12:46 | 👍 3    🔁 1    💬 0    📌 1
2025 Program Committee

Honored to be selected among the Outstanding Reviewers at #CVPR2025.
#UAntwerp @sqirllab.bsky.social #IDLab

cvpr.thecvf.com/Conferences/...

13.05.2025 11:54 | 👍 4    🔁 1    💬 0    📌 0

It is confirmed: the #AIMLAI workshop will be held jointly with @ecmlpkdd.org.
We invite submissions of long and short papers covering work around #interpretability and #explainability of #AI/#ML.

Deadline: 14/06/25
CfP: shorturl.at/yYQ9G
Website: shorturl.at/W9r1A

#XAI #mechinterp #ECMLPKDD

08.05.2025 20:57 | 👍 1    🔁 2    💬 0    📌 0

Last week our lab celebrated the doctoral defense of Hamed Behzadi. It has been four years since Hamed joined us, and his growth into a fully fledged independent researcher has been steady throughout. Congratulations!

See shorturl.at/296MV for some of the work produced by Hamed.

#ML #AI #Interpretability

29.04.2025 08:45 | 👍 0    🔁 1    💬 0    📌 1
[Link preview: Improving Neural Network Accuracy by Concurrently Training with a Twin Network]

Benjamin Vandersmissen (25/04, evening) will show the effects of using a twin network on the learning process and share insights on how TNA leads to superior predictive performance across a number of tasks and several architectures. #deeplearning #ML #ICLR2025 #sqIRL
openreview.net/forum?id=TEm...

24.04.2025 08:04 | 👍 0    🔁 0    💬 0    📌 0
[Link preview: Bilinear MLPs enable weight-based mechanistic interpretability]

Thomas Dooms will show how bilinear MLPs can serve as a more transparent component, providing a better lens for studying the relationships between inputs, outputs, and the weights that define the model. #mechinterp #interpretability #ML #AI #XAI #ICLR2025 #sqIRL
openreview.net/forum?id=gI0...

24.04.2025 08:04 | 👍 1    🔁 0    💬 1    📌 0

If you are at #ICLR2025 and interested in how to understand DNNs from their weights, or in how to improve the predictive performance of a DNN via Twin Network Augmentation, we encourage you to get in touch with Thomas and Benjamin, who will be presenting our work there. #sqIRL #UAntwerp #XAI

24.04.2025 08:04 | 👍 1    🔁 0    💬 1    📌 1

We had the opportunity to contribute to the Research Day of the Antwerp Center of Responsible #AI (#ACRAI), where Salma and Hamed presented their work on #explainability-driven #HSI analysis and model #interpretability, respectively.
#ML @uantwerpen.be
www.uantwerpen.be/en/research-...

21.02.2025 13:30 | 👍 2    🔁 2    💬 0    📌 0

This week we had the visit of Prof. Eliana Pastor (DBDMG @PoliTO), who gave a presentation on her research around the topics of #trustworthyAI, #Bias analysis, and #FairnessAI. Very good work and interesting ideas. @elianapastor.bsky.social we hope to host you again soon. #explainability #AI #ML

19.02.2025 09:17 | 👍 1    🔁 1    💬 0    📌 0

Bilinear MLPs Enable Weight-based Mechanistic Interpretability
M. Pearce, T. Dooms, A. Rigg, J. Oramas, L. Sharkey

We show that bilinear layers can serve as an interpretable replacement for current activation functions, enabling weight-based interpretability.

preprint: arxiv.org/abs/2410.08417
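
As a rough sketch of the idea (our own minimal PyTorch rendering, not the paper's code; biases and the surrounding model are omitted): a bilinear layer computes g(x) = (W x) * (V x) elementwise, so every output neuron is a quadratic form in the input, and a symmetric interaction matrix per neuron can be read directly off the weights.

```python
import torch
import torch.nn as nn

class BilinearLayer(nn.Module):
    # g(x) = (W x) * (V x): elementwise product of two linear maps,
    # replacing a ReLU-style nonlinearity with a quadratic form.
    def __init__(self, d_in: int, d_out: int):
        super().__init__()
        self.W = nn.Linear(d_in, d_out, bias=False)
        self.V = nn.Linear(d_in, d_out, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.W(x) * self.V(x)

    def interaction_matrix(self, k: int) -> torch.Tensor:
        # Output k satisfies g(x)[k] = x^T B_k x for the symmetric
        # matrix below; eigendecomposing B_k reveals the input
        # directions neuron k responds to, using weights alone.
        w, v = self.W.weight[k], self.V.weight[k]
        B = torch.outer(w, v)
        return 0.5 * (B + B.T)
```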

23.01.2025 22:11 | 👍 0    🔁 0    💬 0    📌 0

Improving Neural Network Accuracy by Concurrently Training with a Twin Network
B. Vandersmissen, L. Deckers, J. Oramas

We show that the effectiveness of TNA lies in a better exploration of the parameter space and the learning of more robust and diverse features.

preprint: openreview.net/forum?id=TEm...
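
As a rough sketch of the setup (assuming the commonly described TNA objective: a task loss per twin plus an MSE term aligning the twins' logits, with a single twin kept at inference time; all names below are placeholders, not the paper's code):

```python
import torch
import torch.nn.functional as F

def tna_step(net_a, net_b, x, y, opt, alpha=1.0):
    # Two identically shaped networks, independently initialized,
    # trained concurrently on the same batch. `opt` is assumed to
    # hold the parameters of both twins.
    logits_a, logits_b = net_a(x), net_b(x)
    loss = (
        F.cross_entropy(logits_a, y)              # task loss, twin A
        + F.cross_entropy(logits_b, y)            # task loss, twin B
        + alpha * F.mse_loss(logits_a, logits_b)  # pull the twins' logits together
    )
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```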

23.01.2025 22:11 | 👍 0    🔁 0    💬 1    📌 0

A great start for 2025.
Proud to announce that our group (#sqIRL/IDLab) got two papers accepted at #ICLR2025. A first for our young lab.

Thanks to our collaborators, the FAIR Program and the Dept. of CS @uantwerpen.bsky.social for supporting this research.

#AI #ML #interpretability #XAI

23.01.2025 22:11 | 👍 0    🔁 1    💬 1    📌 0

Recent work published by the #sqIRL Lab on the training of competitive deeper Forward-Forward Networks. #FF #localLearning #ML #RepresentationLearning
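
For context, Forward-Forward training (Hinton, 2022) replaces backprop with a local, layer-wise objective: each layer is trained to produce high "goodness" (sum of squared activations) on positive data and low goodness on negative data. A minimal sketch of that per-layer loss, independent of the specific recipe for deeper networks in our paper (`layer` and `theta` below are placeholders):

```python
import torch
import torch.nn.functional as F

def ff_layer_loss(layer, x_pos, x_neg, theta=2.0):
    # Goodness = sum of squared activations per sample.
    g_pos = layer(x_pos).pow(2).sum(dim=1)
    g_neg = layer(x_neg).pow(2).sum(dim=1)
    # Logistic loss pushing positive goodness above the threshold
    # theta and negative goodness below it; each layer is trained
    # locally, with no backprop through the rest of the stack.
    return (F.softplus(theta - g_pos) + F.softplus(g_neg - theta)).mean()
```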

15.12.2024 16:56 | 👍 1    🔁 0    💬 0    📌 0

AI Conferences @neuripsconf.bsky.social @cvprconference.bsky.social @iccv.bsky.social @eccv.bsky.social @iclr-conf.bsky.social @blog.neurips.cc.web.brid.gy

go.bsky.app/45EuhSi

22.11.2024 01:51 | 👍 36    🔁 14    💬 4    📌 0

Neural networks are not black boxes. They're the opposite of black boxes: we have extensive access to their internals.

I think people have accepted this framing so innately that they've forgotten it's not true and it even warps how they do experiments.

09.12.2024 19:18 | 👍 61    🔁 9    💬 8    📌 4

A starter pack of people working on interpretability / explainability of all kinds, using theoretical and/or empirical approaches.

Reply or DM if you want to be added, and help me reach others!

go.bsky.app/DZv6TSS

14.11.2024 17:00 | 👍 80    🔁 26    💬 34    📌 0

50-day Free Access Link: eur01.safelinks.protection.outlook.com?url=https%3A...

03.12.2024 17:02 | 👍 0    🔁 0    💬 0    📌 0
[Link preview: Towards the characterization of representations learned via capsule-based network architectures]

Congratulations to Saja Tawalbeh on the acceptance of her work on the #interpretability of #Capsule Networks in the #Neurocomputing Journal.
Pre-proof: www.sciencedirect.com/science/arti...
@uantwerpen.bsky.social #XAI #ML #AI #sqIRL

03.12.2024 08:29 | 👍 0    🔁 0    💬 1    📌 1

Check out this starter pack!

go.bsky.app/BYkRryU

30.11.2024 04:13 | 👍 34    🔁 13    💬 1    📌 0
