The deadline for the #AIMLAI workshop held jointly with #ECMLPKDD2025 has been extended until June 21st.
Looking forward to last-minute submissions on work around #interpretability and #explainability of #AI / #ML
project.inria.fr/aimla
#mechinterp #xai
17.06.2025 08:08
Part of the #sqIRL lab at the IDLab day 2025 #uantwerp
10.06.2025 14:14
Our lab got two papers accepted at #ECMLPKDD2025 on the topics of #Interpretability for Spiking NNs and self-supervised representation learning with embedded interpretability.
Congrats to Jasper, Hamed, Fabian and our collaborators.
#SNN #SIM #AI #ML #neuromorphic #xai #interpretableML
02.06.2025 12:46
2025 Program Committee
Honored to be selected among the Outstanding Reviewers at #CVPR2025.
#UAntwerp @sqirllab.bsky.social #IDLab
cvpr.thecvf.com/Conferences/...
13.05.2025 11:54
It is confirmed: the #AIMLAI workshop will be held jointly with @ecmlpkdd.org.
We invite submissions of long and short papers covering work around #interpretability and #explainability of #AI/#ML.
Deadline: 14/06/25
CfP: shorturl.at/yYQ9G
Website: shorturl.at/W9r1A
#XAI #mechinterp #ECMLPKDD
08.05.2025 20:57
Improving Neural Network Accuracy by Concurrently Training with a...
Recently within Spiking Neural Networks, a method called Twin Network Augmentation (TNA) has been introduced. This technique claims to improve the validation accuracy of a Spiking Neural Network...
Benjamin Vandersmissen (25/04 evening) will show the effects that training with a twin network has on the learning process and share insights into how TNA leads to superior predictive performance across a number of tasks and several architectures. #deeplearning #ML #ICLR2025 #sqIRL
openreview.net/forum?id=TEm...
24.04.2025 08:04
Bilinear MLPs enable weight-based mechanistic interpretability
A mechanistic understanding of how MLPs do computation in deep neural networks remains elusive. Current interpretability work can extract features from hidden activations over an input dataset...
Thomas Dooms will show how bilinear MLPs can serve as a more transparent component that provides a better lens to study the relationships between inputs, outputs, and the weights that define the model. #mechinterp #interpretability #ML #AI #XAI #ICLR2025 #sqIRL
openreview.net/forum?id=gI0...
24.04.2025 08:04
If you are at #ICLR2025 and interested in how to understand DNNs from their weights and how to improve the predictive performance of a DNN via Twin Network Augmentation, we encourage you to get in touch with Thomas and Benjamin, who will be presenting our work there. #sqIRL #UAntwerp #XAI
24.04.2025 08:04
We had the opportunity to contribute to the Research Day of the Antwerp Center of Responsible #AI ( #ACRAI ) where Salma and Hamed presented their work on #explainability-driven #HSI analysis and model #interpretability, respectively.
#ML @uantwerpen.be
www.uantwerpen.be/en/research-...
21.02.2025 13:30
This week we had the visit of Prof. Eliana Pastor (DBDMG @PoliTO) who gave a presentation on her research around the topics of #trustworthyAI, #Bias analysis and #FairnessAI. Very good work and interesting ideas. @elianapastor.bsky.social we hope to host you again soon. #explainability #AI #ML
19.02.2025 09:17
Bilinear MLPs Enable Weight-based Mechanistic Interpretability
M. Pearce, T. Dooms, A. Rigg, J. Oramas, L. Sharkey
We show that bilinear layers can serve as an interpretable replacement for current activation functions, enabling weight-based interpretability.
preprint: arxiv.org/abs/2410.08417
23.01.2025 22:11
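The weight-based interpretability claim can be made concrete with a minimal NumPy sketch (the dimensions and layer form here are illustrative assumptions, not the paper's code): a bilinear layer computes the elementwise product of two linear maps, so each hidden unit is a quadratic form in the input whose interaction matrix is read directly off the weights, with no activation dataset needed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions for illustration.
d_in, d_hidden = 4, 3
W = rng.normal(size=(d_hidden, d_in))
V = rng.normal(size=(d_hidden, d_in))
x = rng.normal(size=d_in)

# Bilinear "activation": elementwise product of two linear maps.
h = (W @ x) * (V @ x)

# Weight-based view: hidden unit k is the quadratic form x^T B_k x,
# where the interaction matrix B_k = outer(W[k], V[k]) comes only
# from the weights -- no hidden activations over a dataset required.
B = np.einsum("ki,kj->kij", W, V)
h_from_weights = np.einsum("i,kij,j->k", x, B, x)
assert np.allclose(h, h_from_weights)
```

Because each `B_k` is fixed once training ends, the layer's computation can be analyzed offline from the weights alone, which is the sense in which the bilinear form is "more transparent" than a ReLU layer.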
Improving Neural Network Accuracy by Concurrently Training with a Twin Network
B. Vandersmissen, L. Deckers, J. Oramas
We show that the effectiveness of TNA lies in a better exploration of the parameter space and the learning of more robust and diverse features.
preprint: openreview.net/forum?id=TEm...
23.01.2025 22:11
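To make the twin-network idea tangible, here is a minimal sketch of a TNA-style objective, assuming the common formulation of such methods: each twin is trained on its own task loss while a logit-matching term couples the pair (the exact loss and the `alpha` weight are assumptions for illustration, not the paper's definition).

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(logits, labels):
    # Mean negative log-likelihood of the true labels.
    p = softmax(logits)[np.arange(len(labels)), labels]
    return -np.log(p + 1e-12).mean()

def tna_loss(logits_a, logits_b, labels, alpha=1.0):
    # Each twin gets its own task loss; a logit-matching penalty
    # couples the two networks during training.
    match = np.mean((logits_a - logits_b) ** 2)
    return (cross_entropy(logits_a, labels)
            + cross_entropy(logits_b, labels)
            + alpha * match)
```

At test time only one of the twins would be kept, so the coupling acts purely as a training-time regularizer that encourages the pair to explore and then agree on robust features.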
A great start for 2025.
Proud to announce that our group (#sqIRL/IDLab) got two papers accepted at #ICLR2025. A first for our young lab.
Thanks to our collaborators, the FAIR Program and the Dept. of CS @uantwerpen.bsky.social for supporting this research.
#AI #ML #interpretability #XAI
23.01.2025 22:11
Recent work published by the #sqIRL Lab on the training of competitive deeper Forward-Forward Networks. #FF #localLearning #ML #RepresentationLearning
15.12.2024 16:56
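For readers unfamiliar with the Forward-Forward setup the post refers to, here is a minimal sketch of the standard per-layer objective (the squared-activation goodness and the threshold `theta` follow the usual FF formulation; this is not the lab's code): each layer is trained locally to give high "goodness" to positive data and low goodness to negative data, with no backpropagation between layers.

```python
import numpy as np

def goodness(h):
    # Layer "goodness": sum of squared activations per sample.
    return (h ** 2).sum(axis=-1)

def ff_layer_loss(h_pos, h_neg, theta=2.0):
    # Push goodness of positive (real) data above the threshold theta
    # and goodness of negative (contrastive) data below it; each layer
    # optimizes this locally, without backprop through other layers.
    pos_term = np.log1p(np.exp(theta - goodness(h_pos))).mean()
    neg_term = np.log1p(np.exp(goodness(h_neg) - theta)).mean()
    return pos_term + neg_term
```

The locality of this loss is what makes scaling FF to deeper networks nontrivial, which is the challenge the post's "competitive deeper Forward-Forward Networks" refers to.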
AI Conferences @neuripsconf.bsky.social @cvprconference.bsky.social @iccv.bsky.social @eccv.bsky.social @iclr-conf.bsky.social @blog.neurips.cc.web.brid.gy
go.bsky.app/45EuhSi
22.11.2024 01:51
Neural networks are not black boxes. They're the opposite of black boxes: we have extensive access to their internals.
I think people have accepted this framing so innately that they've forgotten it's not true and it even warps how they do experiments.
09.12.2024 19:18
A starter pack of people working on interpretability / explainability of all kinds, using theoretical and/or empirical approaches.
Reply or DM if you want to be added, and help me reach others!
go.bsky.app/DZv6TSS
14.11.2024 17:00
50-day Free Access Link: eur01.safelinks.protection.outlook.com?url=https%3A...
03.12.2024 17:02
Check out this starter pack!
go.bsky.app/BYkRryU
30.11.2024 04:13
A career network featuring science jobs in academia and industry.
Visit our platform at www.science.hr
PhD candidate - Centre for Cognitive Science at TU Darmstadt,
explanations for AI, sequential decision-making, problem solving
Pronounced: CHEE-DEM
Postdoc at Cancer Early Detection Advanced Research (CEDAR) Center, @ohsuknight.bsky.social | Interpretable ML| Computational and Systems Biology
Pathologist. Assistant Professor at the University of Minnesota
Thoracic and H&N pathology
Founder/CEO THE XAVIER GROUP Ltd.
Strategy Consultant; Futurist; Comp Anticipatory Design Scientist - dad; frmr Pres American Industrial Preparedness Assn; member US Internet Industry Assn; OSINT; biomeds; THINK TANKS
http://linkedin.com/in/franksowa
High-quality datasets designed to spark ideas, solve problems, and drive innovation. Fresh data added all the time for your AI projects, research, or curiosity. Let's turn raw numbers into real impact.
Because Science is for everyone!
Learn more:
http://linktr.ee/standupforscience
We are a global leader in research and education into the mind and brain.
www.ucl.ac.uk/brain-sciences
Professor of Computer Vision, @BristolUni. Senior Research Scientist @GoogleDeepMind - passionate about the temporal stream in our lives.
http://dimadamen.github.io
Reader in Computer Vision and Machine Learning @ School of Informatics, University of Edinburgh.
https://homepages.inf.ed.ac.uk/omacaod
Flatiron Research Fellow #FlatironCCN. PhD from #mitbrainandcog. Incoming Asst Prof #CarnegieMellon in Fall 2025. I study how humans and computers hear and see.
Le plus grand centre de recherche universitaire en apprentissage profond – The world's largest academic research center in deep learning.
Ph.D. student @cs.ubc.ca, working on ML (learning dynamics, simplicity bias, iterated learning, LLM) https://joshua-ren.github.io/
something new | Gemini RL+TTS @ Google DeepMind | Conversational AI @ Meta | RL Agents @ EA | ML+Information Theory @ MIT+Harvard+Duke | Georgia Tech PhD | Woman, Life, Freedom
{NYC, YYZ}
https://beirami.github.io/
Pure mathematician working in Ergodic Theory, Fractal Geometry, and (recently) Large Language Models. Senior Lecturer (= Associate Professor) at the University of Manchester.
Group Leader in TΓΌbingen, Germany
I'm French and I work on RL and lifelong learning. Mostly posting on ML-related topics.
I run AI Plans, an AI Safety lab focused on solving AI Alignment before 2029.
For several weeks I used a stone for a pillow.
I once spent a quarter of my paycheck on cheese.
Ping me! DM me (not working atm due to totalitarian UK law)!
SurpassAI
Scientist, artist, educator, chef, fisherman, outdoor enthusiast, bookworm, linguist, peacemaker, volunteer, etc. in no particular order.
Please don't ask because I will not date you, I will not send you money, and I will not be your disciple. Thanks.
Professor @ NTNU, Research Director @ NorwAI
Research on AI, CBR, XAI, intelligent systems, AI+Health
Views are my own