I'm very happy to present our work "Everything, Everywhere, All at Once: Is Mechanistic Interpretability Identifiable?" this afternoon at #ICLR2025! Come have a chat at stand #439 :)
26.04.2025 02:26
The largest workshop on analysing and interpreting neural networks for NLP.
BlackboxNLP will be held at EMNLP 2025 in Suzhou, China
blackboxnlp.github.io
Explainable/Interpretable AI researchers and enthusiasts - DM to join the XAI Slack! Bluesky and Slack maintained by Nick Kroeger
Master's student at ENS Paris-Saclay / aspiring AI safety researcher / improviser
Prev research intern @ EPFL w/ wendlerc.bsky.social and Robert West
MATS Winter 7.0 Scholar w/ neelnanda.bsky.social
https://butanium.github.io
PhD in Computer Vision
Supervised and inspired by Prof. Dr.-Ing. Margret Keuper
Member of the Data & Web Science Group @ University of Mannheim.
Postdoc at Visual Inference Lab, TU Darmstadt
PhD Student at the Max Planck Institute for Informatics @cvml.mpi-inf.mpg.de @maxplanck.de | Explainable AI, Computer Vision, Neuroexplicit Models
Web: sukrutrao.github.io
Associate professor in machine learning at the University of Amsterdam. Topics: (online) learning theory and the mathematics of explainable AI.
www.timvanerven.nl
Theory of Interpretable AI seminar: https://tverven.github.io/tiai-seminar
Researcher in ML/NLP at the University of Edinburgh (faculty at Informatics and EdinburghNLP), Co-Founder/CTO at www.miniml.ai, ELLIS (@ELLIS.eu) Scholar, Generative AI Lab (GAIL, https://gail.ed.ac.uk/) Fellow -- www.neuralnoise.com, he/they
Postdoc at @sardine-lab-it.bsky.social working on fair and safe language technologies. | gattanasio.cc | he/him | http://questovirgolettatoesiste.com
https://yuzhaouoe.github.io/ | third-year PhD Student @ University of Edinburgh | Prev. Intern @ Microsoft Research Cambridge | Opening the Black Box for Efficient Training/Inference
PhD Student | Works on Explainable AI | https://donatellagenovese.github.io/
Professor and Head of Machine Learning Department at Carnegie Mellon. Board member OpenAI. Chief Technical Advisor Gray Swan AI. Chief Expert Bosch Research.
PhD student at AIML Lab, TU Darmstadt, Germany.
Teaching AI models 'genuine' (causal) reasoning | Website: https://moritz-willig.de/
Sawchuk Chair & Prof at USC Viterbi School of Engineering | Founding Director of USC Center for Neurotech | Developing AI/ML methods & neurotech to decode the brain & treat its conditions 🧠🤖💻 https://nseip.usc.edu/
Computational neuroscientist, NeuroAI lab @EPFL
🛠️ Actionable Interpretability🔎 @icmlconf.bsky.social 2025 | Bridging the gap between insights and actions ✨ https://actionable-interpretability.github.io
Cortical surface modelling and interpretable/explainable #AI, geometric deep learning #neuroscience. Open science: HCP, dhcp, UKBiobank
> Language + CogSci + Evolution + NLP/ML/AI
COMPLEXITY, FUNCTION & FORM in
- language, culture, cognition
- evo dynamics
- info & computation
- explanation
homeostatic property cluster at large
LangEvo is Hard Reading List
https://t.ly/gfGj
Stanford Linguistics and Computer Science. Director, Stanford AI Lab. Founder of @stanfordnlp.bsky.social . #NLP https://nlp.stanford.edu/~manning/