An image with the Vancouver skyline and the words "sign up to review". At the top are the logos of both the Actionable Interpretability workshop (a magnifying glass) and the ICML conference (a brain).
We're looking for more reviewers for the workshop!
Review period: May 24-June 7
If you're passionate about making interpretability useful and want to help shape the conversation, we'd love your input.
Self-nominate here:
docs.google.com/forms/d/e/1F...
20.05.2025 00:05
Submission deadline extended to May 19!
Working on, or have thoughts about, real-world applications of interpretability and how to use it in practice? Consider submitting to our workshop at ICML 2025. More at actionable-interpretability.github.io and in the thread below.
06.05.2025 00:17
I'm not at ICLR - but if you are, stop by poster session 3, where @tylerachang.bsky.social is presenting our latest work on training data attribution for LLM pretraining!
24.04.2025 23:59
Check out our new work on scaling training data attribution (TDA) toward LLM pretraining - and some interesting things we found along the way!
medium.com/people-ai-re... and more in the thread below from the most excellent student researcher @tylerachang.bsky.social
13.12.2024 19:01
The largest workshop on analysing and interpreting neural networks for NLP.
BlackboxNLP will be held at EMNLP 2025 in Suzhou, China
blackboxnlp.github.io
Actionable Interpretability @icmlconf.bsky.social 2025 | Bridging the gap between insights and actions | https://actionable-interpretability.github.io
PhD student at Brown University working on interpretability. Prev. at Ai2, Google
Assistant professor of computer science at Technion; visiting scholar at @KempnerInst 2025-2026
https://belinkov.com/
NLP | Interpretability | PhD student at the Technion
Sr. Principal Research Manager at Microsoft Research, NYC // Machine Learning, Responsible AI, Transparency, Intelligibility, Human-AI Interaction // WiML Co-founder // Former NeurIPS & current FAccT Program Co-chair // Brooklyn, NY // http://jennwv.com
Asst Prof. @ UCSD | PI of LeM🍋N Lab | Former Postdoc at ETH Zürich, PhD @ NYU | computational linguistics, NLProc, CogSci, pragmatics | he/him 🏳️‍🌈
alexwarstadt.github.io
Research Scientist Google DeepMind People + AI Research. HCI + AI + Programming Support + Sensemaking. ex @SCSatCMU @UMich @MSFTResearch @GoogleAI. He/him
http://cljournal.org
Computational Linguistics, established in 1974, is the official flagship journal of the Association for Computational Linguistics (ACL).
Google Chief Scientist, Gemini Lead. Opinions stated here are my own, not those of Google. Gemini, TensorFlow, MapReduce, Bigtable, Spanner, ML things, ...
https://roadtolarissa.com/
Assistant Professor at UCLA. Alum @StanfordNLP. NLP, Cognitive Science, Accessibility. https://www.coalas-lab.com/elisakreiss
PhD Student in the STAI group at the University of Tübingen and IMPRS-IS | Volunteering at KI macht Schule and Viva con Agua
elisanguyen.github.io
prev. intern at Vector Institute, MSR DL
Machine Learning | Stein Fellow @ Stanford Stats (current) | Assistant Prof @ CMU (incoming) | PhD @ MIT (prev)
https://andrewilyas.com
Professor in Scalable Trustworthy AI @ University of Tübingen | Advisor at Parameter Lab & ResearchTrend.AI
https://seongjoonoh.com | https://scalabletrustworthyai.github.io/ | https://researchtrend.ai/
We are an independent nonprofit organization that believes collaboration opportunities and research training should be openly accessible and free.
Web: https://mlcollective.org/
Twitter: @ml_collective
creations with code and networks