
Andreas Madsen

@andreasmadsen.bsky.social

Ph.D. in NLP Interpretability from Mila. Previously: independent researcher, freelancer in ML, and Node.js core developer.

321 Followers  |  172 Following  |  10 Posts  |  Joined: 07.10.2024

Latest posts by andreasmadsen.bsky.social on Bluesky

Also thanks to @sarath-chandar.bsky.social and @sivareddyg.bsky.social for supporting me during my Ph.D., which helped me get this far! I would highly recommend them if you are looking for a Ph.D. supervisor.

07.02.2025 17:01 · 👍 0  🔁 0  💬 0  📌 0

Positions:
* Full-stack
* Research Engineer
* Research Scientist
* Systems Infrastructure Engineer
* Research intern
Feel free to reach out, but chances are I will see your application if you apply online. I will post details on my internship later, but there are more openings.

07.02.2025 17:01 · 👍 0  🔁 0  💬 1  📌 0

Excited to finally announce that I have joined @guidelabs.bsky.social. We are building LLMs from scratch designed to be interpretable. Many have asked what I'm doing after my Ph.D., so great to finally get it out. We have a lot of open positions, from engineering to scientist to intern.

07.02.2025 17:01 · 👍 4  🔁 0  💬 1  📌 0

All investigations of faithfulness show that explanations' faithfulness is, by default, model- and task-dependent. However, this is not the case when using FMMs, which thus present a new paradigm for how to provide and ensure faithful explanations.

28.11.2024 14:02 · 👍 1  🔁 0  💬 0  📌 0
Diagram of faithfulness measurable models. Showing the model is designed to measure the faithfulness of an explanation, and that this can be used to optimize an explanation.

FMMs are models designed such that measuring faithfulness is cheap and precise, which makes it possible to optimize explanations toward maximum faithfulness.

28.11.2024 14:02 · 👍 2  🔁 0  💬 1  📌 0
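A minimal sketch of the FMM workflow (illustrative only, not the implementation from the thesis): it assumes a classifier trained with random token masking so that masked inputs stay in-distribution; the toy model, the prediction-drop metric, and the random-search optimizer below are placeholder assumptions.

import numpy as np

MASK = "[MASK]"

def toy_model(tokens):
    # Stand-in classifier: probability of a positive label.
    # A real FMM would be a transformer fine-tuned with random token masking,
    # so that masked inputs remain in-distribution.
    score = 0.8 * tokens.count("great") - 0.8 * tokens.count("boring")
    return 1.0 / (1.0 + np.exp(-score))

def faithfulness(tokens, importance, k):
    # Cheap faithfulness estimate: mask the k tokens the explanation marks
    # as most important and measure how much the prediction drops.
    top = set(np.argsort(importance)[::-1][:k].tolist())
    masked = [MASK if i in top else t for i, t in enumerate(tokens)]
    return toy_model(tokens) - toy_model(masked)  # larger drop = more faithful

def optimize_explanation(tokens, k, n_trials=200, seed=0):
    # Because the metric is cheap, the explanation can be optimized directly
    # against it (random search here, purely for illustration).
    rng = np.random.default_rng(seed)
    best, best_score = None, -np.inf
    for _ in range(n_trials):
        candidate = rng.random(len(tokens))
        score = faithfulness(tokens, candidate, k)
        if score > best_score:
            best, best_score = candidate, score
    return best, best_score

tokens = "the plot was great but the pacing was boring".split()
explanation, score = optimize_explanation(tokens, k=1)
print("most important token:", tokens[int(np.argmax(explanation))])
print("faithfulness (prediction drop):", round(score, 3))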
Diagram of self-explanations. Showing input going in, then the regular output and explanation going out.

Self-explanations are when an LLM explains itself. Current models are not capable of this, but we suggest how that could be changed.

28.11.2024 14:02 · 👍 1  🔁 0  💬 1  📌 0

We ask the question: how can we provide and ensure faithful explanations for general-purpose NLP models? The main thesis is that we should develop new interpretability paradigms. The two new paradigms explored are faithfulness measurable models (FMMs) and self-explanations.

28.11.2024 14:02 · 👍 1  🔁 0  💬 1  📌 0
New Faithfulness-Centric Interpretability Paradigms for Natural Language Processing
As machine learning becomes more widespread and is used in more critical applications, it's important to provide explanations for these models, to prevent unintended behavior. Unfortunately, many curr...

The full thesis is available at arxiv.org/abs/2411.17992. Thanks to @sivareddyg.bsky.social and @sarath-chandar.bsky.social for supervising me throughout all these years. It's been a great journey and I'm very grateful for their support.

28.11.2024 14:02 · 👍 3  🔁 0  💬 1  📌 0
Interpretability Needs a New Paradigm
Interpretability is the study of explaining models in understandable terms to humans. At present, interpretability is divided into two paradigms: the intrinsic paradigm, which believes that only model...

I’m thrilled to share that I’ve finished my Ph.D. at Mila and Polytechnique Montreal. For the last 4.5 years, I have worked on creating new faithfulness-centric paradigms for NLP Interpretability. Read my vision for the future of interpretability in our new position paper: arxiv.org/abs/2405.05386

28.11.2024 13:39 · 👍 36  🔁 4  💬 3  📌 1

Hi, can you add me thanks 🙂

27.11.2024 16:13 · 👍 0  🔁 0  💬 0  📌 0
