PhD Student at Northeastern, working to make LLMs interpretable
The largest workshop on analysing and interpreting neural networks for NLP.
BlackboxNLP will be held at EMNLP 2025 in Suzhou, China.
blackboxnlp.github.io
PhD student @ Northeastern University, Clinical NLP
https://hibaahsan.github.io/
she/her
PhD candidate in CS at Northeastern University | NLP + HCI for health | she/her 🏃‍♀️🧅🌈
CS PhD student at Harvard. Interested in Interpretability 🔍, Visualizations 📊, Human-AI Interaction🧍🤖. All opinions are mine. https://yc015.github.io/
PhD (in progress) @ Northeastern! NLP 🤝 LLMs
she/her
ML researcher, building interpretable models at Guide Labs (guidelabs.bsky.social).
PhD student @LIG | Causal abstraction, interpretability & LLMs
Trying to figure things out about how best we can live together
hacker / CS professor https://www.khoury.northeastern.edu/~arjunguha/
PhD student in Interpretable Machine Learning at @tuberlin.bsky.social & @bifold.berlin
https://web.ml.tu-berlin.de/author/laura-kopf/
machine learning, causal inference, science of llm, ai safety, phd student @bleilab, keen bean
https://www.claudiashi.com/
Helping people is good I guess
Trying to do AI interp and control
Used to do economics
timhua.me