Assistant Prof of AI & Decision-Making @MIT EECS
I run the Algorithmic Alignment Group (https://algorithmicalignment.csail.mit.edu/) in CSAIL.
I work on value (mis)alignment in AI systems.
https://people.csail.mit.edu/dhm/
AI Safety @ xAI | AI robustness, PhD @ UC Berkeley | normanmu.com
5th year PhD student at UW CSE, working on Security and Privacy for ML
PhD student at ETH Zurich | Student Researcher at Google | Agent Security and, more generally, ML Security and Privacy
edoardo.science
spylab.ai
AI privacy and security | PhD student in the SPY Lab at ETH Zurich | Ask me about coffee ☕️
3rd year PhD candidate @ Princeton ECE
Faculty at the ELLIS Institute Tübingen and the Max Planck Institute for Intelligent Systems. Leading the AI Safety and Alignment group. PhD from EPFL, supported by Google & OpenPhil PhD fellowships.
More details: https://www.andriushchenko.me/
Thinking about how/why AI works/doesn't, and how to make it go well for us.
Currently: AI Agent Security @ US AI Safety Institute
benjaminedelman.com
Academic, AI nerd and science nerd more broadly. Currently obsessed with Stravinsky (not sure how that happened).
PhD student at ETH Zurich, working on AI safety. Graduate of the Cambridge MPhil in ML | Alumnus of the Mathematical Grammar School | From Serbia
Father of two :-), working on LLM robustness @TU_Muenchen
sentio ergo sum. developing the science of evals at METR. prev NYU, cohere
AI Safety + Security @ Gray Swan AI
Formerly PleIAs + Stanford
Assistant Professor leading the Polaris Lab @ Princeton (https://www.polarislab.org/); Researching: RL, Strategic Decision-Making + Exploration; AI + Law
Making AI safer at Google DeepMind
davidlindner.me
Visiting Scientist at Schmidt Sciences. Visiting Researcher at Stanford NLP Group
Interested in AI safety and interpretability
Previously: Anthropic, AI2, Google, Meta, UNC Chapel Hill