Hey, study author here - I think this is an overgeneralization.
We find that *experienced, open-source developers working on projects they're highly familiar with* are slowed down. This is consistent with many developers often being sped up, e.g. when writing one-off scripts.
11.07.2025 00:30
Uber driver is telling me how excited he is that he just got this new car because he totaled his previous one
24.01.2025 05:34
had a similar experience in africa - zebra and wildebeest were just chilling like right next to lions
28.11.2024 21:44
Daily life at METR:
25.11.2024 19:35
Testing, testing
25.11.2024 19:24
If interpretability gave us as much info as transparent cases did, we'd be a lot further along than we currently are
27.04.2023 04:35
omg so true, thanks for letting me know!!
27.04.2023 04:33
Writing about AI for the Financial Times in San Francisco | 📧 Cristina.criddle@ft.com 📱 Signal - @criddle.67
Currently thinking about AI alignment and consciousness. I've also worked on theory and algorithms for Markov chains.
Frontier AI Safety at Google DeepMind
Google Chief Scientist, Gemini Lead. Opinions stated here are my own, not those of Google. Gemini, TensorFlow, MapReduce, Bigtable, Spanner, ML things, ...
Product @ Momentum. I like updating (my beliefs). 🇸🇬🇮🇳
💸 10% pledger
Aspiring 10x reverse engineer at Google DeepMind
ai safety researcher | phd ETH Zurich | https://danielpaleka.com
Computer Science PhD Student @ Stanford | Geopolitics & Technology Fellow @ Harvard Kennedy School/Belfer | Vice Chair EU AI Code of Practice | Views are my own
Trying to ensure the future is bright. Technical governance research at MIRI
Red-Teaming LLMs / PhD student at ETH Zurich / Prev. research intern at Meta / People call me Javi / Vegan 🌱
Website: javirando.com
Twitter: @CFGeek
Mastodon: @cfoster0@sigmoid.social
When I choose to speak, I speak for myself.
💪 Tensor-enjoyer 🧪
We monitor AI safety policies from companies and governments for substantive changes.
Anonymous submissions: https://forms.gle/3RP2xu2tr8beYs5c8
Run by @TheMidasProject.bsky.social
Fighting fire with (wild)fire (policy) | DC | obligatory opinions are my own section
Visiting Scientist at Schmidt Sciences. Visiting Researcher at Stanford NLP Group
Interested in AI safety and interpretability
Previously: Anthropic, AI2, Google, Meta, UNC Chapel Hill
AI professor. Director, Foundations of Cooperative AI Lab at Carnegie Mellon. Head of Technical AI Engagement, Institute for Ethics in AI (Oxford). Author, "Moral AI - And How We Get There."
https://www.cs.cmu.edu/~conitzer/
Cofounder pol.is
President compdemocracy.org
San Francisco, CA