We've added four new benchmarks to the Epoch AI Benchmarking Hub: Aider Polyglot, WeirdML, Balrog, and Factorio Learning Environment!
Previously we only featured our own evaluation results, but this new data comes from trusted external leaderboards. And we've got more on the way 🧵
08.05.2025 15:00 · 👍 5 🔁 2 💬 1 📌 0
Factorio Learning Environment
Claude Sonnet 3.5 builds factories
4. Factorio Learning Environment by Jack Hopkins, Märt Bakler, and
@akbir.bsky.social
This benchmark uses the factory-building game Factorio to test complex, long-term planning, with settings for lab-play (structured tasks) and open-play (unbounded growth).
jackhopkins.github.io/factorio-lea...
08.05.2025 15:00 · 👍 3 🔁 1 💬 1 📌 0
New Anthropic blog post: Subtle sabotage in automated researchers.
As AI systems increasingly assist with AI research, how do we ensure they're not subtly sabotaging that research? We show that malicious models can undermine ML research tasks in ways that are hard to detect.
25.03.2025 16:03 · 👍 4 🔁 3 💬 1 📌 0
YouTube video by Anthropic
Controlling powerful AI
Control is a complementary approach to alignment.
It's really sensible and practical, and it can be done now, even before systems are superintelligent.
youtu.be/6Unxqr50Kqg?...
18.03.2025 15:22 · 👍 4 🔁 1 💬 0 📌 0
This is the entire goal
01.02.2025 02:13 · 👍 5 🔁 0 💬 0 📌 0
Trump announces $500B in AI funding. Five days ago.
DeepSeek R1 release. Eight days ago.
The fact that DeepSeek R1 was released three days /before/ Stargate means these guys stood in front of Trump and said they needed half a trillion dollars while they knew R1 was open source and trained for $5M.
Beautiful.
28.01.2025 03:02 · 👍 13898 🔁 1772 💬 400 📌 120
Can anyone get a shorter DeepSeek R1 CoT than this?
24.01.2025 06:11 · 👍 17 🔁 1 💬 3 📌 0
Process-based supervision done right, and with pretty CIDs to illustrate :)
23.01.2025 20:33 · 👍 8 🔁 1 💬 0 📌 0
I don't really have the energy for politics right now. So I will observe without comment:
Executive Order 14110 (Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence) was revoked.
21.01.2025 00:34 · 👍 96 🔁 37 💬 2 📌 6
The R1 model is impressive.
21.01.2025 22:21 · 👍 2 🔁 0 💬 0 📌 0
16.01.2025 18:33 · 👍 33014 🔁 6865 💬 429 📌 289
New randomized controlled trial by the World Bank of students in Nigeria using GPT-4 as a tutor. Six weeks of after-school AI tutoring = two years of typical learning gains, outperforming 80% of other educational interventions.
And it helped all students, especially girls who were initially behind.
15.01.2025 20:58 · 👍 354 🔁 88 💬 15 📌 27
Generative AI has flaws and biases, and academics tend to fixate on them (85% of equity LLM papers focus on harms)…
…yet in many ways LLMs are uniquely powerful among new technologies for helping people equitably in education and healthcare. We need an urgent focus on how to do that.
14.01.2025 17:45 · 👍 69 🔁 11 💬 2 📌 4
On one hand, this paper finds that adding inference-time compute (as o1 does) improves medical reasoning, an important result that suggests a way to keep improving AI performance in medicine.
On the other hand, scientific illustrations are apparently just anime now: arxiv.org/pdf/2501.06458
14.01.2025 05:56 · 👍 71 🔁 5 💬 2 📌 2
my metabolism is noticeably higher in london than the bay.
13.01.2025 15:49 · 👍 2 🔁 0 💬 0 📌 0
Recommendations for Technical AI Safety Research Directions
What can AI researchers do *today* that AI developers will find useful for ensuring the safety of future advanced AI systems? To ring in the new year, the Anthropic Alignment Science team is sharing some thoughts on research directions we think are important.
alignment.anthropic.com/2025/recomme...
10.01.2025 21:03 · 👍 22 🔁 7 💬 1 📌 1
My hottest take is that nothing makes any sense at all outside of the context of the constantly increasing value of human life, but that increase in value is so invisible (and exists in a world that was built for previous, lower values) that we constantly think the opposite has happened.
05.01.2025 19:08 · 👍 1772 🔁 88 💬 56 📌 3
wait, what does that mean?
Does it mean there are bugs in Lean, or that it does too much work to check a proof?
05.01.2025 17:22 · 👍 0 🔁 0 💬 0 📌 0
wait, isn't everything just regularisation?
04.01.2025 19:42 · 👍 1 🔁 0 💬 0 📌 0
no - why doesn't Lean suffice?
04.01.2025 19:07 · 👍 2 🔁 0 💬 1 📌 0
like i really have outgrown most scenarios where i think my race has held me back, but this one won't let go
04.01.2025 04:13 · 👍 2 🔁 0 💬 0 📌 0
Nothing kills my excitement about returning to the US like the response I get from CBP officers.
04.01.2025 04:13 · 👍 7 🔁 0 💬 1 📌 0
Felix Hill and some other DMers and I after cold water swimming at Parliament Hill Lido a few years ago
Felix Hill was such an incredible mentor (and occasional cold water swimming partner) to me. He's a huge part of why I joined DeepMind and how I've come to approach research. Even a month later, it's still hard to believe he's gone.
02.01.2025 19:01 · 👍 123 🔁 17 💬 7 📌 5
Felix – Jane X. Wang
From the moment I heard him give a talk, I knew I wanted to work with Felix. His ideas about generalization and situatedness made explicit thoughts that had been swirling around in my head, incohe...
A brilliant colleague and wonderful soul, Felix Hill, recently passed away. This was a shock, and in an effort to sort some things out, I wrote them down. Maybe this will help someone else, but at the very least it helped me. Rest in peace, Felix, you will be missed. www.janexwang.com/blog/2025/1/...
03.01.2025 04:02 · 👍 63 🔁 11 💬 2 📌 0
We are a research institute investigating the trajectory of AI for the benefit of society.
epoch.ai
i'm using the computer online.
Postdoc in Ilana Witten's lab. Former PhD student in Kenneth Harris and Matteo Carandini's lab. Follow for updates on bombcell and other Neuropixels software.
https://github.com/Julie-Fabre
AI Reporter at The Bureau of Investigative Journalism | signal 💬 (efw.40)
Associate Director (Research & Grants) @ Cooperative AI Foundation
Google Chief Scientist, Gemini Lead. Opinions stated here are my own, not those of Google. Gemini, TensorFlow, MapReduce, Bigtable, Spanner, ML things, ...
Visiting Scientist at Schmidt Sciences. Visiting Researcher at Stanford NLP Group
Interested in AI safety and interpretability
Previously: Anthropic, AI2, Google, Meta, UNC Chapel Hill
Staff Software Engineer. Interested in #rust, #typescript and #golang. https://blog.vramana.com
Forward Deployed Angel Investor
https://github.com/jwills
Making streams serverless @s2.dev
British-American living in Brooklyn, NY.
he/him. Runner. Puns. Cats. Theatre. Board games.
Work: Architect / Dev / Ops; AWS; Serverless; consulting as https://symphonia.io .
Also on Mastodon at https://hachyderm.io/@mikebroberts
Every system is a sensor if you hold it right.
Main arena, Baltics. 🇸🇪
mastodon.world/@auonsson, x.com/auonsson backup 🇸🇪
Backend engineer at @monzo
Apple ML Research in Barcelona, prev OxCSML InfAtEd, part of MLinPL & polonium_org 🇵🇱, sometimes funny
Works at Google DeepMind on Safe+Ethical AI
a mediocre combination of a mediocre AI scientist, a mediocre physicist, a mediocre chemist, a mediocre manager and a mediocre professor.
see more at https://kyunghyuncho.me/
Co-founder & editor, Works in Progress. Writer, Scientific Discovery. Podcaster, Hard Drugs. Advisor, Coefficient Giving. // Previously at Our World in Data.
Newsletter: https://scientificdiscovery.dev
Podcast: https://harddrugs.worksinprogress.co
🏳️‍🌈