2025 AI Safety Index - Future of Life Institute
The Summer 2025 edition of our AI Safety Index, in which AI experts rate leading AI companies on key safety and security domains.
As reviewer Stuart Russell put it, "Some companies are making token efforts, but none are doing enough… This is not a problem for the distant future; it's a problem for today."
Read the full report now: futureoflife.org/ai-safety-in...
18.07.2025 20:05
6️⃣ OpenAI secured second place, ahead of Google DeepMind.
7️⃣ Chinese AI firms Zhipu AI and DeepSeek received failing overall grades.
🧵
18.07.2025 20:05
3️⃣ Only 3 out of 7 firms report substantive testing for dangerous capabilities linked to large-scale risks such as bio- or cyber-terrorism (Anthropic, OpenAI, and Google DeepMind).
4️⃣ Whistleblowing policy transparency remains a weak spot.
5️⃣ Anthropic received the best overall grade (C+).
🧵
18.07.2025 20:05
Key takeaways:
1️⃣ The AI industry is fundamentally unprepared for its own stated goals.
2️⃣ Capabilities are accelerating faster than risk-management practice, and the gap between firms is widening.
🧵
18.07.2025 20:05
Our new AI Safety Index is out!
Following our 2024 index, 6 independent AI experts rated leading AI companies - OpenAI, Anthropic, Meta, Google DeepMind, xAI, DeepSeek, and Zhipu AI - across critical safety and security domains.
So what were the results? 🧵
18.07.2025 20:05
Congress is considering a 10-year ban on state AI laws, blocking action on risks like job loss, surveillance, disinformation, and loss of control.
It's a huge win for Big Tech - and a big risk for families.
Add your name and say no to the federal block on AI safeguards: FutureOfLife.org/Action
12.06.2025 18:44
New on the FLI podcast, Zvi Mowshowitz (@thezvi.bsky.social) joins to discuss:
- The recent hot topic of sycophantic AI
- Time horizons of AI agents
- AI in finance and scientific research
- How AI differs from other technology
And more.
Tune in to the full episode now at the link below:
09.05.2025 18:41
The Singapore Consensus, building on the International AI Safety Report backed by 33 countries, aims to enable more impactful R&D to quickly create safety and evaluation mechanisms, fostering a trustworthy, reliable, secure ecosystem where AI is used for the public good.
08.05.2025 19:29
On April 26, 100+ AI scientists convened at the Singapore Conference on AI to produce the just-released Singapore Consensus on Global AI Safety Research Priorities. 🧵
08.05.2025 19:29
Be sure to check out @asterainstitute.bsky.social's Residency program, now accepting applications for the Oct. 2025 cohort! The program supports "creative, high-agency scientists, engineers and entrepreneurs" in future-focused, high-impact, open-first innovation.
Learn more: astera.org/residency
04.04.2025 20:35
YouTube video by Future of Life Institute
Brain-like AGI and why it's Dangerous (with Steven Byrnes)
Listen to the episode now on your favourite podcast player, or here: www.youtube.com/watch?v=kJ0K...
04.04.2025 20:35
New on the FLI Podcast: @asterainstitute.bsky.social AGI safety researcher @stevebyrnes.bsky.social joins to dive into the hot topic of artificial general intelligence (AGI), including different paths to it - and why brain-like AGI would be dangerous. 🧵
04.04.2025 20:35
Foster transparent development through an AI industry whistleblower program and mandatory security incident reporting.
18.03.2025 17:57
Protect American workers and critical infrastructure from AI-related threats by tracking labor displacement and placing export controls on advanced AI models.
18.03.2025 17:57
Ensure AI systems are free from ideological agendas, and ban models with superhuman persuasive abilities.
18.03.2025 17:57
Protect the presidency from loss of control by mandating "off-switches", imposing a targeted moratorium on developing uncontrollable AI systems, and enforcing strong antitrust measures.
18.03.2025 17:57
We're sharing our recommendations for President Trump's AI Action Plan, focused on protecting U.S. interests in the era of rapidly advancing AI.
🧵 An overview of the measures we recommend:
18.03.2025 17:57
And be sure to read Keep the Future Human, available here: keepthefuturehuman.ai
13.03.2025 20:44
YouTube video by Future of Life Institute
Keep the Future Human (with Anthony Aguirre)
Tune in now at the link, or on your favourite podcast player, to hear how Anthony proposes we change course to secure a safe future with AI: www.youtube.com/watch?v=IqzB...
13.03.2025 20:44
New on the FLI Podcast!
FLI Executive Director Anthony Aguirre joins to discuss his new essay, "Keep the Future Human", which warns that the unchecked development of smarter-than-human, autonomous, general-purpose AI will almost inevitably lead to human replacement - but it doesn't have to:
13.03.2025 20:44
Read "Keep The Future Human": keepthefuturehuman.ai
11.03.2025 21:55
YouTube video by Siliconversations
The 4 Rules That Could Stop AI Before It's Too Late
www.youtube.com/watch?v=zeab...
11.03.2025 21:54
Siliconversations on YouTube released an animated explainer for FLI Executive Director Anthony Aguirre's new essay, "Keep The Future Human"!
Watch at the link in the replies for a breakdown of the risks from smarter-than-human AI - and Anthony's proposals to steer us toward a safer future:
11.03.2025 21:54
We're at a crossroads: continue down this dangerous path, or choose a future where AI enhances human potential, rather than threatening it.
07.03.2025 19:11
In "Keep The Future Human", Anthony explains why we must close the 'gates' to AGI - and instead develop beneficial, safe Tool AI built to serve us, not replace us.
07.03.2025 19:11
With the unchecked race to build smarter-than-human AI intensifying, humanity is on track to almost certainly lose control.
That's why FLI Executive Director Anthony Aguirre has published a new essay, "Keep The Future Human".
🧵 1/4
07.03.2025 19:11
Hi, I'm PERCEY. Let's talk about your future.
Out now: perceymademe.ai