
Future of Life Institute

@futureoflife.org.bsky.social

We work on reducing extreme risks and steering transformative technologies to benefit humanity. Learn more: futureoflife.org

721 Followers  |  26 Following  |  89 Posts  |  Joined: 15.11.2024

Latest posts by futureoflife.org on Bluesky

Preview
2025 AI Safety Index - Future of Life Institute
The Summer 2025 edition of our AI Safety Index, in which AI experts rate leading AI companies on key safety and security domains.

👉 As reviewer Stuart Russell put it, "Some companies are making token efforts, but none are doing enough… This is not a problem for the distant future; it's a problem for today."

🔗 Read the full report now: futureoflife.org/ai-safety-in...

18.07.2025 20:05 — 👍 0    🔁 1    💬 1    📌 0

6️⃣ OpenAI secured second place, ahead of Google DeepMind.

7️⃣ Chinese AI firms Zhipu AI and DeepSeek received failing overall grades.

🧵

18.07.2025 20:05 — 👍 0    🔁 0    💬 1    📌 0

3️⃣ Only 3 out of 7 firms report substantive testing for dangerous capabilities linked to large-scale risks such as bio- or cyber-terrorism (Anthropic, OpenAI, and Google DeepMind).

4️⃣ Whistleblowing policy transparency remains a weak spot.

5️⃣ Anthropic received the best overall grade (C+).

🧵

18.07.2025 20:05 — 👍 1    🔁 0    💬 1    📌 0

Key takeaways:
1️⃣ The AI industry is fundamentally unprepared for its own stated goals.

2️⃣ Capabilities are accelerating faster than risk-management practice, and the gap between firms is widening.

🧵

18.07.2025 20:05 — 👍 1    🔁 0    💬 1    📌 0
Post image

‼️📝 Our new AI Safety Index is out!

➡️ Following our 2024 index, 6 independent AI experts rated leading AI companies - OpenAI, Anthropic, Meta, Google DeepMind, xAI, DeepSeek, and Zhipu AI - across critical safety and security domains.

So what were the results? 🧵👇

18.07.2025 20:05 — 👍 3    🔁 0    💬 2    📌 2
Video thumbnail

‼️ Congress is considering a 10-year ban on state AI laws, blocking action on risks like job loss, surveillance, disinformation, and loss of control.

It's a huge win for Big Tech - and a big risk for families.

✍️ Add your name and say no to the federal block on AI safeguards: FutureOfLife.org/Action

12.06.2025 18:44 — 👍 2    🔁 1    💬 1    📌 0
Preview
Understanding AI Agents: Time Horizons, Sycophancy, and Future Risks (with Zvi Mowshowitz)
On this episode, Zvi Mowshowitz joins me to discuss sycophantic AIs, bottlenecks limiting autonomous AI agents, and the true utility of benchmarks in measuring progress. We then turn to time horizons of AI agents, the impact of automating scientific research, and constraints on scaling inference compute. Zvi also addresses humanity's uncertain AI-driven future, the unique features setting AI apart from other technologies, and AI's growing influence in financial trading. You can follow Zvi's excellent blog here: https://thezvi.substack.com

Timestamps:
00:00:00 Preview and introduction
00:02:01 Sycophantic AIs
00:07:28 Bottlenecks for AI agents
00:21:26 Are benchmarks useful?
00:32:39 AI agent time horizons
00:44:18 Impact of automating research
00:53:00 Limits to scaling inference compute
01:02:51 Will the future go well for humanity?
01:12:22 A good plan for safe AI
01:26:03 What makes AI different?
01:31:29 AI in trading

bit.ly/434PInO

09.05.2025 18:41 — 👍 1    🔁 0    💬 0    📌 0
Video thumbnail

🆕 📻 New on the FLI podcast, Zvi Mowshowitz (@thezvi.bsky.social) joins to discuss:

- The recent hot topic of sycophantic AI
- Time horizons of AI agents
- AI in finance and scientific research
- How AI differs from other technology
And more.

🔗 Tune in to the full episode now at the link below:

09.05.2025 18:41 — 👍 5    🔁 0    💬 1    📌 0
Preview
The Singapore Consensus on Global AI Safety Research Priorities
Building a Trustworthy, Reliable and Secure AI Ecosystem. Read the full report online, or download the PDF.

🔗 Read more about these AI safety research priorities: aisafetypriorities.org

08.05.2025 19:29 — 👍 3    🔁 1    💬 1    📌 0

➡️ The Singapore Consensus, building on the International AI Safety Report backed by 33 countries, aims to enable more impactful R&D to quickly create safety and evaluation mechanisms, fostering a trustworthy, reliable, secure ecosystem where AI is used for the public good.

08.05.2025 19:29 — 👍 4    🔁 0    💬 2    📌 2
Post image

‼️ On April 26, 100+ AI scientists convened at the Singapore Conference on AI to produce the just-released Singapore Consensus on Global AI Safety Research Priorities. 🧵⬇️

08.05.2025 19:29 — 👍 15    🔁 7    💬 3    📌 1
Post image

➕ Be sure to check out @asterainstitute.bsky.social's Residency program, now accepting applications for the Oct. 2025 cohort! The program supports "creative, high-agency scientists, engineers and entrepreneurs" in future-focused, high-impact, open-first innovation.

Learn more: astera.org/residency

04.04.2025 20:35 — 👍 1    🔁 0    💬 0    📌 0
Brain-like AGI and why it's Dangerous (with Steven Byrnes)
YouTube video by Future of Life Institute

🔗 Listen to the episode now on your favourite podcast player, or here: www.youtube.com/watch?v=kJ0K...

04.04.2025 20:35 — 👍 2    🔁 0    💬 1    📌 0
Video thumbnail

📺 📻 New on the FLI Podcast: @asterainstitute.bsky.social artificial general intelligence (AGI) safety researcher @stevebyrnes.bsky.social joins for a discussion diving into the hot topic of AGI, including different paths to it - and why brain-like AGI would be dangerous. 🧵👇

04.04.2025 20:35 — 👍 5    🔁 4    💬 2    📌 0
Preview
Recommendations for the U.S. AI Action Plan - Future of Life Institute
The Future of Life Institute proposal for President Trump's AI Action Plan.

🔗 Read our proposal in full: futureoflife.org/document/rec...

18.03.2025 17:57 — 👍 1    🔁 0    💬 0    📌 0

💪 Foster transparent development through an AI industry whistleblower program and mandatory security incident reporting.

18.03.2025 17:57 — 👍 1    🔁 0    💬 1    📌 0

🧰 Protect American workers and critical infrastructure from AI-related threats by tracking labor displacement and placing export controls on advanced AI models.

18.03.2025 17:57 — 👍 0    🔁 0    💬 1    📌 0

🚫 Ensure AI systems are free from ideological agendas, and ban models with superhuman persuasive abilities.

18.03.2025 17:57 — 👍 0    🔁 0    💬 1    📌 0

🚨 Protect the presidency from loss of control by mandating "off-switches", imposing a targeted moratorium on developing uncontrollable AI systems, and enforcing strong antitrust measures.

18.03.2025 17:57 — 👍 0    🔁 0    💬 1    📌 0
Post image

🇺🇸 We're sharing our recommendations for President Trump's AI Action Plan, focused on protecting U.S. interests in the era of rapidly advancing AI.

🧵 An overview of the measures we recommend 👇

18.03.2025 17:57 — 👍 0    🔁 0    💬 1    📌 0

🔗 And be sure to read Keep the Future Human, available here: keepthefuturehuman.ai

13.03.2025 20:44 — 👍 2    🔁 0    💬 0    📌 0
Keep the Future Human (with Anthony Aguirre)
YouTube video by Future of Life Institute

🔗 Tune in now at the link, or on your favourite podcast player, to hear how Anthony proposes we change course to secure a safe future with AI: www.youtube.com/watch?v=IqzB...

13.03.2025 20:44 — 👍 4    🔁 1    💬 1    📌 0
Video thumbnail

📻 New on the FLI Podcast! 👇

➡️ FLI Executive Director Anthony Aguirre joins to discuss his new essay, "Keep the Future Human", which warns that the unchecked development of smarter-than-human, autonomous, general-purpose AI will almost inevitably lead to human replacement - but it doesn't have to:

13.03.2025 20:44 — 👍 4    🔁 4    💬 1    📌 1

Read "Keep The Future Human": keepthefuturehuman.ai

11.03.2025 21:55 — 👍 3    🔁 0    💬 0    📌 0
The 4 Rules That Could Stop AI Before It's Too Late
YouTube video by Siliconversations

www.youtube.com/watch?v=zeab...

11.03.2025 21:54 — 👍 5    🔁 1    💬 1    📌 0
Post image

📢 ❗ Siliconversations on YouTube released an animated explainer for FLI Executive Director Anthony Aguirre's new essay, "Keep The Future Human"!

🎥 Watch at the link in the replies for a breakdown of the risks from smarter-than-human AI - and Anthony's proposals to steer us toward a safer future:

11.03.2025 21:54 — 👍 3    🔁 1    💬 1    📌 1
Preview
Home - Keep The Future Human
Humanity is on the brink of developing artificial general intelligence that exceeds our own. It's time to close the gates on AGI and superintelligence... before we lose control of our future.

🔗 Read and please share the "Keep The Future Human" essay in full, explore the interactive summary, or watch the brief explainer video, at: keepthefuturehuman.ai

07.03.2025 19:11 — 👍 9    🔁 1    💬 0    📌 0

We're at a crossroads: continue down this dangerous path, or choose a future where AI enhances human potential, rather than threatening it.

07.03.2025 19:11 — 👍 3    🔁 1    💬 1    📌 0

In "Keep The Future Human", Anthony explains why we must close the 'gates' to AGI - and instead develop beneficial, safe Tool AI built to serve us, not replace us.

07.03.2025 19:11 — 👍 1    🔁 0    💬 1    📌 0
Post image

With the unchecked race to build smarter-than-human AI intensifying, humanity is on track to almost certainly lose control.

That's why FLI Executive Director Anthony Aguirre has published a new essay, "Keep The Future Human".

🧵 1/4

07.03.2025 19:11 — 👍 11    🔁 8    💬 1    📌 2
