
Alexander Berger

@albrgr.bsky.social

CEO of Open Philanthropy

1,655 Followers  |  649 Following  |  291 Posts  |  Joined: 23.11.2024

Latest posts by albrgr.bsky.social on Bluesky

New RAND report on an important (and messy) question: When should we actually worry about AI being used to design a pathogen? What's plausible now vs. near-term vs. later?
(1/12)
I helped convene two expert Delphi panels in AI + Bio to weigh in.

Full report:
www.rand.org/pubs/researc...

05.10.2025 15:45 — 👍 5    🔁 3    💬 1    📌 0

If you are a funder interested in getting involved, get in touch - we would love to be a resource! We're increasingly working with other donors and are eager to help them find highly cost-effective opportunities.

02.10.2025 16:05 — 👍 0    🔁 0    💬 0    📌 0

More resources are needed across these different theories of change.

Other reasons why right now is an especially leveraged moment: AI advancements have created better research tools, attracted researchers to the field, and increased policy opportunities.

02.10.2025 16:05 — 👍 0    🔁 0    💬 1    📌 0

On building the field's capacity: scholarships, fellowships and educational initiatives like MATS and BlueDot Impact have built out impressive talent pipelines. MATS reports 80% of alumni are working on AI safety!

02.10.2025 16:05 — 👍 1    🔁 0    💬 1    📌 0

On technical and policy safeguards: Redwood Research's work on loss-of-control scenarios, Theorem's work on developing formal verification methods, and several think tanks' work on technical AI governance show how progress is possible.

02.10.2025 16:05 — 👍 0    🔁 0    💬 1    📌 0

The rest of the post describes lessons from our ~10 years in this space, which show that philanthropy can move the needle.

On visibility into frontier AI R&D: we've supported benchmarks like Percy Liang's CyBench, public data work from @epochai.bsky.social, and research from @csetgeorgetown.bsky.social

02.10.2025 16:05 — 👍 0    🔁 0    💬 1    📌 0

The upshot is that when other donors come to us for advice, we can recommend funding opportunities that we believe are *2-5x more cost-effective* than the marginal grants we make with Good Ventures' funding.

02.10.2025 16:05 — 👍 0    🔁 0    💬 1    📌 0

There are four key reasons other funders are needed:

(1) There are highly cost-effective grants not in Good Ventures' scope
(2) AI policy needs a diverse funding base
(3) Other orgs can make bets we're missing
(4) Generally, AI safety and security is still underfunded!

02.10.2025 16:05 — 👍 0    🔁 0    💬 1    📌 0

To begin: AI is rapidly advancing, which gives funders a narrow window to make a leveraged difference.

02.10.2025 16:05 — 👍 0    🔁 0    💬 1    📌 0

People sometimes assume that Open Phil "has it covered" on philanthropy for AI safety & security. That's not right: some great opportunities really need other funders. Liz Givens and I make the case for why (and why now) in the final post of our series.
www.openphilanthropy.org/research/ai...

02.10.2025 16:05 — 👍 0    🔁 0    💬 1    📌 0

Now's the time for other funders to get involved in AI safety and security:
-AI advances have created more great concrete opportunities
-Recent years show progress is possible
-Policy needs diverse funding; other funders can beat Good Ventures' marginal $ by 2-5x🧡
bsky.app/profile/alb...

02.10.2025 16:05 — 👍 2    🔁 0    💬 1    📌 0

Despite its importance and increasing salience, there are still relatively few funders in this space. Tomorrow we'll post Part 3, making the case for why now is an especially high-leverage time for more philanthropists to get involved.

01.10.2025 17:02 — 👍 0    🔁 0    💬 0    📌 0

Full post here again: www.openphilanthropy.org/research/ou...

01.10.2025 17:02 — 👍 0    🔁 0    💬 1    📌 0

The third is capacity: we aim to grow and strengthen the fields of research and practice responding to these challenges. This includes support for fellowship programs, career development, conferences, and educational initiatives.

01.10.2025 17:02 — 👍 0    🔁 0    💬 1    📌 0

The second is designing and implementing technological and policy safeguards. This includes both technical AI safety & security and a range of AI governance work:

01.10.2025 17:02 — 👍 0    🔁 0    💬 1    📌 0

Our grantmaking approach in practice has three prongs.

The first is increasing visibility into cutting-edge AI R&D, with the goal of better understanding AI's capabilities and risks. This includes supporting AI model evals, threat modeling, and building public understanding.

01.10.2025 17:02 — 👍 0    🔁 0    💬 1    📌 0

Today, we've scaled our work on AI safety and security significantly. Our work on risks focuses on worst-case scenarios, but we aim to strike a number of important balances:

01.10.2025 17:01 — 👍 0    🔁 0    💬 1    📌 0

Ten years later, the landscape has changed drastically: AI is much more advanced and has risen hugely in geopolitical importance. There is greater empirical evidence and expert agreement about the catastrophic risks it could pose.

01.10.2025 17:01 — 👍 0    🔁 0    💬 1    📌 0

The strategic landscape was very unclear when we first entered the field. As a result, we mostly funded early-stage research and field-building efforts to increase the number of people taking these questions seriously.

01.10.2025 17:01 — 👍 0    🔁 0    💬 1    📌 0

Since 2015, seven years before the launch of ChatGPT, Open Phil has been funding efforts to address potential catastrophic risks from AI.

In a new post, Emily Oehlsen and I discuss our history in the area and our current strategy.
www.openphilanthropy.org/research/ou...
bsky.app/profile/alb...

01.10.2025 17:01 — 👍 2    🔁 1    💬 1    📌 0

The next post, also co-authored with Emily Oehlsen, is on our approach to AI safety and security. It discusses our history in the area and our main grantmaking strategies for mitigating worst-case AI risks.

That will be out tomorrow, so look out for another long thread!

30.09.2025 16:37 — 👍 1    🔁 0    💬 0    📌 0

Full post here again: www.openphilanthropy.org/research/wh...

30.09.2025 16:37 — 👍 1    🔁 0    💬 1    📌 0

But the vast majority of our work here is on market and policy failures around worst-case risks posed by AI.

We think these risks could be very grave, and that philanthropy is especially well-placed to contribute:

30.09.2025 16:37 — 👍 0    🔁 0    💬 1    📌 0

The rest of the piece considers progress vs. safety in the context of AI, which we think is the most consequential technology currently being developed.

In recent years, our global health and abundance work has aimed to correct some market failures around the benefits of AI:

30.09.2025 16:37 — 👍 1    🔁 0    💬 1    📌 0

In practice we look for pragmatic compromises. To address the chance that metascience work could increase catastrophic risks (e.g., better bio might make bioterrorism easier), we decided to target >=20% of that portfolio being net positive (but not optimized) for biosafety.

30.09.2025 16:37 — 👍 1    🔁 0    💬 1    📌 0

There is also a deeper, fundamental tension which @michaelnielsen.bsky.social has written about: future discoveries are a black box. We don't know what we'll uncover (or whether it will be good or bad on net).

michaelnotebook.com/xriskbrief/...

30.09.2025 16:37 — 👍 1    🔁 0    💬 1    📌 0

On the other hand, market pressures can lead to dangerous technologies that no single actor is adequately incentivized to prevent or deal with.

(This is by far our largest concern with AI — more on that in a moment!)

30.09.2025 16:37 — 👍 1    🔁 0    💬 1    📌 0

But there can also be real tensions.

On the one hand, concerns about safety can go too far (see: nuclear energy), or become outdated or misapplied (see: IRBs, NEPA).

30.09.2025 16:37 — 👍 1    🔁 0    💬 1    📌 0

Safety and security can also advance science and technology in various ways: by avoiding catastrophes that could derail progress altogether, increasing public adoption of tech, and making tech more useful.

(This can work the other way around, too!)

30.09.2025 16:37 — 👍 1    🔁 0    💬 1    📌 0

There are several ways in which we think these dual commitments are mutually reinforcing.

Firstly, taking inspiration from @jasoncrawford.org's great post on the topic, we think safety itself is a form of progress:

blog.rootsofprogress.org/towards-a-p...

30.09.2025 16:37 — 👍 2    🔁 0    💬 1    📌 0
