New RAND report on an important (and messy) question: When should we actually worry about AI being used to design a pathogen? What's plausible now vs. near-term vs. later?
(1/12)
I helped convene two expert Delphi panels in AI + Bio to weigh in.
Full report:
www.rand.org/pubs/researc...
05.10.2025 15:45
If you are a funder interested in getting involved, get in touch - we would love to be a resource! We're increasingly working with other donors and are eager to help them find highly cost-effective opportunities.
02.10.2025 16:05
More resources are needed across these different theories of change.
Other reasons right now is a high-leverage moment: AI advancements have created better research tools, attracted researchers to the field, and increased policy opportunities.
02.10.2025 16:05
On building the field's capacity: scholarships, fellowships and educational initiatives like MATS and BlueDot Impact have built out impressive talent pipelines. MATS reports 80% of alumni are working on AI safety!
02.10.2025 16:05
On technical and policy safeguards: Redwood Research's work on loss-of-control scenarios, Theorem's work on developing formal verification methods, and several think tanks' work on technical AI governance show how progress is possible.
02.10.2025 16:05
The rest of the post describes experience from our ~10 years in this space, which shows philanthropy can move the needle.
On visibility into frontier AI R&D: we've supported benchmarks like Percy Liang's CyBench, public data work from @epochai.bsky.social, and research from @csetgeorgetown.bsky.social
02.10.2025 16:05
The upshot is that when other donors come to us for advice, we can recommend funding opportunities that we believe are *2-5x more cost-effective* than the marginal grants we make with Good Ventures' funding.
02.10.2025 16:05
There are four key reasons other funders are needed:
(1) There are highly cost-effective grants not in Good Ventures' scope
(2) AI policy needs a diverse funding base
(3) Other orgs can make bets we're missing
(4) Generally, AI safety and security is still underfunded!
02.10.2025 16:05
To begin: AI is rapidly advancing, which gives funders a narrow window to make a leveraged difference.
02.10.2025 16:05
People sometimes assume that Open Phil "has it covered" on philanthropy for AI safety & security. That's not right: some great opportunities really need other funders. Liz Givens and I make the case for why (and why now) in the final post of our series.
www.openphilanthropy.org/research/ai...
02.10.2025 16:05
Despite its importance and increasing salience, there are still relatively few funders in this space. Tomorrow we'll post Part 3, making the case for why now is an especially high-leverage time for more philanthropists to get involved.
01.10.2025 17:02
The third is capacity: we aim to grow and strengthen the fields of research and practice responding to these challenges. This includes support for fellowship programs, career development, conferences, and educational initiatives.
01.10.2025 17:02
The second is designing and implementing technological and policy safeguards. This includes both technical AI safety & security and a range of AI governance work:
01.10.2025 17:02
Our grantmaking approach in practice has three prongs.
The first is increasing visibility into cutting-edge AI R&D, with the goal of better understanding AI's capabilities and risks. This includes supporting AI model evals, threat modeling, and building public understanding.
01.10.2025 17:02
Today, we've scaled our work on AI safety and security significantly. Our work on risks focuses on worst cases, but we aim to strike a number of important balances:
01.10.2025 17:01
Ten years later, the landscape has changed drastically: AI is much more advanced and has risen hugely in geopolitical importance. There is greater empirical evidence and expert agreement about the catastrophic risks it could pose.
01.10.2025 17:01
The strategic landscape was very unclear when we first entered the field. As a result, we mostly funded early-stage research and field-building efforts to increase the number of people taking these questions seriously.
01.10.2025 17:01
Since 2015, seven years before the launch of ChatGPT, Open Phil has been funding efforts to address potential catastrophic risks from AI.
In a new post, Emily Oehlsen and I discuss our history in the area and our current strategy.
www.openphilanthropy.org/research/ou...
bsky.app/profile/alb...
01.10.2025 17:01
The next post, also co-authored with Emily Oehlsen, is on our approach to AI safety and security. It discusses our history in the area and our main grantmaking strategies for mitigating worst-case AI risks.
That will be out tomorrow, so look out for another long thread!
30.09.2025 16:37
But the vast majority of our work here is on market and policy failures around worst-case risks posed by AI.
We think these risks could be very grave, and that philanthropy is especially well-placed to contribute:
30.09.2025 16:37
The rest of the piece considers progress vs. safety in the context of AI, which we think is the most consequential technology currently being developed.
In recent years, our global health and abundance work has aimed to correct some market failures around the benefits of AI:
30.09.2025 16:37
In practice we look for pragmatic compromises. To address the chance that metascience work could increase catastrophic risks (e.g., better bio might make bioterrorism easier), we decided to target >=20% of that portfolio being net positive (but not optimized) for biosafety.
30.09.2025 16:37
There is also a deeper, fundamental tension which @michaelnielsen.bsky.social has written about: future discoveries are a black box. We don't know what we'll uncover (or whether it will be good or bad on net).
michaelnotebook.com/xriskbrief/...
30.09.2025 16:37
On the other hand, market pressures can lead to dangerous technologies that no single actor is adequately incentivized to prevent or deal with.
(This is by far our largest concern with AI - more on that in a moment!)
30.09.2025 16:37
But there can also be real tensions.
On the one hand, concerns about safety can go too far (see: nuclear energy), or become outdated or misapplied (see: IRBs, NEPA).
30.09.2025 16:37
Safety and security can also advance science and technology in various ways: by avoiding catastrophes that could derail progress altogether, increasing public adoption of tech, and making tech more useful.
(This can work the other way around, too!)
30.09.2025 16:37
There are several ways in which we think these dual commitments are mutually reinforcing.
Firstly, taking inspiration from @jasoncrawford.org's great post on the topic, we think safety itself is a form of progress:
"Towards a philosophy of safety: To fully understand progress, we have to understand risk"
blog.rootsofprogress.org/towards-a-p...
30.09.2025 16:37