
Helen Toner

@hlntnr.bsky.social

AI, national security, China. Part of the founding team at @csetgeorgetown.bsky.social (opinions my own). Author of Rising Tide on substack: helentoner.substack.com

1,005 Followers  |  594 Following  |  51 Posts  |  Joined: 23.10.2023

Latest posts by hlntnr.bsky.social on Bluesky

Preview
Personalized AI is rerunning the worst part of social media's playbook: The incentives, risks, and complications of AI that knows you

Read Miranda's full piece here:
helentoner.substack.com/p/personaliz...

And the research brief she wrote that inspired this post:
cdt.org/insights/its...

22.07.2025 00:49 — 👍 4    🔁 0    💬 0    📌 0

I honestly don't know how big the potential harms of personalization are—I think it's possible we end up coping fine. But it's crazy to me how little mindshare this seems to be getting among people who think about unintended systemic effects of AI for a living.

22.07.2025 00:49 — 👍 3    🔁 0    💬 1    📌 0

Thinking about this, I keep coming back to two stories—

1) how FB allegedly trained models to identify moments when users felt worthless, then sold that data to advertisers

2) how we're already seeing chatbots addicting kids & adults and warping their sense of what's real

22.07.2025 00:49 — 👍 1    🔁 0    💬 1    📌 0

AI companies are starting to build more and more personalization into their products, but there's a huge personalization-sized hole in conversations about AI safety/trust/impacts.

Delighted to feature @mbogen.bsky.social on Rising Tide today, on what's being built and why we should care:

22.07.2025 00:49 — 👍 11    🔁 4    💬 1    📌 0
Preview
Personalized AI is rerunning the worst part of social media's playbook: The incentives, risks, and complications of AI that knows you

AI companies are starting to promise personalized assistants that “know you.” We’ve seen this playbook before — it didn’t end well.

In a guest post for @hlntnr.bsky.social’s Rising Tide, I explore how leading AI labs are rushing toward personalization without learning from social media’s mistakes

21.07.2025 18:32 — 👍 14    🔁 5    💬 0    📌 3
YouTube video by FAR.AI: Helen Toner - Unresolved Debates on the Future of AI [Tech Innovations AI Policy]

Full video here (21 min):
www.youtube.com/watch?v=dzwi...

30.06.2025 20:40 — 👍 3    🔁 0    💬 0    📌 1
Preview
Unresolved debates about the future of AI: How far the current paradigm can go, AI improving AI, and whether thinking of AI as a tool will keep making sense

The 3 disagreements are:
1. How far can the current paradigm go?
2. How much can AI improve AI?
3. Will future AI still basically be tools, or will they be something else?

Thanks to @farairesearch for the invitation to do this talk! Transcript here:
helentoner.substack.com/p/unresolved...

30.06.2025 20:40 — 👍 6    🔁 0    💬 1    📌 0

Been thinking recently about how central "AI is just a tool" is to disagreements about the future of AI. Is it? Will it continue to be?

Just posted a transcript from a talk where I go into this + a couple other key open qs/disagreements (not p(doom)!).

🔗 below, preview here:

30.06.2025 20:40 — 👍 12    🔁 3    💬 1    📌 0

2 weeks left on this open funding call on risks from internal deployments of frontier AI models—submissions are due June 30.

Expressions of interest only need to be 1-2 pages, so still time to write one up!

Full details: cset.georgetown.edu/wp-content/u...

16.06.2025 18:37 — 👍 7    🔁 3    💬 0    📌 0
Preview
AI Behind Closed Doors: a Primer on The Governance of Internal Deployment — Apollo Research
In the race toward increasingly capable artificial intelligence (AI) systems, much attention has been focused on how these systems interact with the public. However, a critical blind spot exists in ou...

Great report from Apollo Research on the underlying issues motivating this call for research ideas:
www.apolloresearch.ai/research/ai-...

19.05.2025 16:59 — 👍 3    🔁 0    💬 0    📌 0
Preview
Foundational Research Grants | Center for Security and Emerging Technology
Foundational Research Grants (FRG) supports the exploration of foundational technical topics that relate to the potential national security implications of AI over the long term. In contrast to most C...

More on our Foundational Research Grants program, including info on past grants we've funded:
cset.georgetown.edu/foundational...

19.05.2025 16:59 — 👍 1    🔁 0    💬 1    📌 0

💡Funding opportunity—share with your AI research networks💡

Internal deployments of frontier AI models are an underexplored source of risk. My program at @csetgeorgetown.bsky.social just opened a call for research ideas—EOIs due Jun 30.

Full details ➡️ cset.georgetown.edu/wp-content/u...

Summary ⬇️

19.05.2025 16:59 — 👍 9    🔁 5    💬 1    📌 1
Preview
The Future and Its Enemies: The Growing Conflict Over Creativity, Enterprise, and Progress, by Virginia Postrel (Amazon.com)

Related reading:

Virginia Postrel's book (recommended!): amazon.com/FUTURE-ITS-E...
And a recent post from Brendan McCord: cosmosinstitute.substack.com/p/the-philos...
What else?

12.05.2025 18:21 — 👍 4    🔁 1    💬 0    📌 0
Preview
We’re Arguing About AI Safety Wrong | AI Frontiers
Helen Toner, May 12, 2025 — Dynamism vs. stasis is a clearer lens for criticizing controversial AI safety prescriptions.

...But too many critics of those stasist ideas try to shove the underlying problems under the rug. With this post, I'm trying to help us hold both things at once.

Read the full post on AI Frontiers: www.ai-frontiers.org/articles/wer...
Or my substack: helentoner.substack.com/p/dynamism-v...

12.05.2025 18:21 — 👍 4    🔁 2    💬 1    📌 0

From The Future and Its Enemies by Virginia Postrel:
Dynamism: "a world of constant creation, discovery, and competition"
Stasis: "a regulated, engineered world... [that values] stability and control"

Too many AI safety policy ideas would push us toward stasis. But...

12.05.2025 18:21 — 👍 0    🔁 0    💬 1    📌 0

Criticizing the AI safety community as anti-tech or anti-risktaking has always seemed off to me. But there *is* plenty to critique. My latest on Rising Tide (xposted with @aifrontiers.bsky.social!) is on the 1998 book that helped me put it into words.

In short: it's about dynamism vs stasis.

12.05.2025 18:21 — 👍 4    🔁 3    💬 1    📌 0
Preview
2 Big Questions for AI Progress in 2025-2026: On how good AI might—or might not—get at tasks beyond math & coding

Link to full post (subscribe!): helentoner.substack.com/p/2-big-ques...

23.04.2025 15:46 — 👍 2    🔁 0    💬 0    📌 0

New on Rising Tide, I break down 2 factors that will play a huge role in how much AI progress we see over the next couple years: verification & generalization.

How well these go will determine if AI just gets super good at math & coding vs. mastering many domains. Post excerpts:

23.04.2025 15:46 — 👍 7    🔁 2    💬 1    📌 0
Preview
Stop the World: The road to artificial general intelligence, with Helen Toner

Find them by searching "Stop the World" and "Cognitive Revolution" in your podcast app, or links here:
www.aspi.org.au/news/stop-wo...
www.cognitiverevolution.ai/helen-toner-...

22.04.2025 01:26 — 👍 0    🔁 0    💬 0    📌 0

Cognitive Revolution (🇺🇸): More insidery chat with @nathanlabenz.bsky.social getting into why nonproliferation is the wrong way to manage AI misuse, AI in military decision support systems, and a bunch of other stuff.

Clip on my beef with talk about the "offense-defense" balance in AI:

22.04.2025 01:26 — 👍 0    🔁 0    💬 1    📌 0

Stop the World (🇦🇺): Fun, wide-ranging conversation with David Wroe of @aspi-org.bsky.social on where we're at with AI, reasoning models, DeepSeek, scaling laws, etc etc.

Excerpt on whether we can "just" keep scaling language models:

22.04.2025 01:26 — 👍 1    🔁 0    💬 1    📌 0

2 new podcast interviews out in the last couple weeks—one for more of a general audience, one more inside baseball.

You can also pick your accent (I'm from Australia and sound that way when I talk to other Aussies, but mostly in professional settings I sound ~American)

22.04.2025 01:26 — 👍 2    🔁 0    💬 1    📌 0

cc @binarybits.bsky.social re hardening the physical world, @vitalik.ca re d/acc, @howard.fm re power concentration... plus many others I'm forgetting whose takes helped inspire this post. I hope this is a helpful framing for these tough tradeoffs.

05.04.2025 18:09 — 👍 2    🔁 0    💬 0    📌 0
Preview
Nonproliferation is the wrong approach to AI misuse: Making the most of “adaptation buffers” is a more realistic and less authoritarian strategy

I don't think this approach will obviously be enough, I don't think it's equivalent to "just open source everything YOLO," and I don't think any of my argument applies to tracking/managing the frontier or loss of control risks.

More in the full piece: helentoner.substack.com/p/nonprolife...

05.04.2025 18:09 — 👍 1    🔁 0    💬 1    📌 0

What to do instead? IMO the best option is to think in terms of "adaptation buffers," the gap between when we know a new misusable capability is coming and when it's actually widespread.

During that time, we need massive efforts to build as much societal resilience as we can.

05.04.2025 18:09 — 👍 2    🔁 0    💬 1    📌 0

The basic problem is that the kind of AI that's relevant here (for "misuse" risks) is going to get way cheaper & more accessible over time. This means that to indefinitely prevent/control its spread, your nonprolif regime will get more & more invasive and less & less effective.

05.04.2025 18:09 — 👍 1    🔁 0    💬 1    📌 0

Seems likely that at some point AI will make it much easier to hack critical infrastructure, create bioweapons, etc etc. Many argue that if so, a hardcore nonproliferation strategy is our only option.

Rising Tide launch week post 3/3 is on why I disagree 🧵

helentoner.substack.com/p/nonprolife...

05.04.2025 18:09 — 👍 11    🔁 4    💬 2    📌 0

The idea of AI "alignment" seems increasingly confused—is it about content moderation or controlling superintelligence? Is it basically solved or wide open?

Rising Tide #2 is about how we got here, and how the core problem is whether we can steer advanced AI at all.

03.04.2025 14:49 — 👍 4    🔁 0    💬 0    📌 1
Preview
"Long" timelines to advanced AI have gotten crazy short The prospect of reaching human-level AI in the 2030s should be jarring

Sharing the very first post on my new substack, about the weird boiling frog of AI timelines. Somehow expecting human-level systems in the 2030s is now a conservative take?

2 more posts to come this week, then a slower pace. Subscribe, tell your friends!

helentoner.substack.com/p/long-timel...

01.04.2025 16:18 — 👍 9    🔁 1    💬 0    📌 1
Preview
AI for Military Decision-Making | Center for Security and Emerging Technology
Artificial intelligence is reshaping military decision-making. This concise overview explores how AI-enabled systems can enhance situational awareness and accelerate critical operational decisions—eve...

⭐️New Report⭐️

Using AI to make military decisions?

CSET’s @emmyprobasco.bsky.social, @hlntnr.bsky.social, Matthew Burtell, and @timrudner.bsky.social analyze the advantages and risks of AI for military decisionmaking. cset.georgetown.edu/publication/...

01.04.2025 15:01 — 👍 1    🔁 2    💬 0    📌 1
