Read Miranda's full piece here:
helentoner.substack.com/p/personaliz...
And the research brief she wrote that inspired this post:
cdt.org/insights/its...
I honestly don't know how big the potential harms of personalization are—I think it's possible we end up coping fine. But it's crazy to me how little mindshare this seems to be getting among people who think about unintended systemic effects of AI for a living.
Thinking about this, I keep coming back to two stories:
1) how FB allegedly trained models to identify moments when users felt worthless, then sold that data to advertisers
2) how we're already seeing chatbots addicting kids & adults and warping their sense of what's real
AI companies are starting to build more and more personalization into their products, but there's a huge personalization-sized hole in conversations about AI safety/trust/impacts.
Delighted to feature @mbogen.bsky.social on Rising Tide today, on what's being built and why we should care:
AI companies are starting to promise personalized assistants that “know you.” We’ve seen this playbook before — it didn’t end well.
In a guest post for @hlntnr.bsky.social’s Rising Tide, I explore how leading AI labs are rushing toward personalization without learning from social media’s mistakes
Full video here (21 min):
www.youtube.com/watch?v=dzwi...
The 3 disagreements are:
How far can the current paradigm go?
How much can AI improve AI?
Will future AI still basically be tools, or will they be something else?
Thanks to @farairesearch for the invitation to do this talk! Transcript here:
helentoner.substack.com/p/unresolved...
Been thinking recently about how central "AI is just a tool" is to disagreements about the future of AI. Is it? Will it continue to be?
Just posted a transcript from a talk where I go into this + a couple other key open qs/disagreements (not p(doom)!).
🔗 below, preview here:
2 weeks left on this open funding call on risks from internal deployments of frontier AI models—submissions are due June 30.
Expressions of interest only need to be 1-2 pages, so still time to write one up!
Full details: cset.georgetown.edu/wp-content/u...
Great report from Apollo Research on the underlying issues motivating this call for research ideas:
www.apolloresearch.ai/research/ai-...
More on our Foundational Research Grants program, including info on past grants we've funded:
cset.georgetown.edu/foundational...
💡Funding opportunity—share with your AI research networks💡
Internal deployments of frontier AI models are an underexplored source of risk. My program at @csetgeorgetown.bsky.social just opened a call for research ideas—EOIs due Jun 30.
Full details ➡️ cset.georgetown.edu/wp-content/u...
Summary ⬇️
Related reading:
Virginia Postrel's book (recommended!): amazon.com/FUTURE-ITS-E...
And a recent post from Brendan McCord: cosmosinstitute.substack.com/p/the-philos...
What else?
...But too many critics of those stasist ideas try to shove the underlying problems under the rug. With this post, I'm trying to help us hold both things at once.
Read the full post on AI Frontiers: www.ai-frontiers.org/articles/wer...
Or my substack: helentoner.substack.com/p/dynamism-v...
From The Future and Its Enemies by Virginia Postrel:
Dynamism: "a world of constant creation, discovery, and competition"
Stasis: "a regulated, engineered world... [that values] stability and control"
Too many AI safety policy ideas would push us toward stasis. But...
Criticizing the AI safety community as anti-tech or anti-risktaking has always seemed off to me. But there *is* plenty to critique. My latest on Rising Tide (xposted with @aifrontiers.bsky.social!) is on the 1998 book that helped me put it into words.
In short: it's about dynamism vs stasis.
Link to full post (subscribe!): helentoner.substack.com/p/2-big-ques...
New on Rising Tide, I break down 2 factors that will play a huge role in how much AI progress we see over the next couple years: verification & generalization.
How well these go will determine if AI just gets super good at math & coding vs. mastering many domains. Post excerpts:
Find them by searching "Stop the World" and "Cognitive Revolution" in your podcast app, or links here:
www.aspi.org.au/news/stop-wo...
www.cognitiverevolution.ai/helen-toner-...
Cognitive Revolution (🇺🇸): More insidery chat with @nathanlabenz.bsky.social getting into why nonproliferation is the wrong way to manage AI misuse, AI in military decision support systems, and a bunch of other stuff.
Clip on my beef with talk about the "offense-defense" balance in AI:
Stop the World (🇦🇺): Fun, wide-ranging conversation with David Wroe of @aspi-org.bsky.social on where we're at with AI, reasoning models, DeepSeek, scaling laws, etc etc.
Excerpt on whether we can "just" keep scaling language models:
2 new podcast interviews out in the last couple weeks—one for more of a general audience, one more inside baseball.
You can also pick your accent (I'm from Australia and sound that way when I talk to other Aussies, but mostly in professional settings I sound ~American)
cc @binarybits.bsky.social re hardening the physical world, @vitalik.ca re d/acc, @howard.fm re power concentration... plus many others I'm forgetting whose takes helped inspire this post. I hope this is a helpful framing for these tough tradeoffs.
I don't think this approach will obviously be enough, I don't think it's equivalent to "just open source everything YOLO," and I don't think any of my argument applies to tracking/managing the frontier or loss of control risks.
More in the full piece: helentoner.substack.com/p/nonprolife...
What to do instead? IMO the best option is to think in terms of "adaptation buffers," the gap between when we know a new misusable capability is coming and when it's actually widespread.
During that time, we need massive efforts to build as much societal resilience as we can.
The basic problem is that the kind of AI that's relevant here (for "misuse" risks) is going to get way cheaper & more accessible over time. This means that to indefinitely prevent/control its spread, your nonprolif regime will get more & more invasive and less & less effective.
Seems likely that at some point AI will make it much easier to hack critical infrastructure, create bioweapons, etc etc. Many argue that if so, a hardcore nonproliferation strategy is our only option.
Rising Tide launch week post 3/3 is on why I disagree 🧵
helentoner.substack.com/p/nonprolife...
The idea of AI "alignment" seems increasingly confused—is it about content moderation or controlling superintelligence? Is it basically solved or wide open?
Rising Tide #2 is about how we got here, and how the core problem is whether we can steer advanced AI at all.
Sharing the very first post on my new substack, about the weird boiling frog of AI timelines. Somehow expecting human-level systems in the 2030s is now a conservative take?
2 more posts to come this week, then a slower pace. Subscribe, tell your friends!
helentoner.substack.com/p/long-timel...
⭐️New Report⭐️
Using AI to make military decisions?
CSET’s @emmyprobasco.bsky.social, @hlntnr.bsky.social, Matthew Burtell, and @timrudner.bsky.social analyze the advantages and risks of AI for military decisionmaking. cset.georgetown.edu/publication/...