Jane Goodall with monarch butterfly scarf
“It actually doesn’t take much to be considered a difficult woman. That’s why there are so many of us.”
― Jane Goodall
💙 RIP to a real one. My childhood hero
@timrudner.bsky.social
Assistant Professor, University of Toronto. Junior Research Fellow, Trinity College, Cambridge. AI Fellow, Georgetown University. Probabilistic Machine Learning, AI Safety & AI Governance. Prev: Oxford, Yale, UC Berkeley, NYU. https://timrudner.com
Today's Lawfare Daily is a @scalinglaws.bsky.social episode, produced with @utexaslaw.bsky.social, where @kevintfrazier.bsky.social spoke to @gushurwitz.bsky.social and @neilchilson.bsky.social about how academics can positively contribute to the work of AI governance.
01.10.2025 13:39
Beautiful paper!
01.10.2025 12:39
It was a pleasure speaking at @yaleisp.bsky.social yesterday!
26.09.2025 10:50
Tomorrow’s ISP Ideas Lunch update:
We’re excited to host @timrudner.bsky.social (U. Toronto & Vector Institute). He’ll speak on “formal guarantees” in AI + key AI safety concepts!
Our new lab for Human & Machine Intelligence is officially open at Princeton University!
Consider applying for a PhD or Postdoc position, either through Computer Science or Psychology. You can register interest on our new website lake-lab.github.io (1/2)
Democracy rewired is a 5-part series exploring how AI is reshaping democratic values — from individual agency to global sovereignty. The big question: can AI strengthen democracy?
25.08.2025 18:55
I'm thrilled to join the Schwartz Reisman Institute for Technology and Society as a Faculty Affiliate!
16.08.2025 16:14
Congrats! CDS PhD Student Vlad Sobal, Courant PhD Student Kevin Zhang, CDS Faculty Fellow @timrudner.bsky.social, CDS Profs @kyunghyuncho.bsky.social and @yann-lecun.bsky.social, and Brown's Randall Balestriero won the Best Paper Award at ICML's 'Building Physically Plausible World Models' Workshop!
12.08.2025 16:12
CDS Faculty Fellow @timrudner.bsky.social served as general chair for the 7th Symposium on Advances in Approximate Bayesian Inference, held in April alongside ICLR 2025.
The symposium explored connections between probabilistic machine learning and AI safety, NLP, RL, and AI for science.
Congratulations again!
14.07.2025 17:13
Congratulations, Umang!
15.05.2025 16:26
CDS Faculty Fellow Tim G. J. Rudner (@timrudner.bsky.social) and colleagues at CSET — @emmyprobasco.bsky.social, @hlntnr.bsky.social, and Matthew Burtell — examine responsible AI deployment in military decision-making.
Read our post on their policy brief: nyudatascience.medium.com/ai-in-milita...
The result in this paper I'm most excited about:
We showed that planning in world model latent space allows successful zero-shot generalization to *new* tasks!
Project website: latent-planning.github.io
Paper: arxiv.org/abs/2502.14819
#1: Can Transformers Learn Full Bayesian Inference In Context? with @arikreuter.bsky.social @timrudner.bsky.social @vincefort.bsky.social
01.05.2025 12:36
Very excited that our work (together with my PhD student @gbarto.bsky.social and our collaborator Dmitry Vetrov) was recognized with a Best Paper Award at #AABI2025!
#ML #SDE #Diffusion #GenAI 🤖🧠
Congratulations to the #AABI2025 Proceedings Track Best Paper Award recipients!
29.04.2025 20:55
Congratulations to the #AABI2025 Workshop Track Outstanding Paper Award recipients!
29.04.2025 20:54
We concluded #AABI2025 with a panel discussion on
**The Role of Probabilistic Machine Learning in the Age of Foundation Models and Agentic AI**
Thanks to Emtiyaz Khan, Luhuan Wu, and @jamesrequeima.bsky.social for participating!
@jamesrequeima.bsky.social gave the third invited talk of the day at #AABI2025!
**LLM Processes**
Luhuan Wu is giving the second invited talk of the day at #AABI2025!
**Bayesian Inference for Invariant Feature Discovery from Multi-Environment Data**
Watch it on our livestream: timrudner.com/aabi2025!
Emtiyaz Khan is giving the first invited talk of the day at #AABI2025!
29.04.2025 01:55
We just kicked off #AABI2025 at NTU in Singapore!
We're livestreaming the talks here: timrudner.com/aabi2025!
Schedule: approximateinference.org/schedule/
#ICLR2025 #ProbabilisticML
Make sure to get your tickets to #AABI2025 if you are in Singapore on April 29 (just after #ICLR2025) and interested in probabilistic ML, inference, and decision-making!
Tickets (free but limited!): lu.ma/5syzr79m
More info: approximateinference.org
#ProbabilisticML #Bayes #UQ #ICLR2025 #AABI2025
CDS Faculty Fellow @timrudner.bsky.social, with @minanrn.bsky.social & Christian Schoeberl, analyzed AI explainability evals, finding a focus on system correctness over real-world effectiveness. They call for the creation of standards for AI safety evaluations.
cset.georgetown.edu/publication/...
A great Pew Research survey:
"How the U.S. Public and AI Experts View Artificial Intelligence"
Everyone working in ML should read this and ask themselves why experts and non-experts have such divergent views about the potential of AI to have a positive impact.
www.pewresearch.org/internet/202...
This is an excellent article!
Steering foundation models towards trustworthy behaviors is one of the most important research directions today.
Helen is a deep and rigorous thinker, and you should definitely subscribe to her Substack!
I'm super excited to see our #CSET report on **AI-enabled military decision support systems** being released today!
Great work by @emmyprobasco.bsky.social, @hlntnr.bsky.social, and Matthew Burtell!
Evaluations of AI explainability claims are not clear cut.
@minanrn.bsky.social, @timrudner.bsky.social & Christian Schoeberl show that in the domain of recommender systems – where explanations are key – there are different notions of what explainability means: cset.georgetown.edu/publication/...