
Tim G. J. Rudner

@timrudner.bsky.social

Assistant Professor, University of Toronto. Junior Research Fellow, Trinity College, Cambridge. AI Fellow, Georgetown University. Probabilistic Machine Learning, AI Safety & AI Governance. Prev: Oxford, Yale, UC Berkeley, NYU. https://timrudner.com

5,558 Followers  |  563 Following  |  85 Posts  |  Joined: 03.01.2024

Latest posts by timrudner.bsky.social on Bluesky

Jane Goodall with monarch butterfly scarf


“It actually doesn’t take much to be considered a difficult woman. That’s why there are so many of us.”
― Jane Goodall

💙 RIP to a real one. My childhood hero

02.10.2025 02:56 — 👍 32669    🔁 6881    💬 481    📌 322
The Ivory Tower and AI (Live from IHS's Technology, Liberalism, and Abundance Conference)
YouTube video by Lawfare

Today's Lawfare Daily is a @scalinglaws.bsky.social episode, produced with @utexaslaw.bsky.social, where @kevintfrazier.bsky.social spoke to @gushurwitz.bsky.social and @neilchilson.bsky.social about how academics can positively contribute to the work of AI governance.

01.10.2025 13:39 — 👍 7    🔁 3    💬 0    📌 0

Beautiful paper!

01.10.2025 12:39 — 👍 1    🔁 0    💬 0    📌 0

It was a pleasure speaking at @yaleisp.bsky.social yesterday!

26.09.2025 10:50 — 👍 1    🔁 0    💬 0    📌 0

Tomorrow’s ISP Ideas Lunch update:

We’re excited to host @timrudner.bsky.social (U. Toronto & Vector Institute). He’ll speak on “formal guarantees” in AI + key AI safety concepts!

25.09.2025 01:53 — 👍 1    🔁 1    💬 1    📌 0

Our new lab for Human & Machine Intelligence is officially open at Princeton University!

Consider applying for a PhD or Postdoc position, either through Computer Science or Psychology. You can register interest on our new website lake-lab.github.io (1/2)

08.09.2025 13:59 — 👍 51    🔁 15    💬 2    📌 0

Democracy Rewired is a five-part series exploring how AI is reshaping democratic values, from individual agency to global sovereignty. The big question: can AI strengthen democracy?

25.08.2025 18:55 — 👍 3    🔁 1    💬 1    📌 0

I'm thrilled to join the Schwartz Reisman Institute for Technology and Society as a Faculty Affiliate!

16.08.2025 16:14 — 👍 3    🔁 0    💬 0    📌 0

Congrats! CDS PhD Student Vlad Sobal, Courant PhD Student Kevin Zhang, CDS Faculty Fellow @timrudner.bsky.social, CDS Profs @kyunghyuncho.bsky.social and @yann-lecun.bsky.social, and Brown's Randall Balestriero won the Best Paper Award at ICML's 'Building Physically Plausible World Models' Workshop!

12.08.2025 16:12 — 👍 1    🔁 1    💬 1    📌 0

CDS Faculty Fellow @timrudner.bsky.social served as general chair for the 7th Symposium on Advances in Approximate Bayesian Inference, held in April alongside ICLR 2025.

The symposium explored connections between probabilistic machine learning and AI safety, NLP, RL, and AI for science.

17.07.2025 19:10 — 👍 3    🔁 1    💬 0    📌 0

Congratulations again!

14.07.2025 17:13 — 👍 1    🔁 0    💬 0    📌 0

Congratulations Umang!

15.05.2025 16:26 — 👍 1    🔁 0    💬 0    📌 0

CDS Faculty Fellow Tim G. J. Rudner (@timrudner.bsky.social) and colleagues at CSET — @emmyprobasco.bsky.social, @hlntnr.bsky.social, and Matthew Burtell — examine responsible AI deployment in military decision-making.

Read our post on their policy brief: nyudatascience.medium.com/ai-in-milita...

14.05.2025 19:23 — 👍 2    🔁 1    💬 0    📌 0

The result in this paper I'm most excited about:

We showed that planning in world model latent space allows successful zero-shot generalization to *new* tasks!

Project website: latent-planning.github.io

Paper: arxiv.org/abs/2502.14819

07.05.2025 21:26 — 👍 7    🔁 0    💬 0    📌 0

#1: Can Transformers Learn Full Bayesian Inference In Context? with @arikreuter.bsky.social @timrudner.bsky.social @vincefort.bsky.social

01.05.2025 12:36 — 👍 6    🔁 1    💬 0    📌 0

Very excited that our work (together with my PhD student @gbarto.bsky.social and our collaborator Dmitry Vetrov) was recognized with a Best Paper Award at #AABI2025!

#ML #SDE #Diffusion #GenAI 🤖🧠

30.04.2025 00:02 — 👍 19    🔁 2    💬 1    📌 0

Congratulations to the #AABI2025 Proceedings Track Best Paper Award recipients!

29.04.2025 20:55 — 👍 10    🔁 1    💬 0    📌 0

Congratulations to the #AABI2025 Workshop Track Outstanding Paper Award recipients!

29.04.2025 20:54 — 👍 20    🔁 8    💬 0    📌 1

We concluded #AABI2025 with a panel discussion on

**The Role of Probabilistic Machine Learning in the Age of Foundation Models and Agentic AI**

Thanks to Emtiyaz Khan, Luhuan Wu, and @jamesrequeima.bsky.social for participating!

29.04.2025 20:49 — 👍 10    🔁 3    💬 1    📌 0

.@jamesrequeima.bsky.social gave the third invited talk of the day at #AABI2025!

**LLM Processes**

29.04.2025 20:41 — 👍 5    🔁 2    💬 0    📌 0

Luhuan Wu is giving the second invited talk of the day at #AABI2025!

**Bayesian Inference for Invariant Feature Discovery from Multi-Environment Data**

Watch it on our livestream: timrudner.com/aabi2025!

29.04.2025 04:02 — 👍 3    🔁 2    💬 0    📌 0

Emtiyaz Khan is giving the first invited talk of the day at #AABI2025!

29.04.2025 01:55 — 👍 7    🔁 2    💬 0    📌 0

We just kicked off #AABI2025 at NTU in Singapore!

We're livestreaming the talks here: timrudner.com/aabi2025!

Schedule: approximateinference.org/schedule/

#ICLR2025 #ProbabilisticML

29.04.2025 01:47 — 👍 10    🔁 4    💬 0    📌 1
AABI 2025 (Luma): 7th Symposium on Advances in Approximate Bayesian Inference (AABI) · https://approximateinference.org/schedule

Make sure to get your tickets to #AABI2025 if you are in Singapore on April 29 (just after #ICLR2025) and interested in probabilistic ML, inference, and decision-making!

Tickets (free but limited!): lu.ma/5syzr79m
More info: approximateinference.org

#ProbabilisticML #Bayes #UQ #ICLR2025 #AABI2025

18.04.2025 03:42 — 👍 6    🔁 2    💬 0    📌 0

Make sure to get your tickets to AABI if you are in Singapore on April 29 (just after #ICLR2025) and interested in probabilistic modeling, inference, and decision-making!

Tickets (free but limited!): lu.ma/5syzr79m
More info: approximateinference.org

#Bayes #MachineLearning #ICLR2025 #AABI2025

13.04.2025 07:43 — 👍 17    🔁 8    💬 0    📌 1
Putting Explainable AI to the Test: A Critical Look at AI Evaluation Approaches | Center for Security and Emerging Technology
Explainability and interpretability are often cited as key characteristics of trustworthy AI systems, but it is unclear how they are evaluated in practice. This report examines how researchers evaluat...

CDS Faculty Fellow @timrudner.bsky.social, with @minanrn.bsky.social & Christian Schoeberl, analyzed AI explainability evals, finding a focus on system correctness over real-world effectiveness. They call for the creation of standards for AI safety evaluations.

cset.georgetown.edu/publication/...

17.04.2025 16:05 — 👍 4    🔁 2    💬 0    📌 0
How the U.S. Public and AI Experts View Artificial Intelligence
These groups are far apart in their enthusiasm and predictions for AI, but both want more personal control and worry about too little regulation.

A great Pew Research survey:

"How the U.S. Public and AI Experts View Artificial Intelligence"

Everyone working in ML should read this and ask themselves why experts and non-experts have such divergent views about the potential of AI to have a positive impact.

www.pewresearch.org/internet/202...

05.04.2025 21:28 — 👍 10    🔁 3    💬 2    📌 1

This is an excellent article!

Steering foundation models towards trustworthy behaviors is one of the most important research directions today.

Helen is a deep and rigorous thinker, and you should definitely subscribe to her Substack!

04.04.2025 01:42 — 👍 4    🔁 1    💬 0    📌 0

I'm super excited to see our #CSET report on **AI-enabled military decision support systems** being released today!

Great work by @emmyprobasco.bsky.social, @hlntnr.bsky.social, and Matthew Burtell!

01.04.2025 15:32 — 👍 0    🔁 1    💬 0    📌 0

Evaluations of AI explainability claims are not clear-cut.

@minanrn.bsky.social, @timrudner.bsky.social & Christian Schoeberl show that in the domain of recommender systems – where explanations are key – there are different notions of what explainability means: cset.georgetown.edu/publication/...

27.02.2025 15:14 — 👍 3    🔁 1    💬 0    📌 0
