
Aaron Roth

@aaroth.bsky.social

Professor at Penn, Amazon Scholar at AWS. Interested in machine learning, uncertainty quantification, game theory, privacy, fairness, and most of the intersections therein

4,056 Followers  |  380 Following  |  303 Posts  |  Joined: 20.10.2023

Latest posts by aaroth.bsky.social on Bluesky

1-panel SMBC comic with the caption "Science pro tip: you can prove anything you want using regression to the mean", in which a scientist explains he can raise student grades by smearing peanut butter and googly eyes on the foreheads of the worst-performing ones.

If you oppose this it's because you want line go down, but I want line go up.

COMIC β—† www.smbc-comics.com/comic/protocol
PATREON β—† www.patreon.com/c/ZachWeiner...
STORE β—† smbc-store.myshopify.com

01.08.2025 22:30 β€” πŸ‘ 38    πŸ” 5    πŸ’¬ 2    πŸ“Œ 1

I was the AEA's President Elect as the first Trump term began. We worried about government statistics then, and appointed a committee that included members (such as Google's Chief Economist) who might be in a position to help make reliable statistics available if the government went dark.
#econsky

02.08.2025 23:53 β€” πŸ‘ 182    πŸ” 70    πŸ’¬ 3    πŸ“Œ 1

Tried this out; generally very impressed with the quality of the feedback!

25.07.2025 01:15 β€” πŸ‘ 13    πŸ” 1    πŸ’¬ 0    πŸ“Œ 0

This was also part of a fun exploration we did at the same time in how to incorporate modern AI tools in the process of doing theoretical and experimental science. We learned a lot about that too --- we'll write more at some point.

24.07.2025 13:41 β€” πŸ‘ 3    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0
Post image Post image

The paper is here: arxiv.org/abs/2507.09683 This is joint work with @mkearnsphilly.bsky.social and Emily Ryu, from a fun visit in June.

24.07.2025 13:41 β€” πŸ‘ 2    πŸ” 1    πŸ’¬ 1    πŸ“Œ 0

Nevertheless, we show that in any DAG, information will be aggregated at the end of a path of length D up to excess error ~1/sqrt{D}. And there are distributions such that no DAG of depth D can support information aggregation to error < 1/D. So depth is a fundamental parameter.

24.07.2025 13:41 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

No! Even though y = x1 + x2, x1 is uninformative on its own, since it is independent of y. Hence A's predictor will be the constant function f_A(x1) = 0. This is all B sees, so B learns nothing about x1. The best predictor B can learn from x2 has MSE 1/2 -- information was lost.

24.07.2025 13:41 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
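A quick numerical check of this counterexample (an illustrative sketch, not code from the paper; sample size and seed are arbitrary). A's population-optimal predictor is the constant 0, so B is left regressing on x2 alone, whose best linear predictor y ≈ x2/2 has MSE 1/2:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
x1 = rng.standard_normal(n)   # A's observation; independent of y
y = rng.standard_normal(n)    # target
x2 = y - x1                   # B's observation, so y = x1 + x2 exactly

# Agent A's population-optimal predictor: x1 is independent of y,
# so E[y | x1] = 0 -- the constant-zero function.
f_A = np.zeros(n)

# Agent B sees only f_A (a constant) and x2. The best linear predictor
# of y from x2 has slope cov(y, x2) / var(x2) = 1/2.
b_slope = np.cov(x2, y)[0, 1] / np.var(x2)
f_B = b_slope * x2
mse = np.mean((y - f_B) ** 2)

print(round(b_slope, 2), round(mse, 2))   # ≈ 0.5 and 0.5: information was lost
```

Even though a perfect predictor y = x1 + x2 exists, B's achievable MSE is bounded away from zero because A's message destroys all information about x1.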

An example to see why the problem is delicate/interesting: Suppose there are two agents, A and B, arranged in a line A->B. A observes x1 and B observes x2. x1 and y are iid standard Gaussians, and x2 = y-x1. So there is a perfect linear predictor: y = x1 + x2. Can B learn it?

24.07.2025 13:41 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

When is this enough for "information aggregation" to take place? That is, when will someone in the network be able to produce a predictor as accurate as if they had trained a regression model on ALL features, even though nobody sees all features at once?

24.07.2025 13:41 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

Nobody sees others' observations, just their parents' predictions. They can use these as features (together with their direct observations) to solve a regression problem. Their own predictions will be observed by their children. This is how information spreads in the network.

24.07.2025 13:41 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
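The local step can be sketched in a few lines (my illustration, not the paper's code; the three-agent line, features, and target y = x1 + x2 + x3 are made up). Each agent regresses y on its parent's prediction plus its own feature, and here -- with every feature informative -- the error shrinks down the line:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 100_000, 3
X = rng.standard_normal((n, k))   # feature i is observed only by agent i
y = X.sum(axis=1)                 # target: y = x1 + x2 + x3

mses = []
parent_pred = np.zeros(n)         # the first agent has no parent
for i in range(k):
    # Each agent solves a least-squares regression of y on
    # (parent's prediction, own feature) and passes its prediction on.
    feats = np.column_stack([parent_pred, X[:, i]])
    coef, *_ = np.linalg.lstsq(feats, y, rcond=None)
    parent_pred = feats @ coef
    mses.append(np.mean((y - parent_pred) ** 2))

print([round(m, 2) for m in mses])   # MSE shrinks roughly 2 -> 1 -> 0
```

In this benign instance the last agent in the line aggregates all three features; the Gaussian counterexample elsewhere in the thread shows the same protocol can fail when an upstream feature is uninformative on its own.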
Post image

Suppose we have a regression problem that many people want to solve, but information is distributed across agents --- different agents see different subsets of features. And the agents are embedded in a network (a DAG). When an agent makes predictions, they are observed by its children.

24.07.2025 13:41 β€” πŸ‘ 14    πŸ” 1    πŸ’¬ 1    πŸ“Œ 0
Intersectional Fairness in Reinforcement Learning with Large State and Constraint Spaces In traditional reinforcement learning (RL), the learner aims to solve a single objective optimization problem: find the policy that maximizes expected reward. However, in many real-world settings, it ...

Come say hi at our #ICML poster today during Poster Session 1 (W-600)! Joint work with @ericeaton.bsky.social @marcelhussing.bsky.social @optimistsinc.bsky.social @aaroth.bsky.social @mkearnsphilly.bsky.social!

arxiv.org/abs/2502.11828

15.07.2025 14:38 β€” πŸ‘ 7    πŸ” 3    πŸ’¬ 0    πŸ“Œ 0
Swap Regret and Correlated Equilibria Beyond Normal-Form Games Swap regret is a notion that has proven itself to be central to the study of general-sum normal-form games, with swap-regret minimization leading to convergence to the set of correlated equilibria and...

Delighted (and honestly a little bit stunned) that our paper β€œSwap Regret and Correlated Equilibria Beyond Normal-Form Games” was just awarded both the β€œBest Paper” and β€œBest Student Paper” at EC! arxiv.org/abs/2502.20229

03.07.2025 15:21 β€” πŸ‘ 63    πŸ” 5    πŸ’¬ 3    πŸ“Œ 1
Post image

Agentic LLM tooling like Windsurf is amazing. But you can already tell from the interface (you can pick from two dozen models from half a dozen different model providers, seamlessly) that AI is going to be a commodity. Not at all clear that LLM developers will get the surplus.

11.06.2025 23:40 β€” πŸ‘ 3    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

Amazing set of hires, congratulations!

03.06.2025 22:22 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

Decimating (many times over!) scientific funding and (worse!) restricting international scientists from coming here will cost us our enviable status as the world center for science and technology. Even if these policies are reversed 4 years down the line, it might be too late. 5/5

31.05.2025 13:21 β€” πŸ‘ 5    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

But for the same reason it was near impossible for anyone to overtake the US while we were the agreed upon meeting point for scientific talent, it will be near impossible to regain this status if the center moves elsewhere. This is why the current moment is so dangerous. 4/

31.05.2025 13:21 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

This is why for the last 75 years, the United States has dominated scientific discovery and why the world-beating tech companies were founded and headquartered in the US: this has been where the talent is. This was a huge source of American economic (and military) power. 3/

31.05.2025 13:21 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

This makes it hard for any other country to compete: while the US has been the global center for science, no other country has even been able to retain their -own- best students domestically, let alone attract the best students worldwide. 2/

31.05.2025 13:21 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

The United States has had a tremendous advantage in science and technology because it has been the consensus gathering point: the best students worldwide want to study and work in the US because that is where the best students are studying and working. 1/

31.05.2025 13:21 β€” πŸ‘ 11    πŸ” 1    πŸ’¬ 1    πŸ“Œ 0
EC 2025 Accepted Papers - EC 2025 1. Optimality of Non-Adaptive Algorithms in Online Submodular Welfare Maximization with Stochastic Outcomes Authors: Rajan Udwani (University of California, Berkeley) 2. Investment and misallocation i...

Check out the terrific set of EC 2025 accepted papers! ec25.sigecom.org/program/acce...

17.05.2025 21:26 β€” πŸ‘ 22    πŸ” 11    πŸ’¬ 0    πŸ“Œ 0

What is the complaint? It sounds like it is finding plenty of duplicates, just as it says on the box.

29.04.2025 17:41 β€” πŸ‘ 3    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

In some sense the bigger deal is perhaps the lower bound --- I don't think it was previously known that you couldn't get bounds diminishing at a polynomial rate with epsilon even for high dimensions, though of course this was suspected given the lack of algorithms.

15.04.2025 22:32 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

A surprising new parameter regime, but right, not a pure improvement. An exponentially better dependence on the dimension at a cost of an exponentially worse dependence on the error parameter. It mirrors recent new swap regret algorithms (one by the same author).

15.04.2025 22:31 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
High dimensional online calibration in polynomial time In online (sequential) calibration, a forecaster predicts probability distributions over a finite outcome space $[d]$ over a sequence of $T$ days, with the goal of being calibrated. While asymptotical...

Wow! arxiv.org/abs/2504.09096

15.04.2025 19:25 β€” πŸ‘ 17    πŸ” 2    πŸ’¬ 1    πŸ“Œ 0

Referring to GΓΆdel's incompleteness theorem in any context other than mathematical logic is also a red flag.

09.04.2025 20:06 β€” πŸ‘ 16    πŸ” 0    πŸ’¬ 2    πŸ“Œ 0

Thanks for the pointer, I'll check it out!

09.04.2025 12:45 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0
Post image

The paper is here: arxiv.org/abs/2504.06075 and is joint work with the excellent @ncollina.bsky.social , @iraglobusharris.bsky.social , @surbhigoel.bsky.social , Varun Gupta, and Mirah Shi!

09.04.2025 12:11 β€” πŸ‘ 6    πŸ” 2    πŸ’¬ 0    πŸ“Œ 0
Post image

Just as our last paper generalized Aumann's agreement theorem, this paper tractably generalizes "information aggregation" theorems for Bayesian reasoners. Our results lift back to the classic Bayesian setting and give the first distribution-free information aggregation theorems.

09.04.2025 12:11 β€” πŸ‘ 5    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
Post image

For any function classes H(A), H(B), H(J) satisfying this weak learning condition, we show how two parties can collaborate to be as accurate as H(J), while only needing to solve a small number of squared-error regression problems on their own data over H(A) and H(B) respectively.

09.04.2025 12:11 β€” πŸ‘ 2    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
