TCS+ RSVP: Natalie Collina (2025/12/03)
Title: Swap regret and correlated equilibria beyond normal-form games
Our last TCS+ talk of the season will be Wed, Dec 3 (10am PT, 1pm ET, 19:00 CET): Natalie Collina (@ncollina.bsky.social), from UPenn, will tell us about "Swap regret and correlated equilibria beyond normal-form games"!
RSVP to receive the link (one day before the talk): forms.gle/utLgSxLpqvpx...
25.11.2025 19:43
I am recruiting PhD students at NYU Courant to conduct research in learning theory, algorithmic statistics, and trustworthy machine learning, starting Fall 2026. Please share widely! Deadline to apply is December 12, 2025.
13.11.2025 15:00
Yes lovely paper. It works for regression trees too.
13.11.2025 00:55
Did your fairness/privacy/CS&Law/etc paper just get rejected from ITCS? Oh FORC! Submit tomorrow and join us at Harvard this summer.
10.11.2025 18:12
Foundations of Responsible Computing (FORC) is a super exciting new conference focused on the intersection of mathematical research and society. It's also a fantastic and vibrant community.
Check out the CfP, with two deadlines. Also follow the new BSky account @forcconf.bsky.social !
10.11.2025 16:50
FORC 2026: Call for Papers
The 7th annual Symposium on Foundations of Responsible Computing (FORC) will be held on June 3-5, 2026 at Harvard University. Brief summary for those who are familiar with past editions (prior to 2…
The first of two @forcconf.bsky.social 2026 deadlines is tomorrow, Nov 11! I hope everyone is putting the finishing touches on their submissions. For everyone else who doesn't want to miss out on the fun at Harvard this summer, there is another deadline in Feb. responsiblecomputing.org/forc-2026-ca...
10.11.2025 14:56
2026 ESIF Economics and AI+ML Meeting - The Econometric Society
2026 ESIF Economics and AI+ML Meeting (ESIF-AIML2026) June 16-17, 2026 Cornell University Department...
If you work at the intersection of CS and economics (or think your work is of interest to those who do!) consider submitting to the ESIF Economics and AI+ML meeting this summer at Cornell: www.econometricsociety.org/regional-act...
08.11.2025 22:37
A few weeks ago everyone was super hyped about Nano Banana. Meanwhile, I ask it to do super basic things and it fails. What am I doing wrong??
(why would I want a collage of these amazing researchers? Stay tuned CC @let-all.com)
More fails in the transcript: gemini.google.com/share/5cc80f...
07.11.2025 15:52
Announcing the 7th Learning Theory Alliance mentoring workshop on November 20. Fully free & virtual!
Theme: Harnessing AI for Research, Learning, and Communicating
Ft @aaroth.bsky.social @andrejristeski.bsky.social @profericwong.bsky.social @ktalwar.bsky.social & more
07.11.2025 16:34
My own STOC submission. Really did use elegant properties of linear regression that I didn't know about until embarrassingly recently!
05.11.2025 01:32
I'm working on it!
04.11.2025 23:54
I've been enjoying learning about linear regression. This is a really cool machine learning technique with some really elegant theory --- someone should have taught me about this earlier!
04.11.2025 23:33
Robust Decision Making with Partially Calibrated Forecasts
Calibration has emerged as a foundational goal in "trustworthy machine learning", in part because of its strong decision theoretic semantics. Independent of the underlying distribution, and independ...
i.e. in this framework, you get the decision theoretic benefits of full calibration at an extremely low (and computationally tractable) level of this hierarchy. The paper is here: arxiv.org/abs/2510.23471 and is joint with Shayan Kiyani, Hamed Hassani, and George Pappas.
30.10.2025 19:02
What lies in between? Maybe an infinite hierarchy of ever-less-conservative decision rules as we add tests to H. But one surprise we find is that as soon as H contains the decision calibration tests (just one per action), the optimal decision rule collapses to best response.
30.10.2025 19:02
We can interpolate between full calibration and no information: optimize for the worst distribution that is consistent with the H-calibration guarantees of f. When H is empty, we recover the minimax safety strategy. When H is all functions, we recover the best-response rule.
30.10.2025 19:02
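The interpolation in the post above can be sketched as a brute-force max-min: pick the action maximizing expected utility against the worst distribution in a candidate set that stands in for "distributions consistent with the H-calibration guarantees". The utility table, candidate sets, and function names below are all illustrative, not from the paper.

```python
# Sketch of the interpolation idea: robust best response against the
# worst-case distribution in a candidate set. Names/numbers are hypothetical.

def robust_action(u, candidates):
    """argmax over actions of the minimum expected utility over candidates."""
    def worst_case(a):
        return min(sum(q * x for q, x in zip(dist, u[a])) for dist in candidates)
    return max(range(len(u)), key=worst_case)

# If only the forecast itself is consistent, this is best response;
# if every distribution is consistent, it degrades toward minimax safety.
u = [[1.0, 0.0],   # risky action: pays off only on outcome 0
     [0.4, 0.4]]   # safe action: constant payoff
print(robust_action(u, candidates=[[0.9, 0.1]]))       # trust the forecast
print(robust_action(u, candidates=[[1, 0], [0, 1]]))   # fully adversarial set
```

In the first call only the forecast survives the tests, so the risky action wins; in the second, any point mass is consistent, so the safe action wins.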
Full calibration is hard and so rarely satisfied. But predictions aren't useless either. Maybe the forecaster is partially calibrated in that for some class of tests H={h1,...,hk}, we know that |E[(f(x)-y)*h(f(x))]| <= eps for every h in H. Most relaxations of calibration have this format.
30.10.2025 19:02
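The test condition |E[(f(x)-y)*h(f(x))]| <= eps in the post above can be checked empirically on a sample. A minimal sketch for scalar forecasts; the test function h and the toy data are illustrative.

```python
# Sketch: estimating one H-calibration test on a sample of
# (forecast, outcome) pairs. h and the data are hypothetical.

def calibration_bias(preds, outcomes, h):
    """Sample average of (f(x) - y) * h(f(x))."""
    n = len(preds)
    return sum((p - y) * h(p) for p, y in zip(preds, outcomes)) / n

preds    = [0.8, 0.8, 0.3, 0.3]
outcomes = [1,   1,   0,   0]
bias = calibration_bias(preds, outcomes, h=lambda p: 1.0)  # constant test
# The forecaster passes this test at tolerance eps iff abs(bias) <= eps.
```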
If the forecasts have no bearing on the outcome at all, then you should ignore them, and you might conservatively play your minimax strategy: argmax_a min_o u(a,o). The forecasts don't tell you how to do anything better. But generally we aren't in either of these two cases.
30.10.2025 19:02
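The conservative minimax rule argmax_a min_o u(a,o) from the post above, as a minimal sketch over a small illustrative utility table (not from the paper):

```python
# Sketch: minimax safety rule when forecasts carry no information.
# The utility table is hypothetical.

def minimax_action(u):
    """Return the action with the best worst-case payoff: argmax_a min_o u[a][o]."""
    return max(range(len(u)), key=lambda a: min(u[a]))

u = [[1.0, 0.0, 0.0],   # risky: great on outcome 0, zero elsewhere
     [0.2, 0.2, 0.2]]   # safe: constant payoff
print(minimax_action(u))  # the safe action has the better worst case
```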
How should you use forecasts f:X->R^d to make decisions? It depends what properties they have. If they are fully calibrated (E[y | f(x) = p] = p), then you should be maximally aggressive and act as if they are correct --- i.e. play argmax_a E_{o ~ f(x)}[u(a,o)]. On the other hand
30.10.2025 19:02
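The best-response rule argmax_a E_{o ~ f(x)}[u(a,o)] from the post above, sketched for a discrete outcome space; the utility table and forecast are illustrative, not from the paper.

```python
# Sketch: acting as if a fully calibrated forecast p over d outcomes
# is correct, given a utility table u[a][o]. Names/numbers hypothetical.

def best_response(u, p):
    """Return the action maximizing expected utility under forecast p."""
    expected = [sum(prob * u_ao for prob, u_ao in zip(p, row)) for row in u]
    return max(range(len(u)), key=lambda a: expected[a])

# Two actions, three outcomes.
u = [[1.0, 0.0, 0.0],   # risky: pays off only on outcome 0
     [0.2, 0.2, 0.2]]   # safe: constant payoff
print(best_response(u, [0.6, 0.2, 0.2]))  # forecast favors outcome 0
```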
Congrats to @epsilonrational.bsky.social @ncollina.bsky.social @aaroth.bsky.social on being featured in quanta!
(Also do check out the paper, also involving Sampath Kannan and me that the piece is based on here: arxiv.org/abs/2409.03956)
22.10.2025 20:43
And now might be a good time to mention, I'm on the faculty job market this year! I do work in human-AI collusion, collaboration and competition, with an eye towards building foundations for trustworthy AI. Check out more info on my website here! Nataliecollina.com
22.10.2025 15:56
(And Natalie is on the job market..)
22.10.2025 15:49
(but that is not the same thing, I think, as having "no understanding" and only extruding text)
21.10.2025 19:58
I totally agree they are worse at out-of-distribution kinds of examples, and cannot learn/improve on these tasks the same way people can. They seem to have modest understanding of a huge collection of things rather than deep understanding of anything, and no ability to learn.
21.10.2025 19:58
I do not --- but I have been using them quite a bit to draft mathematics using coding tools like Cursor and Windsurf, and I have found them useful. It is very much human in the loop --- but also very useful in my experience.
21.10.2025 19:54
But now that they write working code, and can be useful assistants in mathematical research (including my own) I don't see how it is defensible to say that all value/understanding comes from interpretation on the part of the human user. I'd be interested in hearing the best version of the argument.
21.10.2025 15:33
I have to say, because of my upbringing in computer science (and in particular TCS), I am partial to the functionalist argument. When LLMs were just chatting and writing poems, I could believe that we were reading more into them than was there because of our anthropomorphic biases.
21.10.2025 15:33
Sebastien's argument is functionalist --- "And yet, it moves" --- i.e. LLMs can now actually do things that if a person did them, we would ascribe to intelligence, and that this is all that matters. This is essentially Turing's "polite convention".
21.10.2025 15:33
YouTube video by Computer History Museum
CHM Live | The Great Chatbot Debate: Do LLMs Really Understand?
An interesting debate between Emily Bender and Sebastien Bubeck: www.youtube.com/watch?v=YtIQ... --- Emily's thesis is roughly summarized as: "LLMs extrude plausible-sounding text, and the illusion of understanding comes entirely from how the listener's human mind interprets language."
21.10.2025 15:33
Computer Science PhD student & Knight-Hennessy scholar at @stanford.edu.
Prev.: @ox.ac.uk with @rhodeshouse.ox.ac.uk, @harvard.edu '23, @maxplanck.de, @ethz.ch, IBM Research.
Theory CS for trustworthy AI
https://silviacasacuberta.com
Assistant Professor @ Rice University,
Former Postdoc @ UC Berkeley EECS
PhD from MIT EECS
The Symposium on Foundations of Responsible Computing (FORC) is a forum for mathematical research in computation and society writ large.
https://responsiblecomputing.org/
CS Prof at Brown University, PI of the GIRAFFE lab, former AI Policy Advisor in the US Senate.
PhD at MIT CSAIL '23, Harvard '16, former Google APM. Dog mom to NSDTR Ducki.
Technical AI Policy Researcher at HuggingFace @hf.co 🤗. Current focus: Responsible AI, AI for Science, and @eval-eval.bsky.social!
Research group leader @ Max Planck Institute working on theory & social aspects of CS. Previous: @UCSC, @GoogleDeepMind, @Stanford, @PKU1898
https://yatongchen.github.io/
MIT postdoc, incoming UIUC CS prof
katedonahue.me
Marketing & Economics Professor at the Wharton School of @UPenn. Scholar of digital economy, technology, media. @NBER.org Fellow. www.pinaryildirim.com
Professor @ UVA Law School, writing about discrimination law and theory, bribery and corruption.
Assistant Professor of Machine Learning, Carnegie Mellon University (CMU)
Building a Natural Science of Intelligence 🧠🤖✨
Prev: ICoN Postdoctoral Fellow @MIT, PhD @Stanford NeuroAILab
Personal Website: https://cs.cmu.edu/~anayebi
Chief Economics Correspondent for The New York Times. Adjunct at CUNY Newmark. Ex: FiveThirtyEight, WSJ. He/him.
Email: ben.casselman@nytimes.com
Signal: @bencasselman.96
📸: Earl Wilson/NYT
Physicist Turned Psychologist | Senior Researcher in #STEMed | Meta-Analysis Nerd | https://d-miller.github.io/
Also posts about 🧪 science funding to focus my attention.
Personal account. I don't speak for my employer or any other orgs.
Ecology, evolution, and social dynamics at the University of Pennsylvania. He/him
https://akcay.theoretical.bio
Assistant Professor of Operations Research and Statistics, MIT. Interested in social computing, human-in-the-loop AI decision making, recommendation systems, and AI policy.
Econometrics, Statistics, Computational Economics, etc
http://donskerclass.github.io
🇺🇲 in 🇨🇭. 🏳️‍⚧️
Scientist, Inventor, author of the NTQR Python package for AI safety through formal verification of unsupervised evaluations. On a mission to eliminate Majority Voting from AI systems. E Pluribus Unum.
Assistant Professor at University of Pennsylvania.
Robot Learning.
https://www.seas.upenn.edu/~dineshj/
The world's leading venue for collaborative research in theoretical computer science. Follow us at http://YouTube.com/SimonsInstitute.