Don't miss the next Statistics and DSI Joint Colloquium!
@uuujf.bsky.social, postdoctoral fellow at the Simons Institute at @ucberkeleyofficial.bsky.social, presents 'Towards a Less Conservative Theory of Machine Learning: Unstable Optimization and Implicit Regularization' on Thursday, February 5th
29.01.2026 15:05
slides: uuujf.github.io/postdoc/wu20...
26.09.2025 03:49
GD dominates ridge
sharing a new paper w/ Peter Bartlett, @jasondeanlee.bsky.social @shamkakade.bsky.social, Bin Yu
People talk about implicit regularization, but how good is it? We show it's surprisingly effective: GD dominates ridge for linear regression, w/ more cool results on GD vs SGD
arxiv.org/abs/2509.17251
26.09.2025 03:49
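For context on the GD-vs-ridge claim: it builds on the classical correspondence that gradient descent on least squares, started from zero and stopped after T steps, behaves like ridge regression with regularization strength roughly 1/(stepsize × T). A minimal NumPy sketch of that correspondence on synthetic data — this is an illustration of the textbook phenomenon, not the paper's construction, and all constants here are made up:

```python
import numpy as np

# Toy illustration: early-stopped GD on least squares ~ ridge regression
# with lambda ~ 1 / (eta * T). Synthetic data; constants are illustrative.
rng = np.random.default_rng(0)
n, d = 200, 5
X = rng.normal(size=(n, d))
w_star = rng.normal(size=d)
y = X @ w_star + 0.1 * rng.normal(size=n)

eta, T = 0.01, 200
w = np.zeros(d)
norms = []
for _ in range(T):
    w -= eta * (X.T @ (X @ w - y)) / n  # GD on 0.5/n * ||Xw - y||^2
    norms.append(np.linalg.norm(w))     # iterate norm grows: a shrinkage path

# Ridge solution with the matching regularization strength.
lam = 1.0 / (eta * T)
w_ridge = np.linalg.solve(X.T @ X / n + lam * np.eye(d), X.T @ y / n)

# The early-stopped GD iterate points in nearly the same direction as ridge.
cos = w @ w_ridge / (np.linalg.norm(w) * np.linalg.norm(w_ridge))
print(f"cosine(GD iterate, ridge solution) = {cos:.3f}")
```

The GD iterates sweep out a regularization path (heavily shrunk early, approaching least squares late), which is the sense in which stopping time plays the role of the ridge penalty.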
I'm an award-winning mathematician. Trump just cut my funding.
The "Mozart of Math" tried to stay out of politics. Then it came for his research.
I wrote an op-ed on the world-class STEM research ecosystem in the United States, and how this ecosystem is now under attack on multiple fronts by the current administration: newsletter.ofthebrave.org/p/im-an-awar...
18.08.2025 15:45
More Than 50 Simons Foundation Grantees to Speak at 2026 International Congress of Mathematicians
Congratulations to our colleague and friend, former Simons Institute Associate Director Peter Bartlett, who will be delivering one of the plenary lectures for the 2026 International Congress of Mathematicians.
www.simonsfoundation.org/2025/07/11/m...
15.08.2025 06:12
📣 Join us at COLT 2025 in Lyon for a community event!
📅 When: Mon, June 30 | 16:00 CET
What: Fireside chat w/ Peter Bartlett & Vitaly Feldman on communicating a research agenda, followed by a mentorship roundtable to practice elevator pitches & mingle w/ the COLT community!
let-all.com/colt25.html
24.06.2025 18:22
effects of stepsize for GD
Sharing two new papers on accelerating GD via large stepsizes!
Classical GD analysis assumes small stepsizes for stability. However, in practice, GD is often used with large stepsizes, which lead to instability.
See my slides for more details on this topic: uuujf.github.io/postdoc/wu20...
04.06.2025 18:55
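The stability assumption mentioned above is the classical stepsize threshold: on an L-smooth quadratic, GD contracts only when the stepsize is below 2/L, and diverges above it. A minimal sketch of that threshold in one dimension — illustrative constants, not taken from the papers:

```python
# Classical stability threshold for GD on f(w) = 0.5 * L_smooth * w**2.
# The update is w <- (1 - eta * L_smooth) * w, which contracts iff
# |1 - eta * L_smooth| < 1, i.e. eta < 2 / L_smooth.

def gd_final_iterate(eta, L_smooth=1.0, w0=1.0, steps=100):
    w = w0
    for _ in range(steps):
        w -= eta * L_smooth * w  # gradient of the quadratic is L_smooth * w
    return abs(w)

stable = gd_final_iterate(eta=0.5)    # below 2/L: |1 - 0.5| = 0.5, converges
unstable = gd_final_iterate(eta=2.5)  # above 2/L: |1 - 2.5| = 1.5, blows up
print(stable, unstable)
```

The papers' point is that for losses like logistic regression, whose curvature decays along the trajectory, stepsizes far above this worst-case threshold can still converge — and faster — despite the initial instability.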
Jingfeng Wu, Pierre Marion, Peter Bartlett
Large Stepsizes Accelerate Gradient Descent for Regularized Logistic Regression
https://arxiv.org/abs/2506.02336
04.06.2025 05:26
Announcing the first workshop on Foundations of Post-Training (FoPT) at COLT 2025!
Soliciting abstracts/posters exploring theoretical & practical aspects of post-training and RL with language models!
🗓️ Deadline: May 19, 2025
09.05.2025 17:09
We were very lucky to have Peter Bartlett visit @uwcheritoncs.bsky.social and give a Distinguished Lecture on "Gradient Optimization Methods: The Benefits of a Large Step-size." Very interesting and surprising results.
(Recording will be available eventually)
07.05.2025 10:11
Tips on How to Connect at Academic Conferences
I was a kinda awkward teenager. If you are a CS researcher reading this post, then chances are, you were too. How to navigate social situations and make friends is not always intuitive, and has to …
I wrote a post on how to connect with people (i.e., make friends) at CS conferences. These events can be intimidating, so here are some suggestions on how to navigate them
I'm late for #ICLR2025 #NAACL2025, but in time for #AISTATS2025 #ICML2025! 1/3
kamathematics.wordpress.com/2025/05/01/t...
01.05.2025 12:57
Ruiqi Zhang, Jingfeng Wu, Licong Lin, Peter L. Bartlett
Minimax Optimal Convergence of Gradient Descent in Logistic Regression via Large and Adaptive Stepsizes
https://arxiv.org/abs/2504.04105
08.04.2025 05:26