Langevin Monte-Carlo Provably Learns Depth Two Neural Nets at Any Size and Data
In this work, we establish that the Langevin Monte-Carlo algorithm can learn depth-2 neural nets of any size and for any data, and we give non-asymptotic convergence rates for it. We achieve this ...
Noisy gradient descent has attracted a lot of attention in the last few years as a mathematically tractable model of actual deep-learning algorithms.
In my recent work with @anirbit.bsky.social and Samyak Jha (arxiv.org/abs/2503.10428), we prove that noisy gradient descent learns depth-2 neural nets.
19.03.2025 11:29
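To make the algorithm concrete, the following is a minimal sketch of a Langevin Monte-Carlo (noisy gradient descent) loop on a depth-2 tanh network with squared loss. The step size, inverse temperature, widths, and synthetic data below are illustrative assumptions for the sketch, not settings taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression data: n samples in d dimensions (assumed, for illustration).
n, d, width = 200, 10, 32
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)

# Depth-2 net: f(x) = a^T tanh(W x).
W = rng.standard_normal((width, d)) / np.sqrt(d)
a = rng.standard_normal(width) / np.sqrt(width)

def loss_and_grads(W, a):
    """Squared loss and its gradients w.r.t. both layers."""
    H = np.tanh(X @ W.T)                 # (n, width) hidden activations
    err = H @ a - y                      # (n,) residuals
    loss = 0.5 * np.mean(err ** 2)
    grad_a = H.T @ err / n
    grad_H = np.outer(err, a) * (1.0 - H ** 2)   # back-prop through tanh
    grad_W = grad_H.T @ X / n
    return loss, grad_W, grad_a

eta, beta = 1e-2, 1e4   # step size and inverse temperature (assumed values)
for step in range(2000):
    loss, gW, ga = loss_and_grads(W, a)
    # LMC update: gradient step plus sqrt(2*eta/beta)-scaled Gaussian noise.
    W += -eta * gW + np.sqrt(2 * eta / beta) * rng.standard_normal(W.shape)
    a += -eta * ga + np.sqrt(2 * eta / beta) * rng.standard_normal(a.shape)
    if step % 500 == 0:
        print(f"step {step:5d}  loss {loss:.4f}")
```

The only difference from plain gradient descent is the injected Gaussian noise, whose scale is tied to the step size and the inverse temperature; the paper's analysis concerns this kind of update.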