
Anirbit

@anirbit.bsky.social

Assistant Professor/Lecturer in ML @ The University of Manchester | https://anirbit-ai.github.io/ | working on the theory of neural nets and how they solve differential equations. #AI4SCIENCE

144 Followers  |  69 Following  |  60 Posts  |  Joined: 16.11.2024

Latest posts by anirbit.bsky.social on Bluesky

@aifunmcr.bsky.social

07.08.2025 16:35 — 👍 0    🔁 0    💬 0    📌 0
Post image

Lucky to be hosted by a Gödel Prize winner, Prof. Sébastien Pokutta, and to present our work in his group 💥 Sébastien heads the Zuse Institute Berlin (#ZIB), an amazing oasis of applied mathematics bringing together experts from different institutes in Berlin.

07.08.2025 16:34 — 👍 1    🔁 0    💬 1    📌 0

Interested in statistics? Prof Subhashis Ghoshal will deliver the public lecture below tomorrow:

Title: Immersion posterior: Meeting Frequentist Goals under Structural Restrictions
Time: Aug 5 16:00-17:00
Abstract: www.newton.ac.uk/seminar/45562/
Livestream: www.newton.ac.uk/news/watch-l...

04.08.2025 10:45 — 👍 1    🔁 1    💬 0    📌 0
Post image

Hello #FAU. Thanks for arranging to host me at such short notice and letting me present our exciting mathematics of ML in infinite dimensions, #operatorlearning. #sciML Their "Pattern Recognition Laboratory" is celebrating 50 years! @andreasmaier.bsky.social 💥

02.08.2025 18:31 — 👍 1    🔁 0    💬 0    📌 0

@aifunmcr.bsky.social

24.07.2025 13:27 — 👍 0    🔁 0    💬 0    📌 0

The University of Manchester has a one-year post-doc position that I am happy to support if you are currently an #EPSRC-funded PhD student and have the required specialization for work in our group. Typically we prefer candidates who have published in deep-learning theory or fluid theory.

24.07.2025 13:23 — 👍 0    🔁 0    💬 1    📌 0

@aifunmcr.bsky.social

23.07.2025 16:53 — 👍 0    🔁 0    💬 0    📌 0

#aiforscience

23.07.2025 16:53 — 👍 0    🔁 0    💬 0    📌 0
DRSciML

Do mark your calendars for "DRSciML" (Dr. Scientific ML 😉) on September 9 and 10 🔥
drsciml.github.io/drsciml/
- We are hosting a two-day international workshop on understanding scientific ML.
- We have leading experts from around the world giving talks.
- There might be ticketing. Watch this space!

23.07.2025 16:52 — 👍 0    🔁 0    💬 2    📌 0

@aifunmcr.bsky.social

23.07.2025 16:49 — 👍 0    🔁 0    💬 0    📌 0

Major ML journals that have come up in recent years:

- dl.acm.org/journal/topml
- jds.acm.org
- link.springer.com/journal/44439
- academic.oup.com/rssdat
- jmlr.org/tmlr/
- data.mlr.press

No reason why these can't replace everything the current conferences are doing, and most likely do it better.

06.07.2025 19:41 — 👍 0    🔁 0    💬 0    📌 0

Thanks. No, AutoSGD does not go as far as delta-GClip does. Its Theorem 4.5 is the only place where they get convergence to global minima - but it uses assumptions which are not known to be true for nets. Our convergence holds for *all* sufficiently wide nets.

01.07.2025 10:42 — 👍 1    🔁 0    💬 1    📌 0

Do link to the paper! I can have a look and check.

01.07.2025 09:02 — 👍 1    🔁 0    💬 1    📌 0

So, the next time you train a deep-learning model, it's probably worthwhile to include as a baseline the only provably convergent adaptive gradient deep-learning algorithm - our delta-GClip 🙂

01.07.2025 08:55 — 👍 1    🔁 0    💬 1    📌 0
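
A minimal PyTorch sketch of a delta-GClip-style update, for anyone wanting such a baseline. The exact rule and the default constants here are assumptions read off from the paper (arxiv.org/abs/2404.08624), not a verbatim reproduction; the idea is standard gradient clipping with the scaling factor floored at delta, so the effective step size never collapses on large gradients:

    import torch

    def delta_gclip_step(params, loss, eta=0.1, gamma=1.0, delta=0.01):
        # Sketch (assumed form): scale the gradient step by
        # min(1, max(delta, gamma / ||grad||)). Without the max(delta, .)
        # floor this would be ordinary gradient-norm clipping.
        grads = torch.autograd.grad(loss, params)
        grad_norm = torch.sqrt(sum(g.pow(2).sum() for g in grads)).item()
        scale = min(1.0, max(delta, gamma / (grad_norm + 1e-12)))
        with torch.no_grad():
            for p, g in zip(params, grads):
                p -= eta * scale * g

Here eta, gamma, and delta are hypothetical defaults; the paper's theorems prescribe how they should be set for sufficiently wide nets.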

Our insight is to introduce an intermediate form of gradient clipping that can leverage the PL* inequality of wide nets - something not known to hold for standard clipping. Given that our algorithm works for transformers, maybe that points to some yet-unknown algebraic property of them. #TMLR

29.06.2025 22:38 — 👍 0    🔁 0    💬 0    📌 0

Our "delta-GClip" is the *only* known adaptive gradient algorithm that provably trains deep nets AND is practically competitive. That's the message of our recently accepted #TMLR paper - and my 4th TMLR journal paper 🙂

openreview.net/pdf?id=ABT1X...

#optimization #deeplearningtheory

29.06.2025 22:36 — 👍 0    🔁 1    💬 0    📌 2
GitHub - Anirbit-AI/Slides-from-Team-Anirbit: Slide Presentations of Our Works

An updated version of our slides on necessary conditions for #SciML,
- and more specifically,
"Machine Learning in Function Spaces/Infinite Dimensions".

It's all about the 2 key inequalities on slides 27 and 33.
Both come via similar proofs.

github.com/Anirbit-AI/S...

23.06.2025 22:02 — 👍 0    🔁 0    💬 0    📌 0
Post image

Now our research group has a logo to succinctly convey what we do - prove theorems about using ML to solve PDEs, leaning towards operator learning. Thanks to #ChatGPT4o for converting my sketches into a digital image 🔥 #AI4Science #SciML

07.06.2025 19:03 — 👍 0    🔁 0    💬 0    📌 0

It would be great to see a compiled list of useful PDEs that #PINNs struggle to solve - and how we would measure success there.

We know of edge cases with simple PDEs where PINNs struggle, but those often aren't the cutting-edge use-cases of PDEs.

24.05.2025 16:28 — 👍 1    🔁 0    💬 0    📌 0
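
On "how we would measure success": one common (though by no means canonical) metric is the relative L2 error of the PINN's solution against a trusted reference solver on a fixed evaluation grid. A minimal sketch, with u_pred and u_ref as hypothetical tensors of solution values on that grid:

    import torch

    def relative_l2_error(u_pred, u_ref):
        # Relative L2 error of a predicted PDE solution against a
        # reference solution evaluated on the same grid.
        return (torch.linalg.norm(u_pred - u_ref) /
                torch.linalg.norm(u_ref)).item()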

@prochetasen.bsky.social @mingfei.bsky.social @omarrivasplata.bsky.social

09.04.2025 09:18 — 👍 1    🔁 0    💬 0    📌 0

@aifunmcr.bsky.social 🙂

09.04.2025 09:18 — 👍 0    🔁 0    💬 0    📌 0
Regularized Gradient Clipping Provably Trains Wide and Deep Neural Networks
We present and analyze a novel regularized form of the gradient clipping algorithm, proving that it converges to global minima of the loss surface of deep neural networks under the squared loss, provi...

A revised version of our delta-GClip algorithm - which is probably the *only* deep-learning algorithm that provably trains deep nets while using step-size scheduling - and it competes with or supersedes heuristics like Adam and even Adam+Clipping on transformers.

arxiv.org/abs/2404.08624

09.04.2025 09:07 — 👍 2    🔁 1    💬 2    📌 0

He was the first person to interview me for a PhD position in applied maths and stats when I decided to shift my career focus in that direction. Years later, when I became a faculty member, he accepted my invite to fly to the UK from Leipzig to give a talk and meet my students. Sayan is irreplaceable 😢

04.04.2025 14:02 — 👍 0    🔁 0    💬 0    📌 0

Fun @ @aifunmcr.bsky.social 😁

02.04.2025 13:30 — 👍 0    🔁 0    💬 0    📌 0

@aifunmcr.bsky.social 🙂

02.04.2025 12:58 — 👍 0    🔁 0    💬 0    📌 0

"data uncertainty" and "algorithmic randomness" are clearly 2 separate sources of uncertainty in ML predictions. So it makes sense to identify the former with EU and the later with AU?

21.03.2025 13:03 — 👍 0    🔁 0    💬 0    📌 0

My PhD student @dkumar9.bsky.social has done an amazing job of blending multiple deep results to establish that the Langevin Monte Carlo algorithm provably learns 2-layer nets for any data and any size. This is very rigorously a "beyond NTK" regime. More to come from Dibyakanti 🙂

21.03.2025 12:59 — 👍 3    🔁 0    💬 1    📌 0
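
For context, a single vanilla Langevin Monte Carlo iteration looks as follows. This is a generic textbook sketch, not the specific algorithm or assumptions of Dibyakanti's result:

    import torch

    def lmc_step(theta, loss_fn, step_size=1e-3, beta=1.0):
        # One Langevin Monte Carlo update:
        # theta <- theta - step_size * grad L(theta)
        #                + sqrt(2 * step_size / beta) * xi,
        # where xi is standard Gaussian noise and beta is the
        # inverse temperature.
        theta = theta.detach().requires_grad_(True)
        (grad,) = torch.autograd.grad(loss_fn(theta), theta)
        noise = torch.randn_like(theta)
        return (theta - step_size * grad
                + (2.0 * step_size / beta) ** 0.5 * noise).detach()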
Post image

My first-year PhD student Sébastien André-Sloan presents at an #INFORMS conference in Toronto. It's joint work with Matthew Colbrook at DAMTP, Cambridge. We prove a first-of-its-kind size requirement on neural nets for solving PDEs in the super-resolution setup - the natural setup for #PINNs.

15.03.2025 19:10 — 👍 4    🔁 0    💬 0    📌 0

The big surprise is that this isoperimetry kicks in via a "constant" amount of regularization - something that doesn't scale with the size of the nets. (2/2)

03.02.2025 17:17 — 👍 1    🔁 0    💬 0    📌 0

Our key idea is to show that neural losses at arbitrary size and data can induce Gibbs measures which satisfy the Poincaré inequality. After that, an existing result of Michael Jordan, Weijie Su, and Bin Shi takes over (or any of the LMC results). (1/2)

03.02.2025 17:17 — 👍 0    🔁 0    💬 0    📌 0
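
For reference, the two objects in play here, in their standard forms (the paper's exact constants and conditions are not reproduced): the loss L induces a Gibbs measure at inverse temperature beta, and a Poincaré inequality for that measure bounds variances by gradient energies,

    \nu_\beta(\mathrm{d}\theta) \propto e^{-\beta L(\theta)}\,\mathrm{d}\theta,
    \qquad
    \operatorname{Var}_{\nu_\beta}(f) \le C_P \int \|\nabla f\|^2 \,\mathrm{d}\nu_\beta
    \quad \text{for all smooth } f.

This Poincaré inequality is the "isoperimetry" referred to above, and it is what makes the existing LMC convergence results applicable.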
