(1/n) 🚨 Train a model solving DFT for any geometry with almost no training data
Introducing Self-Refining Training for Amortized DFT: a variational method that predicts ground-state solutions across geometries and generates its own training data!
arxiv.org/abs/2506.01225
github.com/majhas/self-...
10.06.2025 19:49
LOGML 2025
London Geometry and Machine Learning Summer School, July 7-11 2025
Applications open: LOGML 2025
Mentor-led projects, expert talks, tutorials, socials, and a networking night
Application form: logml.ai
Projects: www.logml.ai/projects.html
Apply by 6th April 2025
Questions? logml.committee@gmail.com
#MachineLearning #SummerSchool #LOGML #Geometry
11.03.2025 15:24
Just write a short informal email. If the person needs a long-winded polite email to answer, then perhaps you don't want to have to interact with them.
09.03.2025 13:42
SuperDiff goes super big!
- Spotlight at #ICLR2025! 🥳
- Stable Diffusion XL pipeline on HuggingFace huggingface.co/superdiff/su... made by Viktor Ohanesian
- New results for molecules in the camera-ready arxiv.org/abs/2412.17762
Let's celebrate with a prompt-guessing game in the thread!
06.03.2025 21:06
Why academia is sleepwalking into self-destruction. My editorial in @brain1878.bsky.social. If you agree with the sentiments, please repost. It's important for all our sakes to stop the madness.
academic.oup.com/brain/articl...
06.03.2025 19:15
Excited to see our paper "Computing Nonequilibrium Responses with Score-Shifted Stochastic Differential Equations" in Physical Review Letters this morning as an Editor's Suggestion! It uses ideas from generative modeling to unravel a rather technical problem. 🧵 journals.aps.org/prl/abstract...
04.03.2025 18:45
Great intro to PAC-Bayes bounds. Highly recommended!
05.03.2025 09:55
Well, you can do it, but we don't have any proof. We actually also ran alpha-DSBM with zero-variance noise, so really an "alpha-rectified flow": experimentally it does "work", but we have no proof of convergence for the procedure.
08.02.2025 17:54
Yes, the trajectories are not quite smooth, as they correspond to a Brownian bridge; as the variance of the reference Brownian motion of your Schrödinger bridge goes to zero, you get back the deterministic, straight paths of OT.
08.02.2025 12:02
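A minimal numpy sketch of the point above (my illustration, not code from any of these papers; the function name is made up): sampling a Brownian bridge between two endpoints, and checking that it collapses to the straight OT path as the reference variance goes to zero.

```python
import numpy as np

def brownian_bridge(x0, x1, sigma, n_steps=100, rng=None):
    """Sample a path from x0 to x1 under a Brownian bridge whose reference
    Brownian motion has scale sigma. The bridge is pinned at both endpoints;
    at sigma = 0 it collapses to the straight, deterministic OT path."""
    rng = np.random.default_rng(rng)
    t = np.linspace(0.0, 1.0, n_steps)
    mean = (1.0 - t) * x0 + t * x1          # deterministic interpolation
    std = sigma * np.sqrt(t * (1.0 - t))    # vanishes at t = 0 and t = 1
    return mean + std * rng.standard_normal(n_steps)

path = brownian_bridge(0.0, 1.0, sigma=0.0)  # zero-variance limit: a straight line
```

With sigma > 0 the samples wiggle but stay pinned at both endpoints, which is why the trajectories look rough while the endpoints are exact.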
Better diffusions with scoring rules!
Fewer, larger denoising steps using distributional losses; the model learns the posterior distribution of clean samples given the noisy versions.
arxiv.org/pdf/2502.02483
@vdebortoli.bsky.social Galashov Guntupalli Zhou @sirbayes.bsky.social @arnauddoucet.bsky.social
05.02.2025 14:23
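One standard proper scoring rule that fits the "distributional loss" description is the energy score; it is my illustrative choice here, not necessarily the rule used in the paper, and the function name is made up.

```python
import numpy as np

def energy_score(samples, target):
    """Energy score, a proper scoring rule for a sample-based predictive
    distribution: E||X - y|| - 0.5 * E||X - X'||, estimated from `samples`.
    Lower is better; in expectation it is minimized by the true posterior
    of clean data given the noisy input."""
    samples = np.atleast_2d(np.asarray(samples, dtype=float))
    target = np.asarray(target, dtype=float)
    term1 = np.mean(np.linalg.norm(samples - target, axis=-1))
    pair_diffs = samples[:, None, :] - samples[None, :, :]
    term2 = 0.5 * np.mean(np.linalg.norm(pair_diffs, axis=-1))
    return term1 - term2
```

Because the score compares a *set* of model samples to the clean target, the model can be trained to output a distribution over clean samples per denoising step, which is what enables fewer, larger steps.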
On the Asymptotics of Importance Weighted Variational Inference
A standard ML approach for parameter estimation in latent variable models is to maximize the expectation of the logarithm of an importance sampling estimate of the intractable likelihood. We provide consistency/efficiency results for the resulting estimate: arxiv.org/abs/2501.08477
16.01.2025 17:43
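The objective described above can be sketched in a few lines (`iw_log_lik` is a hypothetical name; the log-sum-exp trick is a standard stabilization, not a detail from the paper):

```python
import numpy as np

def iw_log_lik(log_joint, log_q, z):
    """Importance-weighted estimate of the intractable log-likelihood:
    log( (1/K) * sum_k p(x, z_k) / q(z_k) ), with z_k ~ q and K = len(z).
    Maximizing its expectation over model parameters is the IWVI/IWAE
    objective; log-sum-exp is used for numerical stability."""
    log_w = log_joint(z) - log_q(z)   # log importance weights
    m = np.max(log_w)
    return m + np.log(np.mean(np.exp(log_w - m)))
```

On a tractable toy model (z ~ N(0,1), x | z ~ N(z,1), proposal q = prior), the estimate can be checked against the closed-form marginal p(x) = N(0, 2).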
Speculative sampling accelerates inference in LLMs by drafting future tokens which are verified in parallel. With @vdebortoli.bsky.social , A. Galashov & @arthurgretton.bsky.social , we extend this approach to (continuous-space) diffusion models: arxiv.org/abs/2501.05370
10.01.2025 16:30
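For reference, a sketch of the standard token-level accept/reject step for LLMs that the paper generalizes to continuous diffusions. This covers only the discrete case, simplifies to one fixed target/draft distribution per position, and uses a made-up function name.

```python
import numpy as np

def speculative_step(draft_tokens, p_target, p_draft, rng=None):
    """One verification pass of token-level speculative sampling.
    Each drafted token is accepted with probability min(1, p/q); on the
    first rejection we resample from the residual max(p - q, 0), which
    makes the accepted prefix an exact sample from the target model."""
    rng = np.random.default_rng(rng)
    accepted = []
    for tok in draft_tokens:
        if rng.uniform() < min(1.0, p_target[tok] / p_draft[tok]):
            accepted.append(tok)            # verified in parallel in practice
        else:
            residual = np.maximum(p_target - p_draft, 0.0)
            accepted.append(int(rng.choice(len(p_target), p=residual / residual.sum())))
            break                           # discard the rest of the draft
    return accepted
```

When the draft and target distributions coincide, every drafted token is accepted, which is where the speed-up comes from.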
I personally read at least a couple of hours per day. It is not particularly focused and I might "waste" time but I just enjoy it.
08.01.2025 08:05
Very nice paper indeed. I like it.
27.12.2024 16:38
Super excited to announce the first-ever Frontiers of Probabilistic Inference: Learning meets Sampling workshop at #ICLR2025 @iclr-conf.bsky.social!
Website: sites.google.com/view/fpiwork...
Call for papers: sites.google.com/view/fpiwork...
More details in the thread below 🧵
18.12.2024 19:09
BreimanLectureNeurIPS2024_Doucet.pdf
The slides of my NeurIPS lecture "From Diffusion Models to Schrödinger Bridges - Generative Modeling meets Optimal Transport" can be found here
drive.google.com/file/d/1eLa3...
15.12.2024 18:40
One #postdoc position is still available at the National University of Singapore (NUS) to work on sampling, high-dimensional data-assimilation, and diffusion/flow models. Applications are open until the end of January. Details:
alexxthiery.github.io/jobs/2024_di...
15.12.2024 14:46
I couldn't speak for the following 3 days :-)
14.12.2024 22:13
I have updated my course notes on Optimal Transport with a new Chapter 9 on Wasserstein flows. It includes 3 illustrative applications: training a 2-layer MLP, deep transformers, and flow-matching generative models. You can access it here: mathematical-tours.github.io/book-sources...
04.12.2024 08:11
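Since flow-matching generative models are one of the chapter's applications, here is a minimal sketch of the standard linear-path conditional flow-matching target (my illustration, not taken from the notes; the function name is made up):

```python
import numpy as np

def cfm_target(x0, x1, t):
    """Conditional flow-matching training pair for the linear (OT-style)
    path: position x_t = (1-t) x0 + t x1 and target velocity u_t = x1 - x0.
    A network v_theta(x_t, t) is regressed onto u_t with a squared loss."""
    x0, x1 = np.asarray(x0, dtype=float), np.asarray(x1, dtype=float)
    return (1.0 - t) * x0 + t * x1, x1 - x0
```

The straight-line conditional path is what ties flow matching to the Wasserstein/OT viewpoint: the target velocity is constant along each pairing.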
On the optimality of coin-betting for mean estimation
exciting new work by my truly brilliant postdoc Eugenio Clerico on the optimality of coin-betting strategies for mean estimation!
for fans of: mean estimation, online learning with log loss, optimal portfolios, hypothesis testing with E-values, etc.
dig in:
arxiv.org/abs/2412.02640
04.12.2024 08:13
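A sketch of the coin-betting wealth process behind such confidence sequences, using a naive fixed-fraction bet rather than the optimal strategies the paper studies (`betting_wealth` is a made-up name):

```python
import numpy as np

def betting_wealth(xs, m, beta=0.5):
    """Wealth of a gambler betting against the hypothesis 'the mean is m':
    at each round, stake a fixed fraction beta of current wealth on the
    sign of (x - m). For x in [0, 1] and |beta| <= 1 the wealth stays
    positive, and it grows exponentially when the true mean differs from m.
    Thresholding the wealth over all m yields a confidence sequence."""
    wealth = 1.0
    for x in xs:
        wealth *= 1.0 + beta * (x - m)
    return wealth
```

Large wealth is evidence against the candidate mean m (an E-value), which is the hypothesis-testing connection mentioned above.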
Professor of Statistics in the Department of Mathematical Sciences at Durham University, U.K.
#academicsky #rstats
Lurking here for now, may start to post in due course. Currently more active on Mastodon: https://fosstodon.org/@louisaslett
Postdoc @ Cambridge, machine learning, inference and physics
Hon. Associate Professor UCL CS | Ex-Dir. Research AI for Good & Head of Element AI London Office | Ex-DeepMind. He/Him | https://cornebise.com
Assistant Professor at Stanford Statistics and Stanford Data Science | Previously postdoc at UW Institute for Protein Design and Columbia. PhD from MIT.
Applied probabilist. Probability, MCMC, optimization, information theory, TCS.
https://mchchoi.github.io/
Chief Models Officer @ Stealth Startup; Inria & MVA - Ex: Llama @AIatMeta & Gemini and BYOL @GoogleDeepMind
Associate Professor at the University of British Columbia
https://statisticalecology.weebly.com/
Research Scientist @ Google DeepMind - working on video models for science. Worked on video generation; self-supervised learning; VLMs - 🦩; point tracking.
Prof in System Eng. Researcher in Machine Learning, Robotics and Computer Vision at @unizar (@DIIS_UZ, @I3Aunizar). @ELLISforEurope member. https://webdiis.unizar.es/~rmcantin/
Principal Research Scientist at NVIDIA | Former Physicist | Deep Generative Learning | https://karstenkreis.github.io/
Opinions are my own.
Professor of Statistics at Bocconi University
Principal Researcher in AI/ML/RL Theory @ Microsoft Research NE/NYC. Previously @ MIT, Cornell. http://dylanfoster.net
RL Theory Lecture Notes: https://arxiv.org/abs/2312.16730
Machine learning researcher at Microsoft Research. Adjunct professor at Stanford.
Professor of Computer Vision/Machine Learning at Imagine/LIGM, École nationale des Ponts et Chaussées @ecoledesponts.bsky.social. Music & overall happiness. Born well below 350 ppm
Paris. https://davidpicard.github.io/
Assistant Professor, Statistical Sciences + School of the Environment at the University of Toronto. Bayesian ecological + environmental stats. Bilingüe.
https://www.vleosbarajas.com
Researcher at Anthropic, incoming faculty at BU. Based in SF. Likes cats. smsharma.github.io.
Bayesian statistics, Gaussian processes, and all things ML. Senior Applied Scientist at Amazon and developer of GPJax.
Principal Scientist at Naver Labs Europe, Lead of Spatial AI team. AI for Robotics, Computer Vision, Machine Learning. Austrian in France. https://chriswolfvision.github.io/www/
Research Scientist at Google DeepMind
stanniszhou.github.io
Scientific writer @ Cambridge | Science illustrator | I write about quiet corners of physics | Substack: https://substack.com/@appreciatingtheordinary
Portfolio: https://mayankshreshthai.myportfolio.com/
"Illustrating the unseen. Writing the overlooked."