Nicola Branchini

@nicolabranchini.bsky.social

🇮🇹 Stats PhD @ University of Edinburgh 🏴󠁧󠁢󠁳󠁣󠁴󠁿 @ellis.eu PhD - visiting @avehtari.bsky.social 🇫🇮 🤔💭 Monte Carlo, UQ. Interested in many things relating to UQ, keen to learn applications in climate/science. https://www.branchini.fun/about

1,420 Followers  |  843 Following  |  126 Posts  |  Joined: 12.11.2024

Latest posts by nicolabranchini.bsky.social on Bluesky

The true academic method: overthink, underdeliver, cite yourself.

14.10.2025 10:25 — 👍 53    🔁 4    💬 0    📌 2

24. arxiv.org/abs/2510.00389
'Zero variance self-normalized importance sampling via estimating equations'
- Art B. Owen

Even with the optimal proposal, SNIS-type estimators cannot achieve zero variance. This work shows how an optimisation (estimating-equations) formulation gets around that.

04.10.2025 16:03 — 👍 2    🔁 1    💬 2    📌 0
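For context on the SNIS variance floor the post alludes to, here is a minimal numerical sketch of plain self-normalized importance sampling (not the paper's estimating-equations method), on a hypothetical Gaussian toy problem where the target is known only up to a constant:

```python
import numpy as np

# Toy SNIS sketch: estimate mu = E_p[f(X)] for p = N(0,1), f(x) = x,
# pretending we know p only up to its normalizing constant.
rng = np.random.default_rng(0)

def f(x):
    return x

def log_p_unnorm(x):
    # log N(0,1) density, dropping the normalizing constant
    return -0.5 * x**2

n = 100_000
x = rng.normal(1.0, 2.0, size=n)          # proposal q = N(1, 2^2)
log_q = -0.5 * ((x - 1.0) / 2.0) ** 2     # q density, also up to a constant

log_w = log_p_unnorm(x) - log_q
w = np.exp(log_w - log_w.max())           # stabilized unnormalized weights
snis = np.sum(w * f(x)) / np.sum(w)       # self-normalized estimate of mu = 0
print(snis)
```

Normalizing constants of both p and q cancel in the ratio, which is exactly why SNIS is used; but the random weights in the denominator are also why, unlike vanilla IS, no choice of q drives the variance of this ratio estimator all the way to zero.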
Autodifferentiable Ensemble Kalman Filters | SIAM Journal on Mathematics of Data Science Data assimilation is concerned with sequentially estimating a temporally evolving state. This task, which arises in a wide range of scientific and engineering applications, is particularly challenging...

were you reading:
epubs.siam.org/doi/abs/10.1...

12.10.2025 19:22 — 👍 4    🔁 0    💬 2    📌 0
ELLIS PhD Program: Call for Applications 2025 The ELLIS mission is to create a diverse European network that promotes research excellence and advances breakthroughs in AI, as well as a pan-European PhD program to educate the next generation of AI...

I'm looking for a doctoral student with a Bayesian background to work on Bayesian workflow and cross-validation (see my publication list users.aalto.fi/~ave/publica... for my recent work) at Aalto University.

Apply through the ELLIS PhD program (deadline October 31) ellis.eu/news/ellis-p...

06.10.2025 09:28 — 👍 45    🔁 34    💬 0    📌 1

"Conditional Causal Discovery"

(don't be fooled by the title :D )

openreview.net/forum?id=6IY...

04.10.2025 16:01 — 👍 1    🔁 0    💬 1    📌 0

"Estimating the Probabilities of Rare Outputs in Language Models"

arxiv.org/abs/2410.13211

04.10.2025 16:01 — 👍 0    🔁 0    💬 1    📌 0

"Stochastic Optimization with Optimal Importance Sampling"

arxiv.org/abs/2504.03560

04.10.2025 16:01 — 👍 0    🔁 0    💬 1    📌 0

Posting a few nice importance sampling-related finds

"Value-aware Importance Weighting for Off-policy Reinforcement Learning"

proceedings.mlr.press/v232/de-asis...

04.10.2025 16:01 — 👍 3    🔁 1    💬 1    📌 0
It's a JAX, JAX, JAX, JAX World
statmodeling.stat.columbia.edu/2025/10/03/i...

03.10.2025 22:55 — 👍 24    🔁 5    💬 0    📌 1

I am happy to announce that the Workshop on Emerging Trends in Automatic Control will take place at Aalto University on Sept 26.

Speakers include Lihua Xie, Karl H. Johansson, Jonathan How, Andrea Serrani, Carolyn L. Beck, and others.

#ControlTheory #AutomaticControl #AaltoUniversity #IEEE

08.09.2025 12:31 — 👍 5    🔁 2    💬 1    📌 0

Just finished delivering a course on 'Robust and scalable simulation-based inference (SBI)' at Greek Stochastics. This covered an introduction to SBI, open challenges, and some recent contributions from my own group.

The slides are now available here: fxbriol.github.io/pdfs/slides-....

28.08.2025 11:46 — 👍 34    🔁 9    💬 1    📌 1
Hollow Knight: Silksong - Special Announcement
YouTube video by Team Cherry

The countdown is on!

Join us in 48 hours for a special announcement about Hollow Knight: Silksong!

Premiering here: youtu.be/6XGeJwsUP9c

19.08.2025 14:33 — 👍 4040    🔁 1512    💬 154    📌 772
The Mathematics of Large Machine Learning Models (Lecture 1) by Andrea Montanari
YouTube video by International Centre for Theoretical Sciences

Turing Lectures at ICTS
www.youtube.com/watch?v=_fF6...
www.youtube.com/watch?v=mGuK...
www.youtube.com/watch?v=yRDa...

19.08.2025 02:58 — 👍 34    🔁 10    💬 0    📌 0

"Io stimo piรน il trovar un vero, benchรฉ di cosa leggiera, che โ€˜l disputar lungamente delle massime questioni senza conseguir veritร  nissuna"

08.08.2025 17:20 — 👍 1    🔁 1    💬 0    📌 0

Today I learnt this Galileo Galilei quote:

"I value more the finding of a truth, even if about something trivial, than the long disputing of the greatest questions without attaining any truth at all"

Feels like we could use some of that in research tbh..

08.08.2025 17:20 — 👍 3    🔁 0    💬 1    📌 0

It is somewhat amusing to see other reviewers confidently and insistently rejecting alternative proposals (in suitable settings) to SGD/Adam in VI/divergence-minimization problems

07.08.2025 12:11 — 👍 1    🔁 0    💬 0    📌 0

algorithmsbook.com/validation/f...

05.08.2025 21:50 — 👍 0    🔁 0    💬 0    📌 0
MCM 2025 - Program | MCM 2025 Chicago

Flying today towards Chicago 🌆 for MCM 2025

fjhickernell.github.io/mcm2025/prog...

I will give a talk on our recent and ongoing work on self-normalized importance sampling, including learning a proposal with MCMC, and ratio diagnostics.

www.branchini.fun/pubs

24.07.2025 09:06 — 👍 0    🔁 0    💬 0    📌 0

Really cool work : ) @alexxthiery.bsky.social

www.tandfonline.com/doi/full/10....

16.07.2025 08:57 — 👍 1    🔁 0    💬 0    📌 0

agree; you should check out @yfelekis.bsky.social's work along these lines 😄

08.07.2025 11:03 — 👍 2    🔁 0    💬 0    📌 0

I just don't see how the PPD_q of the original post leads somewhere useful.
Anyway, thanks for engaging @alexlew.bsky.social : )

06.07.2025 11:03 — 👍 1    🔁 0    💬 0    📌 0

I agree, except I think it can be ok to shift the criterion of "good q" to some well-defined measure of predictive performance (under no model misspecification, let's say). Ofc Bayesian LOO-CV is one. We could discuss using other quantities, and how to estimate them, ofc.

06.07.2025 11:03 — 👍 1    🔁 0    💬 1    📌 0

Genuine question: what is the estimated value used for, then?

06.07.2025 10:46 — 👍 0    🔁 0    💬 1    📌 0

(computed with the inconsistent method)

06.07.2025 10:38 — 👍 0    🔁 0    💬 1    📌 0

Well, re: [choose q1 or q2 based on whether P_q1 > P_q2]
My understanding is that many VI papers say: here's a new VI method, it produces q1; the old VI method gives q2; q1 is better than q2 because its test log-PPD is higher!

06.07.2025 10:36 — 👍 1    🔁 0    💬 1    📌 0

Not entirely obvious to me, but I see the intuition!

05.07.2025 13:31 — 👍 1    🔁 0    💬 1    📌 0

Am definitely at least *trying* to think carefully about the evaluation here 😅 😇

05.07.2025 13:30 — 👍 1    🔁 0    💬 0    📌 0

Right! Definitely not sure it's necessary, but I like to think it would be valuable/interesting if we wanted to somehow speak formally about generalizing to unseen test points

05.07.2025 13:28 — 👍 1    🔁 0    💬 0    📌 0

It still seems "dangerous" to use the numerical value of (an estimate of) ∫ p(y|θ) q(θ) dθ to decide which approximate q is better.

(Of course, you may argue we maybe shouldn't even use MC estimates of the original ∫ p(y|θ) p(θ|D) dθ with q as the proposal, but the above is even less justified)

05.07.2025 13:25 — 👍 0    🔁 0    💬 1    📌 0
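The contrast in this reply, between the plug-in quantity ∫ p(y|θ) q(θ) dθ and an importance-sampling estimate of the true predictive ∫ p(y|θ) p(θ|D) dθ, can be made concrete on a toy conjugate model where the exact answer is known. Everything below (model, data, the offset q) is hypothetical and chosen only for tractability:

```python
import numpy as np

def norm_pdf(y, mu, var):
    """Gaussian density N(y; mu, var)."""
    return np.exp(-0.5 * (y - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)

# Conjugate toy: prior theta ~ N(0,1), y|theta ~ N(theta,1), one datum y0,
# so the posterior is exactly N(y0/2, 1/2) and the PPD is N(y0/2, 3/2).
y0, y_test = 1.0, 0.0
exact_ppd = norm_pdf(y_test, y0 / 2, 0.5 + 1.0)

# A deliberately offset approximation q = N(0.9, 0.5)
q_mean, q_var = 0.9, 0.5

# (a) Plug-in "PPD_q": integral of p(y|theta) q(theta) = N(q_mean, q_var + 1)
ppd_q = norm_pdf(y_test, q_mean, q_var + 1.0)

# (b) SNIS estimate of the *true* PPD, using q only as a proposal
rng = np.random.default_rng(1)
th = rng.normal(q_mean, np.sqrt(q_var), size=200_000)
log_post = -0.5 * th**2 - 0.5 * (y0 - th) ** 2     # unnormalized posterior
log_q = -0.5 * (th - q_mean) ** 2 / q_var          # q density up to a constant
w = np.exp(log_post - log_q)
snis_ppd = np.sum(w * norm_pdf(y_test, th, 1.0)) / np.sum(w)

print(exact_ppd, ppd_q, snis_ppd)
```

Here (b) is a consistent estimate of the exact PPD despite the offset q, while (a) converges to a different number altogether, which is the sense in which ranking approximations by PPD_q seems hard to justify.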

I don't see that it needs to get that philosophical?
It is totally possible to formally estimate the pdf itself, since we have some 'test' samples of y, and to consider MISE-type errors, even if in this case pointwise evaluations of the pdfs involve an intractable integral.

05.07.2025 13:20 — 👍 1    🔁 0    💬 1    📌 0
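The MISE-type comparison suggested above can be sketched on a hypothetical toy where the candidate predictive densities are evaluable in closed form (in the thread's setting each pointwise value would itself be an MC estimate): build a KDE reference from held-out draws of y and integrate squared error over a grid.

```python
import numpy as np

rng = np.random.default_rng(2)

def norm_pdf(y, mu, var):
    return np.exp(-0.5 * (y - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)

# Hypothetical "true" predictive N(0.5, 1.5) and held-out test draws of y
true_mu, true_var = 0.5, 1.5
y_test = rng.normal(true_mu, np.sqrt(true_var), size=5_000)

# KDE reference for the true predictive, built from the test samples
grid = np.linspace(-5.0, 6.0, 400)
dy = grid[1] - grid[0]
h = 1.06 * y_test.std() * len(y_test) ** (-0.2)   # Silverman's bandwidth rule
kde = norm_pdf(grid[:, None], y_test[None, :], h**2).mean(axis=1)

def ise(candidate):
    # integrated squared error vs the KDE reference (Riemann sum on the grid)
    return float(np.sum((candidate - kde) ** 2) * dy)

ise_good = ise(norm_pdf(grid, 0.5, 1.5))   # candidate at the truth
ise_bad = ise(norm_pdf(grid, 1.5, 1.5))    # shifted candidate
print(ise_good, ise_bad)
```

The shifted candidate scores clearly worse, so an ISE/MISE-style criterion can rank predictive densities using only test samples, without any appeal to the plug-in quantity from earlier in the thread.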
