
Lars van der Laan

@larsvanderlaan3.bsky.social

Ph.D. Student @uwstat; Research fellowship @Netflix; visiting researcher @UCJointCPH; M.A. @UCBStatistics - machine learning; calibration; semiparametrics; causal inference. https://larsvanderlaan.github.io

566 Followers  |  123 Following  |  24 Posts  |  Joined: 29.09.2023

Latest posts by larsvanderlaan3.bsky.social on Bluesky

What does 'biased' mean here? It would be biased in expectation, since if you were to repeat the experiment many times, some power users would join. If you define your estimator as the empirical mean over non-power users, then it might be unbiased.

18.06.2025 01:24 | 👍 3   🔁 0   💬 0   📌 0

I'd be surprised if this actually works in practice, since neural networks often overfit (e.g., perfectly fitting the labels under double descent), which violates Donsker conditions. Also, the neural tangent kernel ridge approximation of neural networks has been shown not to hold empirically.

26.05.2025 23:39 | 👍 1   🔁 0   💬 1   📌 0
Preview
Kernel Debiased Plug-in Estimation: Simultaneous, Automated Debiasing without Influence Functions for Many Target Parameters When estimating target parameters in nonparametric models with nuisance parameters, substituting the unknown nuisances with nonparametric estimators can introduce "plug-in bias." Traditional methods...

Looks like they are assuming the neural network can be approximated by ridge regression in an RKHS (which seems strong in practice). Under this approximation, plug-in efficiency follows from fairly standard results on undersmoothed ridge regression; see, e.g., arxiv.org/abs/2306.08598

26.05.2025 23:34 | 👍 1   🔁 0   💬 1   📌 0

I've advised 15 PhD students; 10 were international students. All graduates continue advancing U.S. excellence in research and education. Cutting off this pipeline of talent would be shortsighted.

23.05.2025 03:36 | 👍 8   🔁 2   💬 0   📌 0

I had a hard time believing it was as simple as this until Lars taught me how to implement it - calibrate=True and you're done

github.com/apoorvalal/a...

19.05.2025 00:59 | 👍 15   🔁 3   💬 0   📌 0

Calibrate your outcome predictions and propensities using isotonic regression as follows:

mu_hat <- as.stepfun(isoreg(mu_hat, Y))(mu_hat)  # isotonic calibration of the outcome predictions against the observed outcomes Y

pi_hat <- as.stepfun(isoreg(pi_hat, A))(pi_hat)  # isotonic calibration of the propensity scores against the observed treatments A

(Or use the isoreg_with_xgboost function given in the paper, which I recommend)
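For reference, an xgboost-based isotonic fit looks roughly like this (a sketch, not the paper's verbatim code; the hyperparameter defaults below are my own illustrative choices):

isoreg_with_xgboost <- function(x, y, max_depth = 15, min_child_weight = 20) {
  # A single monotone-increasing boosted tree of y on the 1-d score x acts as an
  # (approximately) isotonic regression with data-adaptive bins.
  dtrain <- xgboost::xgb.DMatrix(data = as.matrix(x), label = as.vector(y))
  fit <- xgboost::xgb.train(
    params = list(max_depth = max_depth,
                  min_child_weight = min_child_weight,
                  monotone_constraints = 1,   # enforce monotonicity in x
                  eta = 1, gamma = 0, lambda = 0),
    data = dtrain, nrounds = 1
  )
  function(newx) predict(fit, as.matrix(newx))  # return the calibration map
}

# usage (same pattern as above):
# mu_hat <- isoreg_with_xgboost(mu_hat, Y)(mu_hat)
# pi_hat <- isoreg_with_xgboost(pi_hat, A)(pi_hat)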

19.05.2025 00:07 | 👍 1   🔁 0   💬 0   📌 0
Post image

Had a great time presenting at #ACIC on doubly robust inference via calibration

Calibrating nuisance estimates in DML protects against model misspecification and slow convergence.

Just one line of code is all it takes.
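Concretely, for the ATE the calibration step drops into a standard AIPW/DML workflow roughly as follows (schematic R only; cross-fitting is omitted and the variable names are illustrative):

# assumed inputs: outcome Y, binary treatment A, and initial cross-fitted nuisance
# predictions mu1_hat = E[Y | A=1, X], mu0_hat = E[Y | A=0, X], pi_hat = P(A=1 | X)

# calibration step: isotonic regression of outcomes/treatments on the initial predictions
mu1_hat <- as.stepfun(isoreg(mu1_hat[A == 1], Y[A == 1]))(mu1_hat)  # fit on treated units, apply to all
mu0_hat <- as.stepfun(isoreg(mu0_hat[A == 0], Y[A == 0]))(mu0_hat)  # fit on control units, apply to all
pi_hat  <- as.stepfun(isoreg(pi_hat, A))(pi_hat)                    # calibrate the propensity scores

# standard AIPW (doubly robust) estimator using the calibrated nuisances
eif <- mu1_hat - mu0_hat +
  A / pi_hat * (Y - mu1_hat) -
  (1 - A) / (1 - pi_hat) * (Y - mu0_hat)
ate_hat <- mean(eif)
se_hat  <- sd(eif) / sqrt(length(eif))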

19.05.2025 00:02 | 👍 19   🔁 1   💬 1   📌 2

link 📈🤖
Nonparametric Instrumental Variable Inference with Many Weak Instruments (Laan, Kallus, Bibaut) We study inference on linear functionals in the nonparametric instrumental variable (NPIV) problem with a discretely-valued instrument under a many-weak-instruments asymptotic regime, where the

13.05.2025 16:43 | 👍 1   🔁 1   💬 0   📌 0

New preprint with #Netflix out!

We study the NPIV problem with a discrete instrument under a many-weak-instruments regime.

A key application: constructing confounding-robust surrogates using past experiments as instruments.

My mentor Aurélien Bibaut will be presenting a poster at #ACIC2025!

13.05.2025 10:43 | 👍 5   🔁 0   💬 0   📌 0

Our work on stabilized inverse probability weighting via calibration was accepted to #CLeaR2025! I gave an oral presentation last week and was honored to receive the Best Paper Award.

I'll be giving a related poster talk at #ACIC on calibration in DML and how it provides doubly robust inference!

12.05.2025 18:32 | 👍 5   🔁 1   💬 0   📌 0

Link to paper:

arxiv.org/pdf/2501.06926

12.05.2025 18:19 | 👍 0   🔁 0   💬 0   📌 0

This work is a result of my internship at Netflix over the summer and is joint with Aurelien Bibaut and Nathan Kallus.

12.05.2025 18:10 | 👍 0   🔁 0   💬 1   📌 0
Post image

I'll be giving an oral presentation at ACIC in the Advancing Causal Inference with ML session on Wednesday!

My talk will be on Automatic Double Reinforcement Learning and long-term causal inference!

I'll discuss Markov decision processes, Q-functions, and a new form of calibration for RL!

12.05.2025 18:09 | 👍 9   🔁 1   💬 1   📌 0

Inference for smooth functionals of M-estimands in survival models, like regularized Cox PH and the beta-geometric model (see our experiments section), is one application of this approach.

12.05.2025 17:56 | 👍 1   🔁 0   💬 0   📌 0

By targeting low-dimensional summaries, there is no need to establish asymptotic normality of the entire infinite-dimensional M-estimator (which isn't possible in general). This allows ML and regularization to be used to estimate the M-estimand, with valid inference obtained via a one-step bias correction.

12.05.2025 17:53 | 👍 1   🔁 0   💬 1   📌 0

If you're willing to consider smooth functionals of the infinite-dimensional M-estimand, then there is a general theory for inference, where the sandwich variance estimator now involves the derivative of the loss and a Riesz representer of the functional.

Working paper:
arxiv.org/pdf/2501.11868
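Schematically (my notation, not necessarily the paper's; see the working paper for the exact conditions): for an M-estimand \(\theta_0 = \arg\min_\theta E[\ell_\theta(O)]\) and a smooth functional \(\psi(\theta_0)\), the one-step estimator and its sandwich variance take the form

\[
\hat\psi_{\mathrm{os}} = \psi(\hat\theta_n) - \frac{1}{n}\sum_{i=1}^{n} \partial_\varepsilon\, \ell_{\hat\theta_n + \varepsilon \hat\alpha_n}(O_i)\Big|_{\varepsilon = 0},
\qquad
\hat\sigma^{2} = \frac{1}{n}\sum_{i=1}^{n} \Big( \partial_\varepsilon\, \ell_{\hat\theta_n + \varepsilon \hat\alpha_n}(O_i)\Big|_{\varepsilon = 0} \Big)^{2},
\]

where \(\hat\alpha_n\) estimates the Riesz representer \(\alpha_0\) of the functional's pathwise derivative, \(\psi'(\theta_0)[h] = \langle \alpha_0, h \rangle\), in the inner product induced by the Hessian of the population loss.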

12.05.2025 17:51 | 👍 4   🔁 0   💬 2   📌 0

The motivation should have been something like: a confounder that is somewhat predictive of both the treatment and the outcome might be more important to adjust for than a variable that is highly predictive of the outcome but doesn't predict treatment. TR might help give more importance to such variables.

25.04.2025 16:45 | 👍 1   🔁 0   💬 0   📌 0

One could have given an analogous theorem saying that E[Y | T, X] is a sufficient deconfounding score and argued that one should only adjust for features predictive of the outcome. So yeah, I think it's wrong/poorly phrased.

25.04.2025 05:12 | 👍 1   🔁 0   💬 1   📌 0

The OP's approach is based on the conditional probability of Y given that the treatment is intervened upon and set to some value. But they don't seem to define what this means formally, which is exactly what potential outcomes/NPSEM achieve.

01.04.2025 03:33 | 👍 2   🔁 0   💬 0   📌 0

The second stage coefficients are the estimand (identifying the structural coefficients/treatment effect). The first stage coefficients are nuisances, and typically not of direct interest.

20.03.2025 21:35 | 👍 2   🔁 0   💬 1   📌 0
Post image

The paper "Generalized Venn and Venn-Abers Calibration with Applications in Conformal Prediction" by Lars van der Laan and Ahmed Alaa introduces a comprehensive framework that extends Venn and Venn-Abers calibration methods to a broad range of prediction tasks and loss functions.

16.02.2025 18:20 | 👍 1   🔁 1   💬 1   📌 0

🚨 Excited about this new paper on Generalized Venn Calibration and conformal prediction!

We show that Venn and Venn-Abers calibration can be extended to general losses, and that conformal prediction can be viewed as Venn multicalibration for the quantile loss!

#calibration #conformal

11.02.2025 18:37 | 👍 1   🔁 0   💬 0   📌 0

Lars van der Laan, Ahmed Alaa
Generalized Venn and Venn-Abers Calibration with Applications in Conformal Prediction
https://arxiv.org/abs/2502.05676

11.02.2025 08:10 | 👍 3   🔁 1   💬 0   📌 1

Great discussion! There we use the fact that balancing conditional on the propensity score (or the Riesz representer more generally) is sufficient for IPW and DR inference.

To add to the discussion, this paper connects balancing with debiasing in AIPW:

arxiv.org/pdf/2304.14545

25.01.2025 15:59 | 👍 5   🔁 0   💬 0   📌 0

Your comment also reminds me of this paper, where they use isotonic regression to ensure the estimators solve a certain equation (which I think can be viewed as a kind of balance) and show this leads to DR inference:
arxiv.org/pdf/2411.02771

25.01.2025 15:28 | 👍 4   🔁 1   💬 1   📌 0


link 📈🤖
Automatic Debiased Machine Learning for Smooth Functionals of Nonparametric M-Estimands (Laan, Bibaut, Kallus et al) We propose a unified framework for automatic debiased machine learning (autoDML) to perform inference on smooth functionals of infinite-dimensional M-estimands, defined as

22.01.2025 17:17 | 👍 5   🔁 2   💬 0   📌 0

Thrilled to share our new paper! We introduce a generalized autoDML framework for smooth functionals in general M-estimation problems, significantly broadening the scope of problems where automatic debiasing can be applied!

22.01.2025 13:54 | 👍 18   🔁 7   💬 1   📌 0

A new Double RL (yes RL) paper by @larsvanderlaan3.bsky.social and colleagues

Love this stuff! This is something I was thinking about for a while, and it's great to see a paper on this topic!

#CausalSky

14.01.2025 17:54 | 👍 3   🔁 1   💬 0   📌 0
