What does "biased" mean here? It would be biased in expectation, since if you were to repeat the experiment many times, some power users would join. If you define your estimator as the empirical mean over non-power users, then it might be unbiased (for the non-power-user mean).
18.06.2025 01:24
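A quick simulation of the point above (all numbers illustrative: a hypothetical population where 10% are power users with a much higher mean outcome). The full-sample mean and the non-power-user mean are different estimands, so excluding power users is biased for the former but can be unbiased for the latter:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population: 10% power users (mean outcome 10),
# 90% regular users (mean outcome 1). Numbers are made up.
n = 100_000
is_power = rng.random(n) < 0.1
y = np.where(is_power, rng.normal(10, 1, n), rng.normal(1, 1, n))

pop_mean = y.mean()                   # target if the estimand is the full population
nonpower_mean = y[~is_power].mean()   # unbiased only for the non-power-user mean

print(pop_mean, nonpower_mean)
```

Here the restricted mean sits near 1 while the population mean sits near 1.9, so which one is "biased" depends entirely on the declared estimand.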
I'd be surprised if this actually works in practice, since neural networks often overfit (e.g., perfectly fitting the labels under double descent), which violates Donsker conditions. And the neural tangent kernel ridge approximation of neural networks has been shown not to hold empirically.
26.05.2025 23:39
I've advised 15 PhD students; 10 were international students. All graduates continue advancing U.S. excellence in research and education. Cutting off this pipeline of talent would be shortsighted.
23.05.2025 03:36
I had a hard time believing it was as simple as this until Lars taught me how to implement it - calibrate=True and you're done
github.com/apoorvalal/a...
19.05.2025 00:59
Calibrate your outcome predictions and propensities using isotonic regression as follows:
# Calibrate outcome predictions against the observed outcomes Y
mu_hat <- as.stepfun(isoreg(mu_hat, Y))(mu_hat)
# Calibrate propensity estimates against the observed treatments A
pi_hat <- as.stepfun(isoreg(pi_hat, A))(pi_hat)
(Or use the isoreg_with_xgboost function given in the paper, which I recommend)
19.05.2025 00:07
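For readers working in Python, a rough equivalent of the R one-liners above, using scikit-learn's IsotonicRegression on simulated data (the data-generating numbers are illustrative; variable names mirror the post):

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(0)

# Simulated example: A is a binary treatment, pi_hat a systematically
# overestimated propensity score (illustrative numbers, not from the paper).
n = 5000
x = rng.uniform(size=n)
true_pi = 0.2 + 0.6 * x
A = rng.binomial(1, true_pi)
pi_hat = np.clip(true_pi + 0.15, 0.0, 1.0)

# Isotonic calibration: fit a monotone step function of the predictions
# to the observed labels, then evaluate it at the predictions.
iso = IsotonicRegression(out_of_bounds="clip")
pi_cal = iso.fit(pi_hat, A).predict(pi_hat)
```

As in the R version, the calibrated propensities are a step function of the originals, and the same recipe applies to mu_hat with Y in place of pi_hat with A.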
Had a great time presenting at #ACIC on doubly robust inference via calibration
Calibrating nuisance estimates in DML protects against model misspecification and slow convergence.
Just one line of code is all it takes.
19.05.2025 00:02
link:
Nonparametric Instrumental Variable Inference with Many Weak Instruments (Laan, Kallus, Bibaut) We study inference on linear functionals in the nonparametric instrumental variable (NPIV) problem with a discretely-valued instrument under a many-weak-instruments asymptotic regime, where the
13.05.2025 16:43
Iโll be giving an oral presentation at ACIC in the Advancing Causal Inference session with ML on Wednesday!
My talk will be on Automatic Double Reinforcement Learning and long term causal inference!
Iโll discuss Markov decision processes, Q-functions, and a new form of calibration for RL!
12.05.2025 18:09
New preprint with #Netflix out!
We study the NPIV problem with a discrete instrument under a many-weak-instruments regime.
A key application: constructing confounding-robust surrogates using past experiments as instruments.
My mentor Aurélien Bibaut will be presenting a poster at #ACIC2025!
13.05.2025 10:43
Our work on stabilized inverse probability weighting via calibration was accepted to #CLeaR2025! I gave an oral presentation last week and was honored to receive the Best Paper Award.
Iโll be giving a related poster talk at #ACIC on calibration and DML and how it provides doubly robust inference!
12.05.2025 18:32
Link to paper:
arxiv.org/pdf/2501.06926
12.05.2025 18:19
This work is a result of my internship at Netflix over the summer and is joint with Aurélien Bibaut and Nathan Kallus.
12.05.2025 18:10
Inference for smooth functionals of M-estimands in survival models, such as the regularized Cox proportional hazards model and the beta-geometric model (see our experiments section), is one application of this approach.
12.05.2025 17:56
By targeting low-dimensional summaries, there is no need to establish asymptotic normality of the entire infinite-dimensional M-estimator (which isn't possible in general). This allows the use of ML and regularization to estimate it, with valid inference via a one-step bias correction.
12.05.2025 17:53
If you're willing to consider smooth functionals of the infinite-dimensional M-estimand, then there is a general theory for inference, where the sandwich variance estimator now involves the derivative of the loss and a Riesz representer of the functional.
Working paper:
arxiv.org/pdf/2501.11868
12.05.2025 17:51
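Schematically (my notation, not necessarily the paper's, and up to sign conventions), the construction for an M-estimand looks like:

```latex
% M-estimand and a smooth target functional
\theta_0 = \arg\min_{\theta}\, E[\ell(\theta; O)], \qquad \psi_0 = \Psi(\theta_0)

% One-step estimator: plug-in plus a bias correction built from the
% loss gradient and an estimated Riesz representer \hat{\alpha}
\hat{\psi} = \Psi(\hat{\theta})
  - \frac{1}{n}\sum_{i=1}^{n} \hat{\alpha}(O_i)\, \partial_{\theta}\ell(\hat{\theta}; O_i)
```

The sandwich-style variance estimate then comes from the empirical variance of the correction term, combining the loss derivative with the Riesz representer as described in the post.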
The motivation should have been something like: a confounder that is somewhat predictive of both the treatment and the outcome might be more important to adjust for than a variable that is super predictive of the outcome but doesn't predict treatment. TR might help give more importance to such variables.
25.04.2025 16:45
One could have given an analogous theorem saying that E[Y | T, X] is a sufficient deconfounding score and argued that one should only adjust for features predictive of the outcome. So yeah, I think it's wrong/poorly phrased.
25.04.2025 05:12
The OP's approach is based on the conditional probability of Y given that the treatment is intervened upon and set to some value. But they don't seem to define what this means formally, which is exactly what potential outcomes/NPSEMs achieve.
01.04.2025 03:33
The second-stage coefficients are the estimand (identifying the structural coefficients/treatment effect). The first-stage coefficients are nuisances, and typically not of direct interest.
20.03.2025 21:35
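A minimal simulated sketch of that split (illustrative numbers, plain least squares via numpy rather than an IV package): the first-stage coefficients only serve to form fitted treatments, while the second-stage slope is the quantity of interest.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 2SLS setup: instrument Z shifts treatment T; U confounds T and Y.
n = 50_000
Z = rng.normal(size=n)
U = rng.normal(size=n)
T = 1.0 * Z + U + rng.normal(size=n)
Y = 2.0 * T + U + rng.normal(size=n)   # true structural effect = 2

# First stage (nuisance): project T onto the instrument.
X1 = np.column_stack([np.ones(n), Z])
gamma = np.linalg.lstsq(X1, T, rcond=None)[0]
T_hat = X1 @ gamma

# Second stage: the slope on the fitted treatment is the estimand.
X2 = np.column_stack([np.ones(n), T_hat])
beta = np.linalg.lstsq(X2, Y, rcond=None)[0]
print(beta[1])
```

Note that naive standard errors from the second regression would be wrong; IV software corrects them, which is one reason to treat the first stage purely as a nuisance fit.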
The paper "Generalized Venn and Venn-Abers Calibration with Applications in Conformal Prediction" by Lars van der Laan and Ahmed Alaa introduces a comprehensive framework that extends Venn and Venn-Abers calibration methods to a broad range of prediction tasks and loss functions.
16.02.2025 18:20
Excited about this new paper on Generalized Venn Calibration and conformal prediction!
We show that Venn and Venn-Abers can be extended to general losses, and that conformal prediction can be viewed as Venn multicalibration for the quantile loss!
#calibration #conformal
11.02.2025 18:37
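As a loose illustration of the quantile view (my sketch of standard split conformal with absolute-residual scores, not the paper's Venn construction): the interval width is a finite-sample-corrected quantile of calibration residuals.

```python
import numpy as np

rng = np.random.default_rng(0)

def split_conformal_interval(preds_cal, y_cal, preds_test, alpha=0.1):
    """Split conformal with absolute-residual scores (standard construction)."""
    scores = np.abs(y_cal - preds_cal)
    n = len(scores)
    # Finite-sample-corrected quantile level ceil((n+1)(1-alpha))/n.
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    q = np.quantile(scores, level, method="higher")
    return preds_test - q, preds_test + q

# Toy usage: a constant-zero predictor on standard normal outcomes.
y_cal = rng.normal(size=2000)
lo, hi = split_conformal_interval(np.zeros(2000), y_cal, np.zeros(5000))
y_test = rng.normal(size=5000)
coverage = np.mean((y_test >= lo) & (y_test <= hi))
```

Empirical coverage lands near the nominal 90%, which is the quantile-calibration property the post says the Venn multicalibration view generalizes.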
Lars van der Laan, Ahmed Alaa
Generalized Venn and Venn-Abers Calibration with Applications in Conformal Prediction
https://arxiv.org/abs/2502.05676
11.02.2025 08:10
Great discussion! There we use the fact that balancing conditional on the propensity score (or the Riesz representer more generally) is sufficient for IPW and DR inference.
To add to the discussion, this paper connects balancing with debiasing in AIPW:
arxiv.org/pdf/2304.14545
25.01.2025 15:59
Your comment also reminds me of this paper, where they ensure the estimators solve a certain equation (which I think can be viewed as a kind of balance) using isotonic regression, and they show this leads to DR inference:
arxiv.org/pdf/2411.02771
25.01.2025 15:28
Thrilled to share our new paper! We introduce a generalized autoDML framework for smooth functionals in general M-estimation problems, significantly broadening the scope of problems where automatic debiasing can be applied!
22.01.2025 13:54
link:
Automatic Debiased Machine Learning for Smooth Functionals of Nonparametric M-Estimands (Laan, Bibaut, Kallus et al) We propose a unified framework for automatic debiased machine learning (autoDML) to perform inference on smooth functionals of infinite-dimensional M-estimands, defined as
22.01.2025 17:17
A new Double RL (yes RL) paper by @larsvanderlaan3.bsky.social and colleagues
Love this stuff, this is something I was thinking about for a while and great to see a paper on this topic!
#CausalSky
14.01.2025 17:54
CMU postdoc, previously MIT PhD. Causality, pragmatism, representation learning, and AI for biology / science more broadly. Proud rat dad.
Research Staff Member at IBM Research.
Causal Inference.
Machine Learning.
Data Communication.
Healthcare.
Creator of causallib: https://github.com/IBM/causallib
Website: https://ehud.co
Heisenberg Professor for Biostatistics at the Department of Statistics, LMU München | causal inference - missing data - HIV
michaelschomaker.github.io
Assistant professor of biostatistics at Columbia University
Casual inference, statistics, etc
Pauca sed Matura ("few, but ripe")
PhD in machine learning | conformal prediction | time-series | author of bestselling Practical Guide to Applied Conformal-Prediction https://a.co/d/iHRag4i
Biostatistics phd student @University of Washington
Interested in non-parametric statistics, causal inference, and science!
Assistant Professor of "Data Science in Economics" at Uni Tübingen. Interested in the intersection of causal inference and so-called machine learning.
Teaching material: https://github.com/MCKnaus/causalML-teaching
Homepage: mcknaus.github.io
dorothy gilford endowed chair and professor of statistics/biostatistics at university of washington, all views my own
source: https://arxiv.org/rss/stat.ML
maintainer: @tmaehara.bsky.social
asst. prof. of (bio)statistics at harvard: causal inference, semi-parametric estimation, machine learning, open-source software for statistics.
research webpage: https://nimahejazi.org
avid runner, concertgoer, timezone hopper
Columbia postdoc and ex-Quantco. Personal website: www.ohines.com
Professor of Biostatistics, University of Washington School of Public Health.
Affiliate Investigator, Fred Hutch Vaccine and Infectious Disease Division.
Causal inference, ML, survival analysis, statistical epi, viruses and vaccine science.
LTI PhD at CMU on evaluation and trustworthy ML/NLP, prev AI&CS Edinburgh University, Google, YouTube, Apple, Netflix. Views are personal.
athiyadeviyani.github.io
Assistant Prof. of CS at Johns Hopkins
Visiting Scientist at Abridge AI
Causality & Machine Learning in Healthcare
Prev: PhD at MIT, Postdoc at CMU
Assistant Professor at UC Berkeley and UCSF.
Machine Learning and AI for Healthcare. https://alaalab.berkeley.edu/
Biostatistician · Associate Prof @ Wake Forest University · former postdoc @ Hopkins Biostat · PhD @ Vandy Biostat · Casual Inference · lucymcgowan.com
Fostering a dialogue between industry and academia on causal data science.
Causal Data Science Meeting 2025: causalscience.org
We're a membership body for statisticians and data professionals, promoting a world with data at the heart of understanding and decision-making.