Yonghan Jung: Debiased Front-Door Learners for Heterogeneous Effects https://arxiv.org/abs/2509.22531 https://arxiv.org/pdf/2509.22531 https://arxiv.org/html/2509.22531
29.09.2025 06:53
@yonghanjung.bsky.social
Assistant professor at UIUC iSchool. Previously at Purdue CS. Work on Causal Data Science https://yonghanjung.me/
29.09.2025 06:53
Thrilled to share our new paper!
Paper: arxiv.org/abs/2509.22531
Code: github.com/yonghanjung/...
We develop the first orthogonal ML estimators for heterogeneous treatment effects (HTE) under front-door adjustment, enabling HTE identification even with unmeasured confounders.
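For context, a quick recap of the identification result such estimators target (my own summary, not text from the paper; see the paper for the exact estimands). Under the standard front-door model X → M → Y, where an unmeasured confounder affects X and Y and V denotes observed pre-treatment covariates, one common covariate-conditional front-door formula is

\[
E[Y \mid do(x), v] = \sum_{m} P(m \mid x, v) \sum_{x'} E[Y \mid m, x', v]\, P(x' \mid v),
\qquad
\tau(v) = E[Y \mid do(1), v] - E[Y \mid do(0), v].
\]

The paper develops debiased (orthogonal) ML learners for this kind of conditional front-door functional.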
If you're interested in working with me, feel free to reach out at yhansjung@gmail.com.
13.06.2025 20:40
I'm excited to share that I'll be joining the School of Information Sciences at UIUC as an Assistant Professor this Fall (ischool.illinois.edu/news-events/...). If you're interested in causal inference and its applications to trustworthy AI and healthcare, join me & let's work together!
13.06.2025 20:28
PhDone! I've successfully defended my thesis!
Huge thanks to my amazing advisor Elias Bareinboim and committee: Jennifer Neville, Jin Tian, Yexiang Xue, and @idiaz.bsky.social.
Grateful to collaborators, colleagues, lab mates, friends, neighbors, and above all, my wife, kid, and family!
New paper alert (hey, I can't doom scroll all the time): This one's on doing causal inference with "microlevel data" where we suspect that the treatment has spatial spillover & temporal carryover effects. We illustrate our new approach + package w/ application to US counterinsurgency efforts in Iraq
07.04.2025 23:58
Interesting way of using the copula method for sensitivity analysis in causal inference.
30.03.2025 15:55
Reinforcement learning has led to amazing breakthroughs in reasoning (e.g., R1), but can it discover truly new behaviors not already present in the base model?
A new paper with Zak Mhammedi and Dhruv Rohatgi:
The Computational Role of the Base Model in Exploration
arxiv.org/abs/2503.07453
It looks interesting!
21.03.2025 15:06
I really enjoyed reading this paper. From the perspective of a causal inference researcher, I agree that ML's real-world impact relies on scientific theory, because understanding causal mechanisms requires domain knowledge or theoretical assumptions. ML without theory simply leads us nowhere.
10.03.2025 18:11
link
Adaptive Experimentation When You Can't Experiment. arXiv:2406.10738v1 Announce Type: cross
Abstract: This paper introduces the confounded pure exploration transductive linear bandit (CPET-LB) problem. As a motivating example, often online services cannot directly assig
Join our #CIIG seminar next month for an Introduction to Mechanism Learning
Mechanism learning proposes using front-door causal bootstrapping so that ML models learn causal rather than "associational" (or spurious) relationships; see the sketch after this post.
See abstract and register: turing-uk.zoom.us/meeting/regi...
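A rough illustration of the front-door causal bootstrapping idea mentioned above (my own minimal sketch, assuming a discrete treatment X and mediator M; not necessarily the speakers' exact procedure): reweight observational samples by w_i = P(m_i | x*) / P(m_i | x_i) and resample, so the resampled (m, y) pairs approximate draws from P(m, y | do(x*)); a model trained on the resample then fits the interventional rather than the confounded observational relationship.

```python
import numpy as np

def front_door_bootstrap(x, m, y, x_star, rng=None):
    """Resample (m, y) pairs to approximate draws from P(m, y | do(x_star)).

    Assumes discrete x and m, the front-door structure x -> m -> y with an
    unobserved confounder between x and y, and that x_star is an observed
    treatment level. Illustrative sketch only.
    """
    rng = np.random.default_rng() if rng is None else rng
    x, m, y = np.asarray(x), np.asarray(m), np.asarray(y)

    # Estimate P(m | x) by simple frequency counts.
    def p_m_given_x(m_val, x_val):
        mask = x == x_val
        return np.mean(m[mask] == m_val)

    # Importance weights w_i = P(m_i | x*) / P(m_i | x_i).
    # E_obs[f(m, y) * w] = sum_{m,y} P(m | x*) f(m, y) sum_x P(x) P(y | m, x),
    # which is the front-door expression for E[f(M, Y) | do(x*)].
    w = np.array([p_m_given_x(mi, x_star) / p_m_given_x(mi, xi)
                  for mi, xi in zip(m, x)])
    w /= w.sum()

    # Resample indices with probability proportional to the weights.
    idx = rng.choice(len(y), size=len(y), replace=True, p=w)
    return m[idx], y[idx]
```

The function name and the frequency-count nuisance estimates are hypothetical choices for illustration; in practice P(m | x) would be estimated with whatever model suits the data, and the downstream ML model is then trained on the resampled pairs.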
@pedrosantanna.bsky.social onlinelibrary.wiley.com/doi/10.1111/... the biostatistics literature will use PO notation to describe the relevant objects. Just treat RL as an MDP with unknown transitions (it's true that RL doesn't use PO notation; it gets cumbersome, and many key objects relate to the Bellman equation).
13.01.2025 00:55
I've decided to collect my DiD materials in a single place.
psantanna.com/did-resources
There, you will find
- 14 lectures of my comprehensive DiD course
- Shorter lectures/talks I have given on DiD
- My DiD R/Stata/Python packages
- Some DiD checklists
- DiD materials from my friends
Enjoy!
Merry Christmas, friends and colleagues! Hope you all have wonderful days filled with joy!
25.12.2024 21:49
Looking ahead, my future work will explore:
1. High-dimensional, online streaming datasets.
2. Multi-modal data (e.g., text, images).
3. Robust causal inference with uncertainty quantification.
My past work focuses on estimating causal effects identifiable from graphs, with applications in xAI and healthcare. This includes advancing methods for multi-domain experimental data and distributional treatment effects, and designing computationally efficient estimators.
19.12.2024 18:46
Excited to share that I'm on the academic job market! I've been fortunate to work with Elias Bareinboim on causal inference, developing causal effect estimators using modern ML methods. Published in ICML, NeurIPS, AAAI, & more. Details: www.yonghanjung.me
19.12.2024 18:45
In sum, our work provides a computationally efficient and statistically robust estimator for various covariate adjustment estimands, including cases where no such estimators previously existed.
Come see our poster and let us chat more!
Next, we developed double machine learning (DML)-based estimators for the UCA class and provided finite-sample guarantees, showing that they achieve double robustness and scalability (i.e., computational efficiency).
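For readers new to DML, the generic recipe behind such estimators is sample splitting plus an orthogonal score. A minimal sketch, using the familiar back-door AIPW score for a binary treatment rather than the paper's UCA score (all names and models here are illustrative assumptions):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, GradientBoostingRegressor
from sklearn.model_selection import KFold

def dml_ate(X, A, Y, n_folds=5, clip=1e-2):
    """Cross-fitted AIPW estimate of E[Y(1)] - E[Y(0)] for binary treatment A.

    Illustrates the generic DML pattern (cross-fitting + orthogonal score);
    not the UCA estimator from the paper.
    """
    n = len(Y)
    scores = np.zeros(n)
    for train, test in KFold(n_folds, shuffle=True, random_state=0).split(X):
        # Fit nuisance functions on the training folds.
        ps = GradientBoostingClassifier().fit(X[train], A[train])
        mu1 = GradientBoostingRegressor().fit(X[train][A[train] == 1], Y[train][A[train] == 1])
        mu0 = GradientBoostingRegressor().fit(X[train][A[train] == 0], Y[train][A[train] == 0])
        # Evaluate the orthogonal (AIPW) score on the held-out fold.
        e = np.clip(ps.predict_proba(X[test])[:, 1], clip, 1 - clip)
        m1, m0 = mu1.predict(X[test]), mu0.predict(X[test])
        scores[test] = (m1 - m0
                        + A[test] * (Y[test] - m1) / e
                        - (1 - A[test]) * (Y[test] - m0) / (1 - e))
    return scores.mean()
```

The UCA-based estimators described above follow this same cross-fitting pattern, with UCA-specific score functions in place of the AIPW score.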
11.12.2024 18:19
The UCA class covers functionals expressed as a product of conditional probabilities. It includes the front-door adjustment, Verma's equation, S-admissibility, the effect of treatment on the treated, soft interventions, and many other practical causal estimands.
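For instance (my own illustration of the product form, using a member listed above), the front-door adjustment writes the interventional distribution as a product of observational conditional probabilities:

\[
P(y \mid do(x)) = \sum_{m}\sum_{x'} P(y \mid m, x')\, P(m \mid x)\, P(x').
\]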
11.12.2024 18:19
In this work,
1. We define a function class called "Unified Covariate Adjustment (UCA)" that incorporates various covariate adjustments; and
2. We develop a double machine learning (DML)-based estimator for the UCA class and provide finite-sample learning guarantees.
We will present our work "Unified Covariate Adjustment for Causal Inference" (joint work with Jin Tian & Elias Bareinboim) at #NeurIPS2024!
- Wed (12/11) from 11am - 2pm
- Poster Session 1 (East Hall A-C) #4901
- Link: openreview.net/pdf?id=aX9z2...
Come and see us!
I am attending NeurIPS 2024 this Tuesday through Sunday. I am also on the academic job market this year (www.yonghanjung.me). Happy to discuss potential opportunities! Get in touch if you'd like to chat! #NeurIPS2024
10.12.2024 23:20
Kandiros, Pipis, Daskalakis, and Harshaw have a really interesting new arXiv preprint on "conflict graph designs" for interference/spillovers: arxiv.org/abs/2411.10908 For GATE estimation, the improvement is very significant, and I'm optimistic/excited about how the ideas will impact the literature!
22.11.2024 13:51
As my first post on this platform, allow me to advertise the RL theory lecture notes I have been developing with Sasha Rakhlin: arxiv.org/abs/2312.16730
(shameless repost of my pinned tweet)
An Experimental Design for Anytime-Valid Causal Inference on Multi-Armed Bandits
https://arxiv.org/abs/2311.05794 (arXiv:2311.05794v2, Announce Type: replace)
Abstract: In multi-armed bandit (MAB) experiments, it is often advantageous to continuously produce inference on the average treatment effect...
11.09.2024 19:04
Susan Athey, Raj Chetty, Guido Imbens, Hyunseung Kang
Estimating Treatment Effects using Multiple Surrogates: The Role of the Surrogate Score and the Surrogate Index
https://arxiv.org/abs/1603.09326
Siyu Heng, Jiawei Zhang, Yang Feng
Design-Based Causal Inference with Missing Outcomes: Missingness Mechanisms, Imputation-Assisted Randomization Tests, and Covariate Adjustment
https://arxiv.org/abs/2310.18556
Sizhu Lu, Zhichao Jiang, Peng Ding
Principal Stratification with Continuous Post-Treatment Variables: Nonparametric Identification and Semiparametric Estimation
https://arxiv.org/abs/2309.12425