Yonghan Jung

@yonghanjung.bsky.social

Assistant professor at UIUC iSchool. Previously at Purdue CS. Work on Causal Data Science https://yonghanjung.me/

299 Followers 104 Following 33 Posts Joined Sep 2023
1 week ago

As shown, on a benchmark dataset with unbounded outcomes (IHDP), our approach provides a valid INDIVIDUALIZED bound!

1 week ago
Information-Theoretic Causal Bounds under Unmeasured Confounding We develop a data-driven information-theoretic framework for sharp partial identification of causal effects under unmeasured confounding. Existing approaches often rely on restrictive assumptions, suc...

📄 Paper: arxiv.org/abs/2601.17160
💻 GitHub: github.com/yonghanjung/...
📦 Install: pip install itbound

1 week ago

Causal inference needs strong assumptions 😔

However, BOUNDING CAUSAL EFFECTS should not need strong assumptions. 😃

`itbound` gives data-driven causal bounds in assumption-lean settings: unmeasured confounding, unbounded outcomes, no sensitivity parameters, etc.

📦 Install: pip install itbound
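To make the idea of assumption-lean bounds concrete, here is a minimal sketch of the classic Manski-style worst-case bound on E[Y(1)] for a bounded outcome. This is a generic textbook construction, not the `itbound` API (whose interface is not shown here), and the paper's information-theoretic bounds go further (e.g., unbounded outcomes):

```python
# Sketch: assumption-lean (Manski-style) worst-case bounds on E[Y(1)].
# No confounding assumptions; only that the outcome lies in [y_min, y_max].
import numpy as np

def manski_bounds(x, y, y_min, y_max):
    """Worst-case bounds on E[Y(1)] from observational data.

    x: binary treatment indicators; y: outcomes in [y_min, y_max].
    """
    x = np.asarray(x, dtype=bool)
    y = np.asarray(y, dtype=float)
    p1 = x.mean()                # P(X = 1)
    ey_x1 = y[x].mean()          # E[Y | X = 1]
    # Untreated units' potential outcome Y(1) is unobserved, so it is
    # replaced by its worst (y_min) or best (y_max) possible value.
    lower = ey_x1 * p1 + y_min * (1.0 - p1)
    upper = ey_x1 * p1 + y_max * (1.0 - p1)
    return lower, upper

# Toy data: half treated with outcome 1, half untreated.
lo, hi = manski_bounds([1, 1, 0, 0], [1.0, 1.0, 0.0, 0.0], 0.0, 1.0)
# lo = 0.5, hi = 1.0
```

The width of the bound, (y_max - y_min) * P(X = 0), shrinks only as the untreated fraction shrinks, which is why data-driven sharpening matters.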

1 week ago

Specifically, our results show that the proposed CATE estimators for the front-door (FD-DR, FD-R) outperform the plug-in front-door estimator, providing empirical evidence of their sample efficiency and robustness against bias.

1 week ago

Front-door (FD) enables causal effect identification through an observed mediator even when treatment-outcome confounding is unobserved.

Our work provides estimators that achieve sample efficiency and allow personalized treatment effects even when treatment-outcome confounding is unobserved.
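For intuition, the classic plug-in front-door adjustment in the fully discrete case, P(y | do(x)) = Σ_m P(m|x) Σ_{x'} P(x') P(y|x',m), can be sketched as follows; the toy distribution is made up for illustration and this is not the paper's debiased estimator:

```python
# Sketch: plug-in front-door adjustment for discrete X, M, Y,
#   P(y | do(x)) = sum_m P(m|x) * sum_x' P(x') P(y | x', m).
import numpy as np

def front_door(joint):
    """joint[x, m, y] = P(X=x, M=m, Y=y); returns P(Y=y | do(X=x))."""
    p_x = joint.sum(axis=(1, 2))                       # P(x)
    p_xm = joint.sum(axis=2)                           # P(x, m)
    p_m_given_x = p_xm / p_x[:, None]                  # P(m | x)
    p_y_given_xm = joint / p_xm[:, :, None]            # P(y | x, m)
    # inner[m, y] = sum_x' P(x') P(y | x', m)
    inner = np.einsum('a,amy->my', p_x, p_y_given_xm)
    # result[x, y] = sum_m P(m | x) inner[m, y]
    return np.einsum('xm,my->xy', p_m_given_x, inner)

# Toy joint distribution over binary X, M, Y.
rng = np.random.default_rng(0)
joint = rng.random((2, 2, 2))
joint /= joint.sum()
p_do = front_door(joint)
assert np.allclose(p_do.sum(axis=1), 1.0)  # each do-distribution sums to 1
```

The plug-in version above is exactly the baseline that debiased (orthogonal) estimators are designed to improve on when the nuisance probabilities must be estimated from finite samples.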

1 week ago
Debiased Front-Door Learners for Heterogeneous Effects In observational settings where treatment and outcome share unmeasured confounders but an observed mediator remains unconfounded, the front-door (FD) adjustment identifies causal effects through the m...

Our paper “Debiased Front-Door Learners for Heterogeneous Effects” was accepted to ICLR 2026.

- Paper (arXiv): arxiv.org/abs/2509.22531
- Reproducible code: github.com/yonghanjung/...

Quick start:
pip install fd-cate
fdcate demo --outdir ./fdcate-demo
#ICLR2026 #CausalInference #MachineLearning

5 months ago

Yonghan Jung: Debiased Front-Door Learners for Heterogeneous Effects https://arxiv.org/abs/2509.22531 https://arxiv.org/pdf/2509.22531 https://arxiv.org/html/2509.22531

5 months ago

Thrilled to share our new paper!
📄 Paper: arxiv.org/abs/2509.22531
💻 Code: github.com/yonghanjung/...

We develop the first orthogonal ML estimators for heterogeneous treatment effects (HTE) under front-door adjustment, enabling HTE identification even with unmeasured confounders.

9 months ago

If you're interested in working with me, feel free to reach out at yhansjung@gmail.com.

9 months ago
Jung to join the faculty The iSchool is pleased to announce that Yonghan Jung will join the faculty as an assistant professor in August 2025, pending approval by the University of Illinois Board of Trustees.

I'm excited to share that I'll be joining the School of Information Sciences at UIUC as an Assistant Professor this Fall (ischool.illinois.edu/news-events/...). If you're interested in causal inference and its applications to trustworthy AI and healthcare, join me & let's work together!

9 months ago

PhDone 🎓 I’ve successfully defended my thesis!
Huge thanks to my amazing advisor Elias Bareinboim and committee—Jennifer Neville, Jin Tian, Yexiang Xue, and @idiaz.bsky.social.
Grateful to collaborators, colleagues, lab mates, friends, neighbors—and above all, my wife, kid, and family!

11 months ago
Spatiotemporal causal inference with arbitrary spillover and carryover effects Micro-level data with granular spatial and temporal information are becoming increasingly available to social scientists. Most researchers aggregate such data into a convenient panel data format and a...

New paper alert (hey, I can't doom scroll all the time): This one's on doing causal inference with "microlevel data" where we suspect that the treatment has spatial spillover & temporal carryover effects. We illustrate our new approach + package w/ application to US counterinsurgency efforts in Iraq

11 months ago

📌 An interesting way of using the copula method for sensitivity analysis in causal inference.

11 months ago

Reinforcement learning has led to amazing breakthroughs in reasoning (e.g., R1), but can it discover truly new behaviors not already present in the base model?

A new paper with Zak Mhammedi and Dhruv Rohatgi:
The Computational Role of the Base Model in Exploration

arxiv.org/abs/2503.07453

11 months ago

It looks interesting!

1 year ago

I really enjoyed reading this paper. From the perspective of a causal inference researcher, I agree that ML's real-world impact relies on scientific theory, because understanding causal mechanisms requires domain knowledge or theoretical assumptions. ML without theory simply leads us nowhere.

1 year ago

link 📈🤖
Adaptive Experimentation When You Can't Experiment () arXiv:2406.10738v1 Announce Type: cross
Abstract: This paper introduces the \emph{confounded pure exploration transductive linear bandit} (\texttt{CPET-LB}) problem. As a motivating example, often online services cannot directly assig

1 year ago

👉 Join our #CIIG seminar next month for an Introduction to Mechanism Learning

👉 Mechanism learning proposes using front-door causal bootstrapping such that ML models learn causal rather than "associational" (or spurious) relationships

See abstract and register: turing-uk.zoom.us/meeting/regi...

1 year ago
Reinforcement Learning in Modern Biostatistics: Constructing Optimal Adaptive Interventions In recent years, reinforcement learning (RL) has acquired a prominent position in health-related sequential decision-making problems, gaining traction as a valuable tool for delivering adaptive inter....

@pedrosantanna.bsky.social onlinelibrary.wiley.com/doi/10.1111/... biostatistics literature will use PO notation to describe the relevant objects. Just treat RL as MDP with unknown transitions (it's true RL doesn't use PO notation - it gets cumbersome and many key objects relate to the Bellman eqn)

1 year ago
Pedro H. C. Sant’Anna

I've decided to collect my DiD materials in a single place.

psantanna.com/did-resources

There, you will find
- 14 lectures of my comprehensive DiD course
- Shorter lectures/talks I have given on DiD
- My DiD R/Stata/Python packages
- Some DiD checklists
- DiD materials from my friends

Enjoy!

1 year ago

Merry Christmas, friends and colleagues! Hope you all have wonderful days full of joy! 🎄

1 year ago

Looking ahead, my future direction will explore:
1️⃣ High-dimensional, online streaming datasets.
2️⃣ Multi-modal data (e.g., text, images).
3️⃣ Robust causal inference with uncertainty quantification.

1 year ago

My past work focuses on estimating causal effects identifiable from graphs, with applications in explainable AI (XAI) and healthcare. This includes advancing methods to handle multi-domain experimental data, distributional treatment effects, and designing computationally efficient estimators.

1 year ago
CausalAI Aficionado Yonghan Jung

Excited to share that I’m on the academic job market! I’ve been fortunate to work with Elias Bareinboim on causal inference, developing causal effect estimators using modern ML methods. Published in ICML, NeurIPS, AAAI, & more. Details: www.yonghanjung.me

1 year ago

In sum, our work provides a computationally efficient and statistically robust estimator for various covariate adjustment estimands, including cases where no such estimators previously existed.

Come see our poster and let's chat more!

1 year ago

Next, we developed double machine learning (DML)-based estimators for the UCA class and provided finite-sample guarantees, showing that they achieve double robustness and scalability (i.e., computational efficiency).
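As a rough illustration of the cross-fitting idea behind DML, here is the standard doubly robust (AIPW) estimator for the plain backdoor functional rather than the paper's UCA class; the models and toy data are purely illustrative:

```python
# Sketch: cross-fitted AIPW (doubly robust) estimate of E[Y(1)] - E[Y(0)].
# Nuisances (propensity, outcome regressions) are fit on one fold and
# evaluated on the other, which is the core of DML-style cross-fitting.
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression
from sklearn.model_selection import KFold

def aipw_crossfit(X, A, Y, n_splits=2, seed=0):
    psi = np.zeros(len(Y))
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=seed)
    for train, test in kf.split(X):
        e = LogisticRegression().fit(X[train], A[train])          # propensity
        mu1 = LinearRegression().fit(X[train][A[train] == 1],
                                     Y[train][A[train] == 1])     # E[Y|X,A=1]
        mu0 = LinearRegression().fit(X[train][A[train] == 0],
                                     Y[train][A[train] == 0])     # E[Y|X,A=0]
        ps = np.clip(e.predict_proba(X[test])[:, 1], 0.01, 0.99)
        m1, m0 = mu1.predict(X[test]), mu0.predict(X[test])
        # Doubly robust influence-function-style pseudo-outcome.
        psi[test] = (m1 - m0
                     + A[test] * (Y[test] - m1) / ps
                     - (1 - A[test]) * (Y[test] - m0) / (1 - ps))
    return psi.mean()

# Toy data with a true treatment effect of +2.
rng = np.random.default_rng(1)
n = 2000
X = rng.normal(size=(n, 3))
A = (rng.random(n) < 1.0 / (1.0 + np.exp(-X[:, 0]))).astype(int)
Y = 2.0 * A + X @ np.array([1.0, 0.5, -0.5]) + rng.normal(size=n)
est = aipw_crossfit(X, A, Y)  # close to the true effect of 2.0
```

The estimate stays consistent if either the propensity model or the outcome model is correct, which is the "double robustness" the post refers to; UCA-class estimands generalize this beyond the backdoor case.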

1 year ago

The UCA class covers functionals in the form of a product of conditional probabilities. It includes the front-door adjustment, Verma's equation, S-admissibility, the effect of treatment on the treated, soft interventions, and many other practical causal estimands.

1 year ago

In this work,
1. We define a function class called "Unified Covariate Adjustment (UCA)" that incorporates various covariate adjustments; and
2. We develop a double machine learning (DML)-based estimator for the UCA class and provide finite-sample learning guarantees.

1 year ago

We will present our work "Unified Covariate Adjustment for Causal Inference" (joint work with Jin Tian & Elias Bareinboim) at #NeurIPS2024!
- Wed (12/11) from 11am - 2pm
- Poster Session 1 (East Hall A-C) #4901
- Link: openreview.net/pdf?id=aX9z2...
Come and see us!

1 year ago

I am attending NeurIPS 2024 this Tuesday through Sunday. I am also on the academic job market this year (www.yonghanjung.me). Happy to discuss potential opportunities; get in touch if you'd like to chat! #NeurIPS2024
