
@vahidbalazadeh.bsky.social

9 Followers  |  19 Following  |  12 Posts  |  Joined: 10.12.2024

Latest posts by vahidbalazadeh.bsky.social on Bluesky

Worried about reliability?

CausalPFN has built-in calibration and can produce reliable estimates even on datasets that fall outside its pretraining prior.
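A toy numpy sketch of what calibration means here (not CausalPFN's actual code): for an exactly calibrated posterior, 90% credible intervals should contain the true effect about 90% of the time. This uses a conjugate-normal model where the posterior is known in closed form.

```python
import numpy as np

rng = np.random.default_rng(0)

def coverage_90(n_trials=20000):
    """Empirical coverage of 90% central credible intervals from an exact
    conjugate-normal posterior (effect ~ N(0,1), one N(effect, 1) observation)."""
    z = 1.6449  # 95th percentile of the standard normal
    hits = 0
    for _ in range(n_trials):
        theta = rng.normal()            # true effect drawn from the prior
        y = theta + rng.normal()        # one noisy observation
        post_mean, post_sd = y / 2, 0.5 ** 0.5   # exact posterior is N(y/2, 1/2)
        lo, hi = post_mean - z * post_sd, post_mean + z * post_sd
        hits += lo <= theta <= hi
    return hits / n_trials

cov = coverage_90()   # close to 0.90: the intervals are calibrated
```

A miscalibrated model would show coverage well above or below the nominal 90% in the same check.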

Try it using: pip install causalpfn

Made with ❤️ for better causal inference
[7/7]

#CausalInference #ICML2025

11.06.2025 13:13 — 👍 1    🔁 0    💬 0    📌 0

When does it work?

Our theory shows that the posterior distribution of causal effects is consistent if and only if the pretraining data includes only identifiable causal structures.

👉 We show how to carefully design the prior, one of the key differences between our work and predictive PFNs. [6/7]
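Stated informally (a paraphrase of the claim above, not the paper's exact theorem), writing π for the pretraining prior, D_n for an observed dataset of size n, and τ* for the true effect:

```latex
% Informal paraphrase of the consistency claim.
\[
  p\bigl(\tau \mid \mathcal{D}_n\bigr) \;\longrightarrow\; \delta_{\tau^{*}}
  \ \text{ as } n \to \infty
  \quad\Longleftrightarrow\quad
  \text{every causal structure in } \operatorname{supp}(\pi)
  \text{ is identifiable.}
\]
```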

11.06.2025 13:13 — 👍 0    🔁 0    💬 1    📌 0

Real-world uplift modelling:

CausalPFN works out of the box on real-world data. On 5 real marketing RCTs (Hillstrom, Criteo, Lenta, etc.), it outperforms baselines such as the X-, S-, and DA-Learners on policy evaluation (Qini score). [5/7]
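The Qini score used here can be sketched in plain numpy. This is one standard formulation of the Qini curve and coefficient, not the exact metric implementation from the paper:

```python
import numpy as np

def qini_curve(uplift_pred, treatment, outcome):
    """Cumulative incremental gain when targeting by predicted uplift,
    Qini(k) = R_t(k) - R_c(k) * N_t(k) / N_c(k) (a standard formulation)."""
    order = np.argsort(-uplift_pred)       # highest predicted uplift first
    t, y = treatment[order], outcome[order]
    n_t = np.cumsum(t)                     # treated units seen so far
    n_c = np.maximum(np.cumsum(1 - t), 1)  # controls seen so far (avoid 0-div)
    r_t = np.cumsum(y * t)                 # treated outcomes so far
    r_c = np.cumsum(y * (1 - t))           # control outcomes so far
    return r_t - r_c * n_t / n_c

def qini_coefficient(uplift_pred, treatment, outcome):
    """Average gap between the Qini curve and the random-targeting line."""
    q = qini_curve(uplift_pred, treatment, outcome)
    n = len(q)
    random_line = q[-1] * np.arange(1, n + 1) / n
    return float(np.mean(q - random_line))
```

On synthetic data where only half the population responds to treatment, ranking by the true uplift signal scores higher than ranking at random.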

11.06.2025 13:13 — 👍 0    🔁 0    💬 1    📌 0

Benchmarks:

On IHDP, ACIC, Lalonde:
– Best avg. rank across many tasks
– Faster than all baselines
– No tuning needed, unlike the baselines (which were tuned via cross-validation)
[4/7]

11.06.2025 13:13 — 👍 0    🔁 0    💬 1    📌 0

Why does it matter?

Causal inference traditionally needs domain expertise + hyperparameter tuning across dozens of estimators. CausalPFN flips this paradigm: we pay the cost once (at pretraining), and it's then ready to use out of the box! [3/7]

11.06.2025 13:13 — 👍 0    🔁 0    💬 1    📌 0

What is it?

CausalPFN turns effect estimation into a supervised learning problem. It's a transformer trained on millions of simulated datasets, learning to map from data directly to treatment-effect distributions. At test time, no fine-tuning or manual estimator selection is required. [2/7]
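A toy sketch of the amortized idea, with least squares on dataset summary statistics standing in for CausalPFN's transformer (all simulation choices here are illustrative assumptions): "pretrain" on many simulated datasets with known effects, then estimate a fresh dataset's effect in a single forward pass.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_dataset(n=200):
    """One synthetic RCT from a random linear model with a known ATE."""
    tau = rng.normal()                        # true average treatment effect
    x = rng.normal(size=n)
    t = rng.integers(0, 2, size=n)
    y = x + tau * t + rng.normal(0, 0.5, size=n)
    return x, t, y, tau

def featurize(x, t, y):
    """Summary statistics through which the amortized model sees a dataset."""
    return np.array([y[t == 1].mean(), y[t == 0].mean(),
                     x[t == 1].mean(), x[t == 0].mean(), 1.0])

# "Pretraining": regress the true effect on dataset summaries across sims.
feats, taus = [], []
for _ in range(2000):
    x, t, y, tau = simulate_dataset()
    feats.append(featurize(x, t, y))
    taus.append(tau)
w, *_ = np.linalg.lstsq(np.array(feats), np.array(taus), rcond=None)

# "Inference": one dot product estimates the effect of a fresh dataset.
x, t, y, tau = simulate_dataset()
tau_hat = float(featurize(x, t, y) @ w)
```

The fitted weights recover roughly the covariate-adjusted difference in means, so no per-dataset estimator is fit at inference time.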

11.06.2025 13:13 — 👍 1    🔁 0    💬 1    📌 0

🚨 Introducing CausalPFN, a foundation model trained on simulated data for in-context causal effect estimation, based on prior-fitted networks (PFNs). Joint work with Hamid Kamkari, Layer6AI & @rahulgk.bsky.social 🧵 [1/7]

πŸ“ arxiv.org/abs/2506.07918
πŸ”— github.com/vdblm/Causal...
πŸ—£οΈOral@ICML SIM workshop

11.06.2025 13:13 — 👍 4    🔁 1    💬 1    📌 2
Link preview — Sequential Decision Making with Expert Demonstrations under Unobserved Heterogeneity: "We study the problem of online sequential decision-making given auxiliary demonstrations from experts who made their decisions based on unobserved contextual information. These demonstrations can be v..."

Our general approach can be applied to various settings like bandits, MDPs, and POMDPs (5/5)

❤️ w/ Keertana Chidambaram, Viet Nguyen, @rahulgk.bsky.social, and Vasilis Syrgkanis

Link to paper: arxiv.org/abs/2404.07266

12.12.2024 16:05 — 👍 0    🔁 0    💬 0    📌 0

To do so, we consider all prior distributions on the unobserved factors (e.g. the distribution over each arm's mean reward) that align with the expert data. We then choose the prior with the maximum entropy (least information) and apply posterior sampling to guide the exploration (4/5)
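A minimal sketch in this spirit (hypothetical numbers, and simple Beta updates from expert pulls rather than the paper's maximum-entropy construction): build an informed prior from expert (arm, reward) demonstrations, then run posterior (Thompson) sampling.

```python
import numpy as np

rng = np.random.default_rng(0)

true_means = np.array([0.2, 0.5, 0.8])              # unknown to the learner
expert = [(2, 1), (2, 1), (2, 0), (1, 1), (1, 0)]   # (arm, reward) demos

# Informed prior: fold the expert outcomes into a Beta(1, 1) per arm.
alpha = np.ones(3)
beta = np.ones(3)
for arm, r in expert:
    alpha[arm] += r
    beta[arm] += 1 - r

def thompson_step():
    """Sample a mean per arm from the posterior, play the argmax, update."""
    arm = int(np.argmax(rng.beta(alpha, beta)))
    reward = int(rng.random() < true_means[arm])
    alpha[arm] += reward
    beta[arm] += 1 - reward
    return arm

pulls = [thompson_step() for _ in range(500)]
```

Because the expert data already favors the better arms, the sampler concentrates on the best arm quickly instead of exploring from a flat prior.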

12.12.2024 16:05 — 👍 0    🔁 0    💬 1    📌 0

Online exploration can eventually identify unobserved factors but requires trial and error. Instead, we use expert data to limit the exploration space. In a billion-armed bandit with expert data spanning only the first ten actions, the learner should only explore those ten arms (3/5)
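A tiny numpy illustration of the exploration-restriction point (a hypothetical setup, not the paper's algorithm): in a many-armed Bernoulli bandit, confining Thompson sampling to the arms the expert actually touched shrinks the space the learner has to search.

```python
import numpy as np

rng = np.random.default_rng(1)

n_arms = 1000
true_means = rng.uniform(size=n_arms)   # unknown Bernoulli reward means
expert_arms = np.arange(10)             # expert data covers only 10 arms

def run_thompson(candidates, horizon=2000):
    """Thompson sampling restricted to a candidate set; returns total reward."""
    a = np.ones(len(candidates))
    b = np.ones(len(candidates))
    total = 0
    for _ in range(horizon):
        i = int(np.argmax(rng.beta(a, b)))
        r = int(rng.random() < true_means[candidates[i]])
        a[i] += r
        b[i] += 1 - r
        total += r
    return total

restricted = run_thompson(expert_arms)  # searches 10 arms, not 1000
```

Running the same loop over all 1000 arms would spend most of a 2000-step horizon pulling each arm only once or twice, while the restricted learner converges on the best expert-supported arm almost immediately.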

12.12.2024 16:05 — 👍 0    🔁 0    💬 1    📌 0

Unobserved confounding factors affect the expert policy in ways the learner cannot see. An important example is experts acting on privileged information. Naive imitation collapses the experts into a single aggregated policy per observed state and fails to generalize (2/5)

12.12.2024 16:05 — 👍 0    🔁 0    💬 1    📌 0

How can we use offline expert data with unobserved confounding to guide exploration in RL? Our approach is to learn prior distributions from expert data and follow posterior sampling.

Come to our poster at #NeurIPS2024 today to learn more!

πŸ—“οΈ Thu 12 Dec 4:30 - 7 pm PST
πŸ“ West Ballroom A-D #6708

(1/5)

12.12.2024 16:05 — 👍 0    🔁 0    💬 1    📌 1
