
Murat Kocaoglu

@murat-kocaoglu.bsky.social

Asst. Prof. at Purdue ECE. Causal ML Lab. Causal discovery, causal inference, deep generative models, info theory, online learning. Past: MIT-IBM AI Lab, UT Austin, Koc, METU.

125 Followers  |  97 Following  |  32 Posts  |  Joined: 20.11.2024

Latest posts by murat-kocaoglu.bsky.social on Bluesky

Thank you so much, Naftali! Hope all is well.

05.06.2025 16:52 — 👍 0    🔁 0    💬 0    📌 0

Thank you!

03.06.2025 11:26 — 👍 1    🔁 0    💬 0    📌 0
CausalML Lab

Bookmark our lab page and GitHub repo to follow our work:
muratkocaoglu.com/CausalML/
github.com/CausalML-Lab

03.06.2025 08:42 — 👍 0    🔁 0    💬 0    📌 0

CausalML Lab will continue to push the boundaries of fundamental causal inference and discovery research with an added focus on real-world applications and impact. If you are at Johns Hopkins @jhu.edu, or more generally on the East Coast, and are interested in collaborating, please reach out.

03.06.2025 08:42 — 👍 1    🔁 0    💬 1    📌 0

I am also deeply grateful to Purdue University and @purdueece.bsky.social for their support during my first four years as a professor. I had the privilege of teaching enthusiastic undergrads and working with outstanding PhD students and great colleagues there. I learned a great deal from them.

03.06.2025 08:42 — 👍 0    🔁 0    💬 1    📌 0

I am happy to share that I will be joining Johns Hopkins University's Computer Science Department @jhucompsci.bsky.social as an Assistant Professor in Fall 2025.

I am grateful to my mentors for their unwavering support and to my exceptional PhD students for advancing our lab's research vision.

03.06.2025 08:42 — 👍 7    🔁 0    💬 2    📌 2

Faithfulness can be relaxed significantly. Determinism (?): I am not sure what that means, but I don't think it's equivalent to the causal Markov condition. The no-unmeasured-confounders assumption is not necessary either. You just need enough sparsity to observe some independence pattern; that lets you learn something, though not necessarily everything.

25.01.2025 03:23 — 👍 2    🔁 0    💬 0    📌 0
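A minimal sketch of the sparsity point, assuming a linear-Gaussian chain X -> Y -> Z for simplicity (all names illustrative): the single observable independence X _||_ Z | Y already tells you X and Z are non-adjacent, so you learn something, even though observational data alone cannot orient every edge.

```python
# Simulate the chain X -> Y -> Z and test the independence pattern with a
# simple partial-correlation check (valid in the linear-Gaussian case).
import numpy as np

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=n)
y = 2.0 * x + rng.normal(size=n)
z = -1.5 * y + rng.normal(size=n)

def partial_corr(a, b, c):
    """Correlation of a and b after linearly regressing out c."""
    ra = a - np.polyval(np.polyfit(c, a, 1), c)
    rb = b - np.polyval(np.polyfit(c, b, 1), c)
    return np.corrcoef(ra, rb)[0, 1]

print(np.corrcoef(x, z)[0, 1])  # strongly nonzero: X and Z are dependent
print(partial_corr(x, z, y))    # ~0: X _||_ Z | Y, so X-Z is not an edge
```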

They used to give a champagne glass? I was surprised to see something other than a mug this year.

28.12.2024 17:29 — 👍 2    🔁 0    💬 1    📌 0

Teaching young researchers mathematical and scientific rigor is more important than ever today with AI tools' wider adoption in research, as these tools tend to be overly optimistic. LLM-assisted false proofs risk flooding our already overloaded reviewing infrastructure.

24.12.2024 20:21 — 👍 3    🔁 1    💬 0    📌 0
Preview
Sample Efficient Bayesian Learning of Causal Graphs from Interventions
Causal discovery is a fundamental problem with applications spanning various areas in science and engineering. It is well understood that solely using observational data, one can only orient the...

We will present this work at #NeurIPS2024 on Wednesday at 4:30pm local time in Vancouver. Poster #5107.

Led by my PhD students Zihan Zhou and Qasim Elahi.

Paper link:
openreview.net/forum?id=RfS...

Follow us for more updates from the #CausalML Lab!

10.12.2024 17:13 — 👍 1    🔁 0    💬 0    📌 0

Wienöbst et al.'s method for uniformly sampling Markov equivalent DAGs allows us to answer other interesting questions. We focus on estimating the causal effect of non-manipulable variables. We can learn the edges adjacent to such a node (a graph cut) and use adjustment on observational data.

10.12.2024 17:13 — 👍 0    🔁 0    💬 1    📌 0
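A brute-force sketch of that pipeline on a toy graph (illustrative only; Wienöbst et al.'s algorithm does the uniform sampling in polynomial time, while exhaustive enumeration here is just a stand-in): enumerate the Markov equivalence class, draw a DAG uniformly, and read off the cut around the target node.

```python
# Enumerate all DAGs in the Markov equivalence class of a small, fully
# undirected CPDAG, sample one uniformly, and inspect the edges adjacent
# to a target node (the "cut").
import itertools, random

skeleton = [(0, 1), (1, 2), (1, 3)]   # undirected CPDAG skeleton (a star)
target = 1

def is_acyclic(edges):
    # Kahn-style topological check.
    nodes = {u for e in edges for u in e}
    indeg = {v: 0 for v in nodes}
    for u, v in edges:
        indeg[v] += 1
    frontier = [v for v in nodes if indeg[v] == 0]
    seen = 0
    while frontier:
        u = frontier.pop()
        seen += 1
        for a, b in edges:
            if a == u:
                indeg[b] -= 1
                if indeg[b] == 0:
                    frontier.append(b)
    return seen == len(nodes)

def v_structures(edges):
    # Colliders a -> v <- b with a, b non-adjacent.
    pa = {}
    for u, v in edges:
        pa.setdefault(v, set()).add(u)
    adj = {frozenset(e) for e in edges}
    return {(a, v, b) for v, ps in pa.items()
            for a, b in itertools.combinations(sorted(ps), 2)
            if frozenset((a, b)) not in adj}

# For this fully undirected CPDAG, the Markov equivalent DAGs are exactly
# the acyclic orientations that create no v-structure.
mec = []
for bits in itertools.product([0, 1], repeat=len(skeleton)):
    dag = [(u, v) if b else (v, u) for (u, v), b in zip(skeleton, bits)]
    if is_acyclic(dag) and not v_structures(dag):
        mec.append(dag)

dag = random.choice(mec)               # uniform draw from the class
cut = [e for e in dag if target in e]  # orientations of edges at the target
print(len(mec), cut)
```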
Post image

We then update the posteriors over each graph cut, which quickly converge to the true cut configurations. This gives us a sample-efficient way to learn causal graphs through interventions non-parametrically for discrete variables.

The green curve at the bottom is ours; the others are baselines.

10.12.2024 17:13 — 👍 0    🔁 0    💬 1    📌 0
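A minimal sketch of the posterior update, with toy likelihood tables standing in for the ones computed from observational data (the next post describes that computation):

```python
# Keep a posterior over a small set of cut hypotheses and multiply in the
# likelihood of each interventional sample; the posterior concentrates on
# the true configuration. All numbers are illustrative.
import numpy as np

# lik[h][x] = probability of outcome x under the intervention if cut
# hypothesis h is true.
lik = np.array([[0.7, 0.3],    # hypothesis 0
                [0.4, 0.6]])   # hypothesis 1

posterior = np.array([0.5, 0.5])       # uniform prior over the hypotheses
for x in [0, 0, 1, 0]:                 # observed interventional samples
    posterior = posterior * lik[:, x]  # Bayes rule, unnormalized
    posterior /= posterior.sum()
print(posterior)                       # concentrates on hypothesis 0
```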

Assuming we have enough obs. data, we can compute the likelihood of our intv. samples given any graph cut, even though we don't know the graph. Two ways to do this are by unif. sampling causal graphs in poly-time thanks to Wienöbst et al. 2023, or by adjustment of Perkovic 2020.

10.12.2024 17:13 — 👍 0    🔁 0    💬 1    📌 0
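A toy illustration of the likelihood computation, assuming two binary variables and illustrative numbers: under a hypothesized cut, the truncated factorization evaluates P(sample | do(A=a)) from observational conditionals alone, and competing hypotheses disagree, which is exactly what makes interventional samples informative.

```python
# Observational estimates for binary A, B; two cut hypotheses about the
# edge between them. Under "A -> B", P(B | do(A=a)) = P(B | A=a); under
# "B -> A", do(A=a) cuts the edge, so P(B | do(A=a)) = P(B).
import numpy as np

p_a = np.array([0.5, 0.5])              # P(A)
p_b_given_a = np.array([[0.8, 0.2],     # P(B | A=0)
                        [0.3, 0.7]])    # P(B | A=1)
p_b = p_a @ p_b_given_a                 # P(B) by marginalization

def lik(hypothesis, a_val, b_val):
    if hypothesis == "A->B":
        return p_b_given_a[a_val, b_val]   # truncated factorization
    return p_b[b_val]                      # "B->A": intervention cuts edge

print(lik("A->B", 1, 1), lik("B->A", 1, 1))   # 0.7 vs 0.45
```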

We leverage an idea from the 2010s on learning causal graphs with small interventions: (n, k) separating systems cut every edge with interventions of size k, and each intervention gives information about a cut. We keep track of posteriors over the set of graph cuts induced by an (n, k) separating system.

10.12.2024 17:13 — 👍 0    🔁 0    💬 1    📌 0
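A sketch of one classic (n, k) separating system construction from that line of work (e.g., Shanmugam et al. 2015), assuming k < n: label each node in base ceil(n/k) and take one intervention set per (digit position, symbol) pair; any two nodes differ in some digit, so every edge is cut by some set of size at most k.

```python
# Build the base-a labeling separating system and verify that every pair
# of nodes is split by some set (so every possible edge gets cut).
import math
from itertools import combinations

def separating_system(n, k):
    a = max(2, math.ceil(n / k))        # alphabet size
    d = math.ceil(math.log(n, a)) + 1   # digits (+1 guards float rounding)
    sets = []
    for p in range(d):
        for s in range(a):
            block = [i for i in range(n) if (i // a**p) % a == s]
            if 0 < len(block) <= k:
                sets.append(block)
    return sets

S = separating_system(n=8, k=4)
assert all(any((i in b) != (j in b) for b in S)
           for i, j in combinations(range(8), 2))
print(len(S), S)
```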

Bayesian approaches are promising since they can incorporate the causal information in even a single interventional sample. However, they are computationally intensive to run on large graphs.

Instead of keeping track of all causal DAGs, can we keep track of a compact set of subgraphs?

10.12.2024 17:13 — 👍 0    🔁 0    💬 1    📌 0

You have causal questions. The first step is determining the cause-effect relations in your system, which causal graphs capture compactly. We usually need to run experiments and use the outcomes to infer causal graphs. But experiments are expensive and come with few samples.

#NeurIPS2024

10.12.2024 17:13 — 👍 1    🔁 0    💬 1    📌 0
Preview
Partial Structure Discovery is Sufficient for No-regret Learning in...
Causal knowledge about the relationships among decision variables and a reward variable in a bandit setting can accelerate the learning of an optimal decision. Current works often assume the causal...

I will present this work at #NeurIPS2024 next Thursday at 11am local time in Vancouver. Poster #5104.

Led by my PhD student Qasim Elahi. Joint work with my colleague Mahsa Ghasemi.

Paper link:
openreview.net/forum?id=uM3...

Follow us for more updates from the #CausalML Lab!

08.12.2024 19:36 — 👍 1    🔁 1    💬 0    📌 0
Post image

Finally, we have our bandit algorithm, which can operate in unknown environments by taking advantage of the fact that partial causal discovery is sufficient for achieving optimal regret. Pseudocode below:

08.12.2024 19:36 — 👍 0    🔁 0    💬 1    📌 0
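The pseudocode image is not reproduced here; below is only a generic stand-in, not the paper's algorithm: once the candidate arms (e.g., the learned POMISs) are fixed, a standard UCB1 loop over them suffices, and the paper's method additionally interleaves the partial-discovery rounds described in this thread. Arm names and reward means are hypothetical.

```python
# UCB1 over a reduced set of intervention arms.
import math, random

def pull(arm):
    # Hypothetical environment: Bernoulli reward per intervention arm.
    means = {"do()": 0.3, "do(V1=1)": 0.5, "do(V2=1)": 0.7}
    return 1.0 if random.random() < means[arm] else 0.0

arms = ["do()", "do(V1=1)", "do(V2=1)"]
counts = {a: 0 for a in arms}
sums = {a: 0.0 for a in arms}

for t in range(1, 5001):
    if t <= len(arms):                 # play each arm once to initialize
        arm = arms[t - 1]
    else:                              # then pick by the UCB1 index
        arm = max(arms, key=lambda a: sums[a] / counts[a]
                  + math.sqrt(2 * math.log(t) / counts[a]))
    counts[arm] += 1
    sums[arm] += pull(arm)

print(max(arms, key=lambda a: sums[a] / counts[a]))   # ~ "do(V2=1)"
```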
Post image

A toy example from the paper: missing V1 <--> V3 does not affect the possibly optimal minimal intervention sets (POMIS), while missing any other bidirected edge does. So we don't need to allocate rounds in our causal bandit algorithm for learning this edge after learning the rest.

08.12.2024 19:36 — 👍 0    🔁 0    💬 1    📌 0

We find that not all confounder locations are needed. You can get away with not learning some and still end up with the same POMIS set, which means you will never miss an optimal arm!

We propose an interventional causal discovery algorithm that takes advantage of this observation.

08.12.2024 19:36 — 👍 0    🔁 0    💬 1    📌 0

If you don't know the causal graph, you may try to learn it from data and/or experiments. We identify an interesting research question here:

Do we need to know all unobserved confounders to learn all POMISes? Or can we get away without knowing some?

This is not obvious.

08.12.2024 19:36 — 👍 0    🔁 0    💬 1    📌 0

A very nice idea is the POMIS, developed by Sanghack Lee and Elias Bareinboim in 2018. They use do-calculus to eliminate unnecessary actions that give the same reward, and they propose a principled algorithm to do this. They also show that if you use anything less, you may miss the optimal arm.

08.12.2024 19:36 — 👍 0    🔁 0    💬 1    📌 0
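A simplified sketch of the action-space-reduction intuition, not the full POMIS characterization (which also accounts for unobserved confounders): interventions that touch non-ancestors of the reward cannot change it, so those arms can be pruned. Graph and names are illustrative.

```python
# Prune intervention sets that include non-ancestors of the reward Y.
from itertools import chain, combinations

parents = {"Y": {"V2"}, "V2": {"V1"}, "V1": set(), "W": set()}  # toy DAG

def ancestors(node):
    out, stack = set(), list(parents[node])
    while stack:
        u = stack.pop()
        if u not in out:
            out.add(u)
            stack.extend(parents[u])
    return out

anc_y = ancestors("Y")
arms = [set(s) for s in chain.from_iterable(
    combinations(["V1", "V2", "W"], r) for r in range(3))]
kept = [a for a in arms if a <= anc_y]   # drop arms touching non-ancestors
print(anc_y, len(arms), len(kept))       # {'V1', 'V2'} 7 4
```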

Most existing work focuses on how this action space reduction can be done algorithmically if you know the causal structure. In a semi-Markovian model, this includes the location of every unobserved confounder, each represented as a bidirected edge.

08.12.2024 19:36 — 👍 0    🔁 0    💬 1    📌 0

You want to optimize a reward in an unknown environment. Structural knowledge of cause-effect relations is known to help significantly reduce the search space for bandit algorithms. But how much of the causal structure do you need to know to do this?

#NeurIPS2024

08.12.2024 19:36 — 👍 2    🔁 1    💬 1    📌 0

Not exactly sure what that means or how to do that. Any link with more info on this?

08.12.2024 01:35 — 👍 0    🔁 0    💬 1    📌 0
Preview
Conditional Generative Models are Sufficient to Sample from Any...
Causal inference from observational data plays a critical role in many applications in trustworthy machine learning. While sound and complete algorithms exist to compute causal effects, many of them...

We will present this work at #NeurIPS2024 next Thursday at 11am local time in Vancouver. Poster #5103.

Joint work led by my PhD student Md. Musfiqur Rahman and colleague Matt Jordan.

Paper link:
openreview.net/forum?id=vym...

Follow us for more updates from the #CausalML Lab!

08.12.2024 00:57 — 👍 1    🔁 0    💬 0    📌 0
Post image

With our method, we can quantify how spurious correlations in the training data affect large image generative models. For example, we can quantify how much changing the biological sex of a person affects their perceived age, a non-causal relation that shouldn't be there:

08.12.2024 00:57 — 👍 0    🔁 0    💬 1    📌 0
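A hedged sketch of how such a shift could be measured, with hypothetical stand-ins for the trained models (sample_do_sex and predict_age are not from the paper): sample under each intervention on the attribute, score with a downstream predictor, and compare the means.

```python
# Compare a predictor's output across interventional samples; a nonzero
# gap quantifies the spurious (non-causal) association.
import numpy as np

def sample_do_sex(s, n):        # hypothetical interventional image sampler
    rng = np.random.default_rng(s)
    return rng.normal(size=(n, 16))     # stand-in for generated images

def predict_age(images):        # hypothetical pretrained age predictor
    return images.sum(axis=1) * 0.5 + 30.0

gap = (predict_age(sample_do_sex(1, 1000)).mean()
       - predict_age(sample_do_sex(0, 1000)).mean())
print(gap)   # perceived-age shift attributable to do(sex)
```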

Our algorithm mimics the steps of the ID algorithm and inherits its soundness and completeness guarantees for arbitrary causal queries.

It can also seamlessly sample from conditional interventional queries, whereas existing deep causal generative models rely on rejection sampling, which is slow.

08.12.2024 00:57 — 👍 0    🔁 0    💬 1    📌 0

Our paper answers this question positively. We propose an algorithm that trains a collection of conditional generators and ties them together to sample from any causal effect estimand.

This unlocks the potential of diffusion models for causal inference!

08.12.2024 00:57 — 👍 1    🔁 0    💬 2    📌 0
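A minimal sketch of the core idea in the simplest Markovian (no hidden confounder) case, with toy Gaussian generators standing in for learned conditional models; the paper's algorithm handles the general case by following the ID algorithm's steps.

```python
# Ancestral sampling with the intervened node clamped: sample each node
# from its conditional generator given its parents, in topological order.
import numpy as np

rng = np.random.default_rng(0)
parents = {"X": [], "Z": ["X"], "Y": ["Z"]}   # chain X -> Z -> Y

def generator(node, pa_values):
    # Stand-in for a learned conditional generative model P(node | parents).
    return sum(pa_values) + rng.normal()

def sample_do(do_node, do_value, order=("X", "Z", "Y")):
    values = {}
    for v in order:
        if v == do_node:
            values[v] = do_value      # clamp: do(v = value)
        else:
            values[v] = generator(v, [values[p] for p in parents[v]])
    return values

samples = [sample_do("X", 2.0)["Y"] for _ in range(2000)]
print(np.mean(samples))   # ~2.0: E[Y | do(X=2)] in this toy chain
```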

Deep generative models give us a way to sample from such conditional distributions, learning them implicitly. A fundamental question we ask is, can we sample from any identifiable causal query (distribution) just by using conditional generative models?

08.12.2024 00:57 — 👍 0    🔁 0    💬 1    📌 0
