
Arman Oganisian

@stablemarkets.bsky.social

Statistician | Assistant professor @ Brown University Dept of Biostatistics | Developing nonparametric Bayesian methods for causal inference. Research site: stablemarkets.netlify.app #statsky

214 Followers  |  152 Following  |  93 Posts  |  Joined: 14.12.2024

Posts by Arman Oganisian (@stablemarkets.bsky.social)

When it comes to likelihood-based inference, it’s not that we are “treating X as fixed”; it’s that we implicitly assume F_{Y|X} and F_{X} have distinct parameters, so that contributions from F_{X} are ignorable.

01.03.2026 20:55 — 👍 1    🔁 0    💬 0    📌 0

Teaching regression in my Bayes class, and one thing I don’t like is language about whether we “treat X as fixed” or “treat X as random”.

Both X and Y are random draws from a joint F_{X,Y}. It’s just that we factorize it as F_{X,Y} = F_{Y|X} F_{X}, with interest in E[Y|X] = ∫ y dF_{Y|X}.

01.03.2026 20:55 — 👍 5    🔁 0    💬 1    📌 0
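
The ignorability of F_{X} in this factorization can be checked numerically: when the parameters of F_{Y|X} and F_{X} are distinct, the estimate of the regression coefficients is the same whether or not the X-model enters the likelihood. A minimal sketch (all numbers hypothetical), assuming Gaussian models for both factors:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 200
x = rng.normal(1.0, 2.0, n)                # draws from F_X (mean mu = 1)
y = 0.5 + 1.5 * x + rng.normal(0, 1, n)    # draws from F_{Y|X}

def nll_conditional(beta):
    # -log F_{Y|X} (sigma fixed at 1): depends only on the regression params.
    resid = y - beta[0] - beta[1] * x
    return 0.5 * np.sum(resid ** 2)

def nll_joint(theta):
    # -log F_{Y|X} - log F_X, with distinct parameters (beta vs. mu):
    # the F_X term contains no beta, so it cannot move the beta estimate.
    beta, mu = theta[:2], theta[2]
    return nll_conditional(beta) + 0.5 * np.sum((x - mu) ** 2)

b_cond = minimize(nll_conditional, [0.0, 0.0]).x
b_joint = minimize(nll_joint, [0.0, 0.0, 0.0]).x[:2]
print(np.round(b_cond, 3), np.round(b_joint, 3))  # identical beta estimates
```

The F_X factor contributes a constant with respect to beta, so it drops out of the score; the same separation argument underlies the "treat X as fixed" shorthand.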

Having come into causal inference from econometrics, this paper is giving me flashbacks. Everything was about endogeneity wrt an error term in a linear/additive outcome model.

01.03.2026 19:25 — 👍 3    🔁 0    💬 0    📌 0

I also don't think an observational study should be published or not based on how sensitive results are to unmeasured confounding. A main point and interval estimate accompanied by a tipping point analysis is still a valid and useful contribution.

01.03.2026 18:47 — 👍 5    🔁 2    💬 1    📌 0

The critique of unmeasured confounding is often levied in a lazy/broad way. It is trivially true in any observational study. But if the critic can't think of a plausible such confounder and posit a reasonable direction/magnitude of its bias then they're not doing productive science.

01.03.2026 18:47 — 👍 41    🔁 7    💬 5    📌 3

We really do need to get away from the binary "is there unmeasured confounding or not." Almost certainly there is - it's a matter of how much and in what direction.

01.03.2026 18:31 — 👍 7    🔁 0    💬 1    📌 1

I would say the experts I work with prefer the "tipping point" style of reasoning (e.g. in the slides). That is: how large, and in what direction, would unmeasured confounding have to be to make my non-null result null - or my null result non-null.

01.03.2026 18:31 — 👍 2    🔁 0    💬 1    📌 0
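
The tipping-point logic can be sketched in a few lines: take a hypothetical point estimate and standard error (both made up here), shift the estimate by a bias Delta, and scan for the smallest Delta at which the 95% interval first covers zero:

```python
import numpy as np

# Hypothetical study result (illustrative numbers only):
est, se = 1.8, 0.6          # point estimate and standard error
z = 1.96

# A confounding bias of Delta shifts the estimate to est - Delta; scan a
# grid for the smallest Delta at which the 95% CI first covers zero.
grid = np.linspace(0.0, 3.0, 3001)
lower = (est - grid) - z * se
tipping = grid[np.argmax(lower <= 0)]
print(round(float(tipping), 3))   # analytically, tipping = est - z * se
```

The grid search is overkill for this additive case (the answer is just the lower CI bound), but the same scan works when the bias enters the estimator nonlinearly.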

So these are pretty mainstream, "establishment" folks in causal inference who have worked on this stuff over decades.

As a field, one reason causal inference is obsessed with stating untestable assumptions precisely is that you can then reason about the effects of their violations in an equally precise way.

01.03.2026 18:23 — 👍 1    🔁 0    💬 1    📌 0

At least among methodologists, sensitivity analyses are embraced and developed. E.g. the approach with the Delta(a,l) is straight from Jamie Robins' "confounding function" - though the Bayesian spin is new. Another example from Brumback, Hernan, Haneuse, and Robins: pubmed.ncbi.nlm.nih.gov/14981673/

01.03.2026 18:23 — 👍 1    🔁 0    💬 1    📌 0

Delta is just the amount of residual confounding remaining even after controlling for L.

The approach in the slide is different and does posit one “composite” confounder - more akin to the E-value approach of VanderWeele.

28.02.2026 22:54 — 👍 1    🔁 0    💬 1    📌 0
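
The E-value mentioned here has a simple closed form (VanderWeele & Ding, 2017): for an observed risk ratio RR > 1, it is RR + sqrt(RR·(RR − 1)). A minimal sketch:

```python
import math

def e_value(rr):
    # E-value (VanderWeele & Ding, 2017): minimum strength of association,
    # on the risk-ratio scale, that an unmeasured confounder would need with
    # both treatment and outcome to fully explain away an observed RR.
    rr = max(rr, 1.0 / rr)      # protective RRs are inverted first
    return rr + math.sqrt(rr * (rr - 1.0))

print(round(e_value(2.0), 2))   # 2 + sqrt(2) ~= 3.41
```

So an observed RR of 2 could only be fully explained away by a composite confounder associated with both treatment and outcome at an RR of about 3.4 each.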

The approach in the paper is agnostic to the number of unmeasured confounders. Unmeasured confounding implies that Delta(a,l) = E[Y(a) | L=l] − E[Y(a) | A=a, L=l] ≠ 0.

Agnostic to whether this is due to 1 or 10 unmeasured confounders. You put a prior on Delta directly.

28.02.2026 22:53 — 👍 2    🔁 0    💬 1    📌 0
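
That Delta is nonzero under unmeasured confounding can be checked in a toy simulation (setup is illustrative, not from the thread): with a single unmeasured confounder U and no treatment effect at all, E[Y(a)] and E[Y | A=a] still separate:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1_000_000

# Toy world with one unmeasured confounder U and *no* treatment effect:
u = rng.normal(size=n)
a = (u + rng.normal(size=n) > 0).astype(int)   # U raises P(A = 1)
y = u + rng.normal(size=n)                     # U raises Y; A does nothing

# With no treatment effect, Y(a) = Y for everyone, so (with no measured L)
# Delta(a) = E[Y(a)] - E[Y(a) | A=a] = E[Y] - E[Y | A=a].
delta_1 = y.mean() - y[a == 1].mean()
print(round(float(delta_1), 2))   # clearly nonzero: residual confounding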

Lately peer review has just involved fighting the erroneous notion that “all bayesian causal inference is a posterior predictive exercise in which we impute each subject’s missing counterfactual”

27.02.2026 23:39 — 👍 4    🔁 0    💬 0    📌 0

As a result, I’m sympathetic to the argument that you go to school to get skills that make you valued in the labor market and get a high ROI. Intangible enrichment beyond that is great, but a second-order priority.

27.02.2026 14:47 — 👍 0    🔁 0    💬 1    📌 0

I’m sure we’re between utopia and societal collapse ;)

I’ve just seen too many people get screwed into decades of college debt because some philosophy professor told them that “true education” is debating Kant.

I came from a very low income family and never had the luxury of thinking this way.

27.02.2026 14:47 — 👍 0    🔁 0    💬 1    📌 0

Equality is another story, sure. But absent a commensurate increase in demand, wages will fall and dissuade subsequent entrants, rather than cause societal collapse.

Also, fwiw, law salaries are extremely bimodal - skewed up by top firms. A big chunk don't make much money.

www.nalp.org/salarydistri...

27.02.2026 14:30 — 👍 0    🔁 0    💬 2    📌 0

I’m not sure because the supply-demand dynamics may simply shift. If everyone suddenly goes into a very high paying job, the increased supply may just lower wages.

27.02.2026 14:07 — 👍 1    🔁 0    💬 1    📌 0

I also give an example in Part 2 of a summer institute course I teach; slides/code available here:
github.com/stablemarket...

Relevant slides posted here.

27.02.2026 04:01 — 👍 2    🔁 1    💬 1    📌 0

E.g. Figure 3B shows mean/CI for causal effect across various priors for Delta - the amount of unmeasured confounding (UC).

First from Left: point-mass at 0 encodes strong prior belief of no UC.

Second: Gaussian encodes prior belief of UC in either direction symmetrically - widening the interval.

27.02.2026 04:01 — 👍 0    🔁 0    💬 1    📌 0
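
The interval-widening described here can be sketched directly: add draws from a prior on Delta to draws of the identified contrast and watch the 95% interval grow with the prior's spread. (All numbers below are made up, not from Figure 3B.)

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-ins for posterior draws of the identified contrast
# E[Y | A=1, L] - E[Y | A=0, L], averaged over L (made-up numbers):
identified = rng.normal(2.0, 0.4, 10_000)

widths = {}
for sd in (0.0, 0.5, 1.0):   # point mass at 0, then widening Gaussian priors
    # One draw of the net unmeasured-confounding bias Delta per posterior draw:
    draws = identified + rng.normal(0.0, sd, identified.size)
    lo, hi = np.percentile(draws, [2.5, 97.5])
    widths[sd] = hi - lo
    print(f"prior sd on Delta = {sd}: 95% interval width {hi - lo:.2f}")
```

The point mass at 0 recovers the usual no-unmeasured-confounding interval; a symmetric Gaussian prior leaves the point estimate centered but honestly widens the interval.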

I survey a bunch of papers along these lines in Section 5 of this paper and give an implementation example.

doi.org/10.1002%2Fsi...

here's arxiv version in case there's access issues: arxiv.org/abs/2004.07375

27.02.2026 04:01 — 👍 0    🔁 0    💬 1    📌 0

Imo this adds to (not subtracts from) the hype, as there's a large body of work on causal sensitivity analyses for such cases.

All methods involve untestable assumptions - e.g. at-random compliance. We can widen our intervals appropriately to account for our uncertainty about their violations.

26.02.2026 23:28 — 👍 1    🔁 0    💬 1    📌 0

100% prediction interval

25.02.2026 13:59 — 👍 16    🔁 1    💬 0    📌 0

Related is this excellent post by Gelman about studies that are “dead on arrival.” Roughly: questions about true effect sizes that are tiny but estimated using data that are noisy. It’s a long post, but I screenshotted the tl;dr.

statmodeling.stat.columbia.edu/2016/06/26/2...

24.02.2026 20:51 — 👍 2    🔁 0    💬 0    📌 0

What exactly is the paradox? Nearly every aspect of the knowledge production pipeline - grant funding decisions, peer-review, replication and falsification, dissemination - involves community exchange. The goal is still to produce knowledge.

23.02.2026 18:18 — 👍 1    🔁 0    💬 0    📌 0

Similarly I don’t think one can encode unit-level assumptions like exclusion restrictions cleanly on a DAG - since these are structural assumptions about potential outcomes, not their distributions. Nor can we (explicitly) encode cross-world identification assumptions.

23.02.2026 05:58 — 👍 3    🔁 0    💬 0    📌 0

I agree. DAGs are visual representations of conditional dependencies (arrows) in a joint distribution of variables (nodes). So they can encode exchangeability (a conditional independence statement) but not parallel trends (an equality of expected differences).

23.02.2026 05:58 — 👍 3    🔁 0    💬 1    📌 0

I think to distinguish, Pearl’s generalization is sometimes referred to as non-parametric structural equation models with independent errors (NPSEM-IE).

See eg csss.uw.edu/files/workin...

19.02.2026 18:48 — 👍 1    🔁 0    💬 0    📌 0

This paper is now out in final form and is open-access!

journals.lww.com/epidem/fullt...

07.02.2026 16:05 — 👍 2    🔁 0    💬 0    📌 0

It follows that what we are really after is flagging subgroups with high P(Y^1 < Y^0 | X_i) - i.e. a high probability of benefiting from treatment. Unlike P(Y=1 | X_i), this involves the potential outcomes Y^1 and Y^0, and so is an exercise in heterogeneous treatment effect estimation.

26.01.2026 01:41 — 👍 0    🔁 0    💬 0    📌 0
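
The gap between P(Y=1 | X) and P(Y^1 < Y^0 | X) can be made concrete with made-up numbers: a high-baseline-risk group can have a lower probability of benefit than a moderate-risk group. Note P(Y^1 < Y^0) depends on the unidentified joint law of (Y^0, Y^1); independence is assumed here purely for illustration:

```python
# Hypothetical relapse risks (made-up numbers): group A has high baseline
# risk but no treatment benefit; group B has moderate risk and real benefit.
#         (P(Y=1 | untreated), P(Y=1 | treated))
risks = {"A": (0.90, 0.90),
         "B": (0.50, 0.20)}

benefit = {}
for g, (p0, p1) in risks.items():
    # With binary Y, P(Y^1 < Y^0) = P(Y^1 = 0, Y^0 = 1). The joint law of
    # (Y^0, Y^1) is not identified; independence is assumed for illustration.
    benefit[g] = (1.0 - p1) * p0
    print(f"group {g}: baseline risk {p0:.2f}, P(benefit) {benefit[g]:.2f}")

# Ranking by P(Y=1 | X) flags group A; ranking by P(benefit) flags group B.
```

A risk model alone would flag group A for treatment even though those patients relapse regardless - exactly the wasted-resources scenario in the thread.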

We train a model for relapse Y on “risk factors” X, then flag patients with high P(Y=1 | X_i) for intervention. But consider that some of these subjects may relapse whether treated or not - wasting resources.

26.01.2026 01:41 — 👍 1    🔁 0    💬 1    📌 0

This is a really nice post - and a productive way of avoiding doomscrolling!

In my view, prediction models that are used to inform a decision may be implicitly causal.

Suppose we want to build a prediction model for relapse, with the goal of flagging patients in high-risk subgroups for treatment.

26.01.2026 01:41 — 👍 0    🔁 0    💬 1    📌 0