Arman Oganisian

@stablemarkets.bsky.social

Statistician | Assistant professor @ Brown University Dept of Biostatistics | Developing nonparametric Bayesian methods for causal inference. Research site: stablemarkets.netlify.app #statsky

130 Followers  |  136 Following  |  42 Posts  |  Joined: 14.12.2024

Latest posts by stablemarkets.bsky.social on Bluesky

As you suggested in the post, in my experience the situation is a lot better in biostatistics than in pure statistics departments, at least at the places I’ve been. I could also just be lucky - I have a great group of collaborators and can afford to be selective in the work I take on.

15.07.2025 21:40 - 👍 3    🔁 0    💬 0    📌 0

If someone raises this concern, then the burden is on them to bring forward even a single plausible covariate - one sufficiently unrelated to all the other covariates already controlled for - with a realistic dual effect on treatment and outcomes. Otherwise they shouldn’t bring it up.

05.07.2025 15:06 - 👍 1    🔁 0    💬 2    📌 0

On the other hand: we have causal critiques of the sort “this is wrong because there may be unmeasured confounding.”

Such critiques without solutions are intellectually lazy and do not add scientific value - after all, unmeasured confounding is an issue in all observational causal studies.

05.07.2025 15:05 - 👍 3    🔁 0    💬 1    📌 0
Post image

I was asked to give some advice to current students in this Penn alumni spotlight. It’s hard to give general advice but these four items have worked well for me so far.

dbei.med.upenn.edu/alumni/alumn...

02.07.2025 15:33 - 👍 0    🔁 0    💬 0    📌 0
Preview
GitHub - stablemarkets/cci_institute_2025: Materials for Bayesian Causal Inference Sessions @ University of Pennsylvania's Center for Causal Inference (CCI)'s summer institute. May 29, 2025 - stablemarkets/cci_institute_2025

Thanks for linking! I also have a set of slides with Stan code from a recent half-day short course:

Slide deck 1 is just a primer on Bayesian inference. Slide deck 2 is on the Bayesian causal stuff.

github.com/stablemarket...

28.06.2025 14:48 - 👍 10    🔁 3    💬 0    📌 0

Possibly there’s a computational component to “performance.” But I’m not sure that’s what is being talked about in that excerpt. I try to avoid posterior approximations if at all possible and just do full MCMC - which is feasible for most of what I do.

28.06.2025 16:26 - 👍 3    🔁 0    💬 0    📌 0

“when the quasi-Bayesian method outperforms…”

A point of contention is typically the choice of metric. Many Bayesians feel annoyed at having to demonstrate good frequentist properties to be “worthy,” because it implicitly casts frequentism as the home court they’re compelling you to play on.

28.06.2025 14:43 - 👍 3    🔁 0    💬 1    📌 0

I’ve seen so many instances of conflating sample and population estimands when doing Bayesian causal inference - in conference talks, papers on arXiv, papers I’ve reviewed, and even published papers. People often claim to be doing one when actually doing the other.

27.06.2025 19:00 - 👍 0    🔁 0    💬 0    📌 0
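
To make the distinction concrete, here is a toy simulation (the data-generating process below is invented purely for illustration): the sample ATE is an average over the n units actually in the study, while the population ATE is an expectation under the data-generating distribution, and the two differ in any finite sample.

```python
# Toy illustration of sample vs. population estimands (made-up DGP).
import numpy as np

rng = np.random.default_rng(0)
n = 200

u = rng.normal(size=n)                 # unit-level heterogeneity
y0 = u + rng.normal(size=n)            # potential outcome under control
y1 = y0 + 1.0 + 0.5 * u                # heterogeneous treatment effects

sample_ate = np.mean(y1 - y0)          # estimand defined on these n units
population_ate = 1.0                   # E[Y(1) - Y(0)] under the DGP above

print(f"sample ATE: {sample_ate:.3f}   population ATE: {population_ate:.3f}")
```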

2) The binning seems to complicate the causal interpretation, as there are many versions of a binned treatment (all the possible values within the bin). Or maybe some implicit SUTVA assumption is being made?

10.06.2025 18:11 - 👍 0    🔁 0    💬 1    📌 0

Cool stuff. Any insight into whether

1) one can interpret the binning as corresponding to a kind of ad-hoc smoothing to “avoid” treatment values with no support in the empirical distribution

10.06.2025 18:11 - 👍 0    🔁 0    💬 1    📌 0

3. If a student can pass assessments w/LLM support without having completed the learning objectives, then we should take it as a valuable signal that the learning objectives may be misaligned with the skills actually needed in the market (where they can use LLMs).

08.05.2025 19:25 - 👍 1    🔁 0    💬 0    📌 0

2. The modal student gets a degree to acquire the signals/skills needed to enter and be productive in a labor market (academic, industry, etc.). If schools want to be responsive to this goal, their classes should have a set of learning objectives consistent with it, along with assessments of whether those objectives are met.

08.05.2025 19:25 - 👍 1    🔁 0    💬 1    📌 0
Preview
2025 Penn Causal Inference Summer Institute - Penn DBEI Discover the latest news, research breakthroughs, and expert insights from Penn’s DBEI, advancing biostatistics, epidemiology, and informatics to shape population health.

I’m teaching a 3-hour session on Bayesian causal inference at this year’s Penn Causal Inference Summer Institute, 5/27-5/30.

Virtual registration/attendance options are available.

There are sessions on a lot of other great topics - see full agenda here:
dbei.med.upenn.edu/news-events/...

#statsky

03.05.2025 15:31 - 👍 3    🔁 1    💬 0    📌 0

Totally! The Bayes bootstrap is the posterior under an improper DP prior w/ concentration parameter alpha=0.

Generally, the DP posterior of an unknown distribution is a weighted combination of the empirical distribution and a base distribution. Setting alpha=0 puts all weight on the empirical distribution - so in that sense it is nonparametric.

27.04.2025 04:16 - 👍 1    🔁 0    💬 1    📌 0
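
A minimal numpy sketch of the idea in the post above, under an assumed setup where the target is the mean of an unknown distribution (the data are simulated just for illustration): each Bayesian bootstrap draw re-weights the empirical distribution with Dirichlet(1, ..., 1) weights, which is the alpha=0 limit of the DP posterior.

```python
# Bayesian bootstrap sketch: posterior of a mean under the improper DP prior.
import numpy as np

rng = np.random.default_rng(0)
y = rng.exponential(scale=2.0, size=100)            # toy data

n_draws = 5000

# Dirichlet(1, ..., 1) weights over the observations; each weighted mean is
# one posterior draw for the population mean (alpha = 0 case of the DP).
w = rng.dirichlet(np.ones(len(y)), size=n_draws)    # shape (n_draws, n)
bb_means = w @ y

# Frequentist bootstrap for comparison: resample rows with replacement.
fb_means = np.array([
    rng.choice(y, size=len(y), replace=True).mean() for _ in range(n_draws)
])

print(f"Bayesian bootstrap:    mean {bb_means.mean():.3f}, SE {bb_means.std():.3f}")
print(f"frequentist bootstrap: mean {fb_means.mean():.3f}, SE {fb_means.std():.3f}")
```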

Yes! My notes are based on Lancaster’s paper. With all the math filled in.

26.04.2025 22:33 - 👍 2    🔁 0    💬 0    📌 0

Reminder to self to post my lecture notes on the first-order equivalence between Bayesian bootstrap SEs, frequentist bootstrap SEs, and sandwich SEs for a linear model with heteroskedastic errors.

26.04.2025 00:39 - 👍 6    🔁 0    💬 4    📌 0
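
These are not the notes themselves, only a numerical sketch of the comparison under an assumed heteroskedastic linear model (all settings invented for illustration): the Bayesian bootstrap slope SE, the pairs-bootstrap slope SE, and the HC0 sandwich SE land close to one another.

```python
# Compare Bayesian bootstrap, pairs bootstrap, and HC0 sandwich SEs
# for the slope of a linear model with heteroskedastic errors.
import numpy as np

rng = np.random.default_rng(1)
n = 500
x = rng.uniform(-2, 2, size=n)
X = np.column_stack([np.ones(n), x])
y = 1.0 + 2.0 * x + rng.normal(scale=0.5 + np.abs(x), size=n)  # heteroskedastic

def ols(X, y, w=None):
    """(Weighted) least squares coefficients."""
    w = np.ones(len(y)) if w is None else w
    XtW = X.T * w
    return np.linalg.solve(XtW @ X, XtW @ y)

B = 4000

# Bayesian bootstrap: Dirichlet(1, ..., 1) weights, weighted LS per draw.
bb = np.array([ols(X, y, rng.dirichlet(np.ones(n))) for _ in range(B)])

# Pairs bootstrap: resample rows with replacement, refit OLS.
fb = np.array([ols(X[idx], y[idx]) for idx in rng.integers(0, n, size=(B, n))])

# HC0 sandwich SEs: (X'X)^{-1} X' diag(e^2) X (X'X)^{-1}.
beta_hat = ols(X, y)
e = y - X @ beta_hat
bread = np.linalg.inv(X.T @ X)
meat = X.T @ (X * e[:, None] ** 2)
sand_se = np.sqrt(np.diag(bread @ meat @ bread))

print(f"slope SE  bayes-boot {bb[:, 1].std():.4f}  "
      f"pairs-boot {fb[:, 1].std():.4f}  sandwich {sand_se[1]:.4f}")
```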

Is the for-loop in the generated quantities block really necessary?

Since the conditional expectation for the logistic model is available in closed form, I think you can directly weight by a draw from Dir(1) without having to resample from a categorical dist.

26.04.2025 00:35 - 👍 0    🔁 0    💬 1    📌 0
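
In numpy rather than Stan, and with placeholder posterior draws since the original model isn't shown here, the suggestion looks roughly like this: because expit(x'beta) is the closed-form conditional mean for a logistic model, each draw can weight it directly by Dirichlet(1, ..., 1) weights instead of resampling covariate rows from a categorical distribution.

```python
# Sketch (assumed setup): marginalize a logistic regression over the
# empirical covariate distribution via the Bayesian bootstrap, with and
# without categorical resampling.
import numpy as np
from scipy.special import expit

rng = np.random.default_rng(2)
n, p = 200, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
beta_draws = rng.normal(size=(1000, p))      # placeholder posterior draws

psi_resample = np.empty(len(beta_draws))
psi_weighted = np.empty(len(beta_draws))
for s, beta in enumerate(beta_draws):
    w = rng.dirichlet(np.ones(n))            # Bayesian bootstrap weights

    # With resampling: draw rows from categorical(w), then average.
    idx = rng.choice(n, size=n, p=w)
    psi_resample[s] = expit(X[idx] @ beta).mean()

    # Without: weight the closed-form conditional mean directly by w.
    psi_weighted[s] = w @ expit(X @ beta)

# Same target; the direct version just avoids the extra resampling noise.
print(psi_resample.mean(), psi_weighted.mean())
print(psi_resample.std(), psi_weighted.std())
```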

Academia is cool because if you're doing it right, every paper you published in the last 3 years feels inadequate now that you understand the topic better, but it'll take 3 years to get out the version where you get it more right, and you get to do that until one day you die! Isn't that cool

19.04.2025 19:11 - 👍 894    🔁 104    💬 20    📌 21

“Judea Pearl or Don Rubin might tell you that Statistics provides a science of causation.”

In my understanding, this is the exact opposite of what either of these researchers would say!

20.04.2025 16:59 - 👍 1    🔁 0    💬 0    📌 0
Post image

Congratulations to our very own Arman Oganisian, Assistant Professor of Biostatistics, for receiving the 2025 SPH Dean’s Award for Excellence in Research Collaboration! 🏆

We’re so proud to celebrate your achievement!

11.04.2025 18:38 - 👍 2    🔁 1    💬 1    📌 0
Project MUSE - Priors and Propensity Scores in Bayesian Causal Inference

New paper w/ Tony Linero on Bayesian causal inference:

Independent priors on propensity score & outcome models often imply a strong prior on no *measured* confounding - a prior belief that 1) we rarely hold and 2) leads to bad frequentist performance
tinyurl.com/2udmbf6a

#statsky

11.04.2025 17:37 - 👍 4    🔁 1    💬 1    📌 0
Post image

Today’s MCMC chains are evoking feelings of dread, woe, and malice.

(credit to chatgpt) #statsky

03.04.2025 14:11 - 👍 2    🔁 0    💬 0    📌 0
Preview
A Bayesian framework for causal analysis of recurrent events with timing misalignment Abstract. Observational studies of recurrent event rates are common in biomedical statistics. Broadly, the goal is to estimate differences in event rates...

I’ll be at #ENAR2025 to talk about a recent paper on Bayesian causal inference with recurrent event outcomes!

Session 50: Monday 1:45-3:30

Talk info:

www.enar.org/meetings/spr...

Full paper:

academic.oup.com/biometrics/a...

#StatsSky

23.03.2025 18:47 - 👍 1    🔁 0    💬 0    📌 0

What it should say is we do not *conclude* the trt effect is exactly zero. We either reject the null or fail to reject it, but we never accept it.

17.03.2025 05:17 - 👍 0    🔁 0    💬 0    📌 0

Such odd wording: “We are not assuming the trt effect is exactly zero.” But we are. In NHST we assume H0 is true and find the distribution of some test statistic under that assumption. If the observed test stat is too far in the tails, we reject H0.

17.03.2025 05:17 - 👍 0    🔁 0    💬 1    📌 0
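
A small simulation of that logic on made-up two-group data, using a permutation test: build the distribution of the test statistic under the assumption that H0 (zero effect) is true, then reject only if the observed statistic is far enough in the tails.

```python
# Null-hypothesis testing logic via a permutation test (simulated data).
import numpy as np

rng = np.random.default_rng(3)
treated = rng.normal(loc=0.4, scale=1.0, size=50)
control = rng.normal(loc=0.0, scale=1.0, size=50)

def t_stat(a, b):
    """Welch-style t statistic for a difference in means."""
    return (a.mean() - b.mean()) / np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))

obs = t_stat(treated, control)

# Distribution of the statistic assuming H0 is true: permute the labels.
pooled = np.concatenate([treated, control])
null = np.empty(5000)
for b in range(5000):
    z = rng.permutation(pooled)
    null[b] = t_stat(z[:50], z[50:])

# Reject H0 only if the observed statistic sits far enough in the tails.
p_value = np.mean(np.abs(null) >= abs(obs))
print(f"observed t = {obs:.2f}, permutation p-value = {p_value:.3f}")
```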

For something like a placebo pill, the (sharp) null would hold exactly.

17.03.2025 00:42 - 👍 0    🔁 0    💬 1    📌 0

Analogously, if we view penalization as shrinkage induced by a prior, then variable selection corresponds to a kind of highly informative prior with mass exactly at zero. And it’s rare that we’d hold such a strong prior belief - unless the covariate is something like a subject’s astrological sign.

13.03.2025 06:15 - 👍 2    🔁 1    💬 2    📌 0

Interesting perspective. I tend to have the opposite view, in that I find LASSO’s exact feature selection unsatisfying. My view is that all covariates probably have *some* effect - just to different degrees. I don’t like the binary in-or-out aspect.

13.03.2025 06:15 - 👍 2    🔁 1    💬 2    📌 0

IIRC, LASSO tends to shrink truly zero coefficients exactly to 0 but over-shrinks truly non-zero coefs. Ridge avoids over-shrinking truly non-zero coefs, but doesn’t shrink truly zero coefs as aggressively. The horseshoe is a via media between the two.

Related application to sparse reg:
proceedings.mlr.press/v5/carvalho0...

13.03.2025 06:05 - 👍 1    🔁 0    💬 0    📌 0
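
A rough check of that description on simulated sparse data, using scikit-learn's Lasso and Ridge (penalty values are arbitrary, and the horseshoe is omitted since it requires a full Bayesian fit):

```python
# LASSO vs. ridge shrinkage on a sparse truth (illustrative settings only).
import numpy as np
from sklearn.linear_model import Lasso, Ridge

rng = np.random.default_rng(4)
n, p = 200, 20
beta_true = np.zeros(p)
beta_true[:3] = [3.0, -2.0, 1.5]             # a few truly non-zero coefs

X = rng.normal(size=(n, p))
y = X @ beta_true + rng.normal(size=n)

lasso = Lasso(alpha=0.2).fit(X, y)
ridge = Ridge(alpha=10.0).fit(X, y)

print("truly-zero coefs estimated as exactly 0:")
print("  lasso:", int(np.sum(lasso.coef_[3:] == 0)), "of", p - 3)
print("  ridge:", int(np.sum(ridge.coef_[3:] == 0)), "of", p - 3)
print(f"largest true coef (3.0): lasso {lasso.coef_[0]:.2f}, ridge {ridge.coef_[0]:.2f}")
```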
