A suggestive question they can ask is whether it would make sense to write the discussion and conclusion reversing the roles of exposure and outcome (since associations are symmetric).
On Chile's national college common-app platform, smart personalized info/assistance helped cut mistakes and boost placements into higher-ranked programs by 20 percent, from Tomás Larroucau, Ignacio A. Rios, Anaïs Fabre, and Christopher Neilson https://www.nber.org/papers/w34164
This is one of those probability facts that drives my usual advice to people seeking intuition for probability theory: Stop seeking intuition! It's not intuitive, and that's why it is so useful.
You can learn examples and reform your intuition in time. But better to just trust the axioms and compute.
To avoid an LLM in the loop, perhaps some success could be had by translating and de-translating the text into one (or more) other languages.
Regarding 1., just off the top of my head, you could use LLMs to paraphrase or re-express the same content in a way that gets rid of any idiosyncratic style that may be identifiable. Additionally, you can request that it (probabilistically) replace activities, places, and family relationships.
In the same flavor as one of my favourite vignettes, @rmcelreath.bsky.social 's Statistical Rethinking on Akaike and the conception of the AIC:
Another exhibit of the underappreciated importance of 'idle' activities for insights and connections.
Is it any of these? ryxcommar.com/2019/08/30/s...
datascience.stackexchange.com/questions/10...
github.com/scikit-learn...
If you're trying to *predict* the likelihood of being an axe murderer, association is enough. That is true and uncontroversial. But I'm not sure this is the point under discussion here.
Unsure what the interpretation is here. Is it 1) 'being an axe murderer' is seen as a latent class one belongs to even before committing a murder (akin to a principal stratum), so that it makes them buy an axe even before it's used as a weapon, or 2) the scenario is a premeditated axe murder?
However, you just introduced partial knowledge of that causal process, by saying that 'purchasing an axe' is a cause and that 'bodies in the car' are an effect.
As Stoltenberg (1997) was commenting a couple decades ago about 'heritability':
"To be clear, no one is saying correlation implies causation ... but it seems reasonable to assume that it does."
Thanks, Guardian.
"According to the CDC, in 10 percent of those drownings, the adult will actually watch the child do it, having no idea it is happening."
Are you looking for an applied paper where they use a DAG to recognize they have confounding by indication or rather a theoretical treatment of the situation?
The status quo is unpaid work, but I haven't seen any proposal to transform the publishing system that wishes to keep it that way.
I always think of this excerpt from 'A Course in Econometrics' (Goldberger, 1991), where the tongue-in-cheek concept of micronumerosity (small sample size) is introduced as a parallel:
Micronumerosity leads to loss of precision, drastic changes with additional data, unreliable hypothesis tests, etc.
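For illustration (my own toy simulation, not Goldberger's): the "loss of precision" symptom of micronumerosity is just the usual 1/sqrt(n) behaviour of an estimator, here the sample mean.

```python
# Toy demo: estimates from micronumerous samples (n = 5) are far more
# variable than estimates from moderate samples (n = 500).
import random

random.seed(0)

def sample_mean(n):
    """Mean of n draws from a standard normal."""
    return sum(random.gauss(0.0, 1.0) for _ in range(n)) / n

small = [sample_mean(5) for _ in range(2000)]    # micronumerous samples
large = [sample_mean(500) for _ in range(2000)]  # moderate samples

def spread(xs):
    """Standard deviation of a list of estimates."""
    m = sum(xs) / len(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

# Spread of the estimator is roughly 1/sqrt(5) vs 1/sqrt(500):
print(spread(small), spread(large))
```

With more data the estimates also stop changing drastically from sample to sample, which is the other "symptom" Goldberger lists.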
Makes me think of the title of this reply by Simonsohn and co.
The analog would be something like "Causal DAGs won't give you the lowest mean squared error estimator for your data and parametric context, but they will distinguish between nonparametrically identifiable and nonidentifiable estimands", though hardly as catchy.
It had also somehow escaped my Greenland-papers radar for quite some time.
Not an exhaustive article, and also partially unorthodox (cf. 'random' or 'epidemiological' confounding), but Greenland and Mansournia's (2015) paper touches on some limitations.
Informally, I'd say a causal DAG only shows you what causes what (plus some neat identification implications from this)
But that is leaving quite some heavy work as an exercise to the reader. Maybe more critically, it also does not give the reader a way to concretely contest some of the assumptions or claims behind the conclusion, as they are not presented in any way, much less an explicit and unambiguous one. 3/3
I get what you are hinting at, though. There is some sort of implicit assumption on the sparsity of variables involved and a vague restriction on which causal directions would make sense in that system, under which maybe the previous questions could be answered in the affirmative. 2/3
In which specific way would it be evidence? Inductively, is it more likely that it is causative given that they are associated than if they were not (whatever that statistical model would look like)? From an error perspective, could estimating an association falsify the claim that there is a cause? 1/3
Which is an aspect that is often overlooked, if not outright conflated with identification. But I'd argue the issue there is with the over-selling of what a causal DAG can tell you rather than with causal DAGs not fulfilling their promise.
I'd say 2) is perhaps too ambiguous to evaluate its wrongness. Causal DAGs are tools to establish non-parametric identification. I'd assume that 'understanding OS', as you suggest, goes beyond mere identification and actually cares about estimation with finite data/blocks.
We're so not ready for the AI slop deluge.
$119 Springer cancer treatments book: ‘As an AI language model …’ – Pivot to AI
That professional chess players burn thousands of calories through a match just by sheer cognitive effort.
Maybe not the main point of the tweet, but this holds only on the (causal) risk difference scale, no? Average causal effects are not averages of contrasts, but contrasts of averages. Linearity of expectation makes these coincide on the difference scale, but otherwise an ATE is not an average of ICEs.
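A tiny sketch of the point, with made-up potential outcomes for three units: on the difference scale the average of individual contrasts equals the contrast of averages (linearity of expectation), but on the ratio scale it does not.

```python
# Made-up potential outcomes for 3 units (purely illustrative).
y1 = [2.0, 8.0, 4.0]  # outcomes under treatment
y0 = [1.0, 2.0, 4.0]  # outcomes under control

n = len(y1)

# Difference scale: average of contrasts vs contrast of averages.
avg_of_diffs = sum(a - b for a, b in zip(y1, y0)) / n
diff_of_avgs = sum(y1) / n - sum(y0) / n

# Ratio scale: same comparison.
avg_of_ratios = sum(a / b for a, b in zip(y1, y0)) / n
ratio_of_avgs = (sum(y1) / n) / (sum(y0) / n)

print(avg_of_diffs, diff_of_avgs)    # equal: both 7/3
print(avg_of_ratios, ratio_of_avgs)  # differ: 7/3 vs 2.0
```

So the "average causal risk ratio" one might quote is the ratio of average risks, not the average of each unit's individual risk ratio.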