
Sean Harrison, PhD

@sean-h.bsky.social

Evidence reviews, public health, epidemiology, statistics https://seanharrison.blog/

141 Followers  |  209 Following  |  824 Posts  |  Joined: 27.07.2024

Posts by Sean Harrison, PhD (@sean-h.bsky.social)

Depends on the construction and conceptualisation of "deprivation", but in general, I see residual confounding as above: unblocked paths because what you're measuring is not precisely the confounder of interest.

02.03.2026 16:05 — 👍 2    🔁 0    💬 0    📌 0

Quite so, but arguably a complete (possibly overly-complex) DAG would show that "deprivation" as a core variable would affect any measurement of deprivation (education, etc., which may be affected by other things), and residual confounding would therefore be the remaining open path.

02.03.2026 16:05 — 👍 2    🔁 0    💬 1    📌 0

If anyone needs it, I'd suggest "deprivation" could go into pretty much every "U" variable for virtually any observational research.

"Residual deprivation", if you like, if you've measured and controlled for e.g. "household income" or "Townsend deprivation index".

Because, like, that won't do it.

02.03.2026 15:37 — 👍 2    🔁 0    💬 1    📌 0
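The "residual deprivation" point above can be put in a minimal simulation (variable names, effect sizes, and the noise model are illustrative assumptions, not from the thread): if "true" deprivation confounds an exposure and an outcome, adjusting for a noisy proxy of it leaves the backdoor path partly open, so a null causal effect still looks non-null.

```python
import random

random.seed(42)

# "True" deprivation U confounds exposure X and outcome Y, but we only
# measure a noisy proxy (e.g. an area-level index). The true causal
# effect of X on Y is zero. All effect sizes are illustrative.
n = 50_000
u = [random.gauss(0, 1) for _ in range(n)]            # unmeasured deprivation
proxy = [ui + random.gauss(0, 1) for ui in u]         # measured index = U + error
x = [ui + random.gauss(0, 1) for ui in u]             # exposure, caused by U
y = [ui + random.gauss(0, 1) for ui in u]             # outcome, caused by U only

def ols(xs, ys):
    """Slope and intercept of a simple linear regression of ys on xs."""
    mx, my = sum(xs)/len(xs), sum(ys)/len(ys)
    b = sum((a-mx)*(c-my) for a, c in zip(xs, ys)) / sum((a-mx)**2 for a in xs)
    return b, my - b*mx

def adjusted_effect(confounder, xs, ys):
    """Effect of xs on ys after adjusting for the confounder
    (Frisch-Waugh: residualise both on the confounder, regress residuals)."""
    bx, ax = ols(confounder, xs)
    by, ay = ols(confounder, ys)
    rx = [a - (ax + bx*c) for a, c in zip(xs, confounder)]
    ry = [a - (ay + by*c) for a, c in zip(ys, confounder)]
    return ols(rx, ry)[0]

effect_adj_true = adjusted_effect(u, x, y)       # ~0: path fully blocked
effect_adj_proxy = adjusted_effect(proxy, x, y)  # ~1/3: residual confounding
```

With these (made-up) variances the residual bias works out to about a third of a standard deviation, despite "controlling for deprivation".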

Anyway, it'd be pretty awesome to burn all those steps into a slice of tree stump and sell *that* at a farmer's market.

...

I probably wouldn't end up selling it, that sounds cool.

02.03.2026 12:55 — 👍 1    🔁 0    💬 0    📌 0

Pretty sure I once did a logistic regression with a continuous exposure sort-of by hand, as in, wrote out the matrices, inverted them, etc., using a computer for the maths but not the process.

Not 100% sure why, either for learning or for understanding.

02.03.2026 12:55 — 👍 2    🔁 0    💬 1    📌 0
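A by-hand fit like the one described can be sketched as Newton-Raphson scoring with the score vector and information matrix written out element-wise, and the 2x2 matrix inverted explicitly (the simulated data and coefficients below are illustrative, not from any study mentioned):

```python
import math
import random

random.seed(1)

# Simulate a continuous exposure with true log-odds(Y=1) = -1 + 0.5*X.
# (Coefficients and sample size are illustrative assumptions.)
n = 20_000
xs = [random.gauss(0, 1) for _ in range(n)]
ys = [1 if random.random() < 1/(1 + math.exp(-(-1 + 0.5*x))) else 0 for x in xs]

def fit_logistic(xs, ys, iters=25):
    """Logistic regression of ys on (1, x) by Newton-Raphson scoring,
    with the matrices written out and inverted by hand."""
    b0 = b1 = 0.0
    for _ in range(iters):
        g0 = g1 = h00 = h01 = h11 = 0.0
        for x, y in zip(xs, ys):
            p = 1/(1 + math.exp(-(b0 + b1*x)))
            w = p*(1 - p)
            g0 += y - p          # score for the intercept
            g1 += (y - p)*x      # score for the slope
            h00 += w             # information matrix entries
            h01 += w*x
            h11 += w*x*x
        # invert the 2x2 information matrix explicitly
        det = h00*h11 - h01*h01
        b0 += ( h11*g0 - h01*g1)/det
        b1 += (-h01*g0 + h00*g1)/det
    return b0, b1

b0, b1 = fit_logistic(xs, ys)   # should recover roughly (-1, 0.5)
```

The computer does the arithmetic, but every step of the process (scores, information matrix, inversion, update) is spelled out rather than hidden behind a fitting routine.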

Ahahaha, I didn't even see that.

The IRRs are most definitely *not* on the log scale, and the 0.8, 0.9 ... 1.2 are not on the log scale, but the scale itself *is* on the log scale (the distance between markings is equal).

26.02.2026 15:42 — 👍 0    🔁 0    💬 0    📌 0

If that's the case, I wonder* if the authors can get back some of the $9,550** they spent on the APC...

*I don't really wonder.
**May be less if the authors had some kind of institutional deal.

Man, I hate journals.

26.02.2026 15:27 — 👍 3    🔁 0    💬 0    📌 0

"Minor" in the sense that the conclusions don't change based on the incorrect figure and/or IRRs, and that I'd just ask them to fix the figure, rather than retract the paper.

It's *possible* the journal re-did the figure in house style and screwed it up, fairly certain that happened to me once...

26.02.2026 15:21 — 👍 1    🔁 0    💬 1    📌 0
Figure 2 Unadjusted incidence rate ratios for heart failure, atrial fibrillation, and VHD at 12 months -- none of the IRRs or their 95% confidence intervals are plotted on the graph correctly (or vice versa, depending on which is correct)

Minor but irritating point - Figure 2 is simply wrong.

None of the IRRs or their 95% CIs match the figure.

Like, there's only 3 figures...

26.02.2026 15:14 — 👍 9    🔁 1    💬 3    📌 1

If the *only* objection to funding a study is power, even though the methods are sound and the research question can be answered well with the proposed data, then the study can still meaningfully add to the evidence base.

To do otherwise means you'd never study rare outcomes or populations.

24.02.2026 15:09 — 👍 0    🔁 0    💬 0    📌 0

"While relying on significance tests to judge a causal effect as zero or non-zero may seem unpalatable, an even worse interpretation is that the point estimate is the effect."

I don't think practice should be governed by people mistreating statistics.

24.02.2026 15:09 — 👍 0    🔁 0    💬 1    📌 0

"Careless readers may take it for a suggestion that any sample size is acceptable when making causal inferences about important questions. We believe this is a real risk."

Yet adequate sample sizes are *not* necessary for causal inference (though they are obviously useful)...

24.02.2026 15:09 — 👍 0    🔁 0    💬 1    📌 0

And by doing more than nothing, they'll likely give a balanced overview of a shitty situation, which can then be used instead of "alarming" people with contradictory results.

Nonetheless, I don't call for any analysis, I call for *good* analyses, which require decent thought that goes beyond power.

24.02.2026 15:09 — 👍 0    🔁 0    💬 1    📌 0

In this case, the D group have provided an answer that is orders of magnitude more precise than A or B, so would dominate any meta-analysis in any case.

They also adjusted, where others didn't, which may be appropriate or not.

But, point is, reviewers would do more than "do nothing".

24.02.2026 15:09 — 👍 0    🔁 0    💬 1    📌 0
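As a rough sketch of why the precise study dominates (the estimates and standard errors below are invented for illustration, not the thread's actual numbers), a fixed-effect inverse-variance meta-analysis weights each study by 1/SE²:

```python
import math

# Hypothetical log incidence rate ratios and standard errors for four
# studies; study D is far more precise than the rest (numbers invented).
studies = {"A": (0.40, 0.50), "B": (-0.20, 0.60),
           "C": (0.10, 0.45), "D": (0.05, 0.04)}

weights = {k: 1/se**2 for k, (_, se) in studies.items()}  # inverse-variance
total_w = sum(weights.values())
pooled = sum(w*studies[k][0] for k, w in weights.items())/total_w
pooled_se = math.sqrt(1/total_w)
share_D = weights["D"]/total_w   # fraction of pooled weight carried by D
```

With these numbers, D carries over 95% of the weight and the pooled estimate sits essentially on top of D's, which is the point: the imprecise, possibly contradictory studies barely move the summary.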

They'd look through each analysis, assess the risks of bias, maybe try for an IPD meta-analysis if they can get data from each study (although this is just 2x2 tables, and it looks like only one adjusted, so the data is largely available anyway), assess heterogeneity, and summarise appropriately.

24.02.2026 15:09 — 👍 0    🔁 0    💬 1    📌 0

I disagree with:

"They [meta-analysts] do nothing and the socially alarmed groups are left with four sets of equivocal and possibly contradictory results, arguably more alarming than having no information at all."

Reviews do more than meta-analyses, and their conclusion wouldn't be "do nothing".

24.02.2026 15:09 — 👍 0    🔁 0    💬 1    📌 0

I mean, that's just an argument for doing good research.

If the question is worth answering, it's worth answering with limited information, but I noted above that you'd still have to do a *good* analysis.

At least some of the hypothetical groups didn't do that.

24.02.2026 15:09 — 👍 0    🔁 0    💬 1    📌 0

"Any amount of good analysis is better than no analysis, so long as it is *all* reported, because it feeds into systematic reviews and meta-analyses, which are ultimately what we should be using for the basis of policy decisions, not individual studies.

And small populations need analyses too."

24.02.2026 14:12 — 👍 0    🔁 0    💬 1    📌 0

Send me the link - I may be able to find it

23.02.2026 16:43 — 👍 1    🔁 0    💬 1    📌 0

Quite so!

NICE could play a blinder one day and just say: "New interventions need to reduce the average cost per QALY in the NHS or they won't be funded."

I mean, otherwise, you'll just creep inexorably to whatever cost per QALY threshold you're using.

23.02.2026 13:23 — 👍 0    🔁 0    💬 0    📌 0
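The creep can be illustrated with toy numbers (all figures hypothetical; the 30,000-per-QALY figure echoes the upper NICE reference range, and everything else is invented): if each newly funded intervention is priced just under the threshold, the portfolio-average cost per QALY drifts towards it.

```python
# Toy model: an NHS "portfolio" currently averaging 15,000 per QALY
# across 1,000,000 QALYs of activity. Each year, new interventions
# adding 50,000 QALYs are funded at just under a 30,000 threshold.
avg_cost_per_qaly = 15_000.0
portfolio_qalys = 1_000_000.0

for year in range(20):
    new_qalys = 50_000
    new_spend = new_qalys * 29_000          # priced just under the threshold
    total_spend = avg_cost_per_qaly*portfolio_qalys + new_spend
    portfolio_qalys += new_qalys
    avg_cost_per_qaly = total_spend/portfolio_qalys

# after 20 years the portfolio average has crept from 15,000 to 22,000,
# and keeps heading towards 29,000
```

Nothing in the decision rule ever stops the drift, which is the inexorable creep the post describes.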

The distinction matters: as with mental health and neurodivergence, people call rising SEND provision "overdiagnosis" without understanding anything about it.

I'd also take issue with "exponential growth", which has a meaning distinct from "went up fast", but whatever.

23.02.2026 11:32 — 👍 11    🔁 0    💬 0    📌 0

"Over the past decade, we have seen exponential growth in the number of children with special educational needs and disabilities."

Have we?

Or have children, who would have otherwise been left to struggle, been recognised as having some additional needs that, when met, allow them to do better?

23.02.2026 11:32 — 👍 12    🔁 0    💬 1    📌 0

With diagnostic infrastructure that could meet demand, the necessity of self-diagnosis (possibly including concept-creep by the public) would diminish substantially.

23.02.2026 11:17 — 👍 0    🔁 0    💬 0    📌 0

And this is directed at clinical guidelines, not self-diagnoses.

Self-diagnosis is much more difficult, and for e.g. autism (though this extends across conditions and states of being) is a consequence of the reality that formal diagnosis is not an option for vast numbers of people.

23.02.2026 11:17 — 👍 0    🔁 0    💬 1    📌 0

But no, it's mainly: "those people would never have been diagnosed in my day, they don't need treatment, fuck them!"

(These are just personal views, not specifically directed at anyone!)

23.02.2026 11:13 — 👍 0    🔁 0    💬 1    📌 0

"If it turns out that expanding the definition has decreased the cost-effectiveness low enough, we may want to reconsider the definition or treatment protocols: not least because we wouldn't want to be unnecessarily treating people who aren't likely to benefit from it."

23.02.2026 11:13 — 👍 0    🔁 0    💬 1    📌 0

They'd presumably start from positions of:

"The definition of [condition] has expanded, so the effectiveness of [treatment] has decreased, making the cost-effectiveness lower, so we need to re-do those analyses to make sure the [treatment] still meets the required cost-effectiveness threshold."

23.02.2026 11:13 — 👍 0    🔁 0    💬 1    📌 0
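That dilution argument can be put in rough numbers (every figure below is hypothetical): if an expanded definition pulls in people who benefit less, the average QALY gain falls and the cost per QALY rises, possibly past the threshold.

```python
# Hypothetical figures: the treatment gives 0.30 QALYs to "classic"
# cases and 0.05 QALYs to newly included milder cases, at 6,000 per
# course. The 30,000 threshold echoes the upper NICE reference range.
cost = 6_000
qaly_classic, qaly_new = 0.30, 0.05
threshold = 30_000

def icer(share_new):
    """Cost per QALY when a given share of treated people are milder cases."""
    avg_gain = (1 - share_new)*qaly_classic + share_new*qaly_new
    return cost/avg_gain

narrow = icer(0.0)   # 20,000 per QALY: cost-effective
broad = icer(0.5)    # ~34,000 per QALY: now above the threshold
```

Same treatment, same price; only the treated population changed, which is why the re-done analysis, not a cry of "overdiagnosis", is the right starting point.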

There are arguments to be made for defining and redefining conditions with or without objective diagnostic tests on the basis of new evidence, including by trying to make care for those conditions as cost-effective as possible.

But those arguments don't start from a position of "overdiagnosis".

23.02.2026 11:13 — 👍 0    🔁 0    💬 2    📌 0

But I feel the moral panic about "overdiagnosis" of mental health conditions is wrought from expectations about how many people *should* have particular conditions.

This is an entirely subjective and evidence-free point of view, and can be disregarded as such.

23.02.2026 11:13 — 👍 1    🔁 0    💬 1    📌 0

In the former, nothing is overdiagnosis, just false positives.

In the latter, everyone who wouldn't benefit would be "overdiagnosed", and are now also false positives.

If, in practice, you can't tell the difference (e.g. prostate cancer), then you need to accept this or develop a new test.

23.02.2026 11:13 — 👍 1    🔁 0    💬 1    📌 0
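The false-positive point is just Bayes' theorem. With hypothetical test characteristics (the sensitivity, specificity, and prevalence below are invented for illustration), most positives can be false positives even when the test looks good on paper:

```python
# Hypothetical test characteristics and prevalence (invented numbers):
sens, spec, prev = 0.90, 0.95, 0.02

# Bayes: P(condition | positive test) = positive predictive value
ppv = sens*prev / (sens*prev + (1 - spec)*(1 - prev))
false_positive_share = 1 - ppv   # ~73% of positives lack the condition
```

At low prevalence, roughly three-quarters of positives here are false positives, and nothing about the test tells you which is which: hence "accept this or develop a new test".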