The person who wrote this list seems to have no idea either what each of these jobs entails or the limits of LLMs.
tbf it could be an AI-generated list.
That'd be on-brand.
@sean-h.bsky.social
Evidence reviews, public health, epidemiology, statistics https://seanharrison.blog/
Fully agree.
To add to the "this needs to stop": read a paper yesterday where they referenced their previous papers as if they were other people, e.g. Smith wrote in Smith's own paper "Smith (2018) suggested...".
If you reference your own work, own it.
pmc.ncbi.nlm.nih.gov/articles/PMC...
Ha, he did for me too.
31.07.2025 08:48
Hard disagree: he could be a good dad (p>0.05), a bad dad (p<0.05, -ve effect direction), or THE BEST DAD (p<0.05, +ve effect direction).
I mean, it's clear from the first 2 panels he's NOT the best dad, he's a dick, but it's a frequentist analysis that ignores all previous evidence.
Huh, I'm a cliche.
I started epi by obsessively analysing data to prove why another "expert" was wrong about a method of adjusting PSA for age & BMI.
I work from home, alone, 80% of the time.
If I live long enough, there's at least a 90% chance I'll get prostate cancer.
DAMMIT
Thanks - I'm a big fan of the literalness of many German words, so knew a few!
Stinktier ("stink animal", skunk) is a personal favourite, along with Fledermaus ("flutter mouse", bat).
Oh, and Wasserschwein ("water pig", capybara)!
Extremely limited German, let's try a translation:
Summer/Winter/Nicholas (??)
Playpark (?)/Animal-park (zoo)/tea-house (??)
Sweets (??)/chocolate/PIZZA
?????
Circus/ZOO
Google translate would be so much better if it just listed words it didn't know as ?????????
(not questioning expertise on 1*SE, I just don't know why!)
24.07.2025 11:27
I thought the relative precision could be lower for Z because it was binary, and a low number of imps could mean greater possibility of higher imprecision.
I re-ran with Z~N(0,1) - took out "if `run'", otherwise same code.
Now I just think "it was chance".
Also, why not 1.96*MCSE in graph?
Increasing buffer length (to, say, 1000) might also help?
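For anyone following the MCSE point: here is a minimal Python sketch (not the thread's actual Stata code; `nsim` and the data-generating model are illustrative stand-ins) of computing the Monte Carlo standard error across simulation replicates, and the 1.96*MCSE interval suggested for the graph's error bars:

```python
import math
import random

random.seed(42)

# Hypothetical simulation: nsim replicates of some estimator's value.
# In the thread the draws came from Stata; here we just use Z ~ N(0,1).
nsim = 1000
estimates = [random.gauss(0, 1) for _ in range(nsim)]

mean_est = sum(estimates) / nsim
var_est = sum((e - mean_est) ** 2 for e in estimates) / (nsim - 1)

# Monte Carlo standard error of the mean across replicates
mcse = math.sqrt(var_est / nsim)

# 95% Monte Carlo interval: mean +/- 1.96 * MCSE
lower, upper = mean_est - 1.96 * mcse, mean_est + 1.96 * mcse
print(f"estimate {mean_est:.3f}, MCSE {mcse:.4f}, "
      f"95% MC interval ({lower:.3f}, {upper:.3f})")
```

With 1000 replicates of a standard normal, the MCSE is roughly 1/sqrt(1000) ≈ 0.03, which is why a seemingly "off" replicate mean can still be well within chance.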
24.07.2025 11:00
I had the same, and thought it was my computer not keeping up with the recording.
Think I resolved it with a restart and closing browser tabs etc.
Haven't tried this, but this thread recommends a better audio driver too: www.reddit.com/r/youtubers/...
Both underappreciated, and often called on to fix something unfixable too late.
I had this conversation with an information specialist yesterday!
I'm also generally wary of meta-analyses: not because they can't be useful or meaningful (they absolutely can be), but because people reading the result will take that number, and only that, away.
So I get the impulse to remove high risk of bias studies from that...
Nah, I agree with the point that most reviews are crap, and there are plenty of frustrations around that on all sides.
It was only the technical aspect of excluding vs sensitivity vs comprehensive reporting in case anyone wanted a pedantic over-explanation.
Ha, I read that when it came out!
And yeah, it takes up your valuable time for no meaningful gain to anyone - certainly in my experience people don't value information scientists & medical librarians nearly enough!
(with a top-up search if we think the search was crap.)
Important: searches should be at the very least *checked* by an information scientist!
I've been doing this for over a decade, I've done dozens of reviews, I "help" with searches, I understand all the logic: I can't do a search from scratch.
What are you talking about - if it's an RCT, that's an automatic green light!
I've never seen RCTs that look even a tiny bit shit!
Certainly never seen any RCTs that might have selected their participants and analysed their data in such a way as to support their prior beliefs!
I saw a paper yesterday in a "student research" journal (didn't know that was a thing) where the search returned 3 results.
Yes, this is a problem.
Although include clinicians, and anyone else who does a review because it's "easy" or "simple", with students.
This requires good understanding of primary studies, including a lot of causal inference and statistics, as well as review methods.
So yeah, it's a lot, but that's the job (even if it's seldom done well).
Evidence should never be discounted entirely - there's the risk above, but also the accusation that "you excluded these studies, so this review is clearly biased" (can you imagine the fun during the COVID mask reviews...) - but evidence needs interpreting, and risk of bias factors into that.
24.07.2025 08:05
Sensitivity analyses, or meta-analyses by overall risk of bias, also fine.
But the best thing to do is include everything, appraise everything honestly and completely, then interpret the evidence (with or without a meta-analysis) while acknowledging and accounting for risks of bias.
Risk of bias assessments shouldn't be used to exclude studies from a main analysis - otherwise there's an incentive to assign high risk of bias to studies that don't give the answer you want, etc.
Specific, objective risks of bias as eligibility criteria, e.g. "observational studies need to adjust for X,Y,Z": fine.
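The "everything in the main analysis, risk of bias as a sensitivity analysis" approach can be sketched with a toy fixed-effect inverse-variance pooling. This is a hypothetical illustration, not anyone's actual analysis: the study effects, standard errors, and risk-of-bias flags are all made up.

```python
import math

# Illustrative study data: (effect estimate, standard error, high risk of bias?)
studies = [
    (0.30, 0.10, False),
    (0.25, 0.15, False),
    (0.60, 0.12, True),   # flagged high risk of bias
    (0.20, 0.20, False),
]

def pool(data):
    """Fixed-effect inverse-variance pooled estimate and its standard error."""
    weights = [1 / se ** 2 for _, se, _ in data]
    est = sum(w * e for w, (e, _, _) in zip(weights, data)) / sum(weights)
    se = math.sqrt(1 / sum(weights))
    return est, se

# Main analysis: everything included, appraised, interpreted with RoB in mind
main_est, main_se = pool(studies)

# Sensitivity analysis: restrict to low risk-of-bias studies
sens_est, sens_se = pool([s for s in studies if not s[2]])

print(f"main: {main_est:.3f} (SE {main_se:.3f}); "
      f"low-RoB only: {sens_est:.3f} (SE {sens_se:.3f})")
```

Comparing the two pooled estimates shows how much the high-RoB study drives the result, without it ever being silently excluded from the main analysis.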
In offence of systematic reviews, I will happily refuse to do umbrella reviews: I think enough reviews are shit enough that I'd rather spend the time using them as sources of primary studies, then just review those myself.
24.07.2025 07:49
In defence of systematic reviews, there are people with expertise to do them well, and they are very important for policy (although noting we did rapid reviews when necessary, and I'd say they were as useful and quicker).
They are also useful for "do I really need to do this study?"
This is what I'd do too...
23.07.2025 19:20
In evidence synthesis, they may represent articles that were never published, e.g. because of publication bias/file drawer problem.
Sure, they should be published as preprints instead, but that's not how it works out sometimes.
(Agreed most of the time it's pointless, but this is a legit reason)
Yeah, me too
21.07.2025 11:23
STOP GIVING PEOPLE IDEAS!
But also:
Strengthbread
Dazzleberries
Superlativelaxative
Magnapasta
Magnapasta is my favourite.
Why have I heard "Bayesian borrowing" twice today, but never before today?
Also, have we already had "a real-world simulation study", or can I get in first?
I wrote a thing: press.asimov.com/articles/asp...
If you're interested in whether willow bark = aspirin, it might be interesting!
If not, it still might be interesting!