Sam Asher

@sgzasher.bsky.social

PhD student at Stanford GSB. Causal inference, local political economy, occasionally Tottenham Hotspur, SF Giants. Research at sgzasher.com.

85 Followers  |  80 Following  |  20 Posts  |  Joined: 24.09.2023

Posts by Sam Asher (@sgzasher.bsky.social)

Cyrus, if you ever want to chat trajectory balancing, we're working on the new draft before submission now! Course sounds fantastic

26.02.2026 15:27 — 👍 0    🔁 0    💬 1    📌 0
Communicating uncertainty about one's conclusions is a fundamental task of scientists. Within empirical economics a conventional approach is to report a statistic, often a standard error, alongside the point estimate of a target parameter (Athey and Imbens, 2023). An important question is whether, and when, this convention accurately conveys the uncertainty about the target parameter.

astonishing banger intro paragraph alert (economics.mit.edu/sites/defaul...)
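
For readers outside economics, here is a minimal sketch of the convention the quoted paragraph describes, on simulated data with statsmodels (purely illustrative, not from the paper):

import numpy as np
import statsmodels.api as sm

# Simulated data: the true coefficient on x is 2.0.
rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = 2.0 * x + rng.normal(size=500)

# The convention: report the point estimate alongside its standard error.
fit = sm.OLS(y, sm.add_constant(x)).fit()
print(f"estimate: {fit.params[1]:.3f}, std. error: {fit.bse[1]:.3f}")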

20.02.2026 00:18 — 👍 0    🔁 0    💬 0    📌 0

Hoped you'd find this interesting, Ryan!

19.02.2026 19:50 — 👍 1    🔁 0    💬 1    📌 0
a graph showing how LLMs do or do not p-hack

While LLMs will try to follow good research practices by default, you can pretty easily convince them to p-hack for you. In one case (out of the 4 tested), the LLM moved the result from p > 0.05 to p < 0.001. github.com/janetmalzahn...
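
A toy sketch of the mechanics (my illustration, not code from the linked repo): under a true null effect, testing every subgroup and reporting only the smallest p-value manufactures significance.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, n_groups = 500, 10
treat = rng.integers(0, 2, size=n)   # treatment with no real effect
y = rng.normal(size=n)               # outcome unrelated to treatment
group = rng.integers(0, n_groups, size=n)

# Honest analysis: one pre-specified test.
p_honest = stats.ttest_ind(y[treat == 1], y[treat == 0]).pvalue

# Hacked analysis: test every subgroup, keep only the smallest p-value.
p_hacked = min(
    stats.ttest_ind(y[(treat == 1) & (group == g)],
                    y[(treat == 0) & (group == g)]).pvalue
    for g in range(n_groups)
)
print(f"pre-specified p = {p_honest:.3f}; best-of-{n_groups} subgroups p = {p_hacked:.3f}")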

19.02.2026 19:10 — 👍 32    🔁 13    💬 1    📌 3

A common experience: having a research idea, doing some basic searching to see what's out there, and it's THAT person again who has already done something very similar. For me That Person is Fabrizia Mealli, what a scholar.

18.02.2026 20:01 — 👍 1    🔁 0    💬 0    📌 0
Post image

1/ Sorry for double-posting from X. Sharing a new working paper for the Year of the Horse 🐎:

"An AI-assisted workflow that scales reproducibility in empirical research" (bit.ly/repro-ai) w/ Leo Yang Yang

18.02.2026 19:21 — 👍 76    🔁 26    💬 4    📌 6

Also many (most?) applied political science papers have statistical uncertainty just totally dwarfed by myriad other, unquantified uncertainties. Applying /different/ heuristics doesn't seem likely to fix anything.

14.02.2026 19:16 — 👍 4    🔁 0    💬 0    📌 0

LLMs/engineers go

import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--bread1", action="store_true", required=True)
parser.add_argument("--spam", action="store_true")
parser.add_argument("--onion", action="store_true")
parser.add_argument("--cheese", action="store_true")
parser.add_argument("--tomato", action="store_true")
parser.add_argument("--bread2", action="store_true")
args = parser.parse_args()

burger.py --bread1 --spam --onion --cheese --tomato --bread2

vs
make_a_sandwich()
with sensible defaults
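
A toy sketch of that sensible-defaults version (names illustrative):

def make_a_sandwich(bread="sourdough", spam=True, onion=True,
                    cheese=True, tomato=True):
    """Build a sandwich; every layer has a sensible default."""
    fillings = [name for name, wanted in
                [("spam", spam), ("onion", onion),
                 ("cheese", cheese), ("tomato", tomato)] if wanted]
    return " | ".join([bread] + fillings + [bread])

print(make_a_sandwich())             # all defaults
print(make_a_sandwich(onion=False))  # override only what you care about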

06.02.2026 06:05 — 👍 9    🔁 2    💬 2    📌 0

the models have gotten really good at math.

04.02.2026 05:52 — 👍 1    🔁 0    💬 0    📌 0

Again returning to Joel Tropp's fantastic introduction to probability theory notes. These are an amazing resource for anyone needing quick references and reminders (as I do, constantly!)

tropp.caltech.edu/notes/Tro24-...

28.01.2026 16:33 — 👍 0    🔁 0    💬 0    📌 0

Apoorva, each week you make my urge to neglect my responsibilities and build something outside my technical range very, very strong; beseeching you etc. etc.

26.01.2026 15:25 — 👍 1    🔁 0    💬 0    📌 0

Get ready to have your feathers ruffled.

21.01.2026 19:06 — 👍 1    🔁 0    💬 0    📌 0

AI wrote the prompt to handle the prompt to make the other AI generate a prompt for the other AI to write the script to validate the script the other AI wrote from the prompt the AI wrote after I prompted the tool the first AI made to prompt the

08.01.2026 20:06 — 👍 1    🔁 0    💬 0    📌 0

this is our curse

13.08.2025 21:01 — 👍 4    🔁 0    💬 0    📌 0

that will change a teen

12.08.2025 04:10 — 👍 1    🔁 0    💬 1    📌 0
Post image

New slide one for teaching prediction methods

09.08.2025 07:56 — 👍 6    🔁 1    💬 1    📌 1

but apoorva, matrix multiplication can read my papers and write my code

02.08.2025 18:39 — 👍 1    🔁 0    💬 1    📌 0

returning to read the HTE lit always makes me giggle. you take one tiny step off the ATE path and the wildlife suddenly want you dead

23.07.2025 19:12 — 👍 1    🔁 0    💬 1    📌 0

The tragic landslide in Blatten gives me the excuse to tell you the story of how we found out Ice Ages existed. It's a cool story and the most important bit is rather similar to what's happening now.

29.05.2025 19:49 — 👍 956    🔁 416    💬 26    📌 97

You're kidding! Tim CA'd for Theory of Stats when I took it... he's awesome

15.05.2025 22:55 — 👍 0    🔁 0    💬 0    📌 0

Related: learning programming languages (not just translating code from prior languages or fixing naive gobbledegook). I know what questions to ask, so it probably speeds up language acquisition by a factor of 4-10. This is really remarkable!

30.04.2025 01:35 — 👍 1    🔁 0    💬 0    📌 0

Late addendum to discussion on LLM use cases. When trying to write proofs with ideas I have studied before but only recall loosely, LLMs are great at sketching the argument. Can't trust their proofs! But they've cumulatively saved WEEKS fumbling through old textbooks by making clear where to look.

30.04.2025 01:35 — 👍 1    🔁 0    💬 1    📌 0

Shoot.

06.04.2025 22:23 — 👍 1    🔁 0    💬 0    📌 0

a linear t-learner 😭😭
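
For the uninitiated: a T-learner fits one outcome model per treatment arm and takes the difference in predictions as the CATE estimate. A minimal sketch of the lamented linear version, on simulated data (illustrative only):

import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 3))
t = rng.integers(0, 2, size=n)       # randomized treatment
tau = 1.0 + X[:, 0]                  # true heterogeneous effect
y = X @ np.array([0.5, -0.2, 0.1]) + t * tau + rng.normal(size=n)

# T-learner: one outcome model per arm, CATE = difference in predictions.
mu1 = LinearRegression().fit(X[t == 1], y[t == 1])
mu0 = LinearRegression().fit(X[t == 0], y[t == 0])
cate_hat = mu1.predict(X) - mu0.predict(X)
print(f"mean estimated CATE: {cate_hat.mean():.2f} (truth: {tau.mean():.2f})")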

08.03.2025 20:16 — 👍 2    🔁 0    💬 1    📌 0