Cyrus if you ever want to chat trajectory balancing, we're working on the new draft before submission now! Course sounds fantastic
26.02.2026 15:27
Communicating uncertainty about one's conclusions is a fundamental task of scientists. Within empirical economics a conventional approach is to report a statistic, often a standard error, alongside the point estimate of a target parameter (Athey and Imbens, 2023). An important question is whether, and when, this convention accurately conveys the uncertainty about the target parameter.
astonishing banger intro paragraph alert (economics.mit.edu/sites/defaul...)
20.02.2026 00:18
Hoped you'd find this interesting, Ryan!
19.02.2026 19:50
a graph showing how LLMs do or do not p-hack
While LLMs will try to follow good research practices by default, you can pretty easily convince them to p-hack for you. In one case (out of the 4 tested), the LLM moved the result from p > 0.05 to p < 0.001. github.com/janetmalzahn...
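For intuition on why a searched-over result can jump from p > 0.05 to p < 0.001, here is a hypothetical sketch (not the code from the linked repo): with pure noise and 20 candidate outcomes, reporting only the best p-value produces "significance" far more often than 5% of the time.

```python
import numpy as np
from math import erf, sqrt

# Illustrative only: simulate a null experiment with many candidate outcomes,
# then report the single smallest p-value -- the essence of p-hacking.
rng = np.random.default_rng(0)

def two_sided_p(x, y):
    """Approximate two-sample test via a normal z statistic (fine at n = 100)."""
    se = sqrt(x.var(ddof=1) / len(x) + y.var(ddof=1) / len(y))
    z = (x.mean() - y.mean()) / se
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# 20 candidate outcomes, zero true treatment effect everywhere
treat = rng.normal(size=(20, 100))
ctrl = rng.normal(size=(20, 100))
pvals = [two_sided_p(treat[k], ctrl[k]) for k in range(20)]

# Under the null, min(pvals) falls below 0.05 with probability
# 1 - 0.95**20, roughly 64% -- not the nominal 5%.
print(min(pvals))
```

The numbers (20 outcomes, n = 100 per arm) are arbitrary; the point is only that the minimum of many null p-values is not a valid p-value.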
19.02.2026 19:10
A common experience: having a research idea, doing some basic searching to see what's out there, and it's THAT person again who has already done something very similar. For me That Person is Fabrizia Mialli, what a scholar.
18.02.2026 20:01
1/ Sorry for double-posting from X. Sharing a new working paper for the Year of the Horse 🐎:
"An AI-assisted workflow that scales reproducibility in empirical research" (bit.ly/repro-ai) w/ Leo Yang Yang
Also many (most?) applied political science papers have statistical uncertainty just totally dwarfed by myriad other, unquantified uncertainties. Applying /different/ heuristics doesn't seem likely to fix anything.
14.02.2026 19:16
LLMs/engineers go
import argparse
parser = argparse.ArgumentParser()
parser.add_argument("--bread1", required=True)
parser.add_argument("--spam")
parser.add_argument("--onion")
parser.add_argument("--cheese")
parser.add_argument("--tomato")
parser.add_argument("--bread2")
burger.py --bread1 --spam --onion --cheese --tomato --bread2
vs
make_a_sandwich()
with sensible defaults
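For contrast, a minimal sketch of the "sensible defaults" version; every name here is made up for illustration:

```python
# One function, keyword arguments with defaults, no flag ceremony.
# All ingredient names and defaults are illustrative.
def make_a_sandwich(bread="rye", spam=True, onion=True, cheese=True, tomato=True):
    fillings = [name for name, wanted in
                [("spam", spam), ("onion", onion),
                 ("cheese", cheese), ("tomato", tomato)] if wanted]
    return " | ".join([bread] + fillings + [bread])

print(make_a_sandwich())            # works out of the box
print(make_a_sandwich(onion=False)) # override only what you care about
```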
the models have gotten really good at math.
04.02.2026 05:52
Again returning to Joel Tropp's fantastic introduction to probability theory notes. These are an amazing resource for those needing quick references and reminders (as at least I do, constantly!)
tropp.caltech.edu/notes/Tro24-...
Apoorva each week you make my urge to neglect my responsibilities and build something outside my technical range very, very strong; beseeching you etc. etc.
26.01.2026 15:25
Get ready to have your feathers ruffled.
21.01.2026 19:06
AI wrote the prompt to handle the prompt to make the other AI generate a prompt for the other AI to write the script to validate the script the other AI wrote from the prompt the AI wrote after I prompted the tool the first AI made to prompt the
08.01.2026 20:06
this is our curse
13.08.2025 21:01
that will change a teen
12.08.2025 04:10
New slide one for teaching prediction methods
09.08.2025 07:56
but apoorva, matrix multiplication can read my papers and write my code
02.08.2025 18:39
returning to read the HTE lit always makes me giggle. you take one tiny step off the ATE path and the wildlife suddenly want you dead
23.07.2025 19:12
The tragic landslide in Blatten gives me the excuse to tell you the story of how we found out Ice Ages existed. It's a cool story and the most important bit is rather similar to what's happening now.
29.05.2025 19:49
You're kidding! Tim CA'd for Theory of Stats when I took it... he's awesome
15.05.2025 22:55
Related: learning programming languages (not just translating code from prior languages or fixing naive gobbledegook). I know what questions to ask, so it probably speeds up language acquisition by a factor of 4-10. This is really remarkable!
30.04.2025 01:35
Late addendum to discussion on LLM use cases. When trying to write proofs with ideas I have studied before but only recall loosely, LLMs are great at sketching the argument. Can't trust their proofs! But they've cumulatively saved WEEKS fumbling through old textbooks by making clear where to look.
30.04.2025 01:35
Shoot.
06.04.2025 22:23
a linear t-learner
08.03.2025 20:16
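The "linear t-learner" mentioned above can be sketched in a few lines. This is an illustrative implementation under standard assumptions, not anything from the thread: fit one OLS regression per treatment arm, then estimate the conditional average treatment effect (CATE) as the difference of the two arms' predictions.

```python
import numpy as np

# Minimal linear T-learner sketch (illustrative): one OLS fit per arm,
# CATE(x) estimated as the treated-arm prediction minus the control-arm one.
def ols_fit(X, y):
    Xb = np.column_stack([np.ones(len(X)), X])  # add intercept column
    return np.linalg.lstsq(Xb, y, rcond=None)[0]

def ols_predict(beta, X):
    return np.column_stack([np.ones(len(X)), X]) @ beta

def t_learner_cate(X, y, w, X_new):
    b1 = ols_fit(X[w == 1], y[w == 1])  # treated-arm regression
    b0 = ols_fit(X[w == 0], y[w == 0])  # control-arm regression
    return ols_predict(b1, X_new) - ols_predict(b0, X_new)

# Toy data with a known heterogeneous effect tau(x) = 2x
rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 1))
w = rng.integers(0, 2, size=2000)
y = X[:, 0] + w * (2 * X[:, 0]) + 0.1 * rng.normal(size=2000)
cate = t_learner_cate(X, y, w, X)  # should track 2 * X[:, 0]
```

With the outcome model correctly specified as linear in each arm, the estimated CATE recovers tau(x) up to noise; the "wildlife" starts wanting you dead once that linearity stops holding.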