also of interest @patrickpliu.bsky.social's thread on the working paper
Congratulations to @yamilrvelez.bsky.social, @patrickpliu.bsky.social, and @scottclifford.bsky.social!
we think attitudes are some function of beliefs; our exps routinely move beliefs but not (even correlated) attitudes. This team found a way to guess which beliefs matter more (and they do!)
P.S. Pre-post differences are *not* valid treatment effect estimates. Why? Here's a post by @statsepi.bsky.social: statsepi.substack.com/p/one-simple..., here's a post by me: www.the100.ci/2025/01/22/r... >
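A toy simulation (mine, not from the linked posts; all numbers invented) of why a pre-post difference is not a treatment effect estimate: any secular trend between waves gets folded into the pre-post change, while a randomized post-treatment comparison recovers the effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical data-generating process: everyone drifts upward by 5
# between waves (secular trend), independent of treatment.
baseline = rng.normal(50, 10, n)
trend = 5.0     # change everyone experiences regardless of treatment
effect = 2.0    # true treatment effect

treat = rng.integers(0, 2, n)  # random assignment
pre = baseline + rng.normal(0, 1, n)
post = baseline + trend + effect * treat + rng.normal(0, 1, n)

# Pre-post difference among the treated: trend + effect, not the effect
prepost = (post - pre)[treat == 1].mean()

# Randomized comparison of post outcomes: recovers the effect
rand_diff = post[treat == 1].mean() - post[treat == 0].mean()

print(round(prepost, 1), round(rand_diff, 1))
```

With these made-up numbers, the pre-post "effect" comes out near 7 (trend plus effect), while the randomized comparison lands near the true 2.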
YES! that would be v. cool.
A clunky part of DD is the declare_model() section; if we could somehow do declare_model(dagitty_spec), that would be amazing. It's just that all the details (the outcome spaces, the strengths of covariances) are hard to get in there simultaneously
Seems great!
!!!! will look into this, that sounds like it would solve many annoying things (like how it sometimes uses older code)
(should note that I checked the r2, it was wrong, I told the AI it was wrong, and it fixed it. So there were two rounds before I believed the sim.)
Writing simulations in DeclareDesign just went from "I should do that, but it's kind of a lot of work" to extremely easy
Reminder to register (no fee) for the Rebecca Morton experiments conference at NYU taking place next week. Join us: nyu.qualtrics.com/jfe/form/SV_...
Per-protocol analysis strikes again!
Folks, if you randomize but then don't analyze some of the people who got randomized (maybe because they didn't adhere to instructions, maybe because they dropped out), randomization will no longer do all the heavy causal inference lifting.
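A minimal sketch of the point (my own toy numbers, not from any real study): if adherence is related to both treatment and outcomes, dropping non-adherers compares non-comparable groups. The intention-to-treat contrast stays valid; the per-protocol contrast does not.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Hypothetical setup: healthier subjects (high `latent`) have better
# outcomes, and treatment side effects push the sickest treated
# subjects to drop out -- so adherence depends on both.
latent = rng.normal(0, 1, n)
treat = rng.integers(0, 2, n)  # random assignment
adhere = (latent + rng.normal(0, 1, n) - 0.8 * treat) > 0

true_effect = 1.0  # effect of actually receiving treatment
y = 2 * latent + true_effect * treat * adhere + rng.normal(0, 1, n)

# Intention-to-treat: compare everyone as randomized (valid; it
# estimates the effect of *assignment*, diluted by non-adherence)
itt = y[treat == 1].mean() - y[treat == 0].mean()

# Per-protocol: drop non-adherers -- conditioning on a post-treatment
# variable correlated with outcomes, so comparability is gone
pp = y[(treat == 1) & adhere].mean() - y[(treat == 0) & adhere].mean()

print(round(itt, 2), round(pp, 2))
```

In this setup the treated adherers are a healthier-than-average slice, so the per-protocol estimate overshoots the true effect of 1.0 by about half; the ITT is smaller than 1.0 but honestly so (it's the effect of assignment, not receipt).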
IMO, journals should be able to decide whether they review registered reports or not. Most don't, at present.
I *do not* think journals should reject papers for *having been preregistered*; that's nuts!
*maybe* this journal mistakenly thought the author submitted a registered report (or similar).
But if the position is "papers must not be pre-registered" I'd love to hear the justification.
Thank you! am screen-shotting your predictions and will assess after results come in!!
Thank you for all the support and reposts!
We've gotten a steady stream of inquiries and submissions for this competition, but also some ANXIETY that the window will close before people have a chance to submit.
We're nowhere near that! We'll update on here when we've allocated 50% of the capacity.
Very nice, thank you for the research and for the Atlantic piece.
TIL!
Very excited to see this out at @bjpols.bsky.social! In this article, I show that contemporary political news coverage makes it challenging for readers to learn information that is helpful for democratic accountability, even for very politically engaged audiences.
A brief summary:
(you're so right, that was my *actual* fav, but it sold already....)
yessss the white glaze in the ridges!!
Am I hearing you right that when the two candidates take issue positions that people care about a lot, the test-retest is closer to like 95%, but among people who don't care so much about the issue, test-retest is lower, like 75%? I'm giving numbers so you'll correct me :)
Ooh, sounds v. interesting. Can you give a teaser? I'm guessing that people make the same choice about 80% of the time, is that close?
Penelope Van Grinsven and Lilly Zuckerman curate Above Board Ceramics -- this year's show is now live and is fabulous.
www.aboveboardceramics.com
[disclosure Penny and I are married!]
highly recommend this paper. the experimental design manipulates how much subjects are monitored; when asked, yes treated subjects feel more monitored.
Do they later give different responses on possibly sensitive topics? No, they don't.
Rough. It doesn't have to be this way! Check out what the Saint Paul Chamber Orchestra is up to.
www.thespco.org/concerts-tic...
Interesting paper, especially so coming from researchers at Anthropic arxiv.org/pdf/2601.20245
Among the most vivid declines in Americans' trust: their diminished trust in other people.
Data from @gallup.com's Social Series.
I would bet that, conditional on seeing the video,
the treatment effect among those who oppose ICE
≈
the treatment effect among those who support ICE
It's hard to get people to watch things they disagree with, though. That's not to say it wouldn't work if they did
🎺 Call for proposals 🎺
1️⃣ replicate an existing experiment
2️⃣ run a novel experiment
on repdata.com
3️⃣ coauthor with Mary McGrath and me to meta-analyze the replications and existing studies
4️⃣ publish your study
details: alexandercoppock.com/replication_...
applications open Feb 1
please repost!
After years in academia, I’m exploring data science and research roles in industry.
I'm a quant. social scientist (PhD Yale ’24, NYU) focused on causal inference, experiments, and large-scale data.
Feel free to get in touch or share; all leads appreciated. dwstommes@gmail.com
JOIN us for this year's Rebecca Morton conference on experimental political science at NYU! March 6-7. We have a great lineup of papers and posters!
Program (scroll down) here: wp.nyu.edu/cesspolitica...
Register (no fee) here: nyu.qualtrics.com/jfe/form/SV_...