Would love to see the final version!
One of the reasons I think the study has continued to have strong support is that all of the political parties use it routinely in their own analysis.
I think a major role of government is to build public goods, and widely applicable data resources are a good example of that. A world where we're gating these datasets behind a pay-as-you-go system would be a lot worse in my view.
I'm delighted when I hear non-academics are finding value from BES data
It takes nothing away from academic uses for private companies to be using the data we produce, and it creates another group of advocates for continuing to fund these resources
My opinion is that we shouldn't be gatekeeping research or data funded with public money other than for subject protection reasons. The ESRC's very interested in having societal impact and having private companies use the data is one way for that to happen.
And in this example, Claude is just downloading the data (or getting the researcher to do it) and then running an analysis in R or Python, so I don’t think datasets are being shared in bulk with Anthropic
I mean that’s kind of the point of publicly funded research.
Maybe add South Africa too
Comparative case study of reconciliation approaches (integration/exclusion of former collaborators into post-authoritarian societies) in post-authoritarian political systems, including different parts of the wizarding world, post-Nazi Germany, post-war Japan, and Iraq
Interesting that the policy ends up coming down to "at least paraphrase what the AI wrote". There's a lot to be said for this: it makes sure at least one human brain paid enough attention to rewrite what the AI said. But fascinating how quickly we're having to grapple with all of this
Congratulations, well deserved!
The most downloaded paper on EconStor in Feb. 2026 was:
"Briggs, Ryan C.; Mellon, Jonathan; Arel-Bundock, Vincent (2026) : It must be very hard to publish null results, I4R Discussion Paper Series, No. 281, Institute for Replication (I4R), s.l." hdl.handle.net/10419/336819
@i4replication.bsky.social
Conditionally accepted at the APSR (w/ @scottclifford.bsky.social & @patrickpliu.bsky.social):
Why does political information so often change beliefs but NOT attitudes? We highlight the role of belief relevance, or the extent to which beliefs bear on attitudes.
This proposes a way of using AI agents to produce research. Ok. But this bit is a pipe dream: "And human scientists should retain authority over — and responsibility for — framing the question, validating the path and signing off on conclusions." Here's why...
/1
this but for scientists who have p-hacked
It could be so much worse. Imagine if relationship arguments were associational rather than causal: "I notice a correlation between you doing it this way and me being in a bad mood"
I should come clean and admit there were 2 weeks I couldn’t cross because the Hudson froze over entirely.
There was a river between the train station and my office and the choice was a 7 minute kayak or a 30 minute drive
And the nearest bridges were 7 miles in each direction
There was a river between the train station and my office
I really do mean every day too
This undersells the fact that you would sometimes do zoom meetings during the kayak!
I kayaked to work every day for 3 years
Definitely agree with all that too
The main thing I think AI is going to do is generically put pressure on all of these institutions. That opens up space for changes in general including ones that might not be directly tied to AI.
But who knows, maybe there's a "Big Short" style version of electoral shocks you could make
Something tells me this very legitimate agent may not have read my book
What’s actually going on in this statement is he’s using insider terms to sound credible to influential people within the defense policy community