Impressive!
In economics at least part of the credit goes to posing the question and convincing the field that the question is interesting. In other fields people often work on questions posed by others. (3/3)
You can choose to work on such questions, and if you solve them the field will recognize it and give you credit. That is not true in economics. We do not have well-defined questions that have been puzzling the profession for many years and that will eventually get solved. (2/3)
When I originally said this in some interview, I was not intending to say anything deep (and did not know they would put this on a kitchen magnet). I do think it is interesting to reflect on the fact that in some fields there are well-defined questions that the field agrees are important. (1/3)
True story! And we got the laundry done too!
Yeah, but in settings where Bernstein-von Mises does not apply, you have to declare your allegiance. E.g., unit root settings, weak instruments. And no, I did not put it on the exam, though probably should have!
Years ago I would tell my first-year econometrics class that there were two ways of doing statistical inference, Bayesian and frequentist, also known as right and wrong. A student asked if that would be on the exam.
Nice!
Not just me. An Imbens and Yiqing Xu retrospective!
I am not a big fan of DID in general, but the setting where it is used, panel data with a binary intervention, possibly with variation in the adoption date (staggered adoption), is very common. That module, which also covered synthetic control and related methods, was very popular.
Paul, check slide 25 from lecture 2.
Here you go.
Real
Quarterly Journal of Economics, 2023, coauthored with Alberto Abadie, Susan Athey, and Jeff Wooldridge. One of my favorite papers over my career.
It's been a pleasure working with this committee and thinking about the publication process and how things have changed or not changed, and how it can be improved.
My pleasure!
Yes, under the null of no effect (no direct effect and no spillovers), randomization inference allows for calculation of exact Fisher-style p-values. You can approximate those using robust (non-clustered) standard errors.
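The randomization inference described above can be sketched as a permutation test. This is a minimal illustration, not anyone's specific implementation: the data here are simulated (the sharp null holds by construction), and the test statistic (difference in means) and number of draws are my choices for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example data: binary treatment W, outcomes Y.
# The sharp null (no effect for any unit) holds by construction.
W = rng.integers(0, 2, size=100)
Y = rng.normal(size=100)

def fisher_p_value(Y, W, n_draws=10_000, rng=rng):
    """Randomization p-value for the sharp null of no effect.

    Under the sharp null, each unit's outcome is fixed regardless of
    treatment, so we can re-randomize W and recompute the test
    statistic to trace out its null distribution exactly (here,
    approximated by Monte Carlo draws).
    """
    observed = Y[W == 1].mean() - Y[W == 0].mean()
    count = 0
    for _ in range(n_draws):
        W_perm = rng.permutation(W)  # one re-randomization of treatment
        stat = Y[W_perm == 1].mean() - Y[W_perm == 0].mean()
        if abs(stat) >= abs(observed):
            count += 1
    return count / n_draws

p = fisher_p_value(Y, W)
print(p)
```

With a small number of units one could enumerate all assignments for a fully exact p-value; the Monte Carlo version above is the usual approximation, and in large samples it lines up with inference based on robust (non-clustered) standard errors, as noted above.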
Not testable.
I am not sure I see that as bias.