
Max Reith

@maxreith.bsky.social

AI, Economic Theory, Political Economy. Economics @EconOxford, prev. Mannheim

500 Followers  |  1,547 Following  |  63 Posts  |  Joined: 03.12.2023

Latest posts by maxreith.bsky.social on Bluesky

Who killed Europe’s single market dream? A decades-long effort to tear down internal trade barriers has stalled, leaving the EU economy ‘tagging along behind’

‘The whodunnit for modern Brussels: who killed the single market dream?’
on.ft.com/3M7d0oo

02.12.2025 06:58 — 👍 7    🔁 4    💬 1    📌 1

Gemini-3 got this wrong 5/5 times...
(But this might just be reduced reasoning budgets at launch or something)

18.11.2025 21:59 — 👍 2    🔁 0    💬 0    📌 0
Screenshot of working paper: The Consequences of Faculty Sexual Misconduct

📣 New NBER Working Paper out today 📣

"The Consequences of Faculty Sexual Misconduct"
Sarah Cohodes & Katherine Leu

10.11.2025 13:49 — 👍 537    🔁 199    💬 12    📌 34

New @nberpubs: "The Economic Impact of Brexit" www.nber.org/papers/w34459
"by 2025, Brexit had reduced UK GDP by 6% to 8%, with the impact accumulating gradually over time." 😲

10.11.2025 11:45 — 👍 25    🔁 19    💬 1    📌 4

Unlike the other models I used, Kimi K2 Thinking is freely available; Gemini 2.5 Pro and GPT-5 Extended Thinking are only available through a $20 monthly subscription. So overall, Kimi K2 seems like a pretty big deal to me. (I didn’t test GPT-5 Pro, since it costs $200 per month.)

08.11.2025 14:59 — 👍 0    🔁 1    💬 0    📌 0

Sometimes the LLMs gave wrong equilibria, sometimes they wrongly claimed that there were no new equilibria at all. The inconsistency across all three models is annoying, but that’s just part of working with LLMs, I suppose 🤷‍♂️

08.11.2025 14:59 — 👍 0    🔁 0    💬 1    📌 0

The paper is on algorithmic game theory, where I modify existing games in a specific way and examine whether new equilibrium outcomes emerge under the modified framework. I provided each model with a simple numerical example and asked whether new equilibrium outcomes arise.

08.11.2025 14:59 — 👍 1    🔁 0    💬 1    📌 0
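(For illustration: a minimal sketch of the kind of equilibrium check involved. The paper is unpublished, so the game, the payoffs, and the "modification" below are entirely hypothetical and not the paper's actual construction.)

```python
# Hypothetical example only: the paper's actual game and modification are not
# public, so the payoffs and the "bonus" tweak below are invented.
import itertools

def pure_nash(A, B):
    """Pure-strategy Nash equilibria of a two-player bimatrix game.
    A[i][j] is the row player's payoff, B[i][j] the column player's."""
    n_rows, n_cols = len(A), len(A[0])
    eqs = []
    for i, j in itertools.product(range(n_rows), range(n_cols)):
        row_best = all(A[i][j] >= A[k][j] for k in range(n_rows))
        col_best = all(B[i][j] >= B[i][l] for l in range(n_cols))
        if row_best and col_best:
            eqs.append((i, j))
    return eqs

# A 2x2 prisoner's-dilemma-style base game.
A = [[3, 0], [5, 1]]
B = [[3, 5], [0, 1]]

# An invented modification: both players get a bonus at outcome (0, 0).
A_mod = [[6, 0], [5, 1]]
B_mod = [[6, 5], [0, 1]]

print(pure_nash(A, B))          # [(1, 1)]          -> only the original equilibrium
print(pure_nash(A_mod, B_mod))  # [(0, 0), (1, 1)]  -> a new equilibrium outcome appears
```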

Another DeepSeek moment? Moonshot AI, a Chinese lab, released its new (open source!) model K2 Thinking, outperforming OpenAI et al. on several benchmarks. I tested it with a question from an unpublished paper of mine. Out of 5 tries, Kimi, GPT-5 and Gemini 2.5 Pro each replied correctly 3 times!

08.11.2025 14:59 — 👍 5    🔁 1    💬 1    📌 2
AI Purity Test
The AI Purity Test is a voluntary self-assessment developed by Tina Tarighian. It provides participants with a structured opportunity to reflect on the evolution of their interactions with artificial intelligence over time.
Caution: this is not a bucket list. Completion of all items on this test will likely result in death.

Your score:
67

chat, is this good?

I scored 67 on the AI purity test.

post your scores:
https://aipuritytest.org

24.10.2025 14:00 — 👍 32    🔁 1    💬 25    📌 11
CHM Live | The Great Chatbot Debate: Do LLMs Really Understand?

An interesting debate between Emily Bender and Sebastien Bubeck: www.youtube.com/watch?v=YtIQ... Emily's thesis is roughly summarized as: "LLMs extrude plausible-sounding text, and the illusion of understanding comes entirely from how the listener's human mind interprets language."

21.10.2025 15:33 — 👍 8    🔁 1    💬 2    📌 0

This debate between @clemensfuest.bsky.social and @suedekum.bsky.social in @zeit.de should be covered in lectures and seminars on the theory of economic policy. Very good teaching material, for the good and the bad. A 🧵:

18.10.2025 09:31 — 👍 87    🔁 16    💬 4    📌 2
AI bots wrote and reviewed all papers at this conference. Event will assess how reviews by models compare with those written by humans.

🧪 A new computer science conference, Agents4Science, will feature papers written and peer-reviewed entirely by AI agents. The event serves as a sandbox to evaluate the quality of machine-generated research and its review process.
#MLSky

15.10.2025 15:33 — 👍 4    🔁 2    💬 0    📌 0

I’ve decided not to post my annual “women on the Econ job market” thread this year. Social media has splintered too much, and now that I’ve left academia I’m focused on other priorities.

14.10.2025 14:02 — 👍 55    🔁 13    💬 1    📌 1
Joel Mokyr at the 2011 conference in his honour at Northwestern.

Elated at Joel Mokyr's Nobel Prize! You can find numerous accounts, now multiplying by the minute, of his scholarly contributions. Today I want to celebrate the man and the mentor.

13.10.2025 18:00 — 👍 41    🔁 8    💬 2    📌 0

I don't think people have updated enough on the capability gain in LLMs, which (despite being bad at math a year ago) now dominate hard STEM contests: gold medals in the International Math Olympiad, the International Olympiad on Astronomy & Astrophysics, the International Olympiad in Informatics...

12.10.2025 20:40 — 👍 129    🔁 19    💬 8    📌 3

These results are somewhat at odds with the mistakes GPT and Gemini keep making when working on my proofs. I have the $20 subscription though; could that be the reason?

12.10.2025 21:00 — 👍 8    🔁 0    💬 2    📌 0
Sora hit 1M downloads faster than ChatGPT | TechCrunch. This level of consumer adoption is worth noting because Sora remains an invite-only app, while ChatGPT was more publicly available at launch. That makes Sora's performance more impressive.

Sora hit 1M downloads faster than ChatGPT
#MLSky
techcrunch.com/2025/10/09/s...

10.10.2025 14:30 — 👍 3    🔁 1    💬 0    📌 0

How over- and underrepresented are different causes of death in the media?

Another way to visualize this data is to measure how over- or underrepresented each cause is.

To do this, we calculate the ratio between a cause’s share of deaths and its share of news articles.

09.10.2025 17:07 — 👍 261    🔁 99    💬 7    📌 18
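(A concrete toy example of the ratio described above; the numbers are invented and are not taken from the post's charts.)

```python
# Toy numbers for illustration only; not the actual data behind the charts.
deaths_share   = {"heart disease": 0.30, "cancer": 0.28, "terrorism": 0.0001}
articles_share = {"heart disease": 0.03, "cancer": 0.14, "terrorism": 0.33}

for cause in deaths_share:
    # Ratio of the cause's share of deaths to its share of news articles,
    # as described in the post: values above 1 mean the cause is
    # underrepresented in the news, values below 1 mean it is overrepresented.
    ratio = deaths_share[cause] / articles_share[cause]
    print(f"{cause}: {ratio:.4f}")
```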

The other day a student asked me about the prevalence of insider trading in prediction markets. I now have an answer.

10.10.2025 11:19 — 👍 645    🔁 165    💬 9    📌 17

Probably not. Any suggestions?

03.10.2025 22:11 — 👍 3    🔁 0    💬 1    📌 0

The best post I’ve seen on Bluesky in a very long time! Brilliant idea and brilliant accounts out there!

02.10.2025 10:31 — 👍 13    🔁 2    💬 0    📌 0

Back in graduate school, Paul Milgrom asked me to examine a published paper from 1984 by another author that he suspected had an incorrect proof. I found the error. I decided to see if LLMs could too. Only Gemini 2.5 Pro did. Claude Opus and GPT-5-pro found no significant errors.

30.09.2025 18:58 — 👍 12    🔁 1    💬 1    📌 0

Income Effect: Analysts become more productive -> hire more.

Substitution Effect: Fewer analysts are needed per project -> hire fewer.

Both effects exist; it’s TBD which dominates.

If a job is fully automated (AI can do all tasks), employment should def. fall (think Waymo replacing Uber drivers).

21.09.2025 16:08 — 👍 0    🔁 0    💬 1    📌 0
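(A stylized back-of-the-envelope version of the income-vs-substitution comparison above; the numbers are invented, not calibrated to any data.)

```python
# Stylized arithmetic with invented numbers, just to make the comparison concrete.
hours_per_report_before = 10
hours_per_report_after = 5      # assume AI automates half the analyst tasks
reports_before = 100            # reports demanded before the cost drop

def analyst_hours(reports, hours_per_report):
    """Total analyst employment, measured in hours."""
    return reports * hours_per_report

# Case 1: demand for reports barely expands when reports get cheaper.
reports_after_inelastic = 120
# Case 2: demand expands a lot (the scale/"income" effect dominates).
reports_after_elastic = 250

print(analyst_hours(reports_before, hours_per_report_before))          # 1000
print(analyst_hours(reports_after_inelastic, hours_per_report_after))  # 600  -> employment falls
print(analyst_hours(reports_after_elastic, hours_per_report_after))    # 1250 -> employment rises
```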

I think it does help! AI today mainly augments labor: it substitutes for some of the tasks that analysts do, but not all. Analysts are more productive now. Does their employment rise? Depends on Income vs. Substitution effects:

21.09.2025 16:05 — 👍 0    🔁 0    💬 1    📌 0
Economic Growth under Transformative AI. Founded in 1920, the NBER is a private, non-profit, non-partisan organization dedicated to conducting economic research and to disseminating research findings among academics, public policy makers,…

Want to dig deeper? This thread is a short summary of a paper by Phil Trammell and Anton Korinek (www.nber.org/papers/w31815). I recently had the pleasure of taking Phil’s course on Transformative AI at Stanford DEL!

19.09.2025 09:35 — 👍 3    🔁 0    💬 0    📌 0

• Unlocking robots: AI-led breakthroughs might also unlock humanoid robots, bringing explosive growth via the substitution channel described in 1)

19.09.2025 09:35 — 👍 1    🔁 0    💬 1    📌 0

Why yes:
• Returns to scale: Picture one AI containing the knowledge of thousands of scientists. Unlike human teams, the AI wouldn’t face coordination costs, could parallelize research effortlessly, and tap into knowledge from multiple fields instantly, thus accelerating discovery.

19.09.2025 09:35 — 👍 3    🔁 0    💬 1    📌 0

• Limited parallelizability: Some breakthroughs depend on earlier ones: you can’t invent a car without inventing the wheel first. Research may not scale with AI.

• Physical constraints: Science needs hardware and experiments, which AI might not be able to substitute. Research might not be fully automatable.

19.09.2025 09:35 — 👍 1    🔁 0    💬 1    📌 0

2) Research Automation
You've probably heard of this one: AI invents new technologies, improves itself and drives explosive growth. Could it work? Maybe...

Why not:
• Harder ideas: It could get harder and harder to discover new ideas. Even with AGI, the rate of discovery might go down.

19.09.2025 09:35 — 👍 2    🔁 0    💬 1    📌 0
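(One way to make the "harder ideas" point precise is the standard Jones-style idea production function used in this literature; a sketch under that assumption, not necessarily the exact specification in the Trammell-Korinek paper.)

```latex
% Jones-style idea production function (sketch; a standard formulation in this
% literature, not necessarily the paper's exact specification):
%   A_t = stock of ideas/technology, S_t = effective research input (human or AI).
\dot{A}_t = \alpha \, A_t^{\phi} \, S_t^{\lambda}, \qquad \phi < 1
% With \phi < 1, new ideas get harder to find as A_t grows, so even a large
% AI-driven jump in S_t raises research output without, by itself, guaranteeing
% permanently explosive growth in A_t.
```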

But imagine AI that turns capital into a substitute for labor (think robots doing most jobs). Capital could expand without human bottlenecks, creating room for accelerated growth. How much? Depends on whether capital productivity grows too. If so, growth could take off in the long run, though 3000% is a stretch.

19.09.2025 09:35 — 👍 3    🔁 0    💬 2    📌 0
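(A common way to formalize "capital as a substitute for labor" is a CES production function with a high elasticity of substitution; again a sketch under standard assumptions, not the paper's exact model.)

```latex
% CES production with capital as a close substitute for labor (sketch under
% standard assumptions, not the paper's exact model):
Y_t = \Big[\alpha \,(A_t K_t)^{\rho} + (1-\alpha)\, L_t^{\rho}\Big]^{1/\rho},
\qquad \sigma = \frac{1}{1-\rho} > 1 \ \ (\rho > 0)
% With \sigma > 1, accumulable capital can increasingly stand in for fixed labor:
% for large K_t, Y_t \approx \alpha^{1/\rho} A_t K_t, an "AK"-style technology in
% which growth is no longer bottlenecked by L_t. Whether growth actually
% accelerates also depends on whether capital productivity A_t keeps rising.
```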
