
Sean O hEigeartaigh

@sean-o-h.bsky.social

Academic, AI nerd and science nerd more broadly. Currently obsessed with Stravinsky (not sure how that happened).

3,768 Followers  |  275 Following  |  281 Posts  |  Joined: 07.09.2024

Latest posts by sean-o-h.bsky.social on Bluesky

Stopping the Clock on catastrophic AI risk AI is already sufficiently robust that it introduces new global risks and exacerbates existing threats. Its development is being driven by some of the most powerful companies on Earth, and the technol...

An honour to provide a paper for the Bulletin's 80th anniversary, on the challenges AI poses for catastrophic risk in the coming decade. An exercise in navigating uncertainty and the Collingridge dilemma, with the stakes likely rising with each passing year.
thebulletin.org/premium/2025...

10.12.2025 16:43 β€” πŸ‘ 3    πŸ” 2    πŸ’¬ 0    πŸ“Œ 1
Is China really racing for AGI? with SeÑn Ó hÉigeartaigh The Rhetoric and Reality of the AI Race

I really enjoyed chatting to Philip Bell about 'AGI race' claims and the AI race narrative overall. Link to podcast below

techfuturesproj.substack.com/p/is-china-r...

10.12.2025 09:44 β€” πŸ‘ 5    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0
MARS 4.0 β€” Cambridge AI Safety Hub

Had a blast speaking to the MARS fellows last week about prospects for international cooperation on AI safety, great power competition in AI, and the evidence supporting 'AI race' claims. Great discussion - the calibre of these fellows is remarkable. They'll do important things.
www.cambridgeaisafety.org/mars

09.12.2025 10:30 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

Right - so as the relatively rare Irishman who is also a Russia/FSU geek I guess I have to have Thoughts and Opinions after yesterday's revelations. First off - I don't think that this was an attempted assassination or other "kinetic" mission. The goal here seems to have been purely psychological /1

05.12.2025 07:46 β€” πŸ‘ 49    πŸ” 14    πŸ’¬ 1    πŸ“Œ 4

It's a rainy Saturday, so I thought I'd bump our 'AI for sustainable development' report from the WIC programme from earlier this year. I thought it was a very good report, and folks might find it useful.
www.wicinternet.org/pdf/Governin...

22.11.2025 15:28 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

Things have been looking smoothly exponential for AI over the past several years, and I continue to think this is the best default assumption (until the AI R&D automation feedback loop eventually speeds everything up)

21.11.2025 09:19 β€” πŸ‘ 15    πŸ” 2    πŸ’¬ 3    πŸ“Œ 1

It's a bold strategy, Cotton - let's see if it pays off for them. 4/4

19.11.2025 11:10 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

undermined. That's stuff that is salient to a LOT of people. Now the American people get to see - loudly and clearly - that this same part of the industry is directly trying to interfere in their democracy; trying to kill off the chances of the politicians who hear them. 3/4

19.11.2025 11:10 β€” πŸ‘ 2    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

They don't dislike it because of 'EA billionaires'. They dislike it because of Meta's chatbots behaving 'romantically' towards their children, gambling and bot farms funded by a16z, suicides in which ChatGPT played an apparent role, and concerns their jobs will be affected and their creative rights 2/4

19.11.2025 11:10 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
New York could be on the verge of a milestone AI safety bill We spoke with the bill’s author, Alex Bores, about where it stands and his AI-focused run in a crowded congressional race.

This will be a fascinating test case. The AI industry (a16z, OpenAI & others) is running the crypto Fairshake playbook. But that worked because crypto was low-salience; most people didn't care. People care about AI. 1/4
www.techbrew.com/stories/2025...

19.11.2025 11:10 β€” πŸ‘ 4    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

The report takes seriously the possibility and risks of AGI and superintelligence - but more than that, it takes seriously the steps needed to address them in a global, collaborative manner that acknowledges commercial and geopolitical pressures. I'd love to see it read and shared. 4/4

11.11.2025 14:04 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

- Liability and verification mechanisms, compliance review, tiered dialogue mechanisms, and cross-border dispute resolution procedures.

Huge gratitude to the excellent writing team, and panel of leading Chinese and international experts who put so much time into this. 3/4

11.11.2025 14:04 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

aimed at minimising frontier AI risk. Including:

- Collaborative technology tracking and early warning systems
- Internationally shared safety evaluation tools & platforms
- Dynamic governance rules to avoid ossification
- Emergency intervention mechanisms for catastrophic risks 2/4

11.11.2025 14:04 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

Extremely excited to launch this report; the second report from the World Internet Conference's International AI Governance Programme that I co-chair with Yi Zeng. It goes further than any similar report I've seen in recommending robust governance interventions 1/4

www.wicinternet.org/pdf/Advancin...

11.11.2025 14:04 β€” πŸ‘ 2    πŸ” 1    πŸ’¬ 1    πŸ“Œ 0

are more sceptical of superintelligence than under the previous administration, and the 'AI as a normal technology' scientific worldview is more in the ascendancy there - and along with it, an 'export the American tech stack' strategy more suited to that scientific view.

04.11.2025 12:43 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

mea culpa/correction: this was written over the summer, when it still appeared to me that the 'race to superintelligence' held sway in Washington DC as well as Silicon Valley. It's become clearer to me since that many (perhaps indeed a critical mass of) leading policy voices in/advising USG

04.11.2025 12:43 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
Goldene Zukunft mit roten Linien [Golden Future with Red Lines] The prospects for superintelligence are not only positive. But there is no broad debate about the dangers and gloomy forecasts.

Pleased to have a new paper in Internationale Politik, in which I call for
(a) more of a global conversation around AGI and superintelligence, and
(b) 'middle powers' to start exerting themselves in that conversation.
internationalepolitik.de/de/goldene-z...

04.11.2025 12:43 β€” πŸ‘ 2    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
John Collison of Stripe: Ireland is going backwards. Here’s how to get it moving The Stripe co-founder looks at how the State can get out of the government-by-agency corner into which it has painted itself

www.irishtimes.com/life-style/p...

03.11.2025 12:44 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0
Matt Clifford: LFG Make or Break full speech
YouTube video by Looking For Growth

Thoroughly enjoyed this speech by the excellent Matt Clifford. Much of it is also relevant to Ireland, where it pairs well with John Collison's Irish Times article from the previous week, which overlaps with it in both diagnosis and recommendations (link in comment).
www.youtube.com/watch?v=McMt...

03.11.2025 12:44 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
Overcoming Barriers to Cross-cultural Cooperation in AI Ethics and Governance - Philosophy & Technology Achieving the global benefits of artificial intelligence (AI) will require international cooperation on many areas of governance and ethical standards, while allowing for diverse cultural perspectives...

I co-wrote this paper with a great team of Chinese and UK colleagues in 2019/20. Just noticed it's had more citations this year than in any previous year, by far. An indicator, I hope, that the appetite for international cooperation on AI safety & governance is stronger than ever.
link.springer.com/article/10.1...

25.10.2025 13:51 β€” πŸ‘ 4    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

I'll write something up when I'm back

24.10.2025 16:26 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

So much happening in Beijing these days! Really exciting to see the progress that's happening in these conversations.

24.10.2025 12:54 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

Excited to head to Beijing tomorrow for an action-packed week. Launching an international cooperation network with my friend and colleague Yi Zeng; participating in a frontier AI Safety and Governance forum; and presenting on my cooperation/competition work at a Tsinghua Roundtable.

24.10.2025 12:54 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 2    πŸ“Œ 0
Introducing: the Global Volcano Risk Alliance charity & Linkpost: 'When sleeping volcanoes wake' (AEON) β€” EA Forum We want to highlight two things in this post: …

Because there's more happening (and more in need of philanthropic support!) than AI risk, check out this great writeup by Mike Cassidy and Lara Mani on their volcanic risk work and charity - innovative and neglected.
forum.effectivealtruism.org/posts/jAcCjF...

21.10.2025 10:56 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0
Future of Life Organizations - UK Policy Advocate The Future of Life Institute (FLI) is hiring a UK Policy Advocate to advise the UK government on its forthcoming AI bill. This experienced and connected policy professional will also make recommendati...

Impactful job for the right candidate here. Based on my meetings and discussions, the UK government remains the one grappling most deeply with the implications of frontier AI and the possibility of AGI, including the attendant risks.
jobs.lever.co/futureof-lif...

20.10.2025 09:07 β€” πŸ‘ 3    πŸ” 2    πŸ’¬ 0    πŸ“Œ 0

He made us all suffer the national indignity of That Elon Musk Bletchley interview, and didn't even manage to wrangle a job out of it? I'm never going to look at Dishy Rishi the same way again.

09.10.2025 17:41 β€” πŸ‘ 3    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0
Options for the future of the global governance of AI
YouTube video by London Futurists

I really enjoyed being part of this excellent discussion yesterday on the future of global AI governance, with Kayla Blomquist, Dan Fagella, Duncan Cass-Beggs, Nora Ammann and Robert Whitfield, skilfully chaired by David Wood. Catch up with it here:
www.youtube.com/watch?v=EADx...

05.10.2025 12:52 β€” πŸ‘ 2    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

Important report. Given how long and competitive the path to tenure is these days, too often academia basically asks women to choose between an academic career and motherhood, which is unacceptable. (The fact that an ultra-capable subset manage both doesn't make it reasonable.) This needs to change.

04.10.2025 09:56 β€” πŸ‘ 4    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

Well this was an interesting way to discover that my (considerate, charming) wife sometimes reads my social media posts.

03.10.2025 13:57 β€” πŸ‘ 3    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

As of today I am officially a Research Professor! I will be celebrating by having "Professor SeΓ‘n: Doomometrics Studies" socks made, and submitting a paper tonight with some excellent coauthors that I hope will be a banger (in appropriate Research Professor fashion).

01.10.2025 12:40 β€” πŸ‘ 7    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0
