Stopping the Clock on catastrophic AI risk
AI is already sufficiently robust that it introduces new global risks and exacerbates existing threats. Its development is being driven by some of the most powerful companies on Earth, and the technol...
An honour to provide a paper for the Bulletin's 80th anniversary, on the challenges AI poses for catastrophic risk in the coming decade. An exercise in navigating uncertainty and the Collingridge dilemma, with the stakes likely rising with each passing year.
thebulletin.org/premium/2025...
10.12.2025 16:43
Is China really racing for AGI? with Seán Ó hÉigeartaigh
The Rhetoric and Reality of the AI Race
I really enjoyed chatting to Philip Bell about 'AGI race' claims and the AI race narrative overall. Link to podcast below
techfuturesproj.substack.com/p/is-china-r...
10.12.2025 09:44
MARS 4.0 - Cambridge AI Safety Hub
Had a blast speaking to the MARS fellows last week about prospects for international cooperation on AI safety, great power competition in AI, and evidence supporting 'AI race' claims. Great discussion - the calibre of these fellows is remarkable. They'll do important things.
www.cambridgeaisafety.org/mars
09.12.2025 10:30
Right - so as the relatively rare Irishman who is also a Russia/FSU geek I guess I have to have Thoughts and Opinions after yesterday's revelations. First off - I don't think that this was an attempted assassination or other "kinetic" mission. The goal here seems to have been purely psychological /1
05.12.2025 07:46
It's a rainy Saturday, so I thought I'd bump our 'AI for sustainable development' report from the WIC programme, published earlier this year. I thought it was a very good report, and folks might find it useful.
www.wicinternet.org/pdf/Governin...
22.11.2025 15:28
AI progress has looked smoothly exponential over the past several years, and I continue to think this is the best default assumption (until the AI R&D automation feedback loop eventually speeds everything up)
21.11.2025 09:19
It's a bold strategy, Cotton - let's see if it pays off for them. 4/4
19.11.2025 11:10
undermined. That's stuff that is salient to a LOT of people. Now the American people get to see - loudly and clearly - that this same part of the industry is directly trying to interfere in their democracy; trying to kill off the chances of the politicians that hear them. 3/4
19.11.2025 11:10
They don't dislike it because of 'EA billionaires'. They dislike it because of Meta's chatbots behaving 'romantically' towards their children, gambling and bot farms funded by a16z, suicides in which ChatGPT played an apparent role, and concerns their jobs will be affected and their creative rights 2/4
19.11.2025 11:10
New York could be on the verge of a milestone AI safety bill
We spoke with the billβs author, Alex Bores, about where it stands and his AI-focused run in a crowded congressional race.
This will be a fascinating test case. The AI industry (a16z, OpenAI & others) is running the crypto Fairshake playbook. But that worked because crypto was low-salience; most people didn't care. People care about AI. 1/4
www.techbrew.com/stories/2025...
19.11.2025 11:10
The report takes seriously the possibility and risks of AGI and superintelligence - but more than that, it takes seriously the steps needed to address them in a global, collaborative manner that acknowledges commercial and geopolitical pressures. I'd love to see it read and shared. 4/4
11.11.2025 14:04
- Liability and verification mechanisms, compliance review, tiered dialogue mechanisms, and cross-border dispute resolution procedures.
Huge gratitude to the excellent writing team, and the panel of leading Chinese and international experts, who put so much time into this. 3/4
11.11.2025 14:04
aimed at minimising frontier AI risk. Including:
- Collaborative technology tracking and early warning systems
- Internationally shared safety evaluation tools & platforms
- Dynamic governance rules to avoid ossification
- Emergency intervention mechanisms for catastrophic risks 2/4
11.11.2025 14:04
Extremely excited to launch this report: the second from the World Internet Conference's International AI Governance Programme, which I co-chair with Yi Zeng. It goes further than any similar report I've seen in recommending robust governance interventions 1/4
www.wicinternet.org/pdf/Advancin...
11.11.2025 14:04
are more sceptical of superintelligence than under the previous administration, and the 'AI as a normal technology' scientific worldview is more in the ascendancy there - and along with it, an 'export the American tech stack' strategy more suited to that scientific view.
04.11.2025 12:43
mea culpa/correction: this was written over the summer, when it still appeared to me that the 'race to superintelligence' held sway in Washington DC as well as Silicon Valley. It's become clearer to me since that many (perhaps indeed a critical mass of) leading policy voices in/advising USG
04.11.2025 12:43
Goldene Zukunft mit roten Linien (Golden Future with Red Lines)
The prospects of superintelligence are not only positive. But there is no broad debate about the dangers and grim forecasts.
Pleased to have a new paper in Internationale Politik, in which I call for
(a) more of a global conversation around AGI and superintelligence.
(b) 'middle powers' to start exerting themselves in the global conversation.
internationalepolitik.de/de/goldene-z...
04.11.2025 12:43
YouTube video by Looking For Growth
Matt Clifford: LFG Make or Break full speech
Thoroughly enjoyed this speech by the excellent Matt Clifford. Much of it is also relevant to Ireland, where it pairs well with John Collison's Irish Times article from the previous week, which overlaps with it in diagnosis and recommendations (link in comment).
www.youtube.com/watch?v=McMt...
03.11.2025 12:44
I'll write something up when I'm back
24.10.2025 16:26
So much happening in Beijing these days! Really exciting to see the progress that's happening in these conversations.
24.10.2025 12:54
Excited to head to Beijing tomorrow for an action-packed week. Launching an international cooperation network with my friend and colleague Yi Zeng; participating in a frontier AI Safety and Governance forum; and presenting on my cooperation/competition work at a Tsinghua Roundtable.
24.10.2025 12:54
Introducing: the Global Volcano Risk Alliance charity & Linkpost: 'When sleeping volcanoes wake' (AEON) - EA Forum
We want to highlight two things in this post: …
Because there's more happening (and more in need of philanthropic support!) than AI risk, check out this great writeup by Mike Cassidy and Lara Mani on their volcanic risk work and charity - innovative and neglected.
forum.effectivealtruism.org/posts/jAcCjF...
21.10.2025 10:56
He made us all suffer the national indignity of That Elon Musk Bletchley interview, and didn't even manage to wrangle a job out of it? I'm never going to look at Dishy Rishi the same way again.
09.10.2025 17:41
YouTube video by London Futurists
Options for the future of the global governance of AI
I really enjoyed being part of this excellent discussion yesterday on the future of global AI governance, with Kayla Blomquist, Dan Fagella, Duncan Cass-Beggs, Nora Ammann and Robert Whitfield, skilfully chaired by David Wood. Catch up with it here:
www.youtube.com/watch?v=EADx...
05.10.2025 12:52
Important report. Given how long and competitive the path to tenure is these days, too often academia basically asks women to choose between an academic career and motherhood, which is unacceptable. (The existence of an ultra-capable subset who manage both doesn't make it reasonable.) This needs to change.
04.10.2025 09:56
Well this was an interesting way to discover that my (considerate, charming) wife sometimes reads my social media posts.
03.10.2025 13:57
As of today I am officially a Research Professor! I will be celebrating by having "Professor Seán: Doomometrics Studies" socks made, and submitting a paper tonight with some excellent coauthors that I hope will be a banger (in appropriate Research Professor fashion).
01.10.2025 12:40
PhD researcher in the Philosophy & Ethics Group at Eindhoven University of Technology (TU/e), working at the interface of Philosophy, Comparative Cognition, and AI.
www.dmoralesp.com
CEO at Machine Intelligence Research Institute (MIRI, @intelligence.org)
Trying to understand and reduce the risk of global catastrophes, at @cser.bsky.social.
Mastodon: https://mastodon.social/@constantin_arnscheidt
Website: arnscheidt.github.io
The Global Priorities Institute was an academic research institute at Oxford University that aimed to conduct foundational research informing the decision-making of individuals and institutions seeking to do as much good as possible.
Director of AI & Geopolitics Project at University of Cambridge | Founder of Formation Advisory | TIME100 AI | Former Global Head of Policy for DeepMind | Author of "AI Needs You: how we can change AI's future and save our own" https://tinyurl.com/y4v26spa
Leiden University | Studying how institutions shape AI adoption, or vice versa
Writing on AI, digital governance, political communication & policy innovation
Bridging academia & practice - let's connect!
Econ PhD student at UZH. Interested in welfare econ, including both theory (e.g., social choice) and applications to important issues such as growth, inequality, and catastrophic risks.
My website: http://sites.google.com/view/gustav-alexandrie/
The University of Cambridge Development and Alumni Relations office. Supporting excellence in education and research through philanthropy and alumni engagement.
www.philanthropy.cam.ac.uk | www.alumni.cam.ac.uk
Retired UNESCO Dir for Digital Inclusion, Policies & Transformation. Chair, UN University, eGov Institute.
UNESCO Women in STEM Committee
Some pottery and cyanotyping
Profile picture is of my face and torso
Banner is a picture I took of a light garden
Canadian. Forever. Let's show SOLIDARITY with Americans fighting the good fight!
*I block all accounts spouting Russo-American disinformation*
Communications and community. More here: http://synthesis.williamgunn.org/about/
Talk to me: https://calendar.app.google/z4KR3xfXTc178es47
AI Research Engineer working on AI Safety and Alignment | formerly OpenAI, Waymo, DeepMind, Google. Father, photographer, Zen practitioner.
Lecturer @kcl-spe.bsky.social @kingscollegelondon.bsky.social
Game Theory, Econ & CS, Pol-Econ, Sport
Chess
Game Theory Corner at Norway Chess
Studied in Istanbul -> Paris -> Bielefeld -> Maastricht
https://linktr.ee/drmehmetismail
Views are my own
Head of Research @ Utah AI Policy Office // math PhD // networks, complex systems, machine learning, and all things AI // mom & cat lady
Researcher affiliated w @BKCHarvard. Previously @openai @ainowinstitute @nycedc. Views are yours, of my posts. #justdontbuildagi
Research nonprofit exploring how to navigate explosive AI progress. forethought.org
I work on AI safety and AI in cybersecurity
A.I. Correspondent at @Puck.news
I write a lot about AI
Signal # 732-804-1223
ian@puck.news
https://puck.news/newsletters/the-hidden-layer/