
Sean O hEigeartaigh

@sean-o-h.bsky.social

Academic, AI nerd and science nerd more broadly. Currently obsessed with Stravinsky (not sure how that happened).

3,728 Followers  |  274 Following  |  232 Posts  |  Joined: 07.09.2024

Latest posts by sean-o-h.bsky.social on Bluesky

Based on 2021/22 Census data, 16% of the overall UK population was born overseas, so 12.5% of prisoners being born overseas means foreign-born people are under-represented in the prison population. But hatred of foreigners sells more newspapers, unfortunately.

01.08.2025 13:42 — 👍 1    🔁 0    💬 0    📌 0

Based on 2021/22 Census data, 16% of the overall UK population was born overseas, so 12.5% of prisoners being born overseas means foreign-born people are under-represented in the prison population. But hatred of foreigners sells, unfortunately.

01.08.2025 13:40 — 👍 3    🔁 1    💬 0    📌 0
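The under-representation claim in the post above is simple arithmetic. A minimal sketch, using only the two figures quoted in the post (2021/22 Census: 16% of the UK population born overseas; 12.5% of prisoners born overseas):

```python
# Figures quoted in the post above (shares expressed as fractions).
population_share = 0.16   # share of the overall UK population born overseas
prisoner_share = 0.125    # share of prisoners born overseas

# Relative representation ratio: a value below 1.0 means foreign-born
# people appear less often among prisoners than in the general population.
ratio = prisoner_share / population_share
print(f"representation ratio: {ratio:.3f}")  # 0.125 / 0.16 = 0.781
```

A ratio of about 0.78 means foreign-born people are roughly 22% less common in the prison population than their share of the general population would predict, which is the post's point.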
Obituary: Professor Margaret Boden - LCFI The Leverhulme Centre for the Future of Intelligence (CFI) community is mourning the loss of Professor Margaret Boden, OBE ScD FBA, who recently passed away. Professor Boden was Research Professor of ...

Sad to hear of the passing of the great Margaret Boden. She gave a few wonderful lectures in the early days of our two centres in Cambridge. Towering intellectual contributions in cognitive science, AI, philosophy and psychology, plus a wry sense of humour. www.lcfi.ac.uk/news-events/...

31.07.2025 10:47 — 👍 3    🔁 0    💬 0    📌 0
Frontier AI Risk Management Framework - Concordia AI Shanghai AI Laboratory and Concordia AI are proud to introduce the Frontier AI Risk Management Framework v1.0 (the β€œFramework”). We propose a robust set of protocols designed to empower general-purpos...

Lots of highlights from Shanghai; one was the release of Shanghai AI Lab's Frontier AI Safety framework, co-developed with Concordia AI. It will have significant influence in the Shanghai++ ecosystem.
concordia-ai.com/research/fro...

30.07.2025 08:55 — 👍 1    🔁 0    💬 0    📌 0

Exciting week ahead with the World AI Conference in Shanghai and a series of AI safety & governance workshops alongside. I'll be offline - be good and don't make AI recursively self-improve, etc.

21.07.2025 07:45 — 👍 2    🔁 0    💬 0    📌 0

I was delighted to be an advisor to this excellent work by Liam Epstein - exploring the idea of sovereign wealth funds for managing the transition to transformative AI.
www.convergenceanalysis.org/fellowships/...

14.07.2025 10:33 — 👍 0    🔁 0    💬 0    📌 0
Developing responsible AI: Vision and practice Amidst accelerating advancements in AI and growing disparities in digital access, the World Internet Conference (WIC) is going to convene a multidisciplinary coalitionβ€”including members of WIC Special...

Off to Geneva for the ITU AI for Good conference. Tomorrow I'll be speaking at a workshop the World Internet Conference is organising with ITU. My talk:
"AI Governance is in Crisis".

Because, well, I think AI governance is in crisis.
aiforgood.itu.int/event/develo...

08.07.2025 11:24 — 👍 2    🔁 0    💬 0    📌 0
How the striking of a GOP regulatory ban will affect the global artificial intelligence race The House just voted to pass the One Big Beautiful Bill without a moratorium that would ban states from enforcing AI regulations for 10 years.

Pleased to have my work quoted in this excellent @thebulletin.org article on the push against AI regulation in the US, the spectre of the race with China, and the false claim, made by tech companies including OpenAI and by US politicians, that China isn't regulating AI.
thebulletin.org/2025/07/how-...

06.07.2025 08:27 — 👍 2    🔁 1    💬 0    📌 0
The Singapore Consensus on Global AI Safety Research Priorities Rapidly improving AI capabilities and autonomy hold significant promise of transformation, but are also driving vigorous debate on how to ensure that AI is safe, i.e., trustworthy, reliable, and secur...

Delighted to have contributed in a small way to the Singapore Consensus - an important step forward at a critical time in establishing shared, international agreement on the safety challenges we can address together. My deepest thanks to everyone who made it happen.
arxiv.org/abs/2506.20702

27.06.2025 11:43 — 👍 4    🔁 3    💬 0    📌 0
Systemic contributions to global catastrophic risk | Global Sustainability | Cambridge Core Systemic contributions to global catastrophic risk - Volume 8

In research on global risk, work on how catastrophes occur and spread in complex interconnected systems (systemic risk, cascading risk, polycrisis) has rarely been connected with work on global catastrophic risk (worst-case outcomes). Our new paper bridges this gap: www.cambridge.org/core/journal...

26.06.2025 11:32 — 👍 2    🔁 3    💬 0    📌 0
In our scramble to win the AI race against China, we risk losing ourselves A fraud is being perpetuated on the American people and our pliant, gullible political leaders.

Good post from US Senator Chris Murphy.
"The leaders of the artificial intelligence industry in the United States... claim that any meaningful regulation of AI in America will allow China to leapfrog the United States. But they are dead wrong." www.chrismurphyct.com/p/in-our-scr...

17.06.2025 14:37 — 👍 1    🔁 0    💬 0    📌 0
Risk Tiers: Towards a Gold Standard for Advanced AI - Oxford Martin AIGI Increasing risks from advanced AI demand effective risk management systems tailored to this rapidly changing technology. One key part of risk management is establishing risk tiers. Risk tiers are cate...

Pleased to have contributed to this research memo on AI risk tiers, led by the excellent Nick Caputo and Oxford Martin AI Governance Initiative:
aigi.ox.ac.uk/publications...

17.06.2025 12:00 — 👍 0    🔁 0    💬 0    📌 0

Celebrating small milestones: 10 years at Cambridge earlier this year. Can't believe they're still letting me do this for a living. For all the many ways academia sucks, nothing beats a small town full of interesting people absolutely, obsessively passionate about what they do.

16.06.2025 20:38 — 👍 3    🔁 0    💬 0    📌 0
How Some of China’s Top AI Thinkers Built Their Own AI Safety Institute The emergence of the China AI Safety and Development Association (CnAISDA) is a pivotal moment for China’s frontier AI governance. How it navigates substantial domestic challenges and growing geopolit...

Nice paper by Scott Singer, Karson Elmgren & Oliver Guest on the evolution of China's AISI.

As Singer notes, this is being led by experts who are influential in Chinese policy, and genuinely care about frontier AI safety and the need for cooperation.
carnegieendowment.org/research/202...

16.06.2025 19:43 — 👍 0    🔁 0    💬 0    📌 0

Field moving so quickly you need to put up preprints: fine, understood. But these then get picked up by media as settled science, even while the scientific community is still doing 'live' peer review and analysing the approach. Critiques and further analysis rarely make the media.

10.06.2025 08:20 — 👍 2    🔁 0    💬 0    📌 0
Advanced AI suffers β€˜complete accuracy collapse’ in face of complex problems, study finds β€˜Pretty devastating’ Apple paper raises doubts about race to reach stage of AI at which it matches human intelligence

Worth knowing that this paper's conclusions have been called into question by IMO well-founded critiques (I've shared some). AFAIK the original paper has not been formally peer-reviewed. Illustrates a problem in the field (and why I emphasised that my recent paper is pre-peer review).
www.theguardian.com/technology/2...

10.06.2025 08:19 — 👍 2    🔁 0    💬 1    📌 0
Opinion | Anthropic C.E.O.: Don’t Let A.I. Companies off the Hook

And again: REALLY pleased to see (excellent) articles like this from Dario Amodei.
www.nytimes.com/2025/06/05/o...

05.06.2025 16:44 — 👍 2    🔁 0    💬 0    📌 0

... is it the weather?

04.06.2025 11:14 — 👍 1    🔁 0    💬 0    📌 0

Pending review process, this will hopefully be out later this year in an exciting Springer Nature book on the future of AI, being led by Lord Fairfax and Max Rangeley. Huge thanks to colleagues who gave feedback on earlier drafts. Thoughts & discussion as always appreciated! 5/5

04.06.2025 10:51 — 👍 1    🔁 0    💬 0    📌 0

My views in this paper are shaped by analysis of a great many documents and speeches, but also many trips to both US and China over the last few years (I was in China 4x last year)! 4/5

04.06.2025 10:51 — 👍 1    🔁 0    💬 1    📌 0

It's not too late, though: there are steps we can take as policymakers, academics and researchers. Above all, I believe it is time to challenge this most dangerous of fictions. 3/5

04.06.2025 10:51 — 👍 1    🔁 0    💬 1    📌 0

The evidence is weak, & the narrative is mostly being promoted in the West, often by actors who stand to benefit directly from it. It strangles prospects for international cooperation when we most need it, & creates some of the most dangerous conditions for pursuit of AGI. 2/5

04.06.2025 10:51 — 👍 1    🔁 0    💬 1    📌 0
The Most Dangerous Fiction: The Rhetoric and Reality of the AI Race Are the US and China locked in a race to artificial general intelligence and global strategic dominance? This chapter traces the emergence of the 'AI race' narr

New working paper (pre-review), maybe my most important in recent years. I examine the evidence for the US-China race to AGI and decisive strategic advantage, & analyse the impact this narrative is having on our prospects for cooperation on safety. 1/5
papers.ssrn.com/sol3/papers....

04.06.2025 10:51 — 👍 2    🔁 0    💬 1    📌 0

But we love you! Can't we persuade you to stay?

04.06.2025 10:49 — 👍 1    🔁 0    💬 1    📌 0

(for avoidance of doubt given this is the Internet after all, this is me gently poking fun at the Giant Boogeyman narratives around EA on both the left and the right, rather than endorsing said narratives)

01.06.2025 21:44 — 👍 0    🔁 0    💬 0    📌 0

The reason for being on both Bluesky and Twitter is that it's very important to know that the effective altruists both engineered the rise of the far right AND are responsible for Woke AI, leftist globalism and the evil Let's Regulate AI Influence Operation.

01.06.2025 21:35 — 👍 3    🔁 0    💬 1    📌 0
Research Associate (Fixed Term) - Job Opportunities - University of Cambridge Research Associate (Fixed Term) in the Cambridge Institute for Technology and Humanity at the University of Cambridge.

And another good role: 1 year postdoc position on AI/bio risk:
www.jobs.cam.ac.uk/job/51468/

29.05.2025 16:19 — 👍 0    🔁 0    💬 0    📌 0
The Istana The Istana is the office of the President of the Republic of Singapore and is used to receive and entertain state guests.

Great speech from Singapore's President on the need for AI regulation.

"We can't leave it to the future to see how much bad actually comes out of the AI race."
www.istana.gov.sg/Newsroom/Spe...

28.05.2025 17:44 — 👍 4    🔁 0    💬 0    📌 0
Behind the Curtain: Top AI CEO foresees white-collar bloodbath Hardly anyone is paying attention.

"Amodei said AI companies and government need to stop 'sugar-coating' what's coming: the possible mass elimination of jobs across technology, finance, law, consulting and other white-collar professions, especially entry-level gigs."
www.axios.com/2025/05/28/a...

28.05.2025 17:22 — 👍 1    🔁 0    💬 0    📌 0
Research Assistant (Part Time, Fixed Term) - Job Opportunities - University of Cambridge Research Assistant (Part Time, Fixed Term) in the Cambridge Institute for Technology and Humanity at the University of Cambridge.

Important topic, and a good opportunity for the right person: research assistant role on AI/nuclear risk
www.jobs.cam.ac.uk/job/51403/

27.05.2025 15:19 — 👍 0    🔁 1    💬 0    📌 0
