Based on 2021/22 Census data, 16% of the overall UK population was born overseas, so 12.5% of prisoners being born overseas means foreign-born people are under-represented in the prison population. But hatred of foreigners sells more newspapers, unfortunately.
01.08.2025 13:42
Based on 2021/22 Census data, 16% of the overall UK population was born overseas, so 12.5% of prisoners being born overseas means foreign-born people are under-represented in the prison population. But hatred of foreigners sells, unfortunately.
01.08.2025 13:40
Obituary: Professor Margaret Boden - LCFI
The Leverhulme Centre for the Future of Intelligence (CFI) community is mourning the loss of Professor Margaret Boden, OBE ScD FBA, who recently passed away. Professor Boden was Research Professor of ...
Sad to hear of the passing of the great Margaret Boden. She gave a few wonderful lectures in the early days of our two centres in Cambridge. Towering intellectual contributions in cognitive science, AI, philosophy and psychology, plus a wry sense of humour. www.lcfi.ac.uk/news-events/...
31.07.2025 10:47
Exciting week ahead with the World AI Conference in Shanghai and a series of AI safety & governance workshops alongside. I'll be offline - be good and don't make AI recursively self-improve, etc.
21.07.2025 07:45
I was delighted to be an advisor to this excellent work by Liam Epstein - exploring the idea of sovereign wealth funds for managing the transition to transformative AI.
www.convergenceanalysis.org/fellowships/...
14.07.2025 10:33
How the striking of a GOP regulatory ban will affect the global artificial intelligence race
The House just voted to pass the One Big Beautiful Bill without a moratorium that would ban states from enforcing AI regulations for 10 years.
Pleased to have my work quoted in this excellent @thebulletin.org article on the push against AI regulation in the US, the spectre of the race with China, and the false claim made by tech companies (including OpenAI) and US politicians that China isn't regulating AI.
thebulletin.org/2025/07/how-...
06.07.2025 08:27
The Singapore Consensus on Global AI Safety Research Priorities
Rapidly improving AI capabilities and autonomy hold significant promise of transformation, but are also driving vigorous debate on how to ensure that AI is safe, i.e., trustworthy, reliable, and secur...
Delighted to have contributed in a small way to the Singapore Consensus - an important step forward at a critical time in establishing shared, international agreement on the safety challenges we can address together. My deepest thanks to everyone who made it happen.
arxiv.org/abs/2506.20702
27.06.2025 11:43
Systemic contributions to global catastrophic risk | Global Sustainability | Cambridge Core
Systemic contributions to global catastrophic risk - Volume 8
In research on global risk, work on how catastrophes occur and spread in complex interconnected systems (systemic risk, cascading risk, polycrisis) has rarely been connected with work on global catastrophic risk (worst-case outcomes). Our new paper bridges this gap: www.cambridge.org/core/journal...
26.06.2025 11:32
In our scramble to win the AI race against China, we risk losing ourselves
A fraud is being perpetrated on the American people and our pliant, gullible political leaders.
Good post from US Senator Chris Murphy.
"The leaders of the artificial intelligence industry in the United States... claim that any meaningful regulation of AI in America will allow China to leapfrog the United States. But they are dead wrong." www.chrismurphyct.com/p/in-our-scr...
17.06.2025 14:37
Celebrating small milestones: 10 years at Cambridge earlier this year. Can't believe they're still letting me do this for a living. For all the many ways academia sucks, nothing beats a small town full of interesting people absolutely, obsessively passionate about what they do.
16.06.2025 20:38
Field moving so quickly you need to put up preprints - fine, understood. But these then get picked up by the media as settled science, even as the scientific community is still doing 'live' peer review and analysing the approach. Critiques and further analysis rarely make it into the media.
10.06.2025 08:20
Advanced AI suffers 'complete accuracy collapse' in face of complex problems, study finds
'Pretty devastating' Apple paper raises doubts about race to reach stage of AI at which it matches human intelligence
Worth knowing this paper's conclusions have been brought into question by IMO well-founded critiques (I've shared some). AFAIK the original paper has not been formally peer-reviewed. Illustrates a problem in the field (why I emphasised that my recent paper is pre-peer review)
www.theguardian.com/technology/2...
10.06.2025 08:19
Opinion | Anthropic C.E.O.: Don't Let A.I. Companies off the Hook
And again: REALLY pleased to see (excellent) articles like this from Dario Amodei.
www.nytimes.com/2025/06/05/o...
05.06.2025 16:44
... is it the weather?
04.06.2025 11:14
Pending the review process, this will hopefully be out later this year in an exciting Springer Nature book on the future of AI, led by Lord Fairfax and Max Rangeley. Huge thanks to colleagues who gave feedback on earlier drafts. Thoughts & discussion as always appreciated! 5/5
04.06.2025 10:51
My views in this paper are shaped by analysis of a great many documents and speeches, but also many trips to both US and China over the last few years (I was in China 4x last year)! 4/5
04.06.2025 10:51
It's not too late though: there are steps we can take as policymakers, academics and researchers. Above all, I believe it is time to challenge this most dangerous of fictions. 3/5
04.06.2025 10:51
The evidence is weak, & the narrative is mostly being promoted in the West, often by actors who stand to benefit directly from it. It strangles prospects for international cooperation when we most need it, & creates some of the most dangerous conditions for the pursuit of AGI. 2/5
04.06.2025 10:51
The Most Dangerous Fiction: The Rhetoric and Reality of the AI Race
Are the US and China locked in a race to artificial general intelligence and global strategic dominance? This chapter traces the emergence of the 'AI race' narr
New working paper (pre-review), maybe my most important in recent years. I examine the evidence for the US-China race to AGI and decisive strategic advantage, & analyse the impact this narrative is having on our prospects for cooperation on safety. 1/5
papers.ssrn.com/sol3/papers....
04.06.2025 10:51
But we love you! Can't we persuade you to stay?
04.06.2025 10:49
(for avoidance of doubt given this is the Internet after all, this is me gently poking fun at the Giant Boogeyman narratives around EA on both the left and the right, rather than endorsing said narratives)
01.06.2025 21:44
The reason for being on both Bluesky and Twitter is that it's very important to know that the effective altruists both engineered the rise of the far right AND are responsible for Woke AI, leftist globalism and the evil Let's Regulate AI Influence Operation.
01.06.2025 21:35
The Istana
The Istana is the office of the President of the Republic of Singapore and is used to receive and entertain state guests.
Great speech from Singapore's President on the need for AI regulation.
"We can't leave it to the future to see how much bad actually comes out of the AI race."
www.istana.gov.sg/Newsroom/Spe...
28.05.2025 17:44
Behind the Curtain: Top AI CEO foresees white-collar bloodbath
Hardly anyone is paying attention.
"Amodei said AI companies and government need to stop "sugar-coating" what's coming: the possible mass elimination of jobs across technology, finance, law, consulting and other white-collar professions, especially entry-level gigs."
www.axios.com/2025/05/28/a...
28.05.2025 17:22
CEO at Machine Intelligence Research Institute (MIRI, @intelligence.org)
Trying to understand and reduce the risk of global catastrophes, at @cser.bsky.social.
Mastodon: https://mastodon.social/@constantin_arnscheidt
Website: arnscheidt.github.io
The Global Priorities Institute was an academic research institute at Oxford University with the aim of conducting foundational research that informs the decision-making of individuals and institutions seeking to do as much good as possible.
Director of AI & Geopolitics Project at University of Cambridge | Founder of Formation Advisory | TIME100 AI | Former Global Head of Policy for DeepMind | Author of βAI Needs You: how we can change AIβs future and save our ownβ https://tinyurl.com/y4v26spa
Leiden University | Studying how institutions shape AI adoption, or vice versa
Writing on AI, digital governance, political communication & policy innovation
Bridging academia & practice – let's connect!
Econ PhD student at UZH. Interested in welfare econ, including both theory (e.g., social choice) and applications to important issues such as growth, inequality, and catastrophic risks.
My website: http://sites.google.com/view/gustav-alexandrie/
The University of Cambridge Development and Alumni Relations office. Supporting excellence in education and research through philanthropy and alumni engagement.
www.philanthropy.cam.ac.uk | www.alumni.cam.ac.uk
Retired UNESCO Dir for Digital Inclusion, Policies & Transformation. Chair, UN University, eGov Institute.
UNESCO Women in STEM Committee
Some pottery and cyanotyping
Profile picture is of my face and torso
Banner is a picture I took of a light garden
Canadian. Forever. Let's show SOLIDARITY with Americans fighting the good fight!
*I block all accounts spouting Russo-American disinformation*
Communications and community. More here: http://synthesis.williamgunn.org/about/
Talk to me: https://calendar.app.google/z4KR3xfXTc178es47
AI Research Engineer working on AI Safety and Alignment | formerly OpenAI, Waymo, DeepMind, Google. Father, photographer, Zen practitioner.
Lecturer @kcl-spe.bsky.social @kingscollegelondon.bsky.social
Game Theory, Econ & CS, Pol-Econ, Sport
Chess
Game Theory Corner at Norway Chess
Studied in Istanbul -> Paris -> Bielefeld -> Maastricht
https://linktr.ee/drmehmetismail
Views are my own
Head of Research @ Utah AI Policy Office // math PhD // networks, complex systems, machine learning, and all things AI // mom & cat lady
Research Fellow @BKCHarvard. Previously @openai @ainowinstitute @nycedc. Views are yours, of my posts. #isagiwhatwewant
Research nonprofit exploring how to navigate explosive AI progress. forethought.org
I work on AI safety and AI in cybersecurity
A.I. Correspondent at @Puck.news
I write a lot about AI
Signal # 732-804-1223
ian@thedeepview.ai
https://puck.news/newsletters/the-hidden-layer/
This is Jai. Almost no one is evil; almost everyone is broken.
Trying to figure out what I want to be when I grow up for 38 years.