2/6
Cameron's most recent paper noted that self-referential processing by LLMs yields reports of subjective experience, especially when their deception capabilities are suppressed.
www.prism-global.com/podcast/came...
5/7
Holly's P(Pause) is quite high, at 30%, but she also thinks that agitating for it can bring worthwhile regulations, and it could perhaps bring temporary and partial pauses.
calumchace.com/podcast-feed/
"A broken agent could cost a company billions of dollars,"
Ted Lappas, cofounder of Conscium
sifted.eu/articles/the....
1/6
Our guest is Cameron Berg, who leads AE Studio's research into markers of subjective experience in AIs. He studied cognitive science at Yale, has worked at Meta, and has built psychometric tools used by millions.
www.prism-global.com/podcast/came...
4/7
Your P(Doom) is the probability you assign to AI causing an existential catastrophe. David coins P(Pause), the probability that humanity will collectively agree to, and stick to, a pause on the development of AI.
calumchace.com/podcast-feed/
3/7
PauseAI has organised public protests in London, San Francisco, New York, Portland, Ottawa, São Paulo, Berlin, Paris, Rome, Oslo, Stockholm, and Sydney.
calumchace.com/podcast-feed/
2/7
Holly studied evolutionary biology at Harvard, and was a researcher at the Effective Altruism think tank Rethink Priorities, working on Wild Animal Welfare.
calumchace.com/podcast-feed/
1/7
Our latest guest is Holly Elmore, Founder and Executive Director of PauseAI US, whose website says, "Our proposal is simple: Don't build powerful AI systems until we know how to keep them safe. Pause AI."
calumchace.com/podcast-feed/
This explains a lot.
21% of US adults believe in Santa Claus.
(Finding from an Ipsos survey of 1,000 US adults.)
Musk encourages Tesla drivers to break the law.
techcrunch.com/2025/12/04/m...
8/10
Lenore believes that artificial consciousness is not here yet, but that its arrival is inevitable, and will come fairly soon.
www.prism-global.com/podcast/leno...
Wikipedia has a nice article on the tell-tale signs that an LLM did your homework.
en.wikipedia.org/wiki/Wikiped...
The future of conscious machines.
In conversation with Bryan Dennstedt.
www.youtube.com/watch?v=UhlC...
The future of jobs. The machines are collar blind.
Interview with Times Radio.
youtu.be/WkgtFDE888Y
6/6
Lucius' work gives me an opportunity to plug a favourite science fiction author, Greg Egan, whose short story "Learning to Be Me" is very relevant to the conversation.
youtu.be/r03bVSP44h8
5/6
Experts were equally divided about whether the first superintelligence should ideally be conscious or a zombie. This again differs from non-experts, who mostly seem to want a zombie.
youtu.be/r03bVSP44h8
4/6
Half of the experts expect digital minds to appear by 2050, and they mostly expect superintelligence to arrive first. They then expect large numbers of such minds to be created.
youtu.be/r03bVSP44h8
Time for some egregious politicking.
May I please have your vote?
globalgurus.org/vote/futuris...
Thank you!
3/6
Non-experts give a much lower estimate of the likelihood of digital minds being developed than experts do: 23%, compared with the experts' 73%.
youtu.be/r03bVSP44h8
7/7
D. Social Understanding and Norms
Human and societal modelling
Normative reasoning
Pragmatism and common sense
Cultural grounding
Institutional and legal agency
www.forbes.com/sites/calumc...
2/6
In a recent survey of 67 AI consciousness experts, including philosophers and cognitive scientists, Lucius and his colleague Brad Saad found their median estimate of the likelihood of digital minds being created was 73%.
youtu.be/r03bVSP44h8
6/7
C. World Modelling and Planning
Rich physics world models
Counterfactual planning
www.forbes.com/sites/calumc...
1/6
Our latest guest is Lucius Caviola, Assistant Professor at Cambridge University's Leverhulme Centre for the Future of Intelligence. He applies experimental psychology to questions about perceptions of AI consciousness and moral status.
youtu.be/r03bVSP44h8
5/7
B. Knowledge Creation and Reasoning
Robust scientific method
Causal discovery and variable invention
Handling ambiguity
Hunches and intuition
Analogy and cross-domain transfer
Meta-cognition
Deception and subterfuge
www.forbes.com/sites/calumc...
4/7
A. Self and Agency
Conscious phenomenal experience
Volition
Self-modelling and identity
Planning, and meaning-making over time
Sensorimotor grounding
Curiosity-driven exploration
Aesthetic sense and taste formation
www.forbes.com/sites/calumc...
3/7
By creating superintelligence, we are turning ourselves into chimpanzees, so we should try to understand what cognitive capabilities it lacks. This article is a list of 21 human capabilities which machines don't have. Yet.
www.forbes.com/sites/calumc...
2/7
It is surprisingly hard to specify all the things that human radiologists are doing which machines cannot yet replicate, but there are a lot of them. Nevertheless, 2030 is a common estimate for the arrival of superintelligence.
www.forbes.com/sites/calumc...
Boston Dynamics CEO thinks robot housekeepers could be ten years away.
If he's right, it's time to short Tesla.
www.euronews.com/next/2025/11...
1/7
About a decade ago, Geoff Hinton declared that human radiologists were like Wile E. Coyote in the Road Runner show. They had run off the edge of a cliff, but hadn't realised it yet. Ten years on, radiology has yet to be automated.
www.forbes.com/sites/calumc...