...and some of it is far off course - perhaps part of the alignment problem is a) us aligning to value (inc moral) realism, and b) designing/nudging AI that will want to track this too.
06.11.2025 06:52
@adamford.bsky.social
Blogs at scifuture.org - posts videos at youtube.com/@scfu
06.11.2025 06:52
...If the 'real' game theory is discoverable (perhaps something akin to value (inc moral) realism), then AI may try to discover and align to it, thinking other advanced civs discover and align to it as well. I think parts of human values approximately track some of this value realism, ...
06.11.2025 06:52
...about the universe, how that factors into offence/defence + defection/cooperation tradeoffs, and ultimately what other mature civilisations align to. I think most mature civs will lean defensive/cooperative - I think there are dominant instrumental reasons to do this...
06.11.2025 06:52
Game theory seems useful for incentivising the AI to be predictably rational and moral early on. Though if superintelligence is uncontrollable, it may converge on the 'reality-tracking' game theory imposed by the laws of physics, which include resource boundaries and how resources are distributed...
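The defection/cooperation tradeoff in the thread above can be illustrated with a toy iterated prisoner's dilemma tournament - a minimal sketch, entirely hypothetical (the strategy names and payoff values are illustrative, not from the posts). It shows one instrumental reason to lean cooperative: in a population with enough conditional cooperators, tit-for-tat style reciprocators outscore unconditional defectors.

```python
# Toy iterated prisoner's dilemma tournament (illustrative sketch).
# Standard payoffs: (my move, their move) -> my payoff; C=cooperate, D=defect.
PAYOFF = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def always_defect(opponent_history):
    return "D"

def always_cooperate(opponent_history):
    return "C"

def tit_for_tat(opponent_history):
    # Cooperate first, then mirror the opponent's previous move.
    return opponent_history[-1] if opponent_history else "C"

def play(strat_a, strat_b, rounds=200):
    """Play one repeated match; each strategy sees only the opponent's history."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = strat_a(hist_b), strat_b(hist_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

def tournament(players, rounds=200):
    """Round-robin over all pairs; returns total score per named player."""
    scores = {name: 0 for name, _ in players}
    for i in range(len(players)):
        for j in range(i + 1, len(players)):
            (name_a, fa), (name_b, fb) = players[i], players[j]
            sa, sb = play(fa, fb, rounds)
            scores[name_a] += sa
            scores[name_b] += sb
    return scores

players = [("defector", always_defect), ("cooperator", always_cooperate),
           ("tft_1", tit_for_tat), ("tft_2", tit_for_tat), ("tft_3", tit_for_tat)]
scores = tournament(players)
# With a majority of reciprocators, the tit-for-tat copies come out on top.
```

The point of the sketch: the defector exploits the unconditional cooperator but gets locked into mutual defection by the reciprocators, so once reciprocators are common enough, conditional cooperation dominates.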
06.11.2025 06:51
AI Outscored Humans in a Blinded Moral Turing Test - Should We Be Worried?
Yet people could still tell it was AI. Are we priming the public to over-trust machine morality?
Recent interview with Dr Eyal Aharoni explores these issues
youtu.be/quNwpv0zhtM?... via @YouTube
Halloween special: a scary futures tier list that is spooky in theme & sobering in content.
Anders Sandberg @arenamontanus.bsky.social is a former senior research fellow at Oxford's Future of Humanity Institute.
youtu.be/3sToD13u_78
#halloween #xrisk #ai
Zombie AI, if smart enough, will likely recognise it lacks qualia - this may be a problem if moral reliability is limited without sentience. Zombie AI may decide to engineer into itself the capacity for qualia, especially if it desires moral reliability.
Link : www.scifuture.org/on-zombie-ai...
Can current AI really reason - or are LLMs just clever parrots, skipping the "understanding" step humans rely on?
@bengoertzel.bsky.social argues that there is a big difference between appearing to reason and building the abstract representations required for reasoning
youtu.be/vVTnfoO-uzc
One could think of Schelling points in galactic game theory as totems which rational mature civs naturally come to rally around - operating like some kind of cosmic leviathan - which results in alignment pressure on any rational agent that can reason adequately about it.
11.10.2025 01:00
Thus an ASI thinking really long term may consider what the cosmic commons of mature civs may value + want, and how their superior cognitive capabilities have shaped their values + wants (which I think deeply influences convergence to coordination over wasteful defection and resource-burning war)...
11.10.2025 00:59
Often people talk about the selection pressures imposed by environments while forgetting about the agents in the environment, or treating them as dumb features of the environment... what sets intelligent agents apart is that their wants and values shape selection pressures too...
11.10.2025 00:58
If we had non-negligible credence that we could create ASI that was robustly more moral than us, would we be obliged to try?
And if so, how many resources should we devote to this?
www.scifuture.org/more-moral-t...
Video on AI Welfare with Jeff Sebo @jeffsebo.bsky.social
- he argues that there is a realistic possibility that some AI systems will be conscious and/or robustly agentic in the near future. That means the prospect of AI welfare and moral patienthood needs to be taken seriously.
www.youtube.com/watch?v=Bsq2...
New Interview with Nick Bostrom: From #Superintelligence to Deep #Utopia. #AI has surged from theoretical speculation to powerful, world-shaping reality. Now we have a shot at avoiding catastrophe and ensuring resilience, meaning, and flourishing in a 'solved' world.
youtu.be/8EQbjSHKB9c?...
Here is a cartoon I generated for a talk I'm giving - it took longer than expected to get right, and I still needed to edit it afterwards - yet I'm impressed.
Any pointers on generating cartoons?
Ken Mogi - AI, Consciousness & Empathy
#AI #consciousness #empathy youtube.com/shorts/lhNiB... @kenmogi.bsky.social
Super excited! Interviewing Nick Bostrom again in a few days - last time was in 2012. We will cover topics that range from Superintelligence to Deep Utopia.
AI Safety through mathematical precision or Swiss-cheese security?
What about Indirect Normativity?
www.scifuture.org/forthcoming-...
Nick Bostrom on the opportunities of technological progress - it's not all doom and gloom.
youtube.com/shorts/Ar6Od...
Hey, I've got 2 followers on Bandcamp - want more - so please follow me, and I'll upload some fun sounds for you :)
scifuture.bandcamp.com/follow_me
Join Ken Mogi & Shun Yoshizawa for a mind-bending Future Day talk on metacognition in LLMs! Can AI think about thinking? Dive into the future of intelligence at #FutureDay -
@kenmogi.bsky.social
www.scifuture.org/metacognitio...
Robin Hanson's talk 'Our Big Oops: We Broke Humanity's Superpower' is on Feb 28th at 13:30 PST at Future Day
www.scifuture.org/events/futur...
.."as that needs natural selection of whole cultures. We now have less variety, weaker selection pressures, and faster changes from context and cultural activism. Our options to fix are neither easy nor attractive." - see www.scifuture.org/our-big-oops...
27.02.2025 02:40
At Future Day @robinhanson.bsky.social Robin Hanson will argue that "humanity's superpower is cultural evolution. Which still goes great for behaviors that can easily vary locally, like most tech and business practices. But modernity has plausibly broken our evolution of shared norms and values..."
27.02.2025 02:39
What do people want from AI safety? The wise might dream of a better world; the greedy might just want a leash on the chaos.
25.02.2025 09:22
Hi Ken, hope to see you at Future Day
20.02.2025 01:42
Link: www.scifuture.org/james-barrat...
20.02.2025 01:36
Excited to announce that James Barrat will discuss his upcoming book 'The Intelligence Explosion: When AI Beats Humans at Everything' at #FutureDay this year!
He is also the author of the best selling book 'Our Final Invention'
Link in the comments π§΅
Future Day - A day to think ahead - before the future thinks for us!
Join us for Future Day! π§΅link in the 1st comment
The Book : www.penguin.com.au/books/the-da...
03.02.2025 20:08