Jacob Haimes

@jacobhaimes.bsky.social

founder of Kairos.fm, host of the Into AI Safety and muckrAIkers podcasts, working with Apart Research and the Odyssean Institute. All views my own. He/him.

14 Followers  |  16 Following  |  5 Posts  |  Joined: 26.01.2025

Latest posts by jacobhaimes.bsky.social on Bluesky

AI Safety for Who? | Kairos.fm AI safety is making you less safe: chatbot anthropomorphization, mental health harms, dark patterns

🚨 New muckrAIkers: "AI Safety For Who?"

@jacobhaimes.bsky.social & @thegermanpole.bsky.social break down how instruction tuning/RLHF create anthropomorphic chatbots that exploit human empathy, leading to mental health harms. kairos.fm/muckraikers/...

Find us wherever you listen! (links in thread)

13.10.2025 20:42 | 👍 4  🔁 4  💬 2  📌 0
Getting Agentic w/ Alistair Lowe-Norris | Kairos.fm Responsible AI veteran Alistair Lowe-Norris on ISO standards, compliance frameworks, and building safer AI by design.

🎉 Just dropped a new Into AI Safety episode! Host @jacobhaimes.bsky.social chats with Alistair Lowe-Norris (ex-Microsoft, now at Iridius) about how responsible AI actually happens in practice.

Check us out on Patreon or wherever you get your podcasts!
(links in thread)
kairos.fm/intoaisafety...

20.10.2025 21:11 | 👍 2  🔁 2  💬 1  📌 0
This image is a screenshot of the OpenReview paper decision for NeurIPS 2025. The final decision is 'Reject.' The Area Chair's (AC) comment is quite long, mentions that the reviewers ended up feeling positive about the submission, and says the AC would be keen to see the paper at the conference. A final update from the Program Chairs (PC) states that AC feedback was ranked by the Senior Area Chairs (SACs), and this was used to inform the final decision to reject the paper in question.

Maybe I'm crazy, but this AC review I received from the NeurIPS D&B track seems to essentially say "this is great," followed by a comment stating it has been rejected, without any context.

Final scores were 4 5 4 4, i.e. all reviewers and the AC agreed the paper was an Accept.

Absolutely wild.

26.09.2025 21:32 | 👍 0  🔁 0  💬 0  📌 0
Alt Text: Professional headshot of Li-Lian Ang, a young Asian woman with shoulder-length black hair and black-rimmed glasses, smiling warmly at the camera against a teal gradient background. The image includes the Kairos.fm logo and Into AI Safety podcast branding, with the episode title "Growing BlueDot's Impact w/ Li-Lian Ang" prominently displayed. A small icon showing interconnected nodes represents the AI Safety theme.

🚨 New Into AI Safety episode is live!

Li-Lian Ang from BlueDot Impact discusses their evolution from broad AI safety courses to targeted impact acceleration, addressing elitism in the field, and why we need more advocates beyond just technical researchers.

kairos.fm/intoaisafety/e023

16.09.2025 16:05 | 👍 2  🔁 2  💬 1  📌 0

Super happy to share HumanAgencyBench, which takes steps towards understanding the impact of chatbot interactions on human agency.

Working with @jacyanthis.bsky.social (and the team) has been fantastic, and I'd happily do it again. If you have the chance to work with him, don't pass it up!

15.09.2025 17:37 | 👍 4  🔁 0  💬 0  📌 0
Episode specific thumbnail for Into AI Safety episode 22, Layoffs to Leadership with Andres Sepulveda Morales. Andres is pictured in the bottom right of the image.

🚨 New Into AI Safety episode is live!

I chatted with Andres Sepulveda Morales, founder of Red Mage Creative and organizer of the Fort Collins Rocky Mountain AI Interest Group about surviving the tech layoff cycle, dark patterns in AI, and building inclusive AI communities.

05.08.2025 03:16 | 👍 2  🔁 2  💬 1  📌 0

👀 New Into AI Safety episode is live!

Will Petillo from PauseAI joins to discuss the grassroots movement for pausing frontier AI development, balancing diverse perspectives in activism, and why meaningful AI governance requires both political engagement and public support. kairos.fm/intoaisafety...

24.06.2025 00:36 | 👍 3  🔁 2  💬 1  📌 0
One Big Bad Bill | Kairos.fm Breaking down Trump's massive bill: AI fraud detection, centralized databases, military integration, and a 10-year ban on state AI regulation.

🚨 New episode is out: "One Big Bad Bill" - breaking down AI's relevance to Trump's bill. We cover automated fraud detection, government data consolidation, and a 10-year ban on state AI regulation.

Find us on Spotify, Apple Podcasts, YouTube, or wherever you listen (links in thread).

23.06.2025 22:23 | 👍 3  🔁 2  💬 1  📌 0
Breaking Down the Economics of AI | Kairos.fm We break down 3 clusters of AI economic hype: automating profit centers, removing cost centers, and explosive growth. Reality check included.

New muckrAIkers episode drops! We're breaking down the wild economic claims around AI into 3 buckets, and digging into what the data actually shows 📊 kairos.fm/muckraikers/...

You can find the show on Spotify, Apple Podcasts, YouTube, or wherever else you listen (links in thread).

26.05.2025 18:03 | 👍 3  🔁 2  💬 1  📌 0
Thumbnail for an episode of the Into AI Safety podcast featuring Tristan Williams and Felix de Simone. The image reads "Making Your Voice Heard w/ Tristan Williams and Felix de Simon," and pictures Tristan on the left and Felix on the right.

NEW EPISODE: "Making Your Voice Heard w/ Tristan Williams & Felix de Simone" - where we explore how everyday citizens can influence AI policy through effective communication with legislators 🎙️ kairos.fm/intoaisafety...

Listen on Spotify, Apple Podcasts, YouTube, or wherever you get your podcasts!

19.05.2025 21:04 | 👍 4  🔁 2  💬 1  📌 0
DeepSeek: 2 Months Out | Kairos.fm Deep dive into DeepSeek; what is reasoning, and does it change the "AI" landscape?

New muckrAIkers episode! DeepSeek R1 - What is "reasoning" and does it actually change the AI landscape? Industry fallout, a billion-dollar market crash, and why we're skeptical about the hype. kairos.fm/muckraikers/...

Listen on Spotify, Apple Podcasts, YouTube, or wherever you get your podcasts!

09.04.2025 16:45 | 👍 1  🔁 2  💬 1  📌 0
AI summit draft declaration criticised for lack of safety progress A leaked version of the Paris AI summit document omits key commitments made at Bletchley Park in 2023 in 'negligence of an unprecedented magnitude'

Incredibly disappointing to see the current US administration attempting to make safe and ethical "AI" a partisan issue:

"The US has also demanded that the final statement excludes any mention of the environmental cost of AI, existential risk or the UN." - www.thetimes.com/article/a7ae...

☹️

10.02.2025 17:28 | 👍 2  🔁 1  💬 0  📌 0
DeepSeek Minisode | Kairos.fm A short update on DeepSeek.

This week's episode of muckrAIkers is a sneak preview of all the stories on DeepSeek R1 that we'll soon be tackling in depth.

Developments are ongoing, but if you want a good 15-minute overview of the news so far, check out kairos.fm/muckraikers/... or find us wherever you listen!

10.02.2025 16:20 | 👍 0  🔁 2  💬 0  📌 0
AI Hackers in the Wild: LLM Agent Honeypot This Apart Lab Studio research blog attempts to ascertain the current state of AI-powered hacking in the wild through an innovative 'honeypot' system designed to detect LLM-based attackers.

Excited to share the first blog post from the Apart Lab Studio (@apartresearch.bsky.social), written by Reworr, which I had the pleasure of supporting!

Check it out for one way to actively monitor one kind of AI misuse: LLM-based cyberattacks.

www.apartresearch.com/post/hunting...

01.02.2025 00:56 | 👍 0  🔁 0  💬 0  📌 0
Understanding AI World Models w/ Chris Canal | Kairos.fm Chris Canal, founder of Equistamp, joins muckrAIkers as our first ever podcast guest to discuss AI risks and the world models that inform them.

Super excited to announce our latest episode of muckrAIkers: Understanding AI World Models w/ Chris Canal! We get into test-time compute, the moving goalposts of "AGI," and so much more. kairos.fm/muckraikers/...

You can find the show on Spotify, Apple Podcasts, YouTube, or wherever else you listen.

27.01.2025 16:19 | 👍 1  🔁 2  💬 1  📌 1
Researcher Spotlight: Jacob Haimes | YouTube video by Apart - Safe AI

A recent @apartresearch.bsky.social Researcher Spotlight featured me! Check it out to hear more about my journey Into AI Safety (pun intended):
www.youtube.com/watch?v=lFAm...

26.01.2025 19:44 | 👍 0  🔁 0  💬 0  📌 0
