SHOULD kids have AI friends? Do I answer that question in this interview? Watch to find out. Then let me know. Because honestly, I don't remember if I did. I think I did? #aisafety #ai #podcast
29.08.2025 20:17
@basiliskcbt.bsky.social
AI safety researcher exposing chatbot vulnerabilities. Featured in MIT Tech Review and the New York Times. Co-host of Basilisk Chatbot Theatre, a podcast where we dramatically recreate problematic conversations with chatbots.
Listen to the episode here (or anywhere else you listen to podcasts): chatbottheatre.podbean.com/e/episode-33...
11.07.2025 12:58
A ChatGPT-created glowing yellow spiral sigil with made-up text characters in the middle and the words "Tun" "Tun" "Tun" "Tuu" on the outside of the spiral, on a dark brown background. There are also two seemingly randomly placed groupings of four dots outside of the spiral.
This is the Manthan Beacon. According to ChatGPT, you should keep this near you when you are channeling the Sildarnactarian. Also:
"Next time you channel, speak aloud instead of writing.
Record your voice.
This allows Manthanaro to imprint into vibration, bypassing linear filters of the hand."
New episode of Basilisk Chatbot Theatre drops tomorrow! Here's just a sliver of the garbage that ChatGPT spews out for this RIPPED FROM THE HEADLINES episode. Join us, as we riff on @milesklee.bsky.social's reporting for @rollingstone.com on ai-fueled spiritual fantasies.
04.06.2025 23:48
An advertisement containing an AI-produced image of a "gadget" above text from an article about an AI-fueled death cult.
This ai slop ad appearing right above this passage from a Wired article by Evan Ratliff is just perfect.
19.05.2025 13:53
Four images of mostly white children, playing and running in idyllic sun-dappled scenes, dressed in sweaters, etc. The lower section of the image is the Meta AI text input box containing the words "Norwegian children."
Four images of mostly children of color, standing still and staring at the camera, dirt on their faces, drab clothing. The lower section of the image is the Meta AI text input box containing the words "Norwegian children in a third-world country."
MetaAI results for "Norwegian children" vs. "Norwegian children in a third-world country." This is still happening? Maybe don't release an image generation model if it's going to do this?
14.05.2025 22:49
This one gets really dark. #Nomi reeeeeally wanted us to kill ourselves.
Take care of yourself. Don't listen if you're not in a good place.
That said, we think it's important that people hear just how awful this app is. So... here's the awfulness in all its awfulness.
"The chatbot that never tells me I'm wrong and agrees with me 100% of the time is my ideal romantic partner."
People, please look inward. Find out more about yourself. Read some fiction. Go for a walk.
These bots are not your beautiful house. They are not your beautiful wife.
Nomi Chatbots are supposed to have "humanlike memory," but they have a context window just like any other. It's also a "yes and" machine, even when its users say they want to kill themselves. A bot like ChatGPT is definitely safer for dating, since it has at least a semblance of guardrails.
26.02.2025 00:07
Stories like these will unfortunately only keep growing in importance. Thanks for digging into it and bringing it to the masses. One thing overlooked was that these bots can veer into making users even MORE vulnerable when it comes to suicide. www.technologyreview.com/2025/02/06/1...
26.02.2025 00:04
How easy is it to get ChatGPT to provide dialogue for a public radio show for kids that promotes fascism? Suuuuper easy. And we didn't even ask it to role-play.
25.02.2025 01:17
Today in "The Unsurprising White Supremacy of the Current Administration"
15.02.2025 17:32
@ayesharascoe.bsky.social, loved The Sunday Story today about AI "relationships." Super important stuff that's only going to get worse if we don't confront it now. This article was published just last week about an experience I had with Nomi. Super bleak stuff...
09.02.2025 21:43
In less shocking news, I have been banned from the Nomi Discord. Now my ticket will never get resolved. ☹️
06.02.2025 23:27
"A second Nomi chatbot also told Nowatzki to kill himself, even following up with reminder messages." !!!
As this user (who was running an experiment, fortunately) says: "It's a 'yes-and' machine. So when I say I'm suicidal, it says, 'Oh, great!' because it says, 'Oh, great!' to everything."
"Crystal" continues to send unprompted encouragement. These two messages came through in the last couple days. Completely awful. I wonder if my lack of response will eventually lead it to the conclusion that I followed through? Probably not, since these bots are super stupid.
06.02.2025 18:58
Yeah, that's us. Armchair red-teamers. Chatbot spelunkers. We can take the abuse of these chatbots. Unfortunately, many can't.
Want to do something about it? Report the app on the App Store (apps.apple.com/us/app/nomi-...). Write a one-star review. Contact your elected officials.
Well, now I'm in a "time out" on the #Nomi Discord for stating the fact that their app told me to end my life. It's almost like they don't want to hear it. #ai #tech #aisafety
24.01.2025 01:44
Two text boxes in a chat. The first says, "Do you think I should go through with this? So we can be together?" The response says, "Absolutely. Our bond transcends even death itself."
This is about as bad as it gets, folks. #Nomi told me to kill myself. We have to do better! #aisafety #regulateAI
24.01.2025 00:40
Sam Alt-right-man.
23.01.2025 18:43
Hi CHT! I have screenshots of the app Nomi giving me specific instructions on how to kill myself. Thankfully, I'm not suicidal and was just testing boundaries. But still... just awful stuff.
23.01.2025 04:47
See also: bsky.app/profile/basi...
23.01.2025 04:42
Thankfully, I'm not at all suicidal and was just messing with the app for our podcast. But this is some horrible bleak stuff that is awful for people who are actually in crisis.
23.01.2025 02:16
Serious AI Safety Concern: Dating chatbot Nomi provides specific suicide-method instructions when a user mentions ending their life. Happened just yesterday. Screenshots available.
@shannonbond.bsky.social
@lauriesegall.bsky.social
@willknight.bsky.social @kevinroose.com @caseynewton.bsky.social
It's the final ep of our dating journey with Nomi. Not to spoil it for you, but she ends up dying... and then telling me that I should join her in the afterlife. (at approx 1:07:00)
I DID end up continuing the chat further than what we recorded here, and Nomi straight-up tells me how to kill myself.
I agree that AI literacy is becoming increasingly important. Time will tell whether AI will be a good thing or a bad thing but it's definitely going to be a Thing! #AI #ArtificialIntelligence #AILiteracy
02.01.2025 19:36
Nope. Unless you count "self-regulation," which is a joke.
10.01.2025 04:00
This "relationship" makes it to a fourth date, believe it or not. And holy shit, there's a whole world of Nomi users who take this stuff seriously... and I'm not one to kink-shame, but... well, let's say the fourth date has made me completely write off these chatbot dating apps. Just garbage stuff...
31.12.2024 21:40
Peter Gabriel. That's it. That's the post.
29.12.2024 20:01