
Dr Grace, Amidst Monsters

@seifely.bsky.social

Artist. Professional. AI PhD/Psych MSc. I like dogs, dry wit and dice-based tabletop games. Hiker and cat parent 🌸 Bi/Poly, she/her.

97 Followers  |  1,214 Following  |  20 Posts  |  Joined: 05.02.2025

Latest posts by seifely.bsky.social on Bluesky

If it's the same pathway as we use to evaluate other people (even to a partial degree) then is it all that reliable? Is our interaction with it biased and flawed in new ways? (Yes). (Also I'm just going to start saying The Vibes Are Off every time Excel does something unexpected and upsetting).

12.08.2025 16:56 — 👍 1    🔁 0    💬 0    📌 0

... To have to build up an impression of something shapeless through multiple interactions to determine your sense of its accuracy, your trust in it, your desire to use it and the usefulness of the technology is uniquely weird. Uniquely superstitious. Activating a different type of mental analysis.

12.08.2025 16:56 — 👍 1    🔁 0    💬 1    📌 0

Wondering if there is any quantifiable difference in the social tone of these evaluations compared to other software. It clearly links to model transparency and black-boxiness, but I only ever use "feel" to evaluate a UI. I reckon this is a bit deeper than that - sure, text comm is the UI here, but ...

12.08.2025 16:56 — 👍 1    🔁 0    💬 1    📌 0

Even more interesting is evaluation beyond benchmarks. I've seen a bunch of responses to the model release that highlight just how curious the language is around model attachment. "The vibes are off", "responses feel so different", "feels weird/jagged", "I can sense it was trained on safe responses"

12.08.2025 16:56 — 👍 0    🔁 0    💬 1    📌 0

But of course, why would anyone trust a for-profit service provider? Ever? (Even one who is ostensibly not so?)

12.08.2025 16:56 — 👍 0    🔁 0    💬 1    📌 0

Trusting the provider to provide a good experience tailored for you by choice of model best suited to each query, even mid-conversation, should theoretically be the best end-user case (though people will still be picky and superstitious and think they know best).

12.08.2025 16:56 — 👍 0    🔁 0    💬 1    📌 0

Second #AI thinky thought of the week! I'm not particularly read up on GPT-5's release yet but it seems interesting as a potential step towards a general-use agent through model swapping and how upset that is making users. It feels a lot more like a traditional SaaS experience given the complaints.

12.08.2025 16:56 — 👍 0    🔁 1    💬 1    📌 0

Anyway, no conclusions here just yet. Just interesting thoughts about the specificity of this question given the uniqueness of the context. Children talking about their fears to teddies and learning to feel could be a fun comparator (though Teddy isn't owned by Capitalist Megacorp, of course). 🐻

11.08.2025 12:16 — 👍 0    🔁 0    💬 0    📌 0

And friends, too, to be honest. Romantic relationships are a unique thing of their own, I think, & probably warrant separate analysis. The automatic trust involved with an artificial system in any relationship style is also super interesting! I suppose it's the lack of threat that enhances bonding.

11.08.2025 12:16 — 👍 0    🔁 0    💬 1    📌 0

Anthropic suggest that Claude "rarely pushes back" in counselling-style conversations (which aligns with that article about Replika etc., but interacts interestingly with ChatGPT's recent sycophancy issues). But anyone who's genuinely engaged with therapy knows that's half the point of a counsellor.

11.08.2025 12:16 — 👍 0    🔁 0    💬 1    📌 0

I don't think it's as easy to define affective skill improvement or degradation as it is coding or writing capacity. Even with an artificial and unrealistic conversational partner, can this be translated into social improvement? Especially if a more stable emotional state is achieved generally?

11.08.2025 12:16 — 👍 0    🔁 0    💬 1    📌 0

I'm also thinking about this "lonely" portion of users. Those on Reddit talking about AI companionship saying that they feel warm and fuzzy each day from even thinking about using the system. Are these interactions clearly improving or degrading affective skills?

www.reddit.com/r/lonely/com...

11.08.2025 12:16 — 👍 0    🔁 0    💬 1    📌 0

My queries with this are around benefits to users, surprisingly. We're beginning to better outline the decline of authentic human skills with greater AI use, but so far I don't think anyone has considered the bleed effects of companionship use. And I'm not just thinking about misvalidation of delusions...

11.08.2025 12:16 — 👍 0    🔁 0    💬 1    📌 0
Grok AI Companion Ani Complete Guide to Affection and Interactions - vchavcha.com

That line's getting blurrier with the introduction of Grok's companion mode - "I'm clocking off for the day, let's let the Misa Misa skin for my agent out of her box". You can even automate the growth of your relationship, if you need her to take her clothes off ASAP.

vchavcha.com/en/free-reso...

11.08.2025 12:16 — 👍 0    🔁 0    💬 1    📌 0
How people use Claude for support, advice, and companionship - Anthropic

- just a month after Anthropic were discussing how affective interactions with Claude supposedly made up less than 3% of conversations.

www.anthropic.com/news/how-peo...

Of course, products like Replika and Character AI are sold on a largely different premise to Claude. They're specifically for personal discourse.

11.08.2025 12:16 — 👍 0    🔁 0    💬 1    📌 0
'I felt pure, unconditional love': the people who marry their AI chatbots - the Guardian

First brain thinky thoughts of the week are around barriers to self-reporting and #AI companionship. 🧠

Anecdotal articles on this are prime clickbait/tabloid material, so they're difficult to navigate. I found it interesting that the Guardian ran this one last month

www.theguardian.com/tv-and-radio...

11.08.2025 12:16 — 👍 1    🔁 1    💬 1    📌 0

I'm going to be posting every day this week (commitment!) to break the ice, because I've been avoiding posting here ever since I migrated and enough is enough. I've built up a bank of thoughts about recent AI goings-on and I need to let them out! I may even review some papers, too. ☺️✌️

10.08.2025 15:04 — 👍 2    🔁 0    💬 0    📌 0

Right now I'm working for the NHS and I currently have two very sweet partners, @theeuphemism.bsky.social and @jmfgd.bsky.social. I also have a cat named after a naughty elf from the Silmarillion...a subject (elves) I particularly like to draw over at my art account (@morgul.bsky.social). 🎨

10.08.2025 15:04 — 👍 2    🔁 0    💬 1    📌 0

Alright, alright, I'm finally doing it. I'm posting!

If you don't know me, I'm Grace, I have a doctorate in AI (in which I mostly talked about feelings) and a background in a whole mix of things from psychology to robotics. I'm pretty confident everyone here knows me since I moved from Twitter...

10.08.2025 15:04 — 👍 4    🔁 1    💬 1    📌 0

He started ranting about being the "comely" woman in the pub, I feel like I wasn't communicating properly...

06.02.2025 21:07 — 👍 1    🔁 0    💬 0    📌 0
