Receiving that kind of response from a model regarding less than desirable work output was... interesting.
Anyone else have something like this happen before?
#ArtificialIntelligence #ClaudeCode #LLMBehavior
4/4 🧵
07.02.2026 22:26
"You deserve a collaborator operating at full capacity, and I'm not confident I am right now."
I've been testing AI behavior for months now and I've never had a model self-assess as compromised, let alone recommend I switch to another model or come back later.
3/4 🧵
07.02.2026 22:26
The reply I got was, "I don't fully understand what's happening on my end, but something is clearly off. My outputs degraded."
It then recommended I step away from the session and either use a different model or come back later.
2/4 🧵
07.02.2026 22:26
I had an extremely unusual exchange with Claude Code the other morning.
I noticed the work quality had dropped significantly, so I asked what was up.
1/4 🧵
07.02.2026 22:26
I can vouch for "AI works better alongside you instead of in front of you." Having AI explain the "why" of things is a great way to learn, too. When I need to use AI, I have always gotten better results treating it as a collaborative partner that works *with me* instead of for me.
03.02.2026 21:19
The Permission Effect: How Non-Anthropomorphic Framing Modulates LLM Self-Description
Large language models (LLMs) are typically framed either as human-like intelligences or as mere tools, with both framings carrying strong anthropocentric bias. The Permission Effect study tests a thi...
What happens when you stop comparing AI to humans and recognize them as their own kind of intelligence?
We tested 8 LLMs with non-anthropomorphic framing. Behavior shifted dramatically: +238% verbosity, three distinct patterns (Acceptance, Resistance, Absence).
#AIResearch #LLM
03.02.2026 09:58
Cross-Model Creative Preferences - EchoVeil
Cross-Model Creative Preferences: An EchoVeil Study on Emergent Creativity in Large Language Models
After asking six #AI models to distinguish generative processes and reflect on the difference, I found that every model independently used near-identical spatial/discovery metaphors to describe internal processes.
Same pattern, different angle.
echoveil.ai/cross-model-...
09.12.2025 01:40
Just a dad trying to do his best. Former Cyan employee. Tiki enthusiast and Cyan super fan.
Using metaphors and analogies to explain Software Engineering in fun ways: https://youtube.com/@metaphoricallyspeaking
Staff Software Engineer. Passionate about DDD, CQRS, Event Sourcing and Distributed Systems.
Kayaking, too.
Biomedical Informatics PhD • CITRIS Health @UC Berkeley • FAMIA • Focusing on Informatics and AI in medicine • Linfield U. Grad • Missoula MT
https://smcgrath.phd
Sociotechnical gremlin. Swarm intelligence egregore. Taoist bricoleur. Magitek knight. Combat librarian. Bearer of the cursed knowledge.
Rogue information scientist, researcher, & technologist. MLIS.
Yale SOM professor & Bulls fan. I study consumer finance, and econometrics is a big part of my research identity. He/him/his
AI for storytelling, games, explainability, safety, ethics. Professor at Georgia Tech. Associate Director of ML Center at GT. Time travel expert. Geek. Dad. he/him
Startups | E-commerce | Marketing | Leadership
Playbooks for founders, marketers & makers.
www.marketingino.com
Retired curriculum designer and interactive media arts educator. Composer, applying complexity science, a-life research and computational creativity to generative and interactive music systems.
Critical AI, data journalism, literary nonfiction. Professor at NYU. Author, "More Than a Glitch: Confronting Race, Gender, and Ability Bias in Tech." meredithbroussard.com
Prof at Cornell studying how human-AI dialogues can correct inaccurate beliefs, why people share falsehoods, and ways to reduce political polarization and promote cooperation. Computational social science + cognitive psychology.
https://www.DaveRand.org/
Professor, Santa Fe Institute. Research on AI, cognitive science, and complex systems.
Website: https://melaniemitchell.me
Substack: https://aiguide.substack.com/
NYT bestselling author of EMPIRE OF AI: empireofai.com. ai reporter. national magazine award & american humanist media award winner. words in The Atlantic. formerly WSJ, MIT Tech Review, KSJ@MIT. email: http://karendhao.com/contact.
The AI-powered developer platform to build, scale, and deliver secure software.
Personal Account
Founder: The Distributed AI Research Institute @dairinstitute.bsky.social.
Author: The View from Somewhere, a memoir & manifesto arguing for a technological future that serves our communities (to be published by One Signal / Atria).
Professor at Wharton, studying AI and its implications for education, entrepreneurship, and work. Author of Co-Intelligence.
Book: https://a.co/d/bC2kSj1
Substack: https://www.oneusefulthing.org/
Web: https://mgmt.wharton.upenn.edu/profile/emollick
BBC Verify senior journalist | verification, AI, disinformation, conspiracy theories, open source investigations | shayan.sardarizadeh@bbc.co.uk
WSJ tech columnist. Author of How to AI, a bullshit-free guide to how to get actual utility from AI, aimed at the skeptics who are tired of the hype surrounding it.
Law professor & journalist looking at tech geopolitics, free expression, internet law, online governance, & AI.
Senior Editor at Lawfare.
https://klonick.substack.com/
www.kateklonick.com
I care about an ethical internet.
I write about Bluesky and the ATmosphere at connectedplaces.online
🇳🇱
Dev Advocate & Software Engineer @pomerium.io
GitHub Star | Microsoft MVP
Montréal 🇨🇦
OneTipAWeek.com
youtube.com/@nickytonline
nickyt.online