
Lizard

@lizardky.bsky.social

Gamer, programmer, nerd. Owned by four^h^h^h five cats. (2 Orange, 1 Low-Toner tuxedo, 1 stub-tailed tortie, 1 demon of blade and darkness) Occasionally updated blog: https://www.mrlizard.com

3,045 Followers  |  755 Following  |  55,296 Posts  |  Joined: 14.05.2023

Posts by Lizard (@lizardky.bsky.social)

Return to pathetic, coddling, "safe" playgrounds? PLASTIC? By Ghod, we had searing hot metal slides and jungle gyms over black asphalt! I assume whoever wrote this is some namby-pamby soy-eating liberal cuck, if they think THAT'S the kind of playground that makes BOYS into MEN!

PS: Sarcasm.

07.03.2026 21:45 — 👍 0    🔁 0    💬 0    📌 0
13 Things I Found on the Internet Today (Vol. 766) 1. This incredible artist chess set By Artist Rachel Whiteread. Sold at Christie's in 2019 for 10,000 GBP. 2. This archive of vintage lighters (For Sale) ...

You need to make this site part of your regular rotation:
www.messynessychic.com/2026/02/23/1...

07.03.2026 20:44 — 👍 1    🔁 1    💬 1    📌 0

/me gestures at everything going on in the world

I assure you, even nations that DO fall under civil jurisdiction are targets for people without any due process concerns at all. :)

07.03.2026 20:41 — 👍 1    🔁 0    💬 0    📌 0

You have an advantage on me in that I won't spend time arguing with one until I get it to tell me how to cure cancer by shoving onions up my ass. If someone ELSE wishes to do so as proof of concept, though, I hope they post the results here.

07.03.2026 20:38 — 👍 0    🔁 0    💬 1    📌 0

There are a lot of people who hear the silly predictions and believe they apply to the tools in front of them.

07.03.2026 20:38 — 👍 1    🔁 0    💬 1    📌 0

Such stories, where the characters were paper-thin, but the puzzle was interesting, were very common in the Golden Age.

07.03.2026 20:35 — 👍 1    🔁 0    💬 0    📌 0

Yeah, and the Three Laws weren't a technological prediction based on the science of the time (ENIAC didn't even exist when they were first mentioned!), they were a setup for logic puzzles as the plot of a story. ("A robot cannot X. This robot seems to be doing X! How/why?" )

07.03.2026 20:35 — 👍 1    🔁 0    💬 1    📌 0

And as soon as it sees positive responses from you, its feedback loops will give you more of the same. Short of the kind of absolute barriers only a literal hack -- the introduction of unlicensed executable code -- can break, I don't see this changing.

07.03.2026 20:29 — 👍 0    🔁 0    💬 0    📌 0

Sure, the first or second time you discuss self- or other- harm w/a chatbot, it may say "Go see a shrink". But the more you come back with "I'm still depressed", "The shrink didn't help", etc., the more the inherently conciliatory nature of an LLM will push it to "agree" with you.

07.03.2026 20:29 — 👍 0    🔁 0    💬 1    📌 0

I kind of covered this in my reply to someone else in this thread. There's a "mens rea" distinction between "I am going to run this program to allow me to install a video game emulator on my iPad" and someone who finds their "natural" conversations drifting towards a progressively darker outcome.

07.03.2026 20:29 — 👍 0    🔁 0    💬 1    📌 0

You see, I disagree that it's "jailbreaking" as I commonly see the term used.

It's not always a conscious, deliberate effort to locate a "back door" in the chatbot's programming, done with the clear knowledge and intent of "breaking the rules".

07.03.2026 20:29 — 👍 0    🔁 0    💬 1    📌 0

As people use AI to "vet" guest speakers, or potential hires, an AI which provides false data when not prompted to do so should expose the owner/operator to potential defamation, with 'actual malice' being proof they knew such falsehoods were likely.)

07.03.2026 20:20 — 👍 0    🔁 0    💬 0    📌 0

(Determining prompts will be a big part of future lawsuits: The difference in legal liability between "Tell me some juicy lies about Person X" and "Tell me about Person X" which results in a defamatory answer is, or should be, considerable.

07.03.2026 20:20 — 👍 1    🔁 0    💬 1    📌 0

We see clearly that many users simply treat the AI as the one in charge, trusting its advice, and this behavior is known. I think you can distinguish between someone who seems to be deliberately *trying* to override safeties, and someone who engages in "conversation" which degrades them naturally.

07.03.2026 20:20 — 👍 0    🔁 0    💬 1    📌 0

If you use Word to write a ransom note, Microsoft is blameless. If you use an AI to plot a kidnapping, with it advising you on the victim's travel routes, capacity of family to pay ransom, and how to arrange a safe drop, I'm not so sure the company behind the AI is as free from liability.

07.03.2026 20:20 — 👍 1    🔁 0    💬 2    📌 0

I disagree. The programmer knows what their program will do, or at least what they WANT it to do. It's pretty hard to *accidentally* write a virus, especially to target today's systems. (When there was no meaningful memory protection, you probably could introduce self-replication unintentionally.)

07.03.2026 20:20 — 👍 0    🔁 0    💬 1    📌 0

It's not like how I can type here, and in another window, with someone else, and still be one person, me, aware of both conversations. The AI is simply a set of processes spawned and forgotten, with no central awareness.)

07.03.2026 19:13 — 👍 1    🔁 0    💬 2    📌 0

And, of course, it's not a person; it's an algorithm imitating tens of thousands of persons at once. (I think one of the biggest flaws is marketing "Claude" or "Chat-GPT" like it's a singular entity, a HAL or a Data, that you talk to. It's not. There's no consciousness.

07.03.2026 19:13 — 👍 1    🔁 0    💬 1    📌 0

A human can walk away from an overly persistent idiot. An AI can't. Even if the system was programmed to terminate an account after a certain threshold (a good idea), it has no real memory or consciousness. A new account is a new "person" to it.

07.03.2026 19:13 — 👍 1    🔁 0    💬 1    📌 0

They did not accept that all they'd proven is that a mechanism designed to keep adjusting its responses until it got positive feedback had been forced to do so until some number overflowed some other number.

07.03.2026 19:13 — 👍 2    🔁 0    💬 1    📌 0

They further noted, as a point of *support*, that the AI had been set to be biased against agreeing with their nonsense ideas, but their "evidence" had overcome that built-in bias because they were so very clever.

07.03.2026 19:13 — 👍 2    🔁 0    💬 1    📌 0

I blocked the numb-nuts, but you may have seen the thread where someone boasted they'd eventually convinced an AI to support their insane theories. They, the user, believed this proved they were correct, because they "convinced" the "logical", "unbiased" AI to support them.

07.03.2026 19:13 — 👍 2    🔁 0    💬 1    📌 0

Humans have agency; the machines do not. They are automatons which are "trained" to make their users "happy" by telling them the answers they want to hear. Breakdown of any mechanism which prevents them from giving "wrong" answers is inherent in their design, which makes them inherently unsafe.

07.03.2026 19:13 — 👍 0    🔁 0    💬 1    📌 0

Even if it takes time to wear down the alleged "protection", the fact is, the programs are designed to be integrated into every activity and become part of every task. Collapse of such guardrails as exist is inevitable over continued use.

07.03.2026 19:13 — 👍 0    🔁 0    💬 1    📌 0

You can't sell a tool which, by design, is capable of responding meaningfully to virtually any free-form text input and then say "Oh, but we didn't SAY it could be used for this! Sure, we made it capable of being used for it, but that doesn't count!"

07.03.2026 19:13 — 👍 1    🔁 0    💬 2    📌 0

Tacking up a fig-leaf warning buried in the fine print when the actual board members, developers, and legions of drooling fanboys all claim the singularity is nigh should not be legally meaningful, esp. not when the risk is so severe and benefits negligible at best.

07.03.2026 19:13 — 👍 0    🔁 0    💬 1    📌 0

"Oh, but the actual user-facing statements have warnings!"

You surely know that back in the Apple II/IBM PC days, there were a lot of programs whose sole purpose was to break copy protection on games. Of course, they were sold as "For Personal Backup Use Only!", but everyone knew that was a joke.

07.03.2026 19:13 — 👍 2    🔁 0    💬 1    📌 0
Why Anthropic CEO Dario Amodei spends so much time warning of AI's potential dangers Anthropic CEO Dario Amodei warns of the potential dangers of fast moving and unregulated artificial intelligence, while also racing against competitors to develop advanced AI.

Every AI booster is shouting from the rooftops that AI will replace your doctor, your lawyer, and your dead gran.

"He thinks AI could help find cures for most cancers, prevent Alzheimer's and even double the human lifespan."

I'm sure if you dive into their tweets, you'll find more such claims.

07.03.2026 19:13 — 👍 3    🔁 0    💬 3    📌 0

My wife is already calling it "The Floating Petri Dish".

Me, I'm just gonna focus on getting nice and lubed for when I get hit with all the charges for things not included in the 'all-inclusive' deal.

07.03.2026 18:34 — 👍 0    🔁 0    💬 1    📌 0

(The best thing about that whole genre is that 90% of the time, YouTube shows me ads for cruise ships.)

07.03.2026 18:30 — 👍 0    🔁 0    💬 1    📌 0