Alyss Ardent's Avatar

Alyss Ardent

@ardentslacker.bsky.social

It's 2024, right after the election, the only description that fits is "sad, tired, and grieving".

107 Followers  |  124 Following  |  237 Posts  |  Joined: 06.02.2024  |  2.7502

Latest posts by ardentslacker.bsky.social on Bluesky

Turns out, bigots ain't good people. They always knew what they were doing, and that it was to harm people and get away with it... everything they do is about harming and owning people and getting away with it. And lying about it so it doesn't sound bad. Until they feel safe enough to speak truth.

08.08.2025 20:56 β€” πŸ‘ 6    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

Or if its grammar rules, RNG, or typos in the input have made the output change the meaning.

Grandma buying a discount turkey and selling the meat at her deli, plus a term from the practice of buying homes to sell at profit, can lead to grandma flipping the bird at her customers.

21.07.2025 23:30 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

You can sneak a pattern into the assignment. Use the same phrase 3-4 times or have multiple lines start with duplicating the last word of the previous line. Typos humans tend to ignore, but the bot *can't*.

Or quiz the human. "You said here," [quote that is or is not in their paper] "Elaborate."

16.07.2025 22:35 β€” πŸ‘ 3    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0
Preview
Saturday Morning Breakfast Cereal - Prompt Saturday Morning Breakfast Cereal - Prompt

www.smbc-comics.com/comic/prompt read a fun solution to that the other day. Just an instruction in white text on white background to catch the chatbots.

But there are subtler attacks. Because it's predictive text, and it's mimicking patterns and grammar to guess at the next word?

16.07.2025 22:35 β€” πŸ‘ 2    πŸ” 0    πŸ’¬ 2    πŸ“Œ 0

Yeah, my psychiatrist's office sent the pharmacy a script for an *antibiotic* the same week I first read about that.

It also doubled my dose of a med I *did* take.

They didn't *say* what happened, but it did not happen again, and it sounded like I was not alone.

16.07.2025 22:24 β€” πŸ‘ 6    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

... those works would be in the public domain. They already exist. Many are so old they predate some forms of soil... literally (ahem) older than dirt.

16.07.2025 22:19 β€” πŸ‘ 13    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

They just want the output, and don't actually enjoy writing or reading.

They like a car because of where it *might* take them. Despite never actually going there. So they never look up how many tires it should have, but boy are they proud of the turn signals!

16.07.2025 22:14 β€” πŸ‘ 3    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

Yeah, that goes for a lot of scams. Every cult leader starts out a huckster, but winds up a true believer. Because there's a limit on how much you can lie without forgetting why you're doing it and just... trusting that you must know what you're talking about.

You are always your final mark.

16.07.2025 22:10 β€” πŸ‘ 5    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

There's a joke, that we'd develop AI that proves a machine can have a rich inner experience...

Instead, we proved that techbros... don't.

16.07.2025 22:03 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

And it's a safe bet, really. Hell, every instance of people implementing "powered by AI" when they add in a chatbot is proof that people making decisions do not *test* anything, and do not know good output from bad.

16.07.2025 22:03 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

The chatbot merchants are counting on CEOs not understanding philosophy, business, liability, programming, AI, etc... to sell them something that would take our whole species multiple lifespans to create, and more energy than the planet produces.

Complex programs do exponentially more "thinking".

16.07.2025 22:03 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

You have to build a complex thing nobody's made before. And then you'd have to train it to do things that you cannot automate testing of.

We have peer-review for science papers for a reason. It isn't *possible* to automate that. Not "it's hard", not "it's expensive"... truth doesn't work that way.

16.07.2025 22:03 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

We're not "on the cusp" of a damn thing. We're literally using predictive text tech we've had for decades, and hooked it up with all human writing. To mimic our grammar. And nothing else.

It's old tech, and it cannot scale into anything. You can't build on it, because machine learning ain't modular

16.07.2025 22:03 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

But the chatbot they're using as proof... the stochastic parrot... it can never become infinite unpaid labor.

I mean, it's not free, for one thing. It can't do any job reliably. But also, it can't get any better. (It *can* get worse, as they overtrain it on an internet spammed by chatbots.)

16.07.2025 22:03 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

Now Trump's had, what... three NFT scams in the last 2 years? Seems like the problem never quite goes away.

And we've still got companies full of people who absolutely know better... betting the farm on "AI"

That it's "just around the corner".

16.07.2025 22:03 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

And I figured liability lawsuits would probably end the companies involved. Just takes one case of negligence, or a class-action, and it all falls apart.

But that takes *years*. And these guys would make a pile, and leave some asshole holding the bag, like Sam Bankman-Fried. He got 25 years.

16.07.2025 22:03 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

But instead of this limping along and falling on its face... it gets widespread adoption as a fad. Pushed in everything. Useful at nothing.

Google's model plagiarized Onion articles, remember? One rock per day? Spaghetti with glue? Like, rock solid *proof* that it cannot do what it's tasked with.

16.07.2025 22:03 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

And it was just after the NFTs were on their dying gasps. Remember NFTs? The big investment scam? Same people suddenly really big on "AI"? And some of us saw the red flags.

It's just another techbro scam. Only instead of selling a receipt, they're selling the holy grail.

Unpaid Labor.

16.07.2025 22:03 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

And revealed that it was all scraped data, and not something anyone should have fed to a bot. emails, login credentials (passwords encrypted tho... but I don't trust that or my memory of it) and useless metadata.

So the development didn't curate the input. At all. Makes it harder to train.

16.07.2025 22:03 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

I just went "Oh!" and started looking for the people having fun *breaking* them. Like how one version of one, you could get it to output the raw "training data" if you just fed it the same word a really large # of times. Because it broke the "what word goes next?" loop so it just... went in order.

16.07.2025 22:03 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

Heck, I also grew up on faerie tales and mythologies with tricksters of every type. Like, we literally warned children for untold generations about clever liars.

So when the damned chatbot came out, just a more suave version of the bullshit chatbots I'd grown up alongside? The Elizas and such?

16.07.2025 22:03 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

I looked it up, I read some of the more detailed explanations for how people can wind up trusting something that pretends even slightly to be human, to the ruin of all, because the not-human thing is literally just an electric parrot... and can't *do* anything it promises.

16.07.2025 22:03 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

Before the LLM hype, by like, half a year, the big tech companies all fired their AI ethicists at the same time.

Said ethicists spoke out about the dangers of "stochastic parrots"... which is just a terrible phrase for communicating the threat when most people do not know "stochastic".

16.07.2025 22:03 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

And spending a stupid amount of time on social media with people who are just... confidently wrong most of the time...

I feel less like I "dodged a bullet" regarding the LLM hype, and more that I'm in a bunker and the bullets aren't even hitting concrete.

16.07.2025 22:03 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

My memory *is* actually extremely good, at certain things, and with certain reminders. So knowing the history of chatbots and working around T9-word predictive text... (hmm, salsa isn't in the list. But sal-space-backspace-sa works!) while working customer service for a dishonest cellphone co...

16.07.2025 22:03 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

I am *so* happy to have a flavor of autism, adhd, and anxiety that compels me to doublecheck shit. Like, I don't trust my own memory 100%, and I sure as heck don't trust a salesman's promise.

So I kinda live with an abiding belief that I don't know enough and some of what I know is wrong.

16.07.2025 22:03 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
l asked the Make Shit Up Engine to Make some Shit Up and it made shit up!

l asked the Make Shit Up Engine to Make some Shit Up and it made shit up!

Evergreen.

16.07.2025 21:07 β€” πŸ‘ 263    πŸ” 32    πŸ’¬ 0    πŸ“Œ 0

It is a mistake to think that Musk has any beliefs other than "what I want". Right wingers make excuses that they call beliefs. He never cared about free speech, he just wanted to *own* speech. So he bought the platform. To control it.

This is not different.

16.07.2025 21:13 β€” πŸ‘ 4    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

I'd pick a different year. Ideally scheduled between an arrest and release so I'd have a thoroughly solid alibi. And have an *extremely* busy week.

Then send my younger self a postcard with a lottery number and some mental health... suggestions.

16.07.2025 21:07 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

I'm in both places, and use Threads more. Mostly because I already have an Instagram habit and don't really have the mental bandwidth for using multiple infinite doomscrolls.

10.07.2025 01:12 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

@ardentslacker is following 20 prominent accounts