Currently reading "Disabling Intelligences" by Rua M. Williams @fractalecho.bsky.social in the UCGIS Humans and Geospatial AI Book Club. I should have read this one sooner!
There has been work saying it's okay to have robot slaves for a hot minute.
I get a little spicy about it in a piece called "All Robots are Disabled". But maybe not as spicy as I remember.
I definitely go after it in the book.
…I've said it so many fucking times, now, but I'll say it again, that it's really fucking messed up that all of these conversations seek to apply to fucking LLMs a standard of personhood we still don't consistently and meaningfully apply to other fucking humans.
So… I've been writing for about 18 years on the idea that if you somehow manage to make meaningfully conscious machine minds & then you treat them as tools, then you're enslaving those minds, and that's pretty messed up. And this ties into something else I've re-upped recently: LLMs can't say "No."
I can say with confidence that of all the recordings of me there are out there, there couldn't be more than two WITHOUT swearing.
I do *appearances* for those who are interested. All kinds. In person, virtual, pop in on a lecture. I'm not snobby. I'm also extremely funny and sweary.
"For many, it has become a magic box: problem + AI = profit."
"There is a lot of effort spent in making systems more fair, without ever asking if the system itself is fundamentally designed to cause harm, whether it's egalitarian about it or not."
communities.springernature.com/amp/posts/di...
It used to be said out loud, BTW. And like, some conditions are progressive. And therapy can ameliorate progression. But there's a balance between the labor of therapy and the agency of choosing when to allow the progression to do its thing. Harriet McBryde Johnson talks about letting her spine do its thing.
You would not (you probably would) believe the grief I was put through during review of that article.
Under no circumstances should we allow OpenAI to become the self-authorized educational research evidence source that it is trying to be. Vendors must not be research authorities. openai.com/index/unders...
In comments of the quoted thread, @fractalecho.bsky.social says:
"In my book I call this the Disability Diversion where all criticism of techno fascism is deflected with appeals to technosaviorism."
"Disabling Intelligences: Legacies of Eugenics and How We are Wrong about AI"
by Rua M. Williams
The Autistic People of Color Fund is hosting their 1st virtual political education workshop on AI, eugenics, and disability, on March 10th at 6pm EST, with @fractalecho.bsky.social & @lydiaxzbrown.com as a fundraiser, so please consider supporting them, if able! 🫂 www.eventbrite.com/e/the-apoc-f...
The panopticon, The Disability Diversion, and a Disability Dongle all in one.
Sitting in a department meeting about our mandatory system for reporting everything we do, because a word doc CV is somehow not good enough. Discussed when the system claims data that isn't yours. The speaker "It's supposed to learn. I haven't SEEN it learn. But it's supposed to." I have died.
Read Christina Cogdell, read @cyborgapologist.bsky.social, read @fractalecho.bsky.social, read @shirachess.bsky.social, read the history of "AI" and the internet and on and on and on. It's… it's all right there.
bsky.app/profile/wolv...
"Why can't we have a calm discussion about AI?"
Because folks don't actually want to discuss AI, they want blind submission to the ideological structures that motivate the contemporary use cases and deployments of AI. They don't want to hear the concerns except to dismiss them out of hand.
They are. It's built into many course management systems. Turnitin, for example. But again most faculty are just operating on *vibes*.
bsky.app/profile/frac...
Neurodivergent people don't write like AI. But some of us write like what people think AI writes like. (AI sounds like a politician. A pointless meandering sycophant.)
Basically there are not objective ways to determine if something is AI, which is why neurodivergent students and students with international English or English as a second language keep getting falsely accused. The suspicion is triggered by an implicit sense of otherness.
Being on linked in is so crazy because I'll see something and assume it's parody but it will be a totally serious post simping for military AI.
In our latest, @ayeshaasiddiqi.bsky.social joins us to discuss “anti-aging” trends and longevity influencers as symptoms of imperial decline and the role the wellness industry has played in producing this moment of heightened fascism
www.patreon.com/posts/150271...
GIVE THE SUICIDE BOT ADVERTISING CAPABILITY WHAT COULD GO WRONG
You know that making yourself a little picrew and reading a horoscope is just as if not more fulfilling and less evil than asking chat fucking GPT to "draw" you right?
Yeah. And we have the extra problem of those that do disclose their disability only to say that they can do it so the student can too 🙃
Oh and also the erroneous belief that the stress of an emergency is the same or more debilitating than the stress of being observed by people you know don't believe in you and have the power to rip your dream away from you.
A specific flavor I'm looking at right now is the confluence of a general stigma against cognitive disability, a specific belief that certain cognitive disabilities must be mutually exclusive with veterinary care, and a belief that assessments accurately reflect real world practice
Hello Disability Internet. I know we know that the professions (like medical, veterinary, dentistry, other clinicians) are extremely ableist. Like they cannot figure out how to accommodate disabled students. Does anyone do this well? For Veterinary students?
PALAUT used to work. Idk if it still does