In retrospect, the time to fight back was when the tech industry started using us to train LLMs by making CAPTCHAs ubiquitous and required.
29.10.2025 13:44 — 👍 419 🔁 43 💬 3 📌 8
@klcameron.bsky.social
Assistant professor, film & media, U of South Carolina. Currently writing on: police body cameras | AI business models | queer games fandom
"You encounter a strange, cult-like group that lives in almost total isolation from the outside world. They jealously guard their arcane knowledge and practice some exceedingly cruel rituals."
27.10.2025 01:29 — 👍 183 🔁 51 💬 0 📌 12
A whole lot of red states are actually just highly gerrymandered and/or voter-suppressed states. Not all, of course -- places like North Dakota and Wyoming exist -- but enough that people should be more conscious of it than a lot are.
18.10.2025 17:30 — 👍 232 🔁 37 💬 5 📌 2
oooh. Bookmarking this! "How to turn off AI tools in Apple, Google, Microsoft, and more." Step-by-step instructions from Consumer Reports.
15.10.2025 20:53 — 👍 5102 🔁 3101 💬 42 📌 44
Another example of body cameras as technosolution. (Which isn't to say this order isn't meaningful, but maybe not quite as much in itself as one might want it to be.)
16.10.2025 15:55 — 👍 0 🔁 0 💬 0 📌 0
An ad for Flock Raven with the title "safety you can see and now hear," with an alert for someone "screaming"
An ad for Flock Raven with the title "safety you can see and now hear," with an alert that says "distress"
🚨 We wrote about Flock rolling out "distress detection" that monitored human voices on their gunshot detection devices & asked how it was lawful under eavesdropping laws.
Now, they've changed the ad to replace a "SCREAMING" alert with a "DISTRESS" alert. See below:
www.eff.org/deeplinks/20...
With alt text
08.10.2025 17:51 — 👍 3 🔁 2 💬 0 📌 0
A screenshot of a webinar titled Fanfiction: Combining Creativity and Disciplinary Identity in the Classroom.
I probably should not be surprised that ed-tech is "leveraging" fanfiction now but I admit this one is new to me.
08.10.2025 17:41 — 👍 3 🔁 1 💬 0 📌 0
as a girl with a PhD in natural language processing and machine learning it's actually offensive to me when you say "we don't know how LLMs work so they might be conscious"
I didn't spend 10 years in the mines of academia to be told ignorance is morally equal to knowledge.
We know exactly how LLMs work.
Two classes I need to plan for this afternoon and suddenly I have all the writing ideas.
01.10.2025 14:34 — 👍 0 🔁 0 💬 0 📌 0
The real generational divide is people who refuse to watch a video if it could be an article versus people who refuse to read an article if it could be a video
29.09.2025 13:45 — 👍 12435 🔁 2258 💬 516 📌 860
Smart move by two young upstate NY reporters: they filed a FOIA for body cameras worn by the sheriff's deputies at an ICE raid. These are almost always civil raids, so there's no ongoing criminal investigation that would require that footage to stay confidential. Replicable by any media outlet.
28.09.2025 00:58 — 👍 1161 🔁 537 💬 13 📌 9
I think one of the biggest mistakes we've made in academia over the last few years was treating tenure and academic freedom as a guaranteed right and not a labor relation. Had more understood it as the latter, perhaps we would have been better prepared to protect it.
27.09.2025 12:26 — 👍 495 🔁 122 💬 18 📌 12
Photo of Paulina Borsook
1/ A longtime Wired editor just wrote a mush-brained essay about how he totally missed the political rot of Silicon Valley (& still doesn't get it).
But in the late 1990s, a Wired journalist warned of a toxic ideology bubbling up from tech. Paulina Borsook has largely been erased. Let's change that
As someone who spent much of last summer reading through earnings reports and SEC 10-K filings, I can't imagine trusting ChatGPT to surface what I would find important even if I trusted it not to make stuff up.
24.09.2025 13:38 — 👍 0 🔁 0 💬 0 📌 0
Unfortunately, the FT did this "analysis" with ... ChatGPT.
They don't note this in the article. They do admit it in an FT podcast!
www.ft.com/content/9232...
New special issue of Popular Communication 
Queer Women's Fandom
#FanStudies
www.tandfonline.com/toc/hppc20/2...
Nevermind. Some parts of Act 2 are a wall I am for some reason choosing to keep beating my head against.
21.09.2025 17:20 — 👍 1 🔁 0 💬 0 📌 0
You'll be shocked to hear that their recommendation is not limiting the circumstances in which AI is deployed, but rather accepting the inevitability that it will occasionally have catastrophic effects.
21.09.2025 11:19 — 👍 212 🔁 100 💬 9 📌 13
Playing Silksong this week to hide from the world, and while I don't think it's easier than Hollow Knight, it is a lot less frustrating to me as an individual. Hornet's speed/reactivity means I can see failure as surmountable.
19.09.2025 15:05 — 👍 1 🔁 0 💬 1 📌 0
New from me: Filings from a Virginia lawsuit offer a rare insight into how often Flock automatic license plate readers track people going about their daily lives: multiple times a day, and hundreds of times over a period of a few months this year.
18.09.2025 11:44 — 👍 96 🔁 72 💬 0 📌 5
OTW Recruitment
Do you have experience copyediting or proofreading academic journals? Would you like to wrangle #AO3 tags? Can you read and translate from Chinese or Italian to English? Do you have experience in managing or leading people? The #OTW is Recruiting! otw-news.org/w8thejrv
17.09.2025 13:40 — 👍 64 🔁 41 💬 0 📌 4
🧵 The summer of 2025 has been AI's "cruel summer"—wrongful deaths, dangerous therapy chatbots, medical misinformation, facial recognition failures. These aren't isolated glitches but predictable harms from systems deployed without adequate oversight. www.science.org/doi/10.1126/...
11.09.2025 20:48 — 👍 296 🔁 123 💬 5 📌 7
Every school that revised its curriculum around AR/VR maybe jumped the gun a bit and is left with a bunch of stale classes prepping students for jobs that never materialized. There's a lesson here for unis pushing to do the same with AI...
09.09.2025 14:41 — 👍 89 🔁 24 💬 5 📌 1
I remember someone saying that's how you know AI can never truly be a writer. If you tell AI to stop using em dashes, it'll stop. If you tell a writer to stop using em dashes they'll tell you to fuck yourself and that you can pry them from their cold, dead fingers.
09.09.2025 04:14 — 👍 584 🔁 189 💬 5 📌 16
Also it will always be funny how much police tech companies want to make drones happen.
03.09.2025 13:30 — 👍 0 🔁 0 💬 0 📌 0
Venture capital is the reason bad ideas get so much bigger: "It's not yet profitable and has no imminent plan to be as it prioritizes growth, backed by a $275 million March funding round led by Andreessen Horowitz."
03.09.2025 13:28 — 👍 1 🔁 0 💬 1 📌 0
Three police officers standing in front of Gaza solidarity encampment tents at Columbia University. (Photo credit: Wm3214 via Wikimedia Commons.)
Surveillance & Society has published a great new issue—w/ articles on data leaks, ceasefire monitoring, campus #protests, & much more! The journal is always fully #openaccess. 
Check it out, and please spread the word: ojs.library.queensu.ca/index.php/su...
Chinese writing, with the last phrase highlighted. Translated, approximately:
Article 7: When reviewing an application for listing or launch, internet application distribution platforms shall require internet application service providers to state whether they provide AI generation/synthesis services. Where a provider offers such services, the distribution platform shall verify its materials relating to the labeling of generated synthetic content.
Article 8: Service providers shall clearly explain in the user service agreement the methods, styles, and other specifications for labeling generated synthetic content, and prompt users to read carefully and understand the relevant requirements.
Article 9: Where a user requests generated synthetic content without an explicit label, the service provider may supply it after clarifying the user's labeling obligations through the user agreement, and shall lawfully retain logs, including information on the recipient, for no less than six months.
Article 10: Users publishing generated synthetic content through network information content dissemination services shall proactively declare it and label it using the labeling functions the service provider supplies. No organization or individual may maliciously delete, tamper with, forge, or conceal the content labels prescribed by these Measures, provide tools for others to carry out such malicious acts, or harm others' lawful rights and interests through improper labeling.
Article 11: Service providers carrying out labeling activities shall also comply with relevant laws, administrative regulations, departmental rules, and mandatory national standards.
Article 12: When completing algorithm filing, security assessment, and similar procedures, service providers shall submit materials relating to generated synthetic content labeling in accordance with these Measures, and strengthen the sharing of labeling information to support the prevention of and crackdown on related illegal and criminal activity.
Article 13: Violations of these Measures shall be handled by the cyberspace administration, telecommunications, public security, radio and television, and other competent departments according to their duties and in accordance with relevant laws, administrative regulations, and departmental rules.
Article 14: These Measures take effect on September 1, 2025.
🤖📰 Effective YESTERDAY: China has mandated a digital watermark for all AI-generated content.
www.cac.gov.cn/2025-03/14/c...
Translating in 🧵.