I met with AI data labelers in Kenya who are organizing their colleagues to fight the brutal working conditions and horrible pay given to the workers at the "bottom of the AI supply chain." They believe the NDAs they've signed are unenforceable so are speaking out:
www.404media.co/ai-is-africa...
17. We need a mass movement, accountable to the people most impacted, against AI-enabled harm and techno-authoritarianism. That movement should include people most focused on immediate harm/injustice AND people concerned about future harm.
Let's focus on building people power, not "being right."
16. So, if you're a doomer who thinks I'm silly to worry about surveillance when the tricksy robots might be making a supervirus that will kill us all, okay.
And if you're a skeptic who thinks I'm an idiot to think AI "mathy maths" are anything more than NFTs 2.0, also okay.
We have work to do.
15. Good governance of AI won't be simple or easy. We need policies that address privacy/data harms, transparency, sensitive use cases, discrimination, autonomous weapons, disclosure, job loss, fair use and copyright and more.
But unless we unite against industry power, we'll get none of this.
14. Most ppl don't care about the debate between doomers, boosters, and skeptics. They just want someone to do something about technology being used to make our lives worse and take away our rights. They want their kids to have jobs and basic freedoms. They want politicians to be accountable.
13. Organizations focused on "x-risk" would be wise to listen to and learn from organizations that have been building power against tech companies for years and who are meaningfully accountable to the people most impacted by AI harms.
And to be frank, they need to share some of their $$$$$$.
12. Meanwhile, many of the organizations that have been working for years to combat the immense harm that automated decision making and tech-enabled surveillance is already doing to Black and brown communities, poor folk, LGBTQ+ people etc are wildly under-resourced and struggling.
11. There are massive resource imbalances to contend with here too. Wealthy tech workers have poured hundreds of millions of dollars into brand new organizations that are exclusively focused on "AI safety," by which they mean avoiding the scenario where the robots come alive and kill us all.
10. Instead, I would argue that people concerned with immediate AI harms to marginalized communities and people concerned with future existential risk need to unite, at least in the short term, to overcome the influence of the industry and create space for meaningful governance & democratic control.
9. Every day we spend arguing over whether AI will kill us before we use it to kill ourselves is another day that AI companies are buying Senators, spinning up astroturf groups to push for deregulation, and running AIPAC style moneybomb plays against lawmakers speaking out about AI harms.
8. Here's the good news: I don't think it really matters that much whether the doomers or skeptics are right, because the actions we have to take right now are roughly the same:
we need to reduce the political power and influence of tech oligarchs & build massive public support for good regulation
7. Some would argue that there is a huge difference. A human despot can theoretically be reasoned with, shamed, or overthrown in a way that a supposed AI superintelligence can't be.
But I'd like my kid to grow up in a world with basic human rights, not just a world run by humans instead of robots.
6. Most importantly: I'm not sure it really matters whether the AI comes alive and kills us all or whether humans use AI to kill (or mass-enslave, subjugate, etc) ourselves and each other.
Is there really a meaningful difference between a human overlord with flying murderbots and a robot overlord?
5. But I also don't think it's out of the question that AI systems will do things that humans don't expect that could be catastrophic, even if they fall short of "trying to turn us all into paperclips." And I think there are good-faith people focused on these concerns worth engaging with meaningfully.
4. Generally speaking I am much more concerned about what humans will do to each other with AI than I am about what some theoretical future super-AI-gone-rogue will do to us. We are already using AI for war, mass surveillance, and life or death decisions. Adoption is accelerating. The crisis is here
3. I also disagree with so-called "AI skeptics" who think that AI is essentially "fake" and just a marketing term. I think that machine learning enables humans to do things at a scale and speed that is impossible without it––in ways that can radically transform society (mostly for the worse).
2. I am skeptical about the "doomer" narrative that a superintelligent AI will go rogue and kill us all. I agree with the critique that some actors are pushing this narrative as a way to hype AI for monetary gain.
But I don't think we should dismiss all "x-risk" concerns as "boosters in disguise"
Here are some of my thoughts on AI and "existential risk" or "x-risk," in case anyone cares.
1. I do think that AI poses an existential threat to the future of human thriving. Even just the mass surveillance enabled by machine learning could effectively end human liberty as we know it.
it's so funny that this hot trash is what they are putting out as like a proof of concept
but given the credulous press coverage ... it might just work 😭
**Can you help us do a little bsky survey by reposting this**
➡️ What checklists/guides would you like to see ActivistChecklist.org create next? Comment below! 👇
Could be anything in the realms of activist digital security, physical safety, operational security, vetting, specific tools, etc.
"you should only torture puppies within the confines of the law" is how some of y'all sound. please be serious
focusing on the supposed "criminality" of the Trump admin instead of on the material harm it is doing reveals a deep misunderstanding of how law works in the United States and who it is actually for
“The University of Wisconsin-Madison Academic Staff Assembly passed a resolution on Monday calling for UWPD to end their contract with Flock Safety and publish all other contracts with surveillance technology companies, like Motorola Solutions and Rhombus.”
If I may be millennial and cringe for a moment, my organization @fightforthefuture.org has been running campaigns trying to strike down US government surveillance and corporate data harvesting for years.
like ... since before anyone had ever heard of Edward Snowden or Cambridge Analytica ;-)
The US is at war and repression at home is going to get worse.
I'm old enough to remember, during the Iraq war, the FBI infiltrating Quaker meetings and vegan potlucks and cracking down on anti-war groups and outspoken Muslim leaders.
The US surveillance state has grown exponentially since then.
🧵
Scoop: DHS ousted multiple privacy officers at CBP after they questioned orders to purposely mislabel records about government surveillance to prevent their release under FOIA.
Instagram comment referring to me as “it” and suggesting I be used for target practice 👍
I wish ppl understood better that most propaganda is in this vein vs like some cartoon villain in a shadowy room cooking up conspiracy style coverups
Yeah I like to party
New Signal ad campaign just dropped.