I hear you. The question is in the how.
04.08.2025 16:21 · @jsrailton.bsky.social
Chasing digital badness. Senior Researcher at Citizen Lab, but words here are mine.
6/ We do have some models for engagement that work.
I'd cite @meredithmeredith.bsky.social & related coalitions scoring big wins around chat privacy.
Perhaps folks can point me to others?
But ultimately we need so much more of this kind of high-impact engagement in front of big public audiences.
5/ Personally? Maybe we need to talk about empowering parents, not bureaucrats with hidden agendas.
Or supporting healthy communities, not corporations.
Ultimately, we need to stand up for the next generation. And their autonomy.
4/ Am I missing high-impact framing happening somewhere on age verification? I've been hunting for things like:
1- Reframing around values like parental autonomy
2- Highlighting risks to children from profiling (kids not suspects!)
3- Powerful harm anecdotes
4- Offering parental tools & alternatives
3/ Seems like there are some big gaps in the age verification fight:
- Widespread framing that makes *enough sense* to the vast majority of people that they say 'ok this is net bad'
- Productively connecting with same anxieties politicians are drawing on
- Offering alternate paths & futures
2/ Saying that something speeds us towards surveillance dystopia works on me.
But not my neighbors.
Sure, most here intuitively understand the dangers... and nod along when we gesture at surveillance overreach.
But for the general public, I'm not sure how well this works.
Age verification laws are coming fast.
And, from my perspective, opponents are struggling to find messaging that conveys to the general public the damage these laws are about to do to freedom.
Or to propose alternate futures that address the underlying anxieties. 1/
It seems to me like a strong anti-AI stance is becoming left-coded.
I'm hoping to disentangle criticisms of biz models, power-grabs, externalities (e.g. labor disruption, environmental impact), privacy, etc.
vs. views on AI itself, especially when used in open-source ways.
Thoughts?
Rhinoisotope
01.08.2025 20:35
7/ And when the strong leader dies? The society can be incredibly unstable as it carries the weight of so many injustices, so many lies.
And for the system to persist? More repression needed.
6/ Because self-censorship scales better than physical coercion on each person.
People see opportunity for personal advantage. Some become informers.
Some delight in the cruelty of seeing people they dislike arbitrarily punished.
5/ People with new, better ideas that also happen to challenge the dictator's entrenched interests? Or those of the dictator's necessary economic allies? Family members? Point out corruption?
Co-opted or cut down.
Fueled by massive surveillance.
And the threat of violence.
4/ People with ambition need to play into the system and help prop up the dictator if they want to keep their resources.
Even then they are vulnerable to having everything taken.
And for anyone that dares point out increasingly obvious flaws?
Well, dictatorships will slide into repression.
3/ Care about equality of opportunity? Dictatorships concentrate power without balance.
Over time, as inequalities & unfairness become severe... the regime gets more brittle.
And dictators have to give more favors to the people that help them stay in power. Like economic favors.
2/ Even when dictatorships do 'well' on a factor, in the short term, they send people into a freedom-robbing labyrinth.
In the long run with dictatorships you lose out on the chance for a society that supports freedom, personal rights & liberties & decentralization of knowledge + innovation.
It is a lot easier to celebrate a turn towards dictatorship when you are untethered from historical knowledge.
No amount of centralized power delivers a society with true personal freedom in the long run. 1/
What's something that teenage you thought was peak..
But adult you sees as weak?
I'll go first: MENSA.
Consider, your Honor, that my client was being extremely productive at the time of the crash.
24.07.2025 18:00
A couple of weeks ago we @citizenlab.ca published a report on a sophisticated Russia-linked phishing campaign targeting a UK-based Russian critic and analyst citizenlab.ca/2025/06/russ...
The target @keirgiles.bsky.social has written an excellent overview: foreignpolicy.com/2025/07/02/g...
In Mexico, Sheinbaum's government is pushing through Congress sweeping legal reforms that will establish mass identification and surveillance systems of the population.
Here is a thread summarizing the scope and reach of these reforms:
This is a tool for one person to acquire a lot of power.
Sam Altman wants his technology to become unavoidable, and mandatory.
He makes the problem. Then we depend on him to fix it.
Bad move for privacy & freedom.
Worse move for #Reddit, which he might very well one day compete against.
This is an interesting idea & has been experimented with by some cool artists & projects
(also me, more informally)
But it only works on certain kinds of optical facial recognition that depend on infrared illuminators.
FR that does visual spectrum stuff is not bothered by this.
That's cool! And it's the right thing to make exceptions for use cases that help people with accessibility.
That said, I am comfortable criticizing products designed & overwhelmingly used for corporate surveillance.
And to discourage their proliferation.
Computer vision & HUDs are cool. AI augmented reality is fascinating.
But this is a trojan horse for megacorp to get into *all* your interactions.
Friends don't let friends bring Zuck in a backpack on their adventures.
Or, wear these Dorkleys yourself & become an NPC constantly asking your eyewear 'hey meta is this real?'
21.06.2025 09:40
I prefer the company of people that don't snitch my business.
21.06.2025 09:17
Sometimes I think that the big phishing operations have probably developed a more applicable & empirically tested understanding of human motivation and cognition than psychologists...
Tens of thousands of behavioral A/B tests a day...and that would be a low number.
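To make "behavioral A/B test" concrete, here is a minimal sketch of the kind of comparison an operator could run: a two-proportion z-test on click-through rates for two lure variants. All numbers and the function name are illustrative assumptions, not from any real campaign.

```python
from math import sqrt, erf

def ab_click_test(clicks_a, sends_a, clicks_b, sends_b):
    """Two-proportion z-test: did lure B out-perform lure A?

    Returns the z statistic and a one-sided p-value for the
    hypothesis that B's click rate exceeds A's.
    """
    rate_a = clicks_a / sends_a
    rate_b = clicks_b / sends_b
    # Pooled click rate under the null hypothesis of no difference.
    pooled = (clicks_a + clicks_b) / (sends_a + sends_b)
    se = sqrt(pooled * (1 - pooled) * (1 / sends_a + 1 / sends_b))
    z = (rate_b - rate_a) / se
    # One-sided p-value from the standard normal CDF.
    p_value = 1 - 0.5 * (1 + erf(z / sqrt(2)))
    return z, p_value

# Hypothetical numbers: lure A clicked 120/5000 times, lure B 165/5000.
z, p = ab_click_test(clicks_a=120, sends_a=5000, clicks_b=165, sends_b=5000)
```

At these volumes even a modest lift in click rate is statistically detectable, which is the point: at tens of thousands of sends a day, operators can iterate on wording and pretexts far faster than any lab study.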
You can patch software, but you can't patch people.
Social engineering will always work because the human brain is loaded with forever-day vulnerabilities...
UPDATE: Looks to me like Paragon is again feeling the heat, and tossing the Italian government under the bus.
The scandal continues.
17/ Do you think you face increased risk because of who you are & what you do? Use Google's free Advanced Protection Program.
SET IT UP NOW: landing.google.com/intl/en_in/a...
And exercise extra skepticism when unsolicited interactions slide into suggesting you change account settings!