@matthewus.bsky.social
Advocate and researcher focused on fighting corporate power and building workers’ technology rights. Posts reflect my own views.
The Tyranny of Quantification means that unless we can be readily reduced to data, we don't matter. Only corporate profits do. (3/3)
12.01.2026 19:37 — 👍 1 🔁 0 💬 0 📌 0

After all, the cost to a business of complying with a regulation can be easily quantified. The value of a human life and reductions in our quality of life cannot. Rather than come up with a system that factors them in, corporate interests argue we should therefore ignore them entirely. (2/3)
12.01.2026 19:36 — 👍 0 🔁 0 💬 1 📌 0

This is a particularly extreme and galling example of what I call the Tyranny of Quantification, where corporate interests argue that policy should be based solely on easy-to-quantify metrics. That inherently favors corporate interests. (1/3)
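A toy sketch of the asymmetry this thread describes; the figures and variable names are purely hypothetical, not from the thread:

    # Hypothetical figures, for illustration only.
    compliance_cost = 50_000_000       # easy to quantify: annual cost to firms
    quantified_benefits = 10_000_000   # the slice of benefits that fits in a spreadsheet
    unquantified_benefits = None       # lives, dignity, quality of life: no agreed dollar value

    # The "Tyranny of Quantification" move: treat whatever can't be measured as zero.
    net = quantified_benefits + (unquantified_benefits or 0) - compliance_cost
    print(net)  # -40000000 -> the rule "fails" cost-benefit analysis by construction

Plug in any rule whose main benefits resist pricing and the same arithmetic condemns it.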
12.01.2026 19:35 — 👍 1 🔁 0 💬 1 📌 0

In all seriousness, that was the best World Series since at least 1991. And the best Game 7 ever.
02.11.2025 04:24 — 👍 0 🔁 0 💬 0 📌 0

Everything makes sense now
futurism.com/future-socie...
Given that genAI clearly enhances productivity in cheating and scamming but has mixed results everywhere else, there's a real possibility that genAI will actually have a net negative effect on profitability as companies are forced to invest ever more in cybersecurity.
www.axios.com/2025/10/07/o...
We are thrilled to announce that our NEW Large Language Model will be released on 11.18.25.
01.10.2025 14:38 — 👍 25128 🔁 8346 💬 662 📌 2207

Why it matters: ADSs shape access to jobs, housing, health care & more. Yet too often, they’re opaque, error-prone, and biased—leaving consumers & workers at risk. Transparency + accountability are essential.
29.09.2025 15:49 — 👍 3 🔁 2 💬 0 📌 0

Building Transparency and Accountability.
New brief from CDT’s @matthewus.bsky.social & @consumerreports.org's Grace Gedye examines state efforts to regulate algorithmic decision systems (ADSs)—spotlighting promising proposals, pitfalls, and what effective regulation must include.
Read more: cdt.org/insights/bri...
California’s “No Robo Bosses Act” is a great start. And much more is needed, because many kinds of powerful institutions are using automated decision-making against us. www.eff.org/deeplinks/2...
26.09.2025 17:04 — 👍 99 🔁 29 💬 1 📌 1

The Minnesota shooter apparently used data broker websites to find the home addresses of the people he shot and murdered.
Congress has had years to do something about data brokers and they've sided with the tech lobby over and over again.
Their inaction is deadly.
“@matthewus.bsky.social, senior policy counsel at CDT, has a more dire warning: ‘The moratorium [on state-level AI legislation] is so sweeping that it’s hard to imagine how any law that touches on AI or automated decision-making in any way could escape it.’” www.pcmag.com/news/t...
You know you've got a serious, serious brand issue when your main competitor is 11 points underwater in favorability but you're doing 10 points *worse* than they are.
thehill.com/homenews/cam...
Quick! Hide this from Ezra Klein!
18.05.2025 21:16 — 👍 6 🔁 1 💬 0 📌 0

...anecdotes are not evidence. None of the predicted labor market disruptions have materialized. Level 5 AVs have been 3 yrs away for 10 yrs now. Productivity growth hasn't accelerated. Practical improvement in cutting-edge models has slowed to a crawl. So I, and many others, just aren't seeing it.
18.05.2025 17:32 — 👍 0 🔁 0 💬 0 📌 0

I do use it, both for research and for editing/polishing my writing. It's helpful for polishing my writing, but not in an "I'm blown away" way. And in research, frequent hallucinations in all the models mean that I often have to spend more time verifying info than if I'd not used AI. Anyway...
18.05.2025 17:23 — 👍 0 🔁 0 💬 1 📌 0

I have, in fact, studied AI and its impact on the economy and labor market for a decade now. I've made something of a career of it. And while I’m not a programmer, I have a better technical understanding of AI than most laypeople. Happy to engage in a reasoned debate, but not in name-calling.
18.05.2025 05:08 — 👍 0 🔁 0 💬 0 📌 0

I could be wrong. I often am. But AI hype increasingly strikes me as something driven by a desperate effort to delay the popping of a speculative bubble, and it boggles my mind that people brush off recent and repeated delays and admissions of fundamental flaws in new models by big LLM developers.
18.05.2025 01:16 — 👍 0 🔁 0 💬 0 📌 0

Anecdotes are rarely meaningful evidence, and never evidence of transformative impact/potential. What you describe strikes me as more akin to the impact that Google/decent Internet search had on knowledge professions as compared to early knowledge databases. And maybe not even that.
18.05.2025 01:10 — 👍 0 🔁 0 💬 1 📌 0

I think the only thing that has been big/fast about AI is the hype surrounding it. I'm not alone in thinking so. You disagree, but I haven't seen real-world evidence (arbitrary benchmarks don't count) that it's having transformative impacts in its current state, nor do I see how it'll get there.
18.05.2025 00:42 — 👍 0 🔁 0 💬 0 📌 0

As @randomwalker.bsky.social said two years ago: "Every exponential is a sigmoid in disguise."
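To unpack the quote with a minimal sketch (the curve parameters below are made up, purely illustrative):

    import math

    def logistic(t, ceiling=100.0, rate=1.0, midpoint=10.0):
        # A sigmoid: grows like an exponential early on, then flattens toward its ceiling.
        return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

    def exponential(t, ceiling=100.0, rate=1.0, midpoint=10.0):
        # The logistic's own early-time approximation: ceiling * e^(rate * (t - midpoint)).
        return ceiling * math.exp(rate * (t - midpoint))

    for t in range(0, 16, 3):
        print(t, round(logistic(t), 3), round(exponential(t), 3))

Well before the midpoint the two curves are nearly identical; only near the inflection point does the sigmoid bend toward its ceiling while the exponential keeps climbing. That's the disguise wearing off.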
18.05.2025 00:24 — 👍 0 🔁 0 💬 0 📌 0

It's obvious that all big LLMs have plateaued. Improvements are X steps forward/Y steps back, with X fast decreasing and Y increasing. None are close to AGI; tweaks/scaling won't get them there. The only reason none of the big developers are saying so is that no one wants to be first to admit to the con.
18.05.2025 00:23 — 👍 0 🔁 0 💬 1 📌 0

Why would you need anything other than LLMs? I mean, once we scale LLMs up enough, we’ll have AGI. Didn’t you get the memo? 🙃
12.05.2025 03:11 — 👍 1 🔁 0 💬 0 📌 0

This is seriously great. There has been a vacuum of moral leadership when it comes to the impact of AI on workers and their dignity. And there are few (if any) people with a bigger moral megaphone than the Pontiff.
www.cnbc.com/2025/05/10/p...
And, of course, the reason nobody understands their tech is that they abuse the trade secret doctrine to keep people from finding out how it works and how it affects them. Which shows the need for… wait for it… regulation.
09.05.2025 03:51 — 👍 3 🔁 0 💬 1 📌 0

That’s an excellent argument for starting regulation with strong transparency measures so that people understand how new tech works and how it affects them. But industry adamantly opposes such transparency (trade secrets!) because they want to maintain information monopolies on their technologies.
09.05.2025 03:39 — 👍 2 🔁 0 💬 0 📌 0

True. But IMHO, it’s not even regulation if the target of the regulation has veto power over its contents.
09.05.2025 00:06 — 👍 1 🔁 0 💬 0 📌 0

Respectfully, these aren't straw man arguments. All the issues with voluntary standards I described are basic economics principles. And that doesn't even get into monitoring/enforcement of voluntary standards. Sounds like your mind's made up on this though, so take care.
08.05.2025 23:31 — 👍 0 🔁 0 💬 1 📌 0

That would be nice, if it weren't for little issues like information asymmetry, unequal bargaining power, collective action problems like the exponentially higher cost of consumers coordinating amongst each other, the ease of standards capture, and cumulative advantages for dominant industry actors.
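One crude way to see the collective-action point; the coordination model and numbers below are an illustration, not from the post:

    def pairwise_links(n):
        # Rough proxy for coordination overhead: communication links among n actors.
        return n * (n - 1) // 2

    for n in (1, 100, 1_000_000):
        print(n, pairwise_links(n))
    # 1 firm: 0 links needed to act. A million dispersed consumers: ~5e11 links.

However you model it, one firm deciding unilaterally is vastly cheaper to organize than millions of consumers trying to coordinate a response.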
08.05.2025 23:24 — 👍 2 🔁 0 💬 1 📌 0

Colorado lawmakers stood firm on AI accountability — and CDT CEO @alexreevegivens.bsky.social backed their decision to protect workers and consumers rather than gut hard-won safeguards. www.denverpost.com/2...
08.05.2025 14:16 — 👍 4 🔁 1 💬 1 📌 0