Love this! And very much the antithesis of the current "AI maximalist" corporate ethos at Spotify, Amazon, etc., and at some public organizations (schools, hospitals, govt), where AI use is forced upon workers and pushed into as many use cases as conceivable, without reason & with disastrous results.
22.02.2026 19:14 · 11 likes · 4 reposts · 1 reply · 0 quotes
Interesting - I agree that simple proxies should not be the only way to get to predictable model outcomes.
Tho I worry this criterion won't be well operationalized (ie. "aligned"?). My pet theory is that model predictability is likely a byproduct of training *data* properties more than anything else.
30.01.2026 19:11 · 10 likes · 0 reposts · 1 reply · 0 quotes
🥲
30.01.2026 18:44 · 2 likes · 0 reposts · 0 replies · 0 quotes
Yeah I mean that's the cost of participating in these things. In my case, he was working with an excellent producer who approached me with some great initial questions - there's no way I could have known how this would eventually materialize in the final cut of the film, and I'm ok with that.
30.01.2026 14:50 · 3 likes · 0 reposts · 0 replies · 0 quotes
AI can cause harm even if it doesn't seem to affect *you*, even if those harms aren't observable or felt in any direct way by *your* future children. Being protected from certain harms doesn't make them less real or less important to address.
Educating the public requires taking this broader view.
29.01.2026 20:12 · 37 likes · 10 reposts · 1 reply · 1 quote
And I guess, more embarrassingly, it pushes those same folks to fixate on the far-out sci-fi type narratives of a version of AI that is so powerful or so out of control that it finally does impact them, at least in ways that are visible or legible to them. I found the whole premise a bit unsettling.
29.01.2026 19:56 · 14 likes · 3 reposts · 1 reply · 0 quotes
I think with him I saw a version of a conversation I've encountered before in Silicon Valley circles - "how will this affect *me*? *My* children? *My* community?" This tends to blind folks to a broader range of issues that likely won't affect them but will affect the marginalized or society at large.
29.01.2026 19:52 · 17 likes · 2 reposts · 2 replies · 0 quotes
Me talking through AI definitions actually came from a longer exchange where I was trying to convince him that the real-world issues we see today (& w past "AI" tech) are already worth taking action on, especially since they affect the poor, PoC, ie. vulnerable populations that don't look like him.
29.01.2026 19:49 · 22 likes · 3 reposts · 1 reply · 0 quotes
It made me wonder: why are the real-world harms perpetrated by AI today not enough for some people to feel urgency? To take action? To care?
What he was hearing from corporate execs & "doomers" was scaring him, but the real-world issues Karen & I brought up didn't seem to have the same effect...
29.01.2026 19:44 · 28 likes · 8 reposts · 1 reply · 2 quotes
Interviewed for this doc years ago & have yet to see the final cut. Most of what I recall is how much Daniel had truly riled himself up - the confusion & chaos of this exaggerated boogeyman version of "AI" had excited a genuine emotional response, even as it successfully disguised AI's real terrors.
29.01.2026 19:38 · 39 likes · 12 reposts · 1 reply · 0 quotes
The spiciest parts (ie. the interdisciplinary panel discussions & provocations) are unfortunately not online, but the full talks are all recorded and posted here immediately as we go along -- highly recommended for those interested in the topic: simons.berkeley.edu/workshops/br...
13.01.2026 15:53 · 4 likes · 0 reposts · 0 replies · 0 quotes
Co-chairing this workshop at Simons this week & it's been amazing so far!
Brought together folks from ML, stats, law, sociology & beyond to discuss the messy middle between individual predictions & the actions/policy changes individuals/orgs take in response to those predictions.
13.01.2026 15:53 · 10 likes · 0 reposts · 1 reply · 1 quote
A fascinating recent development is that the ML research community -- as the earliest adopters of "AI for research" -- is at the front lines of dealing with all the problems that come with that (ie. reduced trust in results & reviewers, increased submission load, etc).
Every other field is next!
27.12.2025 19:46 · 25 likes · 4 reposts · 0 replies · 0 quotes
So we have tons of "experimental evidence" for the effectiveness of bed nets not because they are particularly impactful as an intervention, but just because they're by far the most *studied* - for reasons that are pretty much socially convenient and honestly not too far from arbitrary lol
26.12.2025 16:28 · 28 likes · 0 reposts · 2 replies · 1 quote
And the wildest part of why we "have the most statistical evidence" about malaria nets specifically is that researchers who wanted to show off their new causal estimation method or experiment design, and compare it to the Duflo study, kept going back to that same group & doing experiments w bed nets.
26.12.2025 16:25 · 12 likes · 0 reposts · 1 reply · 0 quotes
As in, they explored other interventions to test experimentally but dismissed them for reasons of pure convenience (ie. "we already have a working relationship with group A, who gives out bed nets") or ethics (ie. not ethical to randomly withhold malaria medicine from an at-risk population), etc.
26.12.2025 16:23 · 13 likes · 0 reposts · 1 reply · 0 quotes
EAs love malaria nets because it's supposedly the intervention where we have the "most statistical evidence" of its effectiveness. One kind of silly fact about this tho is that if you read Duflo's actual paper, the choice to do the experiments on bed nets as an intervention is pretty much arbitrary.
26.12.2025 16:21 · 38 likes · 6 reposts · 1 reply · 2 quotes
"He took every Advanced Placement class he could, earned a scholarship to Brown and worked at Wawa over the summer to make enough money to buy a laptop, according to his two sisters."
15.12.2025 21:56 · 2458 likes · 748 reposts · 31 replies · 21 quotes
What Grover and Good Will Hunting can tell us about the limits of artificial intelligence
First, a quick thank you to all the new subscribers to the Cognitive Resonance Substack!
One of my very early essays explored a paper by @rajiinio.bsky.social & others that centers the story of Grover and the Everything in the Whole Wide World Museum. It is striking to me that AI hyperscalers today really believe we can encapsulate all of human experience in data. I mean, good luck.
15.12.2025 19:19 · 12 likes · 4 reposts · 3 replies · 0 quotes
Just a stream of unfortunate news this week. My heart goes out to anyone impacted by this - truly tragic. 🤍
15.12.2025 15:08 · 5 likes · 0 reposts · 0 replies · 0 quotes
It's always so funny to me when people frame this kind of AI "distrust" as an unfortunate PR issue... in actuality people have very legitimate and materially supported reasons to be skeptical - trust is earned! We shouldn't strongarm people into trusting institutions & tech that don't serve them 😳
15.12.2025 02:43 · 23 likes · 6 reposts · 0 replies · 0 quotes
yay! Thanks so much for sharing these!
15.12.2025 02:25 · 2 likes · 0 reposts · 0 replies · 0 quotes
Careers at ACLU
Join our team! We're looking for committed, passionate people for open roles at the ACLU.
And we have lots of open roles at ACLU, including roles on our Technology Team: www.aclu.org/careers/
14.12.2025 01:46 · 10 likes · 8 reposts · 0 replies · 0 quotes
Trump doing this just weeks before Colorado is set to push out final amendments to SB 205 is so annoying. Just another performative roadblock to push negotiations further down the line -- and given that Colorado is named explicitly in the EO, not likely to be a coincidence either
12.12.2025 09:37 · 10 likes · 1 repost · 0 replies · 0 quotes
Yes, unfortunately
12.12.2025 08:34 · 0 likes · 1 repost · 0 replies · 0 quotes
Explore Our Job Openings
We have some pretty cool roles open at @aspendigital.bsky.social!
www.aspeninstitute.org/about/our-ca...
12.12.2025 06:34 · 2 likes · 1 repost · 0 replies · 0 quotes
bsky.app/profile/raji...
12.12.2025 02:41 · 3 likes · 0 reposts · 0 replies · 0 quotes
US CAISI is hiring -- the internal govt name for the role is "IT Specialist" but it is effectively a research scientist role!
Salary is $120,579 to $195,200 per year, and you get to work on AI evaluation within government agencies!
Job posting (**closes EOD 12/28/2025**): lnkd.in/exJgkqr5
11.12.2025 22:01 · 24 likes · 10 reposts · 1 reply · 1 quote