📖 Read my article (and join 92,000+ subscribers) here: www.luizasnewsletter.com/p/the-great-...
🚨 People are starting to realize that they are fully UNPROTECTED from the negative consequences of AI.
Now is the time for countries, individually and collectively, to democratically decide how they want to regulate AI.
In a few years, it could be too late.
My full article:
Many of the fantasies about the future of AI ignore the basic fact that people crave simple, deeply human things.
No agentic web, AI companion, or personal intelligence will replace our need for acceptance, belonging, love, etc.
We might end up fully rejecting the "age of AI."
How ironic that AI companies need to cite "human experts" to increase the perceived value of their products.
This is the whole history of the AI industry, and why current 'AI replacement' forecasts are exaggerated.
Humans value human expertise (and it should be compensated).
By the way, these laws have already arrived in NY: www.luizasnewsletter.com/p/new-yorks-...
It might sound CRAZY today, but in a few years it will be clear that if we consider work and human cognitive development important, we'll have to protect them with laws against 'AI intrusion.'
Believe me, the pro-human movement is just starting.
Join the club: www.aitechprivacy.com/ai-book-club
🚨 Our AI Book Club has reached 5,400+ members and 34 books. To dive deeper into AI's challenges, start here:
📖 Read my article (and join 91,900+ subscribers) here: www.luizasnewsletter.com/p/new-yorks-...
🚨 Most people haven't realized it yet, but pro-human AI laws are already here.
In a sea of inaction and deregulation, New York is surprisingly taking the lead.
My article below:
🚨 Here's the vibe coding you ordered:
🚨 BREAKING: According to Axios, the Trump administration is preparing an Executive Order prohibiting the use of Anthropic's models by the federal government.
It's the beginning of the end of language's supremacy in AI (and a great moment for European innovation!)
🚨 BREAKING: OpenAI delays ChatGPT "adult mode"
(Of course, there would be too many scandals at the same time. It will wait until after the IPO.)
Strangely, both Anthropic and OpenAI seem to operate like cults.
Criticizing their AI practices invites personal attacks, blocks, and angry, apocalyptic replies.
That's a sad state of affairs at a time when serious AI governance efforts should be a priority.
1. My article on Claude's constitution: www.luizasnewsletter.com/p/claudes-st...
2. My article on the Adam Raine case (suicide): www.luizasnewsletter.com/p/horrifying...
3. My article "Against AI Idolatry": www.luizasnewsletter.com/p/against-ai...
🚨 A reminder that anthropomorphizing AI can be dangerous, and it has already led to mental health harm and suicides.
It's particularly risky when minors and vulnerable groups are involved.
Claude's new "constitution" fosters AI anthropomorphism.
My articles below:
📖 Read my article (and join my newsletter's 91,700+ subscribers) here: www.luizasnewsletter.com/p/ais-accele...
🚨 AI's Acceleration Paradox
The AI industry's acceleration narrative ignores basic facts about the human body, the human mind, human behavior, and human societies.
It might drag us to a dystopian future.
My full article:
To receive my article, join 91,700+ subscribers here: www.luizasnewsletter.com
🚨 Unpopular opinion:
The premises behind this bill are correct (increase the accountability of AI companies, support AI liability, protect people).
The execution is wrong (it shouldn't ban but should demand strict guardrails in regulated areas).
More in my newsletter.
🚨 BREAKING: Dario Amodei apologizes.
"Anthropic has much more in common with the Department of War than we have differences."
🚨 The Pentagon has officially designated Anthropic and its AI products a "supply-chain risk."
It's the first time in U.S. history that an American company has received that designation. 😱
AI might be inevitable at this point, but the way we govern it is a conscious collective choice.
We must exercise this choice.
📖 Join the club to receive invites to the meetings: aitechprivacy.com/ai-ethics-pa...
📖 Read the paper: arxiv.org/abs/2602.12476
🚨 "Not a Silver Bullet for Loneliness: How Attachment and Age Shape Intimacy with AI Companions"
This is the 3rd selected paper of our AI Ethics Paper Club. Discussion on March 31 at 5 pm UK.
I need 6 commentators - DM me if you are interested.
JOIN the club below (it's free):
- If you want to volunteer to be one of the 6 commentators for this meeting, please send me a direct message with your email ASAP.
- Joining the club is a great way to engage in critical AI governance discussions, learn, and help foster a pro-human future.
- Register for the AI Ethics Paper Club to receive the invitation for this and upcoming meetings (it's free): aitechprivacy.com/ai-ethics-pa...
- Participants are expected to read the paper before our discussion. Link: arxiv.org/abs/2602.12476
🚨 BREAKING: Anthropic is back in talks with the Pentagon
"Amodei told the audience that Anthropic is still talking to the Pentagon 'to try to de-escalate the situation and come to some agreement that works for us and works for them.'"
Was it all PR?