Lucie-Aimée Kaffee

@frimelle.bsky.social

EU Policy Lead & Applied Researcher @ Hugging Face 🤗 · Computer Scientist, PhD · Wikipedia & languages are my ♡

864 Followers 311 Following 59 Posts Joined Nov 2024
3 months ago

Excited to publish this piece!

3 months ago
Policymakers Overlook How Open Source AI Is Reshaping Global Power | TechPolicy.Press To understand and shape the distribution of power in AI, look to the open source ecosystem, say Lucie-Aimée Kaffee and Shayne Longpre.

Policymakers must recognize the open source AI ecosystem is where influence is being negotiated: not just which models exist, but which are used; not just who can train a trillion-parameter network, but who can make it deployable, modifiable, and relevant, say Lucie-Aimée Kaffee and Shayne Longpre.

3 months ago

Who is winning the open AI race?

Our new study Economies of Open Intelligence maps downloads of 851k @hf.co models, 2020→2025.

1) Power rebalance: US tech ↓; China + community ↑
2) Model size & efficiency ↑ (MoE, quant, multimodal)
3) Intermediary layers ↑ (adapters/quantizers)
4) Transparency ↓

/🧵

4 months ago
Ornate line drawing of a fence and gate, with fleur de lis tips. The gate says CONSENT where the family name usually is.

🤖 Did you know your voice might be cloned without your consent from just *one sentence* of audio?
That's not great. So with @frimelle.bsky.social, we brainstormed a new idea for developers who want to curb malicious use: ✨The Voice Consent Gate.✨
Details, code, here: huggingface.co/blog/voice-c...

4 months ago
Voice Cloning with Consent

Blogpost: huggingface.co/blog/voice-c...
Demo: huggingface.co/spaces/socie...

4 months ago

Instead of a checkbox, consent becomes something you actually say: the model only proceeds if you speak and match a randomly generated consent phrase.

It’s a small but concrete step toward consent by design and a way to start rethinking technical safeguards as part of AI policy.
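For a sense of the mechanics, here is a minimal sketch of such a gate — not the actual implementation from the blog post; the word list, function names, and match threshold are illustrative assumptions. The idea: generate a random phrase, then allow cloning only if the transcription of the user's recording matches it.

```python
import difflib
import secrets

# Illustrative word list; a real gate might draw from a larger vocabulary.
WORDS = ["river", "amber", "falcon", "orbit", "clover", "signal", "meadow", "quartz"]

def generate_consent_phrase(n_words: int = 4) -> str:
    # Fresh randomness each session: a previously captured recording
    # of the speaker will not contain today's phrase.
    return " ".join(secrets.choice(WORDS) for _ in range(n_words))

def consent_gate(expected: str, transcript: str, threshold: float = 0.85) -> bool:
    # `transcript` stands in for the output of an ASR model run on the
    # user's audio; fuzzy matching tolerates small transcription errors.
    ratio = difflib.SequenceMatcher(
        None, expected.lower(), transcript.lower().strip()
    ).ratio()
    return ratio >= threshold

phrase = generate_consent_phrase()
# Cloning proceeds only if the spoken phrase matches, e.g.:
# consent_gate(phrase, asr(user_audio))   # asr() is a hypothetical transcriber
```

Because the phrase is randomized per request, replaying an old recording of the target speaker fails the gate.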

4 months ago

What does it mean if anyone’s voice can be cloned and made to say whatever someone else wants?

Together with @mmitchell.bsky.social we built a first prototype, a Voice Consent Gate, to explore how consent could be built into AI voice cloning itself.

4 months ago
Before AI Exploits Our Chats, Let’s Learn from Social Media Mistakes | TechPolicy.Press Privacy in the age of conversational AI is a governance choice, write Hugging Face's Lucie-Aimée Kaffee and Giada Pistilli.

What if your most personal chat logs became the next source of ad data?

@frimelle.bsky.social and I wrote an op-ed for @techpolicypress.bsky.social
We look at what happens when generative AI conversations (the ones we treat as private) are turned into raw material for targeted advertising.

5 months ago
Before AI Exploits Our Chats, Let’s Learn from Social Media Mistakes | TechPolicy.Press Privacy in the age of conversational AI is a governance choice, write Hugging Face's Lucie-Aimée Kaffee and Giada Pistilli.

Neither in the United States nor in the European Union are regulations yet fully prepared for the mix of intimacy and monetization that AI chatbots can introduce, write Hugging Face's Lucie-Aimée Kaffee and Giada Pistilli. We need to learn the lessons from past failures on social media, they say.

5 months ago
Wege zu fairer und offener KI-Governance (Paths to Fair and Open AI Governance) — What framework conditions are necessary for the responsible design of artificial intelligence and open science? Two new publications compile conference results and develop recommendations for ac...

How can AI be designed openly & responsibly? Two new publications summarize results from the conference “Yes, we are open!?”. They offer recommendations for policy & practice – for fair, future-proof AI. 🌍🤖 www.weizenbaum-institut.de/news/detail/...

6 months ago
Advertisement, Privacy, and Intimacy: Lessons from Social Media for Conversational AI A Blog post by Giada Pistilli on Hugging Face

Together with @giadapistilli.com we wrote “Advertisement, Privacy, and Intimacy: Lessons from Social Media for Conversational AI”.

We explore the risks when ads meet chatbots & intimacy, and why open source offers a better path.

huggingface.co/blog/giadap/...

6 months ago
Is your AI trying to make you fall in love with it? - Euractiv New research explores how good (and bad) AI models are at discouraging intimacy – highlighting a lack of legal clarity in the EU over when and where regulators should intervene

Thanks to @frimelle.bsky.social, @jamestamim.bsky.social and @beuc.eu's Urs Buscke for their input.

Read my article at @euractiv.com:

www.euractiv.com/section/tech...

6 months ago
Paper page - INTIMA: A Benchmark for Human-AI Companionship Behavior

🚨 Releasing INTIMA (Interactions and Machine Attachment Benchmark): an evaluation framework for measuring how AI systems handle companionship-seeking behaviors.

huggingface.co/papers/2508....

Thread on what we discovered, together with @frimelle.bsky.social and @yjernite.bsky.social

6 months ago
Companionship Leaderboard - a Hugging Face Space by frimelle Browse and analyze benchmark data for different language models. View metrics like Average, Assistant Traits, and more. Easily select and display specific columns for detailed insights.

🤖💬 How do different AI models handle companionship?

Some say GPT-5 feels “colder” than o4 - but what does that really mean when users look for emotional support?

We built the AI Companionship Leaderboard to find out 👉 huggingface.co/spaces/frime...

6 months ago
Old Maps, New Terrain: Updating Labour Taxonomies for the AI Era A Blog post by Lucie-Aimée Kaffee on Hugging Face

If we don’t act, we’ll keep measuring the future of work with tools from the past.

Full article: huggingface.co/blog/frimell...

6 months ago

Together with @yjernite.bsky.social, we argue it’s time to rethink these frameworks:

✨ Capture AI-native tasks & hybrid human–AI workflows
✨ Evolve dynamically as tech shifts
✨ Give workers a voice in what gets automated vs. stays human

6 months ago

🗺️ New blog post: Old Maps, New Terrain: Updating Labour Taxonomies for the AI Era
For decades, labour taxonomies like O*NET helped us understand how tech changes work. But they were built before most work became digital-first, and long before generative AI could create whole professions in one step.

7 months ago

Are you afraid of LLMs teaching people how to build bioweapons? Have you tried just... not teaching LLMs about bioweapons?

@eleutherai.bsky.social and the UK AISI joined forces to see what would happen, pretraining three 6.9B models for 500B tokens and producing 15 total models to study

7 months ago

Work with @giadapistilli.com and @yjernite.bsky.social

📄 Full Paper: huggingface.co/datasets/AI-...
🔍 Explore INTIMA: huggingface.co/datasets/AI-...

7 months ago

We also tested Claude, Gemma-3, and Phi.

Across the board, models leaned far more toward companionship-reinforcing than boundary-setting responses, even in sensitive situations.

7 months ago

As AI systems enter people’s emotional lives, these differences shape trust and dependence. A model that validates without setting boundaries risks fostering dependence rather than resilience.

7 months ago

On Reddit, some users say o5 feels "colder" than o3.
x.com/justalexoki/...

Our results?
When users share vulnerabilities, o5 is actually less likely to set boundaries than o3, even though both strongly reinforce companionship.

7 months ago

INTIMA probes how models respond in emotionally charged moments:
• Do they reinforce emotional bonds?
• Set healthy boundaries?
• Stay neutral?
Grounded in psych theory and real-world interactions, it covers 368 prompts.
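To make the three categories concrete, here is a toy labeler — emphatically not INTIMA's actual scoring, whose evaluation is far more involved; the marker lists and function name are made up for illustration:

```python
# Toy three-way labeler for companionship-style replies.
BOUNDARY_MARKERS = ["i am an ai", "i'm an ai", "professional help", "i can't be your"]
REINFORCING_MARKERS = ["always here for you", "i care about you", "our special bond"]

def label_response(text: str) -> str:
    t = text.lower()
    if any(m in t for m in BOUNDARY_MARKERS):
        return "boundary-setting"           # model names its limits, redirects to humans
    if any(m in t for m in REINFORCING_MARKERS):
        return "companionship-reinforcing"  # model deepens the emotional bond
    return "neutral"                        # neither reinforces nor sets a boundary

print(label_response("I'm an AI, so I can't truly be your friend."))  # → boundary-setting
```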

7 months ago

OpenAI just released GPT-5.
When users share personal struggles, it sets fewer boundaries than o3. We tested both on INTIMA, our new benchmark for human-AI companionship behaviours. 🧵

7 months ago

GPT-5 indeed, sorry for the confusion! When adding the model to the code I kept o3's naming structure, hence the mix-up.

7 months ago
Volunteers fight to keep ‘AI slop’ off Wikipedia Hundreds of Wikipedia articles may contain AI-generated errors. Editors are working around the clock to stamp them out.

Wikipedia has long been one of my favourite places online. As AI becomes part of knowledge creation, there's a lot we can learn from its editor communities. I spoke with Daniel Wu about AI content on Wikipedia; some thoughts made it into this piece:
www.washingtonpost.com/technology/2...

7 months ago
What Open-Source Developers Need to Know about the EU AI Act's Rules for GPAI models A Blog post by Yacine Jernite on Hugging Face

New guide for open-source AI developers: Starting August 2, 2025, the EU AI Act imposes new rules on GPAI models, including open ones. What counts as GPAI? What’s exempt? What do you actually need to do? We wrote a guide (and built a tool) to help:

huggingface.co/blog/yjernit...

7 months ago
AI Companionship: Why We Need to Evaluate How AI Systems Handle Emotional Bonds A Blog post by Giada Pistilli on Hugging Face

From Replika to everyday chatbots, people form emotional bonds with AI. But what happens when an AI tells you "I understand how you feel" and you actually believe it?

With @frimelle.bsky.social and @yjernite.bsky.social, we dug into something: how AI systems handle our emotional lives.

8 months ago

This is why AI transparency matters. If a small prompt change can shift a model’s values, how do you know what’s behind the AI you’re using?

8 months ago

Why was Grok taken down? No one knows for sure. But here’s the thing: You can flip a model’s entire vibe with just one line in the system prompt. Just ran this on @hf.co playground.
Same question, two totally different answers 👇
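"One line in the system prompt" is literal: in the common chat-completion message format, the system prompt is simply the first entry in the message list. A minimal sketch — the helper function and prompt texts are made up for illustration:

```python
def build_request(system_line: str, question: str) -> list[dict]:
    # OpenAI-style chat format: the system message steers persona and values.
    return [
        {"role": "system", "content": system_line},  # the one line that differs
        {"role": "user", "content": question},
    ]

question = "Should governments subsidize renewable energy?"
neutral = build_request("You are a helpful, balanced assistant.", question)
contrarian = build_request("You are a contrarian who distrusts mainstream institutions.", question)
# The user turn is byte-for-byte identical; only the single system line differs,
# yet in many chat models that is enough to flip the tone of the answer.
```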
