
aicoffeebreak.bsky.social

@aicoffeebreak.bsky.social

📺 ML YouTuber http://youtube.com/AICoffeeBreak 👩‍🎓 PhD student in Computational Linguistics @ Heidelberg University | Impressum: https://t1p.de/q93um

181 Followers  |  18 Following  |  65 Posts  |  Joined: 31.01.2024

Posts by aicoffeebreak.bsky.social (@aicoffeebreak.bsky.social)


🧠 Do Vision & Language Decoders Use Images and Text Equally?
In our latest episode, we speak with Letitia Parcalabescu about her ICLR 2025 paper examining how vision–language *decoder* models use images and text, and how self-consistent their explanations really are. (1/8🧡)

18.02.2026 17:37 — 👍 2    🔁 1    💬 1    📌 1

If you love @aicoffeebreak.bsky.social, this one's for you: Letitia Parcalabescu is our next guest on the #WiAIR_podcast!

Stay tuned for our conversation:
🎬 YouTube: www.youtube.com/@WomeninAIRe...

06.02.2026 17:01 — 👍 2    🔁 1    💬 1    📌 0
What's up with Google's new VaultGemma model? – Differential Privacy explained

LLMs can memorize even a phone number seen once in training. 🔒
Google's VaultGemma fixes that: it is the first open-weight LLM trained from scratch with differential privacy, so rare secrets leave no trace.
☕ New video explaining Differential Privacy through VaultGemma 👇
🎥 youtu.be/UwX5zzjwb_g

02.11.2025 13:28 — 👍 6    🔁 2    💬 0    📌 0
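The core mechanism behind differentially private training, DP-SGD, can be sketched in a few lines. This is a toy illustration, not VaultGemma's actual training code; the function and variable names are made up:

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm=1.0, noise_multiplier=1.0, seed=0):
    """One differentially private gradient update (DP-SGD sketch):
    clip each example's gradient so no single training example (say,
    a phone number seen once) can dominate the update, then add
    Gaussian noise calibrated to the clipping bound."""
    rng = np.random.default_rng(seed)
    clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
               for g in per_example_grads]
    mean_grad = np.mean(clipped, axis=0)
    sigma = noise_multiplier * clip_norm / len(per_example_grads)
    return mean_grad + rng.normal(0.0, sigma, size=mean_grad.shape)

# An outlier example's gradient gets clipped before it can leave a trace.
grads = [np.array([30.0, 40.0]), np.array([0.1, 0.1])]
update = dp_sgd_step(grads)
```

The guarantee comes from the combination of clipping (bounded per-example sensitivity) and noise; real implementations additionally track the cumulative privacy budget (ε, δ) across all training steps.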
Diffusion Models and Flow-Matching explained side by side

We explain diffusion models and flow-matching models side by side. Flow-matching models are the new generation of AI image generators and are quickly replacing diffusion models: they keep everything diffusion did well but make it faster, smoother, and deterministic.

🎥 youtu.be/firXjwZ_6KI

19.10.2025 12:20 — 👍 2    🔁 1    💬 0    📌 0
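The determinism mentioned above comes from how flow matching defines its training target. A minimal sketch, assuming the straight-line probability path used in conditional flow matching / rectified flow (names are illustrative):

```python
import numpy as np

def flow_matching_pair(x0, x1, t):
    """Conditional flow matching (sketch): along the straight-line path
    x_t = (1 - t) * x0 + t * x1 from noise x0 to data x1, the regression
    target for the network is the constant velocity v = x1 - x0.
    Sampling then just integrates the learned velocity field (an ODE
    solve), which is why generation is deterministic."""
    x_t = (1.0 - t) * x0 + t * x1
    v_target = x1 - x0
    return x_t, v_target

x0 = np.zeros(4)                          # noise sample
x1 = np.array([1.0, 2.0, 3.0, 4.0])       # data sample
x_t, v = flow_matching_pair(x0, x1, t=0.5)
```

The model learns to predict v from (x_t, t); diffusion instead predicts noise and samples with a stochastic reverse process, which is where the speed and smoothness difference comes from.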
Energy-Based Transformers explained | How EBTs and EBMs work

Works for image and video transformers too!
🎥 youtu.be/18Fn2m99X1k

21.09.2025 12:48 — 👍 5    🔁 0    💬 0    📌 0

Ever wondered how Energy-Based Models (EBMs) work and how they differ from normal neural networks?
β˜•οΈ We go over EBMs and then dive into the Energy-Based Transformers paper to make LLMs that refine guesses, self-verify, and could adapt compute to problem difficulty.

21.09.2025 12:48 β€” πŸ‘ 22    πŸ” 6    πŸ’¬ 3    πŸ“Œ 0
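The "refine guesses" idea can be illustrated with a toy energy function. This is a sketch of energy-based inference in general, not the paper's architecture; all names are invented:

```python
import numpy as np

def refine(y_init, energy_grad, steps=50, lr=0.1):
    """Energy-based inference (sketch): rather than emitting an answer in
    one forward pass, start from a guess and descend the energy landscape.
    Harder inputs can simply get more descent steps (adaptive compute),
    and the final energy value doubles as a self-verification score."""
    y = np.array(y_init, dtype=float)
    for _ in range(steps):
        y = y - lr * energy_grad(y)
    return y

# Toy energy E(y) = ||y - target||^2, whose gradient is 2 * (y - target).
target = np.array([1.0, -2.0])
answer = refine(np.zeros(2), lambda y: 2.0 * (y - target))
```

In an EBM the energy function is itself a learned network scoring (input, candidate answer) pairs; the toy quadratic just makes the refinement dynamics visible.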

ACL 2025, the world’s largest NLP conference with almost 2,000 papers presented, just took place in Vienna! 🎓✨ Here is a quick snapshot of the event via a short interview with one of the authors whose work caught my attention.
🎥 Watch: youtu.be/GBISWggsQOA

14.09.2025 11:49 — 👍 3    🔁 2    💬 0    📌 0
Puppets of a Digital Brain – SciFilmIt

Check it out if you’re curious or feel like supporting thoughtful science storytelling; there's a “text trailer” on these pages 📜✨:
🔗 scifilmit.com/puppetsofadi...

05.08.2025 18:36 — 👍 0    🔁 0    💬 0    📌 0

My friend Vivi Nastase is working on a short science communication film called "Puppets of a Digital Brain". It aims to explain the tech behind AI chatbots (the good, the bad, the environmental) in an accessible, visual way.
💡 GoFundMe: gofund.me/453ed662

05.08.2025 18:36 — 👍 3    🔁 1    💬 2    📌 0
Greedy? Random? Top-p? How LLMs Actually Pick Words – Decoding Strategies Explained

In this video, we break down each method and show how the same model can sound dull, brilliant, or unhinged – just by changing how it samples.
🎥 Watch here: youtu.be/o-_SZ_itxeA

03.08.2025 12:06 — 👍 0    🔁 0    💬 0    📌 0

How do LLMs pick the next word? They don’t choose words directly: they only output word probabilities. 📊 Greedy decoding, top-k, top-p, min-p are methods that turn these probabilities into actual text.

03.08.2025 12:06 — 👍 3    🔁 2    💬 1    📌 0
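As a sketch of how those methods differ (illustrative code; real decoders work on logits over tens of thousands of tokens):

```python
import numpy as np

def top_p_sample(probs, p=0.75, seed=0):
    """Nucleus (top-p) sampling: keep the smallest set of highest-probability
    tokens whose cumulative mass reaches p, renormalize, then sample."""
    rng = np.random.default_rng(seed)
    order = np.argsort(probs)[::-1]                     # most probable first
    cutoff = np.searchsorted(np.cumsum(probs[order]), p) + 1
    kept = order[:cutoff]
    return int(rng.choice(kept, p=probs[kept] / probs[kept].sum()))

probs = np.array([0.5, 0.3, 0.15, 0.05])   # model's output distribution
greedy = int(np.argmax(probs))             # greedy: always token 0
sampled = top_p_sample(probs, p=0.75)      # nucleus keeps only tokens 0 and 1
```

Greedy maximizes each step and tends to sound dull and repetitive; pure sampling occasionally picks the long low-probability tail (the "unhinged" mode); top-p trims that tail while keeping variety.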

I'll co-organise the "Multilingualism: from data crawling to evaluation" social / birds-of-a-feather session. It's on the 29th at 4PM. Do come by if you're at ACL Vienna! :)

27.07.2025 15:24 — 👍 1    🔁 0    💬 0    📌 0

I'm also at ACL, would be lovely to catch up!

27.07.2025 12:21 — 👍 2    🔁 0    💬 1    📌 0

Right now attending the synthetic data generation tutorial, which is packed; it turns out that "Data is the new source code."

#ACL2025NLP

27.07.2025 12:20 — 👍 2    🔁 0    💬 0    📌 0

Excited to be at ACL 2025 in Vienna this week 🇦🇹 #ACL2025
I’m always up for a chat about reasoning models, NLE faithfulness, synthetic data generation, or the joys and challenges of explaining AI on YouTube.

If you're around, let’s connect!

27.07.2025 12:20 — 👍 8    🔁 1    💬 2    📌 0
Dancing with Right & Wrong? - Marsilius-Kolleg: An interdisciplinary symposium on knowing, doing and not being sure

📅 Looking forward to the discussion and to learning from fellow panelists and participants. If you're around Heidelberg, join us!

www.marsilius-kolleg.uni-heidelberg.de/de/dancing-w...

07.07.2025 17:03 — 👍 1    🔁 1    💬 0    📌 0

🔨 What recent advancements in AI were particularly impactful for science?
🔍 How do we calibrate trust in current AI systems?
🧪 If AI takes over more of the scientific process… what’s left for us humans?

07.07.2025 17:03 — 👍 2    🔁 0    💬 1    📌 0

🤖 Can we trust AI in science?
I'm excited to be speaking at the final event of the Young Marsilius Fellows 2025, themed "Dancing with Right & Wrong?" – a title that feels increasingly relevant these days.
I'll be joining a panel on "(How) can we trust AI in science?" to discuss questions like:

07.07.2025 17:03 — 👍 2    🔁 1    💬 1    📌 0
Human-like object concept representations emerge naturally in multimodal large language models - Nature Machine Intelligence Multimodal large language models are shown to develop object concept representations similar to those of humans. These representations closely align with neural activity in brain regions involved in o...

Yet another paper finding similarities between human concepts and AI concepts. www.nature.com/articles/s42...

20.06.2025 15:25 — 👍 0    🔁 0    💬 0    📌 0

We train AI on human-selected or -generated data (yes, even taking a photo is concept selection – we capture what we find interesting; text even more so, expressing our conceptualisation of the world). Then we’re surprised when the AI's concepts and representations are similar to ours. 🤷‍♀️

20.06.2025 15:25 — 👍 1    🔁 0    💬 1    📌 0

I'm very excited to finally share the main work of my PhD!
We explored the evolutionary dynamics of gene regulation and expression during gonad development in primates. We cover among others: X chromosome dynamics (incl. in a developing XXY testis), gene regulatory networks and cell type evolution.

20.06.2025 08:35 — 👍 14    🔁 5    💬 1    📌 0
AlphaEvolve: Using LLMs to solve Scientific and Engineering Challenges | AlphaEvolve explained

💡 AlphaEvolve is a new AI system that doesn’t just write code, it evolves it. It uses LLMs and evolutionary search to make scientific discoveries.
We explain how AlphaEvolve works and the evolutionary strategies behind it (like MAP-Elites and island-based population methods).
📺 youtu.be/Z4uF6cVly8o

19.06.2025 13:00 — 👍 2    🔁 0    💬 0    📌 0
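A minimal sketch of the MAP-Elites idea mentioned above. This is a generic toy, not AlphaEvolve's implementation; the problem and all names are invented:

```python
import random

def map_elites(evaluate, descriptor, mutate, init, iters=300, seed=0):
    """MAP-Elites (sketch): maintain an archive with one 'elite' per cell
    of a behavior-descriptor grid. Mutating random elites and keeping the
    per-cell best preserves a diverse population of good solutions instead
    of collapsing the search onto a single optimum."""
    random.seed(seed)
    archive = {descriptor(init): (evaluate(init), init)}
    for _ in range(iters):
        _, parent = random.choice(list(archive.values()))
        child = mutate(parent)
        cell = descriptor(child)
        fitness = evaluate(child)
        if cell not in archive or fitness > archive[cell][0]:
            archive[cell] = (fitness, child)
    return archive

# Toy task: maximize -x^2; the descriptor buckets solutions by sign of x.
archive = map_elites(
    evaluate=lambda x: -x * x,
    descriptor=lambda x: 'pos' if x >= 0 else 'neg',
    mutate=lambda x: x + random.gauss(0.0, 1.0),
    init=0.5,
)
```

In AlphaEvolve the "solutions" are programs proposed by an LLM and fitness comes from automated evaluation, but the archive mechanics are the same kind of quality-diversity search.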

💡 Participation includes talks, workshops, and lots of cross-disciplinary exchange, with accommodation and meals covered (fee: 100€).
If this sounds like your thing, the application deadline is June 27!

31.05.2025 11:47 — 👍 0    🔁 0    💬 1    📌 0

πŸ“ Heidelberg, September 21–27, 2025
πŸ’¬ Language: English
🎯 Open to PhD students & advanced Master’s students from all disciplines working on AI-related research

31.05.2025 11:47 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
AI and Human Values - Marsilius-Kolleg

👉 www.marsilius-kolleg.uni-heidelberg.de/de/studium/i...
🧠 This interdisciplinary event brings together researchers from across fields (computer science, linguistics, philosophy, law, medicine, theology) to explore the normative foundations of generative AI, and how values are embedded in its design.

31.05.2025 11:47 — 👍 0    🔁 0    💬 1    📌 0

Excited to share that I’ll be joining the Summer School “AI and Human Values” this September at the Marsilius-Kolleg of Heidelberg University as a speaker. I'll be giving an introduction to how large language models actually work, before the summer school dives deeper into broader implications.

31.05.2025 11:47 — 👍 3    🔁 2    💬 1    📌 0
Token-Efficient Long Video Understanding for Multimodal LLMs | Paper explained

Long videos are a nightmare for language models: too many tokens, slow inference. ☠️
We explain STORM ⛈️, a new architecture that improves long-video LLMs using Mamba layers and token compression. It reaches better accuracy than GPT-4o on benchmarks with up to 8× higher efficiency.

📺 youtu.be/uMk3VN4S8TQ

18.05.2025 12:02 — 👍 5    🔁 2    💬 0    📌 0
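Token compression of this kind can be sketched very simply. This is a generic temporal-pooling toy in the spirit of STORM, not its actual module:

```python
import numpy as np

def temporal_pool(tokens, factor=4):
    """Temporal token compression (sketch): average each group of `factor`
    consecutive frame tokens before they reach the LLM, shrinking the
    sequence length (and attention cost) by `factor`."""
    t, d = tokens.shape
    t = t - (t % factor)                 # drop a ragged tail for simplicity
    return tokens[:t].reshape(t // factor, factor, d).mean(axis=1)

frame_tokens = np.arange(32, dtype=float).reshape(16, 2)   # 16 tokens, dim 2
pooled = temporal_pool(frame_tokens)                       # 4 tokens remain
```

The point of running Mamba-style temporal layers before compression is that each token already carries context from its neighbors, so the pooled sequence loses less information than naive frame dropping would.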

Thank you! I post here as frequently as on Twitter. 🙈 I'm doing videos once a month now.

17.05.2025 06:59 — 👍 1    🔁 0    💬 1    📌 0

Follow @aicoffeebreak.bsky.social!! Letitia is very effective at communicating research papers in just a few minutes! Perfect for your coffee break. 😉

17.05.2025 00:51 — 👍 2    🔁 2    💬 0    📌 0
4-Bit Training for Billion-Parameter LLMs? Yes, Really.

We all know quantization works at inference time, but researchers successfully trained a 13B LLaMA 2 model using FP4 precision (only 16 values per weight!). 🤯
We break down how it works. If quantization and mixed-precision training sound mysterious, this’ll clear it up.
📺 youtu.be/Ue3AK4mCYYg

18.04.2025 12:11 — 👍 2    🔁 0    💬 1    📌 0
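What "16 values per weight" means can be illustrated with a simulated quantizer. This is a uniform-grid toy; real FP4 uses a non-uniform floating-point grid, and training at that precision additionally needs a higher-precision master copy of the weights plus a differentiable rounding surrogate for the backward pass:

```python
import numpy as np

def fake_quantize_4bit(w):
    """Simulated 4-bit quantization (sketch): scale the tensor, snap each
    weight to the nearest of 16 integer levels, and scale back. The
    rounding error is bounded by half a quantization step."""
    scale = np.abs(w).max() / 7.0
    return np.clip(np.round(w / scale), -8, 7) * scale

weights = np.array([0.9, -0.31, 0.02, -0.74])
quantized = fake_quantize_4bit(weights)
max_error = float(np.abs(weights - quantized).max())
```

With only 16 levels, the per-weight error is large compared to FP16, which is why it is surprising that billion-parameter training still converges.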