Do Vision & Language Decoders Use Images and Text Equally?
In our latest episode, we speak with Letitia Parcalabescu about her ICLR 2025 paper examining how vision–language *decoder* models use images and text – and how self-consistent their explanations really are. (1/8 🧵)
18.02.2026 17:37
If you love @aicoffeebreak.bsky.social, this one's for you – Letitia Parcalabescu is our next guest on the #WiAIR_podcast!
Stay tuned for our conversation:
🎬 YouTube: www.youtube.com/@WomeninAIRe...
06.02.2026 17:01
YouTube video by AI Coffee Break with Letitia
What's up with Google's new VaultGemma model? β Differential Privacy explained
LLMs can memorize even a phone number seen once in training.
Google's VaultGemma fixes that: it's the first open-weight LLM trained from scratch with differential privacy, so rare secrets leave no trace.
→ New video explaining Differential Privacy through VaultGemma
🎥 youtu.be/UwX5zzjwb_g
02.11.2025 13:28
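The mechanism behind differentially private training (as used for VaultGemma) is DP-SGD: clip each example's gradient, then add calibrated Gaussian noise so no single training example leaves a trace. A minimal sketch of that core step — my own illustration, not VaultGemma's implementation:

```python
import numpy as np

def dp_noisy_gradient(per_example_grads, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """Core DP-SGD step: clip each per-example gradient to clip_norm,
    sum, add Gaussian noise scaled to the clipping bound, and average."""
    rng = rng or np.random.default_rng(0)
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down any gradient whose L2 norm exceeds clip_norm.
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    total = np.sum(clipped, axis=0)
    # Noise proportional to the clipping bound hides any single example's contribution.
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    return (total + noise) / len(per_example_grads)
```

The clipping bounds how much any one example (say, a memorized phone number) can move the weights; the noise then masks even that bounded contribution.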
Diffusion Models and Flow-Matching explained side by side
We explain diffusion models and flow-matching models side by side. Flow-matching models are the new generation of AI image generators, quickly replacing diffusion models: they take everything diffusion did well, but make it faster, smoother, and deterministic.
🎥 youtu.be/firXjwZ_6KI
19.10.2025 12:20
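The key difference: instead of learning to undo noise step by step, flow matching regresses a velocity field along a straight path from noise to data. A toy sketch of how the training targets are built — my illustration, not code from the video:

```python
import numpy as np

def flow_matching_targets(x0, x1, t):
    """Build the point x_t on the straight noise->data path and the
    constant velocity a flow-matching model is trained to predict.

    x0: noise samples, x1: data samples, t: times in [0, 1], one per batch item.
    """
    t = np.asarray(t, dtype=float).reshape(-1, 1)  # broadcast over the batch
    x_t = (1.0 - t) * x0 + t * x1                  # linear interpolation path
    v_target = x1 - x0                             # target velocity dx_t/dt
    return x_t, v_target
```

A network v(x_t, t) trained with MSE against v_target then generates samples deterministically by integrating dx/dt = v(x, t) from t=0 to t=1 — which is where the speed and determinism come from.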
Energy-Based Transformers explained | How EBTs and EBMs work
Works for image and video transformers too!
🎥 youtu.be/18Fn2m99X1k
21.09.2025 12:48
Ever wondered how Energy-Based Models (EBMs) work and how they differ from normal neural networks?
We go over EBMs, then dive into the Energy-Based Transformers paper, which proposes LLMs that refine their guesses, self-verify, and could adapt compute to problem difficulty.
21.09.2025 12:48
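The "refine guesses" idea can be shown with the simplest possible energy-based setup: instead of producing an answer in one forward pass, the model assigns an energy to candidate answers and inference descends that energy. A toy sketch with a hand-written quadratic energy — my own illustration, not the paper's architecture:

```python
import numpy as np

def refine_guess(energy_grad, y0, steps=100, lr=0.1):
    """Basic EBM inference loop: refine a prediction y by gradient
    descent on an energy function E(y)."""
    y = np.asarray(y0, dtype=float)
    for _ in range(steps):
        y = y - lr * energy_grad(y)   # step downhill in energy
    return y

# Toy energy E(y) = ||y - target||^2, whose gradient is 2 * (y - target).
target = np.array([1.0, -2.0])
grad = lambda y: 2.0 * (y - target)
```

Running more descent steps spends more compute for a better answer — that is the sense in which EBMs could adapt compute to problem difficulty.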
ACL 2025, the world's largest NLP conference with almost 2,000 papers presented, just took place in Vienna! ✨ Here is a quick snapshot of the event via a short interview with one of the authors whose work caught my attention.
🎥 Watch: youtu.be/GBISWggsQOA
14.09.2025 11:49
Puppets of a Digital Brain – SciFilmIt
Check it out if you're curious or feel like supporting thoughtful science storytelling; there's a "text trailer" on these pages ✨:
🔗 scifilmit.com/puppetsofadi...
05.08.2025 18:36
My friend Vivi Nastase is working on a short science communication film called "Puppets of a Digital Brain". It aims to explain the tech behind AI chatbots (the good, the bad, the environmental) in an accessible, visual way.
💡 GoFundMe: gofund.me/453ed662
05.08.2025 18:36
Greedy? Random? Top-p? How LLMs Actually Pick Words – Decoding Strategies Explained
In this video, we break down each method and show how the same model can sound dull, brilliant, or unhinged – just by changing how it samples.
🎥 Watch here: youtu.be/o-_SZ_itxeA
03.08.2025 12:06
How do LLMs pick the next word? They don't choose words directly: they only output word probabilities. Greedy decoding, top-k, top-p, and min-p are methods that turn these probabilities into actual text.
03.08.2025 12:06
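Of the strategies named, top-p (nucleus) sampling is the least obvious: keep the smallest set of highest-probability tokens whose cumulative mass reaches p, then renormalize and sample only from that set. A minimal sketch of the filtering step, assuming a plain probability vector as input:

```python
import numpy as np

def top_p_filter(probs, p=0.9):
    """Nucleus (top-p) filtering: zero out all tokens outside the smallest
    set whose cumulative probability reaches p, then renormalize."""
    order = np.argsort(probs)[::-1]            # token ids, most probable first
    cum = np.cumsum(probs[order])
    cutoff = np.searchsorted(cum, p) + 1       # how many tokens reach mass p
    keep = order[:cutoff]
    filtered = np.zeros_like(probs)
    filtered[keep] = probs[keep]
    return filtered / filtered.sum()
```

With a low p the model sticks to safe, high-probability words (dull); with p close to 1 the long tail stays in play (occasionally unhinged) — same model, different sampler.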
I'll co-organise the "Multilingualism: from data crawling to evaluation" social / birds-of-a-feather session. It's on the 29th at 4PM. Do come by if you're at ACL Vienna! :)
27.07.2025 15:24
I'm also at ACL, would be lovely to catch up!
27.07.2025 12:21
Right now I'm attending the synthetic data generation tutorial, which is packed; it turns out that "Data is the new source code."
#ACL2025NLP
27.07.2025 12:20
Excited to be at ACL 2025 in Vienna this week 🇦🇹 #ACL2025
I'm always up for a chat about reasoning models, NLE faithfulness, synthetic data generation, or the joys and challenges of explaining AI on YouTube.
If you're around, let's connect!
27.07.2025 12:20
Dancing with Right & Wrong? - Marsilius-Kolleg
An interdisciplinary symposium on knowing, doing and not being sure
Looking forward to the discussion and to learning from fellow panelists and participants. If you're around Heidelberg, join us!
www.marsilius-kolleg.uni-heidelberg.de/de/dancing-w...
07.07.2025 17:03
🚨 What recent advancements in AI were particularly impactful for science?
🔍 How do we calibrate trust in current AI systems?
🧪 If AI takes over more of the scientific process… what's left for us humans?
07.07.2025 17:03
🤔 Can we trust AI in science?
I'm excited to be speaking at the final event of the Young Marsilius Fellows 2025, themed "Dancing with Right & Wrong?" – a title that feels increasingly relevant these days.
I'll be joining a panel on "(How) can we trust AI in science?" to discuss questions like:
07.07.2025 17:03
We train AI on human-selected or human-generated data (yes, even taking a photo is concept selection: we capture what we find interesting; text even more so, expressing our conceptualisation of the world). Then we're surprised when the AI's concepts and representations are similar to ours. 🤷‍♀️
20.06.2025 15:25
I'm very excited to finally share the main work of my PhD!
We explored the evolutionary dynamics of gene regulation and expression during gonad development in primates. Among other things, we cover X chromosome dynamics (incl. in a developing XXY testis), gene regulatory networks, and cell type evolution.
20.06.2025 08:35
AlphaEvolve: Using LLMs to solve Scientific and Engineering Challenges | AlphaEvolve explained
💡 AlphaEvolve is a new AI system that doesn't just write code – it evolves it. It uses LLMs and evolutionary search to make scientific discoveries.
We explain how AlphaEvolve works and the evolutionary strategies behind it (like MAP-Elites and island-based population methods).
📺 youtu.be/Z4uF6cVly8o
19.06.2025 13:00
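AlphaEvolve's machinery (MAP-Elites, island populations) is elaborate, but the underlying loop is classic mutate-and-select. A heavily simplified sketch on a toy numeric task — nothing here is AlphaEvolve's actual code; in the real system the candidates are programs and the "mutator" is an LLM proposing edits:

```python
import random

def evolve(fitness, mutate, seed_candidate, generations=200, pop_size=8, rng=None):
    """Minimal elitist evolutionary loop: mutate current candidates,
    then keep only the fittest survivors each generation."""
    rng = rng or random.Random(0)
    population = [seed_candidate]
    for _ in range(generations):
        # Each child is a mutated copy of a randomly chosen parent.
        children = [mutate(rng.choice(population), rng) for _ in range(pop_size)]
        # Elitist selection: best pop_size of parents + children survive.
        population = sorted(population + children, key=fitness, reverse=True)[:pop_size]
    return population[0]

# Toy task: maximize -(x - 3)^2, i.e. find x = 3, via Gaussian mutations.
best = evolve(lambda x: -(x - 3.0) ** 2,
              lambda x, rng: x + rng.gauss(0, 0.5),
              seed_candidate=0.0)
```

MAP-Elites extends this loop by keeping the best candidate per behavior niche instead of one global leaderboard, which preserves diverse stepping stones.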
💡 Participation includes talks, workshops, and lots of cross-disciplinary exchange, with accommodation and meals covered (fee: €100).
If this sounds like your thing, the application deadline is June 27!
31.05.2025 11:47
📅 Heidelberg, September 21–27, 2025
💬 Language: English
🎯 Open to PhD students & advanced Master's students from all disciplines working on AI-related research
31.05.2025 11:47
AI and Human Values - Marsilius-Kolleg
🔗 www.marsilius-kolleg.uni-heidelberg.de/de/studium/i...
🧠 This interdisciplinary event brings together researchers from across fields (computer science, linguistics, philosophy, law, medicine, theology) to explore the normative foundations of generative AI and how values are embedded in its design.
31.05.2025 11:47
Excited to share that I'll be joining the Summer School "AI and Human Values" this September at the Marsilius-Kolleg of Heidelberg University as a speaker. I'll be giving an introduction to how large language models actually work, before the summer school dives deeper into broader implications.
31.05.2025 11:47
Token-Efficient Long Video Understanding for Multimodal LLMs | Paper explained
Long videos are a nightmare for language models: too many tokens, slow inference. ⚠️
We explain STORM ⚡, a new architecture that improves long-video LLMs using Mamba layers and token compression. It reaches better accuracy than GPT-4o on benchmarks with up to 8× more efficiency.
📺 youtu.be/uMk3VN4S8TQ
18.05.2025 12:02
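The token-compression half of the idea can be illustrated generically: pool visual tokens across a temporal window so the LLM sees far fewer tokens per video. A simplified sketch of that general idea — my own illustration, not STORM's code, which uses Mamba layers to enrich tokens before compressing:

```python
import numpy as np

def compress_tokens(frame_tokens, window=4):
    """Average-pool frame tokens over a temporal window, so the LLM
    receives window-times fewer visual tokens for the same clip.

    frame_tokens: array of shape (n_frames, tokens_per_frame, dim).
    """
    n_frames, n_tokens, dim = frame_tokens.shape
    usable = (n_frames // window) * window              # drop a ragged tail
    x = frame_tokens[:usable].reshape(-1, window, n_tokens, dim)
    return x.mean(axis=1)                               # one pooled "frame" per window
```

Since self-attention cost grows quadratically with sequence length, compressing the visual tokens by 4–8× before they reach the LLM is where most of the claimed efficiency comes from.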
Thank you! I post here as frequently as on Twitter. I'm doing videos once a month now.
17.05.2025 06:59
Follow @aicoffeebreak.bsky.social!! Letitia is very effective at communicating research papers in just a few minutes! Perfect for your coffee break.
17.05.2025 00:51
4-Bit Training for Billion-Parameter LLMs? Yes, Really.
We all know quantization works at inference time, but researchers successfully trained a 13B LLaMA 2 model using FP4 precision (only 16 values per weight!). 🤯
We break down how it works. If quantization and mixed-precision training sound mysterious, this'll clear it up.
📺 youtu.be/Ue3AK4mCYYg
18.04.2025 12:11
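The "only 16 values per weight" refers to FP4's E2M1 format: ±{0, 0.5, 1, 1.5, 2, 3, 4, 6}. A toy fake-quantizer that rounds scaled weights onto those 16 codes — an illustration of the format only, not the paper's training recipe, which also relies on scaling tricks and higher-precision accumulation:

```python
import numpy as np

# The 8 non-negative values representable in FP4 (E2M1), mirrored for sign.
_FP4_GRID = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])
FP4_CODES = np.concatenate([-_FP4_GRID[::-1], _FP4_GRID])  # 16 codes (incl. +/-0)

def fake_quantize_fp4(w):
    """Round each weight to the nearest FP4 code after per-tensor scaling."""
    w = np.asarray(w, dtype=float)
    scale = max(np.max(np.abs(w)) / 6.0, 1e-12)     # map the largest |w| onto 6.0
    idx = np.abs(w[..., None] / scale - FP4_CODES).argmin(axis=-1)
    return FP4_CODES[idx] * scale                    # dequantized approximation
```

Training with so few levels is the hard part: forward passes use these coarse weights, while gradients flow through the rounding (straight-through style) to a higher-precision master copy.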