Here's my attempt at visualizing the training pipeline for DeepSeek-R1(-Zero) and the distillation to smaller models.
Note that they retrain DeepSeek-V3-Base from scratch on the new 800k curated samples, instead of continuing to finetune the checkpoint produced by the first round of cold-start SFT + RL.
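The flow described above can be sketched roughly as follows. All function and model names here are illustrative stubs (not any real API); each stage just tags the model string so the order of operations, and in particular the restart from the original base model, is visible:

```python
# Illustrative sketch of the DeepSeek-R1 training flow; every name is a stub.

def sft(model, data):
    # Supervised finetuning stage (stub: records the step in the string).
    return f"sft({model},{data})"

def rl(model):
    # Reinforcement-learning stage (stub).
    return f"rl({model})"

def r1_pipeline(base="V3-Base"):
    # Round 1: cold-start SFT, then reasoning-oriented RL.
    round1 = rl(sft(base, "cold_start"))
    # Use the round-1 model to generate/curate ~800k SFT samples.
    curated = f"800k_from[{round1}]"
    # Round 2: retrain from the ORIGINAL base, not the round-1 checkpoint.
    final = rl(sft(base, curated))
    return final, curated

def distill(small_base, curated):
    # Smaller models get plain SFT on the same curated data (no RL).
    return sft(small_base, curated)

model, data = r1_pipeline()
student = distill("small-base", data)
print(model)    # rl(sft(V3-Base,800k_from[rl(sft(V3-Base,cold_start))]))
print(student)  # sft(small-base,800k_from[rl(sft(V3-Base,cold_start))])
```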
21.01.2025 01:11 —
It was a pleasure working on this project during my first year at Google DeepMind with our amazing collaborators led by @anianruoss.bsky.social, @pardofab.bsky.social, @bonniesjli.bsky.social, @vladmnih.bsky.social, and Tim Genewein!
05.12.2024 03:27 —
If you can imagine it, you can play it in Genie 2 🧞
Our foundation world model is capable of generating interactive worlds controllable with keyboard/mouse actions, starting from a single prompt image
So proud to have been part of this work led by @jparkerholder.bsky.social and @rockt.ai 🙏
05.12.2024 03:24 —
LMs see, can LMs do?
LMAct benchmarks the ability of current SOTA foundation models to act in text/visual environments across many domains, using text as low-level actions and in-context (multimodal) expert demonstrations. We're excited to see how this benchmark drives further progress!
05.12.2024 03:07 —