
Lewis Tunstall

@lewtun.bsky.social

🤗 LLM whisperer @huggingface 📖 Co-author of "NLP with Transformers" book 💥 Ex-particle physicist 🤘 Occasional guitarist 🇦🇺 in 🇨🇭

581 Followers  |  2 Following  |  10 Posts  |  Joined: 15.11.2024

Posts by Lewis Tunstall (@lewtun.bsky.social)

Link preview: Open R1: Update #2 (a blog post by Open R1 on Hugging Face)

📊 We match the performance of DeepSeek-R1-Distill-Qwen-7B by finetuning Qwen2.5-Math-7B-Instruct on our dataset.

🔎 Read our blog post for all the nitty-gritty details: huggingface.co/blog/open-r1...

10.02.2025 18:09 — 👍 1    🔁 0    💬 0    📌 0

⏳ Automated filtering: We apply Math Verify to retain only problems with at least one correct answer. We also use Llama-3.3-70B-Instruct as a judge to recover more correct examples (e.g. for cases with malformed answers that can't be verified with a rules-based parser).

10.02.2025 18:09 — 👍 0    🔁 0    💬 1    📌 0
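The two-stage filter in that post can be sketched in a few lines. This is a hypothetical illustration, not Open R1 code: `symbolic_match` stands in for the Math Verify library's rules-based checker, and the LLM judge is reduced to a routing step.

```python
from fractions import Fraction

def symbolic_match(pred: str, gold: str) -> bool:
    """Stand-in for a rules-based verifier: compare answers as exact rationals."""
    try:
        return Fraction(pred) == Fraction(gold)
    except (ValueError, ZeroDivisionError):
        return False  # malformed answer: rules-based parsing fails

def filter_traces(problems):
    """Keep problems with at least one rules-verified trace; route the rest
    to an LLM judge (e.g. Llama-3.3-70B-Instruct) for a second opinion."""
    verified, needs_judge = [], []
    for p in problems:
        if any(symbolic_match(t["answer"], p["gold"]) for t in p["traces"]):
            verified.append(p)
        else:
            needs_judge.append(p)
    return verified, needs_judge

verified, flagged = filter_traces([
    {"gold": "1/2", "traces": [{"answer": "0.5"}, {"answer": "2"}]},
    {"gold": "3",   "traces": [{"answer": "\\boxed{3}"}]},
])
# the first problem passes the rules-based check; the second goes to the judge
```

The split mirrors the post: cheap symbolic verification first, with the expensive judge reserved for answers the parser cannot handle.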

📀 512 H100s running locally: Instead of relying on an API, we leverage vLLM and SGLang to run generations locally on our science cluster, generating 180k reasoning traces per day.

10.02.2025 18:09 — 👍 0    🔁 0    💬 1    📌 0
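The post doesn't say how prompts are distributed across the cluster; as a hypothetical sketch of the data-parallel side, one might shard the problem list so each local inference worker (vLLM or SGLang) processes a disjoint slice. `shard` and `num_workers` are illustrative names, not Open R1 code.

```python
def shard(prompts, num_workers):
    """Round-robin split so every worker gets a near-equal, disjoint slice."""
    return [prompts[i::num_workers] for i in range(num_workers)]

# e.g. 400k problems spread over 512 workers; each slice is then fed to a
# local inference engine, with no external API in the loop
shards = shard(list(range(400_000)), 512)
```

Striding (rather than contiguous chunks) keeps slice sizes balanced even when the prompt count doesn't divide evenly by the worker count.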

🐳 800k R1 reasoning traces: We generate two answers for 400k problems using DeepSeek R1. The filtered dataset contains 220k problems with correct reasoning traces.

10.02.2025 18:09 — 👍 0    🔁 0    💬 1    📌 0

What’s new compared to existing reasoning datasets?

♾ Based on NuminaMath 1.5: we focus on math reasoning traces and generate answers for problems in NuminaMath 1.5, an improved version of the popular NuminaMath-CoT dataset.

10.02.2025 18:09 — 👍 1    🔁 0    💬 1    📌 0
Link preview: open-r1/OpenR1-Math-220k · Datasets at Hugging Face

Introducing OpenR1-Math-220k!

huggingface.co/datasets/ope...

The community has been busy distilling DeepSeek-R1 from inference providers, but we decided to have a go at doing it ourselves from scratch 💪

More details in 🧡

10.02.2025 18:09 — 👍 16    🔁 3    💬 1    📌 0
Link preview: GitHub - huggingface/open-r1: Fully open reproduction of DeepSeek-R1

We are reproducing the full DeepSeek R1 data and training pipeline so everybody can use their recipe. Instead of doing it in secret, we can do it together in the open!

Follow along: github.com/huggingface/...

25.01.2025 13:29 — 👍 199    🔁 36    💬 6    📌 6
Link preview: Scaling test-time compute, a Hugging Face Space by HuggingFaceH4

Here are the links:

- Blog post: huggingface.co/spaces/Huggi...

- Code: github.com/huggingface/...

Enjoy!

16.12.2024 17:08 — 👍 16    🔁 0    💬 0    📌 0

We outperform Llama 70B with Llama 3B on hard math by scaling test-time compute 🔥

How? By combining step-wise reward models with tree search algorithms :)

We're open sourcing the full recipe and sharing a detailed blog post 👇

16.12.2024 17:08 — 👍 109    🔁 21    💬 4    📌 1
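The recipe combines a step-wise reward model with tree search; a minimal toy version of step-wise beam search might look like this, where `prm_score` is a stand-in for a learned process reward model (the real recipe is in the linked blog post, and all names here are illustrative).

```python
import heapq

def stepwise_beam_search(expand, prm_score, root, beam_width=4, max_steps=3):
    """Grow partial solutions one reasoning step at a time, keeping only the
    beam_width candidates that the (stand-in) PRM scores highest."""
    beam = [root]
    for _ in range(max_steps):
        candidates = [child for node in beam for child in expand(node)]
        if not candidates:
            break  # all beams are terminal
        beam = heapq.nlargest(beam_width, candidates, key=prm_score)
    return max(beam, key=prm_score)

# toy problem: "steps" append a digit, and the toy "PRM" prefers large digit sums
best = stepwise_beam_search(
    expand=lambda s: [s + d for d in "012"] if len(s) < 3 else [],
    prm_score=lambda s: sum(int(c) for c in s),
    root="",
)
```

The point of scoring at every step, rather than only at the end, is that weak partial solutions are pruned before any more compute is spent on them.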

Hey ML peeps, we found a nice extension to beam search at Hugging Face that is far more scalable and produces more diverse candidates

The basic idea is to split your N beams into N/M subtrees and then run greedy node selection in parallel

Does anyone know what this algorithm is called?

12.12.2024 10:15 — 👍 6    🔁 0    💬 0    📌 0
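The idea in this post sounds close to what the Hugging Face test-time-compute work describes as diverse verifier tree search (DVTS). A toy sketch of the structure, with an illustrative scorer in place of a reward model (every name here is hypothetical):

```python
def diverse_subtree_search(expand, score, root, n_beams=6, m=2, depth=3):
    """Take the top n_beams first steps, partition them into n_beams/m
    subtrees, then follow each subtree greedily; subtrees never re-merge,
    which keeps the final candidates diverse."""
    first_steps = sorted(expand(root), key=score, reverse=True)[:n_beams]
    subtrees = [first_steps[i:i + m] for i in range(0, len(first_steps), m)]
    leaves = []
    for group in subtrees:                 # independent, so parallelizable
        node = max(group, key=score)       # greedy node selection per subtree
        for _ in range(depth - 1):
            children = expand(node)
            if not children:
                break
            node = max(children, key=score)
        leaves.append(node)
    return leaves                          # one candidate per subtree

# same toy setup: steps append a digit, the scorer prefers large digit sums
candidates = diverse_subtree_search(
    expand=lambda s: [s + d for d in "012"] if len(s) < 3 else [],
    score=lambda s: sum(int(c) for c in s),
    root="", n_beams=3, m=1,
)
```

Because the subtrees never compete with each other, a single high-scoring branch cannot crowd out the rest of the beam, which is where the extra candidate diversity comes from.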