@philschmid.bsky.social
Tech Lead and LLMs at @huggingface | AWS ML Hero | Cloud & ML enthusiast | Nuremberg, Germany | https://philschmid.de
Code and methods are open source in a new library, "learn and search".
Blog: huggingface.co/spaces/Huggi...
Learn and Search Repo: github.com/huggingface/...
- Introduce DVTS, a new method that improves performance at larger compute budgets by maintaining solution diversity
- Using compute-optimal scaling, a Llama 3 3B outperforms 70B (22x larger) on mathematical reasoning tasks
- Process Reward Models (PRMs) played a crucial role in the search process by evaluating intermediate solution steps
- Different search strategies work better for different problem difficulties - beam search for harder problems, Best-of-N for simpler ones
- Test-time compute scaling offers an alternative to training larger models by allowing smaller models to "think longer"
- Explored Best-of-N sampling, beam search, and Diverse Verifier Tree Search (DVTS)
- Llama 3 1B achieved 55% accuracy on the MATH benchmark using optimal search strategies
By scaling test-time compute, smaller models can match or even surpass the performance of larger models. Llama 3.2 3B can outperform Llama 3.1 70B on MATH-500!
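For intuition, here is a minimal sketch of the Best-of-N idea (illustration only, not the library's API). `generate_candidate` and `score_steps_with_prm` are hypothetical stand-ins for a small policy model and a Process Reward Model that scores each intermediate reasoning step.

```python
# Minimal Best-of-N sketch for test-time compute scaling (illustration only).
# generate_candidate and score_steps_with_prm are hypothetical stand-ins for a
# policy model and a Process Reward Model (PRM).

def best_of_n(problem, generate_candidate, score_steps_with_prm, n=16):
    """Sample n full solutions and return the one the PRM scores highest."""
    candidates = [generate_candidate(problem) for _ in range(n)]
    # A PRM returns one score per reasoning step; aggregate them (here: the
    # minimum, i.e. the weakest step) to get a single score per candidate.
    scores = [min(score_steps_with_prm(problem, c)) for c in candidates]
    return max(zip(scores, candidates), key=lambda sc: sc[0])[1]
```

Scaling test-time compute here just means increasing n; beam search and DVTS instead expand and prune partial solutions step by step using the same PRM scores, which is what helps on harder problems.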
17.12.2024 07:30
How we implemented test-time compute for open models to solve complex math problems, like OpenAI o1. Test-time compute methods use dynamic inference strategies to have LLMs "think longer" on harder problems, e.g. difficult math problems.
17.12.2024 07:30
- Cuts down costs to ~2.29% and time to ~2.36% of human evaluation
- Costs $30 vs $1,297 for human evaluation
- Reduced time to 118.43 minutes vs 86.5 hours
- An LLM judge achieved a 60-70% alignment rate with humans
- The agent judge achieved a 90% alignment rate with humans
Dataset: huggingface.co/datasets/DEV...
Agent-as-a-Judge is a graph-based agent with tools to locate, read, and retrieve files and information from a code project. It evaluates the results of other agents, and its judgments are compared to human evaluations (alignment rate, judge shift).
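The alignment rate is simply the share of the automated judge's pass/fail verdicts that match the human ones. A quick sketch of the metric (names are illustrative, not from the paper's code):

```python
def alignment_rate(judge_verdicts: list[bool], human_verdicts: list[bool]) -> float:
    """Fraction of requirements where the automated judge agrees with human evaluators."""
    assert len(judge_verdicts) == len(human_verdicts)
    agree = sum(j == h for j, h in zip(judge_verdicts, human_verdicts))
    return agree / len(human_verdicts)

# e.g. a 90% alignment rate means the agent judge matched the human verdict on 9 of 10 requirements
```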
Github: github.com/metauto-ai/a...
What is better than an LLM as a Judge? Right, an Agent as a Judge! Meta created an Agent-as-a-Judge to evaluate code agents and enable intermediate feedback, alongside DevAI, a new benchmark of 55 realistic development tasks.
Paper: huggingface.co/papers/2410....
Sora UI: sora.com
Kudos to OpenAI for shipping this! The UI/UX looks really thorough!
OpenAI trained a new Turbo model to make it easier and faster to use. With "Storyboards", users get a CapCut/TikTok/Reels-like text-to-video editor that can be used to edit and create new short-form content! Social media will be flooded.
A big day for AI and a sad day for the EU. OpenAI releases Sora, their text-to-video model, with a dedicated UI Studio! Sora will be free for all ChatGPT Pro and Plus subscribers at no additional cost. Sora will be available later today, except if you live in the EU or UK.
09.12.2024 18:41
Blog: qwenlm.github.io/blog/qwq-32b...
Model: huggingface.co/Qwen/QwQ-32B...
Demo: huggingface.co/spaces/Qwen/...
- Notable limitations, including language mixing, recursive reasoning loops, and safety considerations
- Released under Apache 2.0 on Hugging Face
- Full "reasoning" (CoT) available in the demo
- QwQ-32B-Preview is an experimental research release
- 32.5B parameters and a 32,768-token context length
- 65.2% on GPQA, 50.0% on AIME, 90.6% on MATH-500, and 50.0% on LiveCodeBench
First open-weights OpenAI-o1-like reasoning model! QwQ from the Qwen team is a 32B model that beats OpenAI o1-mini, competes with o1-preview, and is available under Apache 2.0 on Hugging Face!
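A minimal way to try it locally, assuming the standard transformers chat/generation API; the model link above is truncated, so the repo id below is inferred from the post text and should be double-checked.

```python
# Sketch: running QwQ-32B-Preview with the standard transformers chat API.
# Repo id inferred from the post; ~32B params need a large GPU or a multi-GPU
# setup (device_map="auto" shards the model across available devices).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/QwQ-32B-Preview"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "How many r's are in the word strawberry?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=2048)  # long budget for the CoT
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```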
28.11.2024 08:01
Models: huggingface.co/HuggingFaceT...
Blog: huggingface.co/blog/smolvlm
- Surprising video capabilities with 27.14% on CinePile
- Released under Apache 2.0 on @huggingface.bsky.social
- Can run efficiently on laptops and edge devices
- Smallest SOTA vision language model at only 2B parameters
- Released 3 variants: Base, Synthetic, and Instruct
- Requires only 5GB GPU RAM and achieves 38.8% on MMMU, 81.6% on DocVQA
- 3.3-4.5x faster prefill and 7.5-16x faster generation vs Qwen2-VL
SmolLM can now see! Meet SmolVLM, a tiny but powerful 2B vision language model that runs on your device! Built on top of SmolLM and released under Apache 2.0.
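A rough sketch of running it with transformers using the generic vision-to-sequence chat flow; the model link above is truncated, so the repo id below is an assumption and the exact preprocessing may differ from the official snippet.

```python
# Sketch: local SmolVLM inference with transformers (repo id assumed; the
# model link in the post is truncated). Generic vision-to-sequence chat flow.
from PIL import Image
from transformers import AutoProcessor, AutoModelForVision2Seq

model_id = "HuggingFaceTB/SmolVLM-Instruct"  # assumed Instruct variant
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForVision2Seq.from_pretrained(model_id, device_map="auto")

image = Image.open("invoice.png")
messages = [{"role": "user", "content": [
    {"type": "image"},
    {"type": "text", "text": "What is the total amount on this invoice?"},
]}]

prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=prompt, images=[image], return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(processor.batch_decode(outputs, skip_special_tokens=True)[0])
```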
26.11.2024 16:31
Blog: neuralmagic.com/blog/24-spar...
Pruning is not a new technique, but compared to quantization it has been much harder to get good results and maintain performance across tasks. Let's see if Neural Magic can change that.
- Full recovery on fine-tuning tasks (GSM8K, Evol-CodeAlpaca, Ultrachat-200K)
- 1.4-2.1x better multi-query throughput
- Pruned using 13B training tokens, 26 hours on 32 H100s
- Optimized for NVIDIA Ampere GPUs and newer
- 98.4% of original accuracy on Open LLM Leaderboard v1 with 50% fewer parameters using a 2:4 sparsity pattern
- 30% higher throughput and 1.8x lower latency, up to 5.0x when combined with quantization
- Works with 4-bit quantization (GPTQ) and Sparse-Marlin kernels
How far can we push LLM optimizations? Turns out, pretty far! A new study achieves 98% accuracy recovery on key benchmarks while removing 50% of Llama 3.1 8B's parameters using pruning. Pruning strategically removes unnecessary connections in a neural network to make it smaller and faster.
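For intuition, a 2:4 sparsity pattern means that in every contiguous group of four weights at least two are zero, which Ampere-and-newer tensor cores can accelerate. Here is a toy magnitude-based illustration of the pattern only, not Neural Magic's actual pruning recipe, which also involves retraining to recover accuracy.

```python
# Toy illustration of the 2:4 sparsity pattern: in every group of 4 weights,
# keep the 2 with the largest magnitude and zero the other 2. Pattern only;
# the study combines structured pruning with continued training to recover accuracy.
import torch

def apply_2_4_sparsity(weight: torch.Tensor) -> torch.Tensor:
    w = weight.reshape(-1, 4)                      # groups of 4 along the last dim
    idx = w.abs().topk(2, dim=-1).indices          # positions of the 2 largest per group
    mask = torch.zeros_like(w, dtype=torch.bool).scatter_(-1, idx, True)
    return (w * mask).reshape(weight.shape)

w = torch.randn(2, 8)
print(apply_2_4_sparsity(w))  # exactly 50% zeros, 2 nonzeros per group of 4
```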
26.11.2024 08:24
TIL: @huggingface.bsky.social Transformers has native Tensor Parallelism support for better inference on multiple GPUs! This will enable many benefits and optimizations in the future.
For now, it supports Llama. Which one would you want to see next?
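If I read the release right, usage boils down to passing a tensor-parallel plan when loading the model and launching one process per GPU; treat the snippet below as a sketch under that assumption and check the transformers docs for your version.

```python
# Sketch of native tensor parallelism in transformers (tp_plan argument assumed
# from the Llama TP support; verify against your transformers version).
# Launch with: torchrun --nproc-per-node 4 run_tp.py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-3.1-8B-Instruct"  # example model; TP currently targets Llama
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    tp_plan="auto",  # shard weight matrices across the GPUs in the process group
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

inputs = tokenizer("Tensor parallelism splits each weight matrix across GPUs so", return_tensors="pt")
outputs = model.generate(**inputs.to(model.device), max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```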
Created a visual for how function calling works. Wdyt?
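For reference, the loop the visual describes, provider-agnostic and with a hypothetical `chat` helper standing in for any LLM API that supports tool/function calling.

```python
# Generic sketch of the function-calling loop (provider-agnostic; `chat` is a
# hypothetical stand-in for any LLM API that supports tools).
import json

def get_weather(city: str) -> dict:
    """The actual function the model is allowed to call."""
    return {"city": city, "temperature_c": 21}

tools = [{
    "name": "get_weather",
    "description": "Get the current weather for a city",
    "parameters": {"type": "object",
                   "properties": {"city": {"type": "string"}},
                   "required": ["city"]},
}]

def run(chat, user_message: str) -> str:
    messages = [{"role": "user", "content": user_message}]
    response = chat(messages, tools=tools)            # 1) model sees prompt + tool schemas
    if response.get("tool_call"):                     # 2) model answers with a structured call
        call = response["tool_call"]
        result = get_weather(**json.loads(call["arguments"]))  # 3) app executes the function
        messages += [response, {"role": "tool", "content": json.dumps(result)}]
        response = chat(messages, tools=tools)        # 4) model turns the result into a final answer
    return response["content"]
```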
25.11.2024 11:34
Blog: blog.dottxt.co/say-what-you...
Structured outputs can actually improve LLM performance when implemented correctly.
- JSON generation reached 77% accuracy vs the paper's reported <10%
- Examples in prompts should match the exact format expected in the actual task
- Structured generation works best when implemented as "running our response parser as a generator"
- The key success criterion is to align your prompt, parser, and generator - it's not just about using JSON mode (see the sketch below)
- JSON generation requires careful prompt design, including specifying the desired schema
- Good prompts should give the model the same information a human would need to understand the task and the expected response format
- The poor results in "Let Me Speak Freely" came from weak prompts and incorrect use of structured prompting
- Structured outputs outperform unstructured outputs on the tests: GSM8K 0.78 vs 0.77, Last Letter 0.77 vs 0.73, Shuffled Objects 0.44 vs 0.41
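One way to read "align your prompt, parser, and generator" in practice: show the model the exact schema you will parse against, and reuse that same schema for validation. A sketch, not from the dottxt post; `generate` stands in for any LLM call, and a constrained-decoding library would additionally drive the generator from the same schema.

```python
# Sketch of aligning prompt and parser: the same schema is shown to the model
# in the prompt and used to validate its reply. `generate` is a hypothetical
# stand-in for any LLM call; constrained decoding would also use this schema.
import json
from pydantic import BaseModel

class MathAnswer(BaseModel):
    reasoning: str
    answer: int

PROMPT = """Solve the problem. Reply with JSON matching this schema:
{schema}

Problem: {problem}
JSON:"""

def solve(generate, problem: str) -> MathAnswer:
    prompt = PROMPT.format(
        schema=json.dumps(MathAnswer.model_json_schema()), problem=problem
    )
    raw = generate(prompt)                      # model output, expected to be JSON
    return MathAnswer.model_validate_json(raw)  # parser enforces the same schema
```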