Read the blog: huggingface.co/blog/nanovlm
21.05.2025 13:10 · @andimara.bsky.social
Train your Vision-Language Model in just two commands:
> git clone github.com/huggingface/...
> python train.py
New Blog:
nanoVLM: the simplest way to train your own Vision-Language Model in pure PyTorch, explained step by step!
Easy to read, even easier to use. Train your first VLM today!
Link: webml-community-smolvlm-realtime-webgpu.static.hf.space/index.html
14.05.2025 15:39
Real-time SmolVLM in a web browser with transformers.js.
All running locally with no installs. Just open the website.
If you're into efficient multimodal models, you'll love this one.
Check out the paper: huggingface.co/papers/2504....
Real-world efficiency: we've created an app using SmolVLM on an iPhone 15 and got real-time inference directly from its camera!
Browser-based inference? Yep! We get lightning-fast inference speeds of 40-80 tokens per second directly in a web browser. No tricks, just compact, efficient models!
State-of-the-art performance: SmolVLM comes in three powerful yet compact sizes (256M, 500M, and 2.2B parameters), each setting new SOTA benchmarks for its hardware constraints in image and video understanding.
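If you'd rather poke at SmolVLM locally instead of in the browser demo, a minimal Python transformers sketch looks roughly like this; the checkpoint name, prompt, and image path are illustrative assumptions on my part, not details from the posts above.

```python
# Minimal local-inference sketch; checkpoint name and image path are assumptions.
from PIL import Image
from transformers import AutoProcessor, AutoModelForVision2Seq

model_id = "HuggingFaceTB/SmolVLM-256M-Instruct"   # assumed name of the 256M instruct checkpoint
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForVision2Seq.from_pretrained(model_id)

image = Image.open("example.jpg")                  # any local image
messages = [{"role": "user",
             "content": [{"type": "image"},
                         {"type": "text", "text": "Describe this image."}]}]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=prompt, images=[image], return_tensors="pt")

out = model.generate(**inputs, max_new_tokens=128)
print(processor.batch_decode(out, skip_special_tokens=True)[0])
```

The larger variants should load the same way; only the model id changes.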
08.04.2025 15:12
Less CoT, more efficiency: it turns out that too much Chain-of-Thought (CoT) data actually hurts performance in small models; it just dumbs them down.
Longer videos, better results: increasing video length during training enhanced performance on both video and image tasks.
System prompts and special tokens are key: introducing system prompts and dedicated media intro/outro tokens significantly boosted our compact VLM's performance, especially for video tasks.
08.04.2025 15:12
Pixel shuffling magic: aggressive pixel shuffling helped our compact VLMs "see" better, keeping the same performance with sequences 16x shorter! (A rough sketch of the idea follows at the end of this post.)
Learned positional tokens FTW: for compact models, learned positional tokens significantly outperform raw text tokens, enhancing efficiency and accuracy.
Smaller is smarter with SigLIP: surprise! Smaller LLMs didn't benefit from the usual large SigLIP (400M). Instead, we use the 80M base SigLIP, which performs equally well at just 20% of the original size!
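To make the pixel-shuffle point concrete, here is a rough PyTorch sketch of the idea (a space-to-depth fold over the grid of vision tokens). It illustrates the technique only, not the actual SmolVLM implementation; the shapes and ratio are assumptions.

```python
import torch

def pixel_shuffle_tokens(x: torch.Tensor, ratio: int = 4) -> torch.Tensor:
    """Space-to-depth over a square grid of vision tokens: fold each
    ratio x ratio neighbourhood into the channel dimension, so the LLM
    sees a sequence ratio**2 times shorter (16x shorter for ratio=4)."""
    b, seq, dim = x.shape
    h = w = int(seq ** 0.5)                      # assumes a square token grid
    x = x.view(b, h, w, dim)
    x = x.view(b, h // ratio, ratio, w // ratio, ratio, dim)
    x = x.permute(0, 1, 3, 2, 4, 5)              # group each ratio x ratio patch together
    return x.reshape(b, (h // ratio) * (w // ratio), dim * ratio * ratio)

# e.g. 1024 vision tokens of width 768 -> 64 tokens of width 12288
tokens = torch.randn(2, 1024, 768)
print(pixel_shuffle_tokens(tokens).shape)        # torch.Size([2, 64, 12288])
```

With ratio 4, every 4x4 neighbourhood of tokens becomes one wider token, which is where the "sequences 16x shorter" in the post comes from.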
08.04.2025 15:12
Here are the coolest insights from our experiments:
Longer context = big wins: increasing the context length from 2K to 16K gave our tiny VLMs a 60% performance boost!
Today, we share the tech report for SmolVLM: Redefining small and efficient multimodal models.
We explain how to create a tiny 256M VLM that uses less than 1GB of RAM and outperforms our 80B models from 18 months ago!
huggingface.co/papers/2504....
SmolDocling is available today!
Model: huggingface.co/ds4sd/SmolDo...
Paper: huggingface.co/papers/2503....
Space: huggingface.co/spaces/ds4sd...
Try it and let us know what you think!
At only 256M parameters, SmolDocling outperforms much larger models on key document conversion tasks:
Full-page transcription: beats models 27x bigger!
Equations: matches or beats leading models like GOT
Code recognition: we introduce the first benchmark for code OCR
What makes it unique?
Handles everything a document has: tables, charts, code, equations, lists, and more
Works beyond scientific papers: supports business docs, patents, and forms
Runs with less than 1GB of RAM, so running at large batch sizes is super cheap!
How does SmolDocling beat models 27x bigger? It transforms any document into structured metadata with DocTags and is SOTA in:
✅ Full-page conversion
✅ Layout identification
✅ Equations, tables, charts, plots, code OCR
We just dropped SmolDocling: a 256M open-source vision LM for complete document OCR!
Lightning fast: it processes a page in 0.35 sec on a consumer GPU using < 500MB VRAM.
SOTA in document conversion, beating every competing model we tested (including models with 27x more params).
But how?
Extremely bullish on @CohereForAI's Aya Vision (8B & 32B) - new SOTA open-weight VLMs
- 8B wins up to 81% of the time in its class, better than Gemini Flash
- 32B beats Llama 3.2 90B!
- Integrated on @hf.co from Day 0!
Check out their blog! huggingface.co/blog/aya-vis...
Me too! Highlight of my career so far :)
31.01.2025 15:21
And that's why we didn't release this before: it's live research code. Most of it gets rewritten fairly often, and some parts have been the same for years.
It works, it manages to produce SOTA results at 256M and 80B sizes, but it's not production code.
Go check it out:
github.com/huggingface/...
And it also has a bunch of bugs, like this one in our modeling_vllama3.py file: we start from a pretrained LLM, but for some reason the weights of the head are not loaded from the pretrained model. I still don't know why :(
31.01.2025 15:06
The codebase is full of interesting insights, like this one in our dataset.py file: how do you get reproducible randomness in different processes across different machines?
Start different random number generators based on a tuple (seed, rank)!
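As a concrete illustration of that (seed, rank) trick, here is a minimal PyTorch sketch; the function name and the mixing constant are mine, not the actual dataset.py code.

```python
import torch

def make_generator(seed: int, rank: int) -> torch.Generator:
    # One RNG per process: mixing the global seed with the data-parallel rank
    # gives every rank its own stream, and the stream is identical across
    # machines and restarts for the same (seed, rank) pair.
    g = torch.Generator()
    g.manual_seed(seed * 1_000_003 + rank)
    return g

# e.g. a reproducible, rank-specific shuffle inside a dataset on rank 3
g = make_generator(seed=42, rank=3)
perm = torch.randperm(1_000, generator=g)
```

Every process derives its own generator, so shuffles differ across ranks but are bit-identical across machines and reruns.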
After training, you can run the evaluation on all of these tasks by running:
sbatch vision/experiments/evaluation/vloom/async_evals_tr_346/run_evals_0_shots_val_2048.slurm
Launching the training for SmolVLM 256M is as simple as:
./vision/experiments/pretraining/vloom/tr_341_smolvlm_025b_1st_stage/01_launch.sh
Then we use wandb to track the losses (a minimal logging sketch follows this post).
Check out the file to find out details!
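For reference, the wandb loss tracking mentioned above boils down to something like the sketch below; `dataloader`, `train_step`, and the project/run names are placeholders I've invented, not the repository's actual code.

```python
import random
import wandb  # assumes `wandb login` has already been run

def train_step(batch):            # placeholder for the real forward/backward pass
    return random.random()

dataloader = range(100)           # placeholder for the real DataLoader

run = wandb.init(project="smolvlm", name="tr_341_smolvlm_025b_1st_stage")  # illustrative names
for step, batch in enumerate(dataloader):
    loss = train_step(batch)
    wandb.log({"train/loss": loss}, step=step)  # shows up as a loss curve in the wandb UI
run.finish()
```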
Fuck it, today we're open-sourcing the codebase used to train SmolVLM from scratch on 256 H100s!
Inspired by our team's effort to open-source DeepSeek's R1, we are releasing the training and evaluation code on top of the weights.
Now you can train any SmolVLM, or create your own custom VLMs!
Links :D
Demo: huggingface.co/spaces/Huggi...
Models: huggingface.co/collections/...
Blog: huggingface.co/blog/smolervlm
SmolVLM upgrades:
• New vision encoder: smaller but higher resolution.
• Improved data mixtures: better OCR and doc understanding.
• Higher pixels/token: 4096 vs. 1820 = more efficient.
• Smart tokenization: faster training and better performance.
Better, faster, smarter.
We have partnered with IBM's Docling to build amazing smol models for document understanding. Our early results are excellent. Stay tuned for future releases!
23.01.2025 13:33