@xenova.bsky.social
Bringing the power of machine learning to the web. Currently working on Transformers.js (@huggingface 🤗)
As always, the demo is open source (which you can find under the "Files" tab), so I'm excited to see how the community builds upon this!
Link to demo: huggingface.co/spaces/Liqui...
The next generation of AI-powered websites is going to be WILD!
In-browser tool calling & MCP is finally here, allowing LLMs to interact with websites programmatically.
To show what's possible, I built a demo using Liquid AI's new LFM2 model, powered by 🤗 Transformers.js.
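Roughly, in-browser tool calling with the Transformers.js text-generation pipeline can be sketched like this. The model id and the JSON tool-call convention below are assumptions for illustration (the actual demo may use the model's own chat template for tools):

import { pipeline } from "@huggingface/transformers";

// Hypothetical model id; substitute the ONNX export used by the demo.
const generator = await pipeline("text-generation", "onnx-community/LFM2-1.2B-ONNX", {
  device: "webgpu",
});

// Describe the available tool and ask the model to answer with a JSON call.
const messages = [
  {
    role: "system",
    content:
      'You can call the tool get_weather(city: string). ' +
      'Reply ONLY with JSON like {"tool": "get_weather", "arguments": {"city": "..."}}.',
  },
  { role: "user", content: "What's the weather like in Paris?" },
];

const output = await generator(messages, { max_new_tokens: 128 });
const reply = output[0].generated_text.at(-1).content;

// Parse the tool call and dispatch it to real browser code (fetch, DOM manipulation, etc.).
const call = JSON.parse(reply);
if (call.tool === "get_weather") {
  console.log("Model wants the weather for:", call.arguments.city);
}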
That's right, we're running Mistral's new Voxtral-Mini-3B model 100% locally in-browser on WebGPU, powered by Transformers.js and ONNX Runtime Web!
Try it out yourself!
huggingface.co/spaces/webml...
Introducing Voxtral WebGPU: State-of-the-art audio transcription directly in your browser!
- Transcribe videos, meeting notes, songs and more
- Runs on-device, meaning no data is sent to a server
- Multilingual (8 languages)
- Completely free (forever) & open source
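If you want to wire this up yourself, a minimal sketch with the Transformers.js automatic-speech-recognition pipeline looks like the following; the model id is an assumption, so check the demo's files for the exact ONNX repo:

import { pipeline } from "@huggingface/transformers";

// Assumed model id for the Voxtral ONNX export.
const transcriber = await pipeline(
  "automatic-speech-recognition",
  "onnx-community/Voxtral-Mini-3B-2507-ONNX",
  { device: "webgpu" },
);

// Accepts a URL (or a Float32Array of 16 kHz mono audio).
const { text } = await transcriber("https://example.com/meeting.wav");
console.log(text);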
Model: huggingface.co/lazy-guy12/c...
Online demo: lazy-guy.github.io/chess-llama/
A community member trained a tiny Llama model (23M parameters) on 3 million high-quality @lichess.org games, then deployed it to run entirely in-browser with 🤗 Transformers.js! Super cool!
It has an estimated Elo of ~1400... can you beat it?
(runs on both mobile and desktop)
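Out of curiosity, here's a minimal sketch of querying such a model with Transformers.js; the model id and the space-separated move format are assumptions (see the model card above for the real details):

import { pipeline } from "@huggingface/transformers";

// Assumed model id for the 23M-parameter chess model.
const generator = await pipeline("text-generation", "lazy-guy12/chess-llama");

// Assume the model continues a space-separated list of moves.
const history = "e4 e5 Nf3 Nc6 Bb5";
const output = await generator(history, { max_new_tokens: 5 });
console.log(output[0].generated_text); // history plus the model's suggested continuation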
The most difficult part was getting the model running in the first place, but the next steps are simple:
- Implement sentence splitting, enabling streamed responses
- Multilingual support (only phonemization left)
Who wants to help?
huggingface.co/spaces/webml...
We did it! Kokoro TTS (v1.0) can now run 100% locally in your browser w/ WebGPU acceleration. Real-time text-to-speech without a server.
Generate 10 seconds of speech in ~1 second for $0.
What will you build?
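If you want to opt into WebGPU from your own code, a sketch looks like this; the v1.0 model id and the device option are assumptions based on kokoro-js's loading options, so double-check the library docs:

import { KokoroTTS } from "kokoro-js";

// Prefer WebGPU when the browser supports it, otherwise fall back to WASM.
const device = navigator.gpu ? "webgpu" : "wasm";

const tts = await KokoroTTS.from_pretrained("onnx-community/Kokoro-82M-v1.0-ONNX", {
  dtype: device === "webgpu" ? "fp32" : "q8", // assumed: full precision on GPU, quantized on WASM
  device,
});

const audio = await tts.generate("Real-time speech, straight from the browser.", {
  voice: "af_sky", // see tts.list_voices()
});
audio.save("webgpu-demo.wav");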
The model is also extremely resilient to quantization. The smallest variant is only 86 MB in size (down from the original 326 MB), with no noticeable difference in audio quality!
Link to models/samples: huggingface.co/onnx-communi...
import { KokoroTTS } from "kokoro-js";

const tts = await KokoroTTS.from_pretrained(
  "onnx-community/Kokoro-82M-ONNX",
  { dtype: "q8" }, // Options: "fp32", "fp16", "q8", "q4", "q4f16"
);

const text = "Life is like a box of chocolates. You never know what you're gonna get.";
const audio = await tts.generate(text, {
  voice: "af_sky", // See `tts.list_voices()`
});
audio.save("audio.wav");

You can get started in just a few lines of code!
Huge kudos to the Kokoro TTS community, especially taylorchu for the ONNX exports and Hexgrad for the amazing project! None of this would be possible without you all!
Try it out yourself: huggingface.co/spaces/webml...
Introducing Kokoro.js, a new JavaScript library for running Kokoro TTS, an 82 million parameter text-to-speech model, 100% locally in the browser w/ WASM. Powered by 🤗 Transformers.js. WebGPU support coming soon!
npm i kokoro-js
Link to demo (+ sample code) in 🧵
For the AI builders out there: imagine what could be achieved with a browser extension that (1) uses a powerful reasoning LLM, (2) runs 100% locally & privately, and (3) can directly access/manipulate the DOM!
Source code: github.com/huggingface/...
Online demo: huggingface.co/spaces/webml...
Is this the future of AI browser agents? WebGPU-accelerated reasoning LLMs are now supported in Transformers.js!
Here's MiniThinky-v2 (1B) running 100% locally in the browser at ~60 tps (no API calls)! I can't wait to see what you build with it!
Demo + source code in 🧵
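For reference, streaming generation in the browser can be sketched like this; the model id is an assumption, and the demo's source links the exact ONNX repo:

import { pipeline, TextStreamer } from "@huggingface/transformers";

const generator = await pipeline(
  "text-generation",
  "onnx-community/MiniThinky-v2-1B-Llama-3.2-ONNX", // assumed id
  { device: "webgpu" },
);

// Print tokens as they are produced instead of waiting for the full answer.
const streamer = new TextStreamer(generator.tokenizer, {
  skip_prompt: true,
  callback_function: (token) => document.body.append(token),
});

const messages = [{ role: "user", content: "How many r's are in 'strawberry'?" }];
await generator(messages, { max_new_tokens: 1024, streamer });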
This project was greatly inspired by Brendan Bycroft's amazing LLM Visualization tool; check it out if you haven't already! Also, thanks to Niels Rogge for adding DINOv2 w/ Registers to transformers!
Source code: github.com/huggingface/...
Online demo: huggingface.co/spaces/webml...
Another interesting thing to see is how the attention maps become far more refined in later layers of the transformer. For example,
- First layer (1): noisy and diffuse, capturing broad, general patterns.
- Last layer (12): focused and precise, highlighting specific features.
Vision Transformers work by dividing images into fixed-size patches (e.g., 14 × 14), flattening each patch into a vector and treating each as a token.
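To make the token bookkeeping concrete, here's a quick back-of-the-envelope calculation (the 224-pixel input size is illustrative, not necessarily the demo's exact configuration):

const imageSize = 224; // assumed square input resolution
const patchSize = 14;  // 14 × 14 pixel patches, as described above
const patchesPerSide = imageSize / patchSize; // 16
const numPatchTokens = patchesPerSide ** 2;   // 256 patch tokens
const numTokens = numPatchTokens + 1;         // +1 for the [CLS] token (DINOv2 w/ registers adds a few register tokens on top)
console.log(numTokens); // 257 tokens, each attending to every other token in self-attention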
It's fascinating to see what each attention head learns to "focus on". For example, layer 11, head 1 seems to identify eyes. Spooky!
The app loads a small DINOv2 model into the user's browser and runs it locally using Transformers.js!
This means you can analyze your own images for free: simply click the image to open the file dialog.
E.g., the model recognizes that long necks and fluffy ears are defining features of llamas!
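For a rough idea of the setup, here is a sketch using the Transformers.js image-feature-extraction pipeline; the model id is an assumption, and the real demo additionally reads out the attention weights to draw the maps:

import { pipeline } from "@huggingface/transformers";

// Assumed model id for a small DINOv2 (with registers) ONNX export.
const extractor = await pipeline(
  "image-feature-extraction",
  "onnx-community/dinov2-with-registers-small",
);

// Works with any image URL (or a file picked via the file dialog); inference stays on-device.
const features = await extractor("https://example.com/llama.jpg");
console.log(features.dims); // e.g. [1, numTokens, hiddenSize]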
First project of 2025: Vision Transformer Explorer
I built a web app to interactively explore the self-attention maps produced by ViTs. This explains what the model is focusing on when making predictions, and provides insights into its inner workings!
Try it out yourself!
Yeah, I ran into this during development; it's unfortunately a bug in Firefox:
- bugzilla.mozilla.org/show_bug.cgi...
- bugzilla.mozilla.org/show_bug.cgi...
Huge shout-out to the Useful Sensors team for such an amazing model and to Wael Yasmina for his 3D audio visualizer tutorial!
Source code: github.com/huggingface/...
Online demo: huggingface.co/spaces/webml...
Introducing Moonshine Web: real-time speech recognition running 100% locally in your browser!
- Faster and more accurate than Whisper
- Privacy-focused (no data leaves your device)
- WebGPU accelerated (w/ WASM fallback)
- Powered by ONNX Runtime Web and Transformers.js
Demo + source code below!
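A minimal sketch of the WebGPU-with-WASM-fallback setup, assuming the Moonshine ONNX export lives at this (hypothetical) model id:

import { pipeline } from "@huggingface/transformers";

const modelId = "onnx-community/moonshine-tiny-ONNX"; // assumed id

let transcriber;
try {
  // Prefer WebGPU when available...
  transcriber = await pipeline("automatic-speech-recognition", modelId, { device: "webgpu" });
} catch {
  // ...and fall back to WASM on browsers without WebGPU support.
  transcriber = await pipeline("automatic-speech-recognition", modelId, { device: "wasm" });
}

const { text } = await transcriber("recording.wav");
console.log(text);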
NEW PIECE:
"Open-source" is becoming a buzzword for many aspects of modern journalism, including open-source AI. But what is it, and how can journalists benefit from it?
@marinaadami.bsky.social spoke to @fdaudens.bsky.social to find out.
reutersinstitute.politics.ox.ac.uk/news/journal...
Huge shout-out to OuteAI for their amazing model (OuteTTS-0.2-500M) and for helping us bring it to the web! Together, we released the outetts NPM package, which you can install with `npm i outetts`.
Source code: github.com/huggingface/...
Demo: huggingface.co/spaces/webml...
The model is multilingual (English, Chinese, Korean & Japanese) and even supports zero-shot voice cloning! Stay tuned for an update that will add these features to the UI!
More samples:
bsky.app/profile/reac...
Introducing TTS WebGPU: The first ever text-to-speech web app built with WebGPU acceleration!
High-quality and natural speech generation that runs 100% locally in your browser, powered by OuteTTS and Transformers.js. Try it out yourself!
Demo + source code below
6. MGP-STR for optical character recognition (OCR)
7. PatchTST & PatchTSMixer for time series forecasting
That's right, everything running 100% locally in your browser (no data sent to a server)! Huge for privacy!
Check out the release notes for more information.
github.com/huggingface/...
2. Qwen2-VL from Qwen for dynamic-resolution image understanding
3. JinaCLIP from Jina AI for general-purpose multilingual multimodal embeddings
4. LLaVA-OneVision from ByteDance for Image-Text-to-Text generation
5. ViTPose for pose estimation
We just released Transformers.js v3.1 and you're not going to believe what's now possible in the browser w/ WebGPU! Let's take a look:
1. Janus from DeepSeek for unified multimodal understanding and generation (Text-to-Image and Image-Text-to-Text)
Demo (+ source code): hf.co/spaces/webml...
~1.1 GB: huggingface.co/HuggingFaceT...
Learn more about SmolLM2: github.com/huggingface/...
Online WebGPU demo: huggingface.co/spaces/Huggi...