
Daniël de Kok

@danieldk.eu.bsky.social

Machine Learning, Natural Language Processing, LLM, transformers, macOS, NixOS, Rust, C++, Python, Cycling. Working on inference at Hugging Face 🤗. Open source ML 🚀.

1,481 Followers  |  157 Following  |  37 Posts  |  Joined: 10.10.2023

Latest posts by danieldk.eu on Bluesky

Hugging Face Kernel Builder Walkthrough | Image to Grayscale CUDA Kernel
YouTube video by David Holtz

David Holz made an introductory video showing how to make your own kernels with kernel-builder:

www.youtube.com/watch?v=HS5P...

26.07.2025 11:37 · 👍 0  🔁 0  💬 0  📌 0

The kernel ecosystem is completely open: you can build your own kernels with kernel-builder, upload them to the Hub, and register a mapping with the kernels package so that transformers picks them up (sketch below).

github.com/huggingface/...
github.com/huggingface/...
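
A minimal sketch of what registering such a mapping can look like with the kernels package. The names used here (register_kernel_mapping, LayerRepository, the example repo and layer name) are assumptions based on the kernels documentation, not taken verbatim from this post:

```python
# Hedged sketch: map a layer name to a kernel repo on the Hub so that
# kernel-aware code (e.g. transformers) can substitute the implementation.
from kernels import LayerRepository, register_kernel_mapping

register_kernel_mapping(
    {
        "SiluAndMul": {  # layer name the framework looks up (assumed example)
            "cuda": LayerRepository(
                repo_id="kernels-community/activation",  # example kernel repo on the Hub
                layer_name="SiluAndMul",
            )
        }
    }
)
```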

26.07.2025 11:37 · 👍 0  🔁 0  💬 1  📌 0
Preview
Release v4.54.0: Kernels, Transformers Serve, Ernie, Voxtral, LFM2, DeepSeek v2, ModernBERT Decoder... · huggingface/transformers Important news! In order to become the source of truth, we recognize that we need to address two common and long-heard critiques about transformers: transformers is bloated; transformers is slow. O...

Transformers 4.54.0 is out! This release adds support for compute kernels hosted on the Hub. When enabled, transformers can replace PyTorch layer implementations with fast, specialized kernels from the Hub.

github.com/huggingface/...
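
As a rough illustration of the idea (a sketch, not the exact transformers API introduced in this release): with the kernels package installed, a loaded model's eligible layers can be swapped for Hub kernels. The model id is a placeholder, and the kernelize/Mode names are assumptions based on the kernels documentation:

```python
# Hedged sketch: replace eligible PyTorch layers of a loaded model with
# compute kernels fetched from the Hub. Exact signatures may differ.
import torch
from transformers import AutoModelForCausalLM
from kernels import Mode, kernelize

# Placeholder model id; any recent model with kernel-annotated layers should do.
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.2-1B", torch_dtype=torch.bfloat16
).to("cuda")

# Swap in Hub kernels for inference.
model = kernelize(model, mode=Mode.INFERENCE, device="cuda")
```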

26.07.2025 11:34 · 👍 7  🔁 2  💬 1  📌 0
Preview
GitHub - koaning/mktestdocs: Run pytest against markdown files/docstrings.

Just released a new version of mktestdocs. It now also supports huggingface docstrings!

github.com/koaning/mkt...
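
For context, typical mktestdocs usage runs the fenced code blocks in your markdown files as pytest tests; a small sketch, with illustrative paths (docstring checking goes through the package's separate docstring helpers):

```python
# Run every fenced Python block in docs/**/*.md as its own pytest test.
import pathlib

import pytest
from mktestdocs import check_md_file


@pytest.mark.parametrize("fpath", pathlib.Path("docs").glob("**/*.md"), ids=str)
def test_docs_codeblocks_run(fpath):
    # Executes the code blocks in the file and fails the test on any exception.
    check_md_file(fpath=fpath)
```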

26.07.2025 10:00 · 👍 4  🔁 1  💬 0  📌 0

Some of the ModernBERT team is back with new encoder models: Ettin, ranging from tiny to small: 17M, 32M, 68M, 150M, 400M & 1B parameters. They also trained decoder models & checked if decoders could classify & if encoders could generate.

Details in 🧵:
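
As a quick, hedged illustration of using one of these encoders with transformers for embeddings; the repo id below is an assumption, so check the thread for the actual model names on the Hub:

```python
# Load an Ettin encoder and take its last hidden states as token embeddings.
import torch
from transformers import AutoModel, AutoTokenizer

repo_id = "jhu-clsp/ettin-encoder-150m"  # assumed/illustrative repo id
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModel.from_pretrained(repo_id)

inputs = tokenizer("Encoders are not dead.", return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state  # shape: (1, seq_len, hidden_size)
print(hidden.shape)
```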

17.07.2025 15:23 · 👍 7  🔁 1  💬 1  📌 0
Your open-source companion - Reachy Mini
YouTube video by Pollen Robotics

So excited to finally release our first robot today: Reachy Mini

A dream come true: cute and low priced, hackable yet easy to use, powered by open-source and the infinite community.

Read more and order now at huggingface.co/blog/reachy-...

09.07.2025 10:09 · 👍 79  🔁 17  💬 2  📌 7
Preview
SUSE Refines, Releases Open-Source LLM to Fuel Community Collaboration Today, SUSE has released a new fine-tuned version of the language model, Cavil-Qwen3-4B, as open source on openSUSE's Hugging Face in order to make legal com...

SUSE has released Cavil-Qwen3-4B, a fine-tuned, #opensource #LLM on #HuggingFace. Built to detect #legal text like license declarations, it empowers #devs to stay #compliant. #fast #efficiently. #openSUSE #AI #Licenses news.opensuse.org/2025/06/24/s...

24.06.2025 13:59 · 👍 11  🔁 2  💬 1  📌 0
Preview
Learn the Hugging Face Kernel Hub in 5 Minutes We're on a journey to advance and democratize artificial intelligence through open source and open science.

Over the past few months, we have worked on the @hf.co Kernel Hub. The Kernel Hub allows you to get cutting-edge compute kernels directly from the Hub in a few lines of code.

David Holz made a great writeup of how you can use kernels in your projects: huggingface.co/blog/hello-h...
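
For reference, those few lines look roughly like the example from the kernels documentation (requires a CUDA GPU; kernels-community/activation is the kernel repo used in the docs):

```python
# Fetch a compute kernel from the Kernel Hub and call one of its functions.
import torch
from kernels import get_kernel

# Downloads the optimized kernel from the Hub on first use.
activation = get_kernel("kernels-community/activation")

x = torch.randn((16, 16), dtype=torch.float16, device="cuda")
out = torch.empty_like(x)
activation.gelu_fast(out, x)  # fused GELU kernel, writes the result into `out`
print(out)
```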

17.06.2025 07:47 · 👍 9  🔁 2  💬 0  📌 0
Post image

Hi Berlin people! @hugobowne.bsky.social is in town & we're celebrating by hosting a meetup together 🎉 This one is all about building with AI & we'll also open the floor for lightning talks. If you're around, come hang out with us!

📆 June 16, 18:00
📍 Native Instruments (Kreuzberg)
🎟️ lu.ma/d53y9p2u

02.06.2025 07:48 · 👍 9  🔁 4  💬 0  📌 1
Preview
Release v3.3.1 · huggingface/text-generation-inference This release updates TGI to Torch 2.7 and CUDA 12.8. What's Changed change HPU warmup logic: seq length should be with exponential growth by @kaixuanliu in #3217 adjust the round_up_seq logic to a...

TGI v3.3.1 is released! This version switches to Torch 2.7 and CUDA 12.8. This should improve support for GPUs with compute capability 10.0 (B200) and 12.0 (RTX 50x0 and NVIDIA RTX PRO Blackwell GPUs).

github.com/huggingface/...

22.05.2025 13:40 · 👍 0  🔁 0  💬 0  📌 0
Post image

@aob.nl nice timeline of the strikes in the education magazine, only the strike of 18 March at @rug.nl was left out, which is a bit of a pity!

17.05.2025 11:51 · 👍 1  🔁 1  💬 1  📌 0
Preview
Release v3.3.0 · huggingface/text-generation-inference Notable changes Prefill chunking for VLMs. What's Changed Fixing Qwen 2.5 VL (32B). by @Narsil in #3157 Fixing tokenization like https://github.com/huggingface/text-embeddin… by @Narsil in #3156...

We just released text-generation-inference 3.3.0. This release adds prefill chunking for VLMs 🚀. We have also made Gemma 3 faster & use less VRAM by switching to flashinfer for prefills with images.

github.com/huggingface/...

09.05.2025 15:39 · 👍 2  🔁 1  💬 0  📌 0
A T-Shirt with a hugging face emoji with a construction hat and the text 'kernels'.

At @hf.co we are also building...

16.04.2025 14:59 · 👍 6  🔁 0  💬 0  📌 0
Preview
Welcome Llama 4 Maverick & Scout on Hugging Face We're on a journey to advance and democratize artificial intelligence through open source and open science.

The entire Xet team is so excited to bring Llama 4 to the @hf.co community. Every byte downloaded comes through our infrastructure ❤️ 🤗 ❤️ 🤗 ❤️ 🤗

Read the whole post to see more about these models.

05.04.2025 20:05 · 👍 8  🔁 2  💬 1  📌 1
Video thumbnail

Gemma 3 is live 🔥

You can deploy it from endpoints directly, with optimally selected hardware and configuration.

Give it a try 👇

12.03.2025 11:28 · 👍 7  🔁 3  💬 1  📌 0
Video thumbnail

HuggingChat keycap sticker when?

11.03.2025 15:48 · 👍 0  🔁 0  💬 0  📌 0
A screenshot of the "About Orion" window for the Orion browser. The window features the Orion logo, a starburst design with a central white star and smaller stars on a purple and blue gradient background. The text reads: "Orion. Version 0.0.0.0.1 (WebKit development). Build date Mar 5, 2025. x86_64 (Ubuntu 22.04.5 LTS). Orion Browser by Kagi. Copyright © 2020-2025 Kagi. All rights reserved. Humanize the Web." Below the text are three buttons labeled "Get Orion+," "Send Feedback," and "Licenses." The background is a soft purple gradient.

We're thrilled to announce that development of the Orion Browser for Linux has officially started!

Register here to receive news and early access opportunities throughout the development year: forms.kagi.com?q=orion_linu...

07.03.2025 00:56 · 👍 202  🔁 30  💬 14  📌 4
Post image

want to try QwQ-32B? it just landed on HuggingChat!

06.03.2025 20:06 · 👍 4  🔁 1  💬 0  📌 0

Yikes, good to hear you are feeling better.

I had them two years ago while we were on vacation. Best advice from a Danish doctor: take a lot of painkillers and drink enough beer (or water).

04.03.2025 11:05 · 👍 1  🔁 0  💬 0  📌 0

Six months after joining @hf.co we're kicking off the first migrations from LFS -> Xet-backed storage for a handful of repos on the Hugging Face Hub.

A few months ago, I published a timeline of our work, and this is a big step (of many!) in bringing our storage to the Hub - more in 🧵👇

21.02.2025 03:22 · 👍 4  🔁 2  💬 1  📌 0
Post image

Followers wanted. Now that we are no longer active on X (the general FS account had 150k followers) and Mastodon unfortunately does not seem to reach the volume of the old Twitter, I hope Bluesky can take that place. Social media remains a cheap way to inform the public. pls rt

07.02.2025 17:16 · 👍 1350  🔁 1220  💬 58  📌 42

Not only is DeepSeek R1 open, you can now run it on your own hardware with Text Generation Inference 3.1.0.

Awesome work by @mohit-sharma.bsky.social and @narsilou.bsky.social !

03.02.2025 10:56 · 👍 5  🔁 0  💬 0  📌 0
Video thumbnail

Want to run DeepSeek R1?

Text-generation-inference v3.1.0 is out and supports it out of the box.

Both on AMD and NVIDIA!

31.01.2025 14:25 · 👍 5  🔁 1  💬 0  📌 1
Video thumbnail

๐Ÿณ DeepSeek is on Hugging Face ๐Ÿค—

Free for inference!
1K requests for free
20K requests with PRO

Code: https://buff.ly/4glAAa5
900 models more: https://buff.ly/40x1rua
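
A hedged sketch of calling the model through huggingface_hub's InferenceClient; the model id is illustrative and authentication uses your locally saved Hugging Face token (request quotas depend on your free or PRO account):

```python
# Query DeepSeek-R1 via Hugging Face inference with the huggingface_hub client.
from huggingface_hub import InferenceClient

# Uses the token from `huggingface-cli login` / HF_TOKEN by default.
client = InferenceClient(model="deepseek-ai/DeepSeek-R1")

response = client.chat_completion(
    messages=[{"role": "user", "content": "Why is the sky blue?"}],
    max_tokens=256,
)
print(response.choices[0].message.content)
```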

28.01.2025 13:00 · 👍 9  🔁 2  💬 0  📌 0
Preview
Release v3.0.2 · huggingface/text-generation-inference Tl;dr New transformers backend supporting flashattention at roughly same performance as pure TGI for all non officially supported models directly in TGI. Congrats @Cyrilvallez New models unlocked: ...

Text-generation-inference v3.0.2 is out.

Basically, we can run transformers models (that support flash attention) at roughly the same speed as native TGI ones. What this means is broader model support.

Today it unlocks Cohere2, Olmo, Olmo2, and Helium.

Congrats Cyril Vallez

github.com/huggingface/...

24.01.2025 14:55 · 👍 6  🔁 3  💬 0  📌 0
Post image

๐Ÿ DeepSeek is not on the @hf.co Hub to take part, they are there to take over!

Amazing stuff from the DeepSeek team, ICYMI they recently released some reasoning models (DeepSeek-R1 and DeepSeek-R1-Zero), fully open-source, their performance is on par with OpenAI-o1 and it's MIT licensed!

23.01.2025 13:45 · 👍 10  🔁 1  💬 1  📌 0
Post image

Hello on the new sky!

22.01.2025 06:06 · 👍 34  🔁 3  💬 3  📌 0

Hi 👋

22.01.2025 19:26 · 👍 1  🔁 0  💬 0  📌 0
GitHub Actions finishing in 30 seconds

The speed of uv is just insane. I just experimented with using it for the CI of a project: installing the project, its dependencies (including Torch), and running some tests takes 30 seconds 🤯.

22.01.2025 12:40 · 👍 2  🔁 0  💬 0  📌 0
