@mukel.bsky.social
@graalvm.org padawan at Oracle Labs. Working on Espresso: A metacircular Java bytecode interpreter for GraalVM. Java in Java by day, LLMs in Java by night.
run an LLM with a supercharged engine powered by Java and GraalVM (ht @alina-yurenko.bsky.social )
www.youtube.com/shorts/7zSEa...
✅ @graalvm.org Native Image
✅ Llama3.java
✅ Vector API, FFM API
✅ Apple Silicon ❤️
— `git clone github.com/mukel/llama3... `
— `sdk install java 25.ea.17-graal`
— `make native` (optionally preload a model for zero overhead)
— Profit!🚀
#Java #GraalVM #LLM #LLama
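For readers who haven't met these APIs yet, here is a minimal, self-contained sketch of the two JDK features the checklist leans on: the FFM API to memory-map a weights file off-heap and the Vector API for SIMD dot products. This is not the actual Llama3.java code; the class name, the `dot` helper, and the weights-file argument are purely illustrative. Run it on a recent JDK with `--add-modules jdk.incubator.vector`.

```java
// Illustrative sketch only, not code from Llama3.java.
import java.lang.foreign.Arena;
import java.lang.foreign.MemorySegment;
import java.lang.foreign.ValueLayout;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import jdk.incubator.vector.FloatVector;
import jdk.incubator.vector.VectorOperators;
import jdk.incubator.vector.VectorSpecies;

public class SimdDotProduct {
    private static final VectorSpecies<Float> SPECIES = FloatVector.SPECIES_PREFERRED;

    // Dot product over SIMD lanes, with a scalar tail for the leftover elements.
    static float dot(float[] a, float[] b) {
        float sum = 0f;
        int i = 0;
        int upper = SPECIES.loopBound(a.length);
        for (; i < upper; i += SPECIES.length()) {
            FloatVector va = FloatVector.fromArray(SPECIES, a, i);
            FloatVector vb = FloatVector.fromArray(SPECIES, b, i);
            sum += va.mul(vb).reduceLanes(VectorOperators.ADD);
        }
        for (; i < a.length; i++) {
            sum += a[i] * b[i];
        }
        return sum;
    }

    public static void main(String[] args) throws Exception {
        if (args.length > 0) {
            // Memory-map whatever file you pass in, off-heap, via the FFM API.
            try (FileChannel ch = FileChannel.open(Path.of(args[0]), StandardOpenOption.READ);
                 Arena arena = Arena.ofConfined()) {
                MemorySegment weights = ch.map(FileChannel.MapMode.READ_ONLY, 0, ch.size(), arena);
                // Read the first float straight from the mapped segment, no on-heap copy.
                System.out.println("first float in file = " + weights.get(ValueLayout.JAVA_FLOAT, 0));
            }
        }
        float[] a = {1f, 2f, 3f, 4f, 5f};
        float[] b = {5f, 4f, 3f, 2f, 1f};
        System.out.println("dot = " + dot(a, b)); // 35.0
    }
}
```

The same two ingredients are what make a pure-Java inference engine practical: weights stay memory-mapped outside the heap, and the hot matmul loops compile down to SIMD instructions.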
I had lots of fun talking about my early days as a dev, how I caused double credit bookings all over Germany, how I stopped smoking to survive long GraalVM meetings, and how to heat the room with Java inception.
Already looking forward to part 2.
"Crafting Interpreters" checks all the boxes, here's a nice writeup from the author journal.stuffwithstuff.com/2020/04/05/c...
what's new in llama3.java and the upcoming @graalvm.org for JDK 24 🔥
Try it out here: github.com/mukel/llama3...
@stephanjanssen.be @mukel.bsky.social
#VDCERN #VoxxedDaysCERN
This will be a big deal 🪄🤩. Normally the debugging experience of natively compiled languages gets obscured by compiler optimizations like inlining. Not here: all optimizations can stay enabled while debugging GraalVM native images, and it looks just the same as without them.
We just merged the current status of the upcoming JDWP support for @graalvm.org Native Image! 🥳
This will soon provide developers with the same debugging experience they are used to in Java, but for native images! Stay tuned for more details.
github.com/oracle/graal...
Excited to welcome @mukel.bsky.social as my co-speaker for the VoxxedDays #CERN talk on "The Era of AAP: AI Augmented Programming using only Java" ☕️ 🚀 🔥
cern.voxxeddays.com/talk/the-era...
https://buff.ly/40KmT0t
Graal compiler: +10% faster inference with the latest early access build.
New features: batched prompt processing & AVX512 support.
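As a hedged illustration (again, not code from llama3.java), this tiny probe prints which SIMD width the Vector API's preferred species maps to on the current CPU; on machines where AVX-512 is enabled, it typically reports 512 bits and 16 float lanes. It also needs `--add-modules jdk.incubator.vector`.

```java
// Illustrative probe: shows the SIMD width the Vector API picks on this machine.
import jdk.incubator.vector.FloatVector;
import jdk.incubator.vector.VectorSpecies;

public class VectorWidthProbe {
    public static void main(String[] args) {
        VectorSpecies<Float> preferred = FloatVector.SPECIES_PREFERRED;
        System.out.println("preferred species: " + preferred);
        System.out.println("bits per vector  : " + preferred.vectorBitSize());
        System.out.println("float lanes      : " + preferred.length());
    }
}
```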