Clojure fast matrix library Neanderthal has just been updated with native Apple silicon engine
Please check the new release 0.54.0 in Clojars.
#Java #AI #Clojure #CUDA #Apple
neanderthal.uncomplicate.org
@draganrocks.bsky.social
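To try the new release, the Clojars coordinate can be added like this (a minimal sketch using deps.edn; the `uncomplicate/neanderthal` group/artifact is assumed from previous Neanderthal releases):

```clojure
;; deps.edn — pull the 0.54.0 release from Clojars
;; (group/artifact assumed from earlier Neanderthal releases)
{:deps {uncomplicate/neanderthal {:mvn/version "0.54.0"}}}
```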
Interactive Programming for Artificial Intelligence books. Read now: https://aiprobook.com #Clojure #AI #ML #DeepLearning #Bayesian #Java https://dragan.rocks
For all programmers thinking that they'll leave the "boring tasks" to AI code assistants while they just do the "creative parts": you won't even be able to get to the creative parts, let alone solve them...
arxiv.org/abs/2506.08872
Fast matrices and number crunching are now available on Apple Silicon #MacOS. Check out the newest snapshots of Neanderthal on Clojars! Add 0.54.0-SNAPSHOT to your project.clj and you're ready to go!
github.com/uncomplicate...
#Clojure #NumPy #CUDA
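Adding the snapshot to project.clj could look something like this (a minimal sketch; the project name and Clojure version are placeholders, not from the post):

```clojure
;; project.clj (Leiningen) — minimal sketch; "my-app" and the
;; Clojure version are illustrative placeholders
(defproject my-app "0.1.0-SNAPSHOT"
  :dependencies [[org.clojure/clojure "1.11.1"]
                 [uncomplicate/neanderthal "0.54.0-SNAPSHOT"]])
```

Leiningen resolves snapshot versions from the Clojars snapshot repository, which it checks by default.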
So, we created SQL so "analysts" could query the DB and get rid of programmers. They delegated this to programmers anyway. Now, they've created AI that writes SQL, that queries the DB for "analysts". Guess who's going to be stuck writing shitty AI prompts. news.ycombinator.com/item?id=4400...
17.05.2025 17:38

Damn, this is the clearest evidence yet of my "AI-powered Dunning-Kruger" hypothesis: that AI proponents are only bullish about AI for work they don't actually know or understand.
30.04.2025 16:29

We need to put the punk back in cyber.
24.12.2024 02:54

Apple M CPU Accelerate backend implemented for #Clojure Neanderthal! Now you have 3 superfast native CPU and 2 GPU choices when crunching numbers on the JVM!
Still available as snapshots on github.com/uncomplicate/neanderthal
(waiting for upstream releases). Thank you clojuriststogether.org
Vectors/matrices/tensors are really economies of scale at work! Don't process individual elements in your own loops; use the built-in operations that process the whole structure in one call, on CPU or GPU (CUDA, etc.).
aiprobook.com
#Clojure #PyTorch #programming
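The point above can be sketched with Neanderthal's whole-structure operations (illustrative only: it assumes a working native backend is installed, and the data values are made up):

```clojure
(ns example.vectorized
  (:require [uncomplicate.neanderthal.core :refer [dot axpy mv]]
            [uncomplicate.neanderthal.native :refer [dv dge]]))

;; Instead of looping over elements yourself, e.g.
;;   (reduce + (map * xs ys))
;; hand the whole structure to one optimized operation:
(let [x (dv [1 2 3])
      y (dv [4 5 6])
      a (dge 2 3 [1 2 3 4 5 6])]  ;; 2x3 dense matrix
  (dot x y)       ;; vector dot product, one native call
  (axpy 2.0 x y)  ;; 2x + y, one native call
  (mv a x))       ;; matrix-vector product
```

Each call dispatches the whole computation to the optimized native (or GPU) engine, instead of crossing the JVM/native boundary per element.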
I love physical books too!
I can't provide those for my books due to logistical issues, but I don't have anything against you printing the PDF (no drm) and binding in a hardcover binding (if such shops are available in your area).
If you're a dev who's felt like most ML/Math content talks over your head, this is for you.
You can preview the books or support the work on my site. ❤️
aiprobook.com
Or just retweet this thread so others can find it.
They've helped hundreds of devs actually get backprop, eigenvalues, gradient descent, and more, without needing a PhD or pretending math is magic.
A few chapters are even free at aiprobook.com if you want to explore. Lots of content is available as blog articles.
That feedback led me to create two books:
Deep Learning for Programmers
Linear Algebra for Programmers
They're built entirely on the intuition that if you can code, you can understand math.
Code-first, jargon-free, honest.
One day, a blog post of mine made the front page of Hacker News.
It didn't break my server, but it was read by many people.
That gave me a signal: there's a hunger out there among programmers for hands-on, code-first explanations of "scary" math concepts.
10y ago, many programmers were frustrated trying to understand how Deep Learning worked under the hood. Every resource was either:
Way too theoretical
Or shallow "framework tutorials"
I started writing blog posts at dragan.rocks just to explain things to my past self.
I've spent many years building HPC and ML libraries, and writing 2 books that teach Linear Algebra and Deep Learning to actual programmers (not math PhDs).
Here's how I went from writing my first blog post to building a following, front-paging Hacker News, and writing useful books. 🧵👇
Thank you!
I really don't know, as I don't get any contact details from Patreon. I am surprised that they close accounts for such reasons. The only thing that I can suggest is to try with a more traditional email, such as gmail...
The best programmers aren't good because they know more.
They're good because they ask:
"What's really going on here?"
Programmers treat linear algebra like magic.
But it's not magic.
It's code. It's vectors. It's yours to master.
Linear Algebra for Programmers shows you how, with zero fluff.
If you write code, you need this book.
aiprobook.com/numerical-li...
#DevLife #MachineLearning #AI #Coding
If you're a programmer struggling with math, Linear Algebra for Programmers by Dragan Djuric is the book you didn't know you needed.
No fluff. Just the math that powers ML, graphics, and more, explained in code.
Get smarter where it counts: aiprobook.com/numerical-li...
#AI
A lot of the code that makes PyTorch useful might already be in Deep Diamond. No need to create a full PyTorch port, just an integration of the most useful parts of libtorch into Clojure.
27.04.2025 19:41

That's all right. I might do the PyTorch part, and other people will do some other pieces.
27.04.2025 19:39

Nothing stops us from doing the same deep integration for ONNX, of course. Or any other runner. But, as I understand it, the selling pitch for ONNX is portability, not performance. Why I would go with PyTorch is that most models are developed on PyTorch anyway, so there's less friction there...
26.04.2025 18:33

This would be orthogonal, for production use of the models. You could collaborate with Python colleagues in whatever way you collaborate now. The point is that when there is a model (public or private) that you want to build your application on, you can run it from the JVM, without Python.
26.04.2025 15:55

I am not sure what exactly you consider the PyTorch ecosystem, but my vision is, first, to be able to load and run popular PyTorch models (saved models, not the Python code that produces them) with the PyTorch engine, without conversion, from Clojure/Deep Diamond, in-process.
26.04.2025 15:19

For now I'm still just thinking about it, not actually looking into it. First I need to see how many people would be engaged, and whether it's a good use of my (not limitless) resources ;)
26.04.2025 15:16

I'm thinking of creating a #Clojure integration with #PyTorch (the C++ part, without Python). I guess, if you can't beat them, join them...
How many Clojurians are interested in that? Would it be something you use passionately (at least for running existing models in-process)?
BTW Accelerate itself doesn't support the GPU. For GPU support, I'll have to do it using Metal.
14.04.2025 20:40

GPU acceleration under MacOS is not yet available, but I plan to do it this summer!
14.04.2025 20:39