An animation for an upcoming video on DeepLIFT. This is by far the most time-consuming one I've made.
26.02.2026 13:36
Three different definitions of Explainable AI
26.02.2026 10:16
Explainable AI (XAI) is about illuminating black-box machine learning models and explaining them in a way that we can understand.
I'm fascinated with this topic. But learning about it was a struggle as there's not much educational content out there. So I made a course. And it's free!
This is amazing work! It helped me find some interesting people in my cluster.
It would be great if you could filter by recent activity (e.g. only show those who have posted in the last month). This would help remove "dead" accounts.
I made a map of 3.4 million Bluesky users - see if you can find yourself!
bluesky-map.theo.io
I've seen some similar projects, but IMO this seems to better capture some of the fine-grained detail
There is a lot of research in this area, but it is focused on predictive machine learning. These are easier to explain as we typically interpret a model's decision on a single input instance.
I have no idea how you would do it for GenAI where the training data is vast and unknown.
What kind of XAI methods could be used for the output of GenAI models like this?
26.02.2026 08:49
I think this is more of a warning about being a matplotlib contributor than anything else
theshamblog.com/an-ai-agent-...
How a CNN makes predictions.
Earlier layers may extract certain features like edges and textures from the input. These are then combined in deeper layers to create features representing specific objects, like pieces of sushi.
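As a minimal numpy sketch of that first step, here is the sliding-window operation a conv layer applies, with a hand-written vertical-edge filter standing in for a learned one (the filter and the toy image are illustrative, not taken from any real network):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation: the core sliding-window
    operation that CNN 'conv' layers actually compute."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A hand-written vertical-edge filter, like those learned in early layers.
edge_kernel = np.array([[1., 0., -1.],
                        [1., 0., -1.],
                        [1., 0., -1.]])

# Toy image: dark left half, bright right half.
image = np.zeros((5, 5))
image[:, 3:] = 1.0

# The response is zero on flat regions and large where the edge sits.
response = conv2d(image, edge_kernel)
```

Deeper layers would stack many such filters with nonlinearities in between, combining edge maps into object-level features.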
I yearn for simpler times
24.02.2026 08:44
The real tragedy is that AI has killed the novelty of those Photoshop edits where they combine two animals
24.02.2026 08:42
Finally going to try YouTube's A/B testing. Which thumbnail do you think will do better?
23.02.2026 17:13
New videos coming soon
youtu.be/sz964mAVcAE
A little animation from my upcoming video on LIME
23.02.2026 13:23
I'm definitely not one to overhype AI, but saying it is useless is just wrong.
My dad is 60 years old, and he can't stop talking about how helpful it is to his painting manufacturing business. He's planning to expand into a bunch of products with its help.
Done editing! I've moved over to a new mic, so hopefully these come out well.
19.02.2026 17:18
Loss landscape gif created using Python code from Claude. LLMs really are an amazing storytelling tool.
18.02.2026 17:25
A little animation (or catimation, rather) from an upcoming video on SmoothGrad.
18.02.2026 16:50
SmoothGrad: adding noise to remove noise
Working on the gradient-based section of my course. Going to turn this image into a little animation.
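The idea fits in a few lines. Here's a minimal numpy sketch, with a toy analytic gradient standing in for a real model's backward pass (the function names and parameters are illustrative):

```python
import numpy as np

def smoothgrad(grad_fn, x, n_samples=50, sigma=0.15, seed=0):
    """Average the gradient over noisy copies of the input.

    Raw saliency maps are noisy; Monte Carlo averaging of
    grad(x + noise), with noise ~ N(0, sigma^2), smooths them out."""
    rng = np.random.default_rng(seed)
    grads = [grad_fn(x + rng.normal(0.0, sigma, size=x.shape))
             for _ in range(n_samples)]
    return np.mean(grads, axis=0)

# Toy model: f(x) = sum(x**2), so the exact gradient is 2x.
grad_fn = lambda x: 2.0 * x
x = np.array([1.0, -0.5, 2.0])

sg = smoothgrad(grad_fn, x, n_samples=500)
# For a function this smooth, the average stays close to the plain gradient.
```

With a real network, `grad_fn` would be one backward pass per noisy sample, so `n_samples` directly sets the cost.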
Actually I was wrong. This problem has more to do with the linearity assumption.
12.02.2026 11:34
I've been digging into it a bit, and I think the Captum implementation of KernelSHAP is super misleading.
It uses a baseline. Although this is practical for image data, it means we are really applying Baseline SHAP (BShap).
captum.ai/api/kernel_s...
Glass Beams make the best music to work/focus to
music.youtube.com/watch?v=v_mp...
I've been reading a lot of Roddy Doyle lately
12.02.2026 08:38
One of the major limitations of applying KernelSHAP to image data.
Features (i.e. clusters of pixels) tend to be highly correlated. This means we can only get reasonable results when we use large superpixels that are approximately independent.
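A minimal sketch of the masking step, assuming a simple grid of superpixels and a constant baseline (the names here are illustrative, not the Captum or shap API):

```python
import numpy as np

def apply_coalition(image, coalition, grid=(2, 2), baseline=0.0):
    """Turn whole superpixels on or off: 'on' keeps the original
    pixels, 'off' replaces the entire superpixel with the baseline.

    Masking clusters of pixels together, rather than single pixels,
    is what keeps the binary features approximately independent."""
    h, w = image.shape
    gh, gw = grid
    sh, sw = h // gh, w // gw
    masked = np.full_like(image, baseline)
    for idx, on in enumerate(coalition):
        if on:
            r, c = divmod(idx, gw)
            masked[r*sh:(r+1)*sh, c*sw:(c+1)*sw] = \
                image[r*sh:(r+1)*sh, c*sw:(c+1)*sw]
    return masked

image = np.arange(16, dtype=float).reshape(4, 4)
# Keep only the top-left and bottom-right superpixels.
masked = apply_coalition(image, coalition=[1, 0, 0, 1])
```

KernelSHAP then evaluates the model on many such coalitions and fits a weighted linear model to the resulting predictions.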
One of my favourite uses of LLMs:
11.02.2026 11:55
Machine learning models are complex functions. The idea behind LIME is that this complexity falls away if we zoom into the feature space in the area around an instance. There, the function is much simpler, or even linear.
10.02.2026 14:26
A w.i.p. figure for my LIME article.
I want to show how, although the original model uses images, we end up training a surrogate model on tabular features.
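A hedged numpy sketch of that pipeline: binary on/off rows are the tabular features, and a weighted linear model is the surrogate. A toy black box stands in for the image model, and nothing here is the official lime package API:

```python
import numpy as np

def lime_surrogate(predict_fn, n_features, n_samples=200,
                   kernel_width=0.75, seed=0):
    """Fit a weighted linear surrogate on binary on/off features.

    Each row of Z says which interpretable features (e.g. superpixels)
    are kept. Samples closer to the original instance (all ones) get
    larger weight; the surrogate's coefficients are the explanation."""
    rng = np.random.default_rng(seed)
    Z = rng.integers(0, 2, size=(n_samples, n_features)).astype(float)
    y = np.array([predict_fn(z) for z in Z])
    # Proximity kernel: distance from the all-ones (unperturbed) row.
    d = np.sqrt(((Z - 1.0) ** 2).sum(axis=1))
    w = np.exp(-(d ** 2) / kernel_width ** 2)
    # Weighted least squares with an intercept column.
    X = np.hstack([np.ones((n_samples, 1)), Z])
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
    return coef[1:]  # drop the intercept

# Toy black box: feature 0 matters a lot, feature 2 a little.
predict_fn = lambda z: 3.0 * z[0] + 0.5 * z[2]
coefs = lime_surrogate(predict_fn, n_features=3)
```

With an image model, `predict_fn` would mask out the superpixels that a row switches off before running the network, which is exactly the images-to-tabular handoff the figure shows.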
Working on the Integrated Gradients section of my Explainable AI course.
One of the more computationally expensive gradient-based approaches. Thankfully, it tends to converge within around 50 backward passes.
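A minimal numpy sketch of that approximation, using a toy analytic gradient in place of a model's backward pass (names are illustrative):

```python
import numpy as np

def integrated_gradients(grad_fn, x, baseline=None, n_steps=50):
    """Riemann-sum approximation of Integrated Gradients:
    IG_i = (x_i - x'_i) * integral of df/dx_i along the straight
    path from baseline x' to input x. Each step costs one backward
    pass, hence roughly n_steps passes in total."""
    if baseline is None:
        baseline = np.zeros_like(x)
    # Midpoint rule over n_steps points along the path.
    alphas = (np.arange(n_steps) + 0.5) / n_steps
    grads = np.array([grad_fn(baseline + a * (x - baseline))
                      for a in alphas])
    return (x - baseline) * grads.mean(axis=0)

# Toy model f(x) = sum(x**2), with exact gradient 2x.
grad_fn = lambda x: 2.0 * x
x = np.array([1.0, -2.0, 0.5])

ig = integrated_gradients(grad_fn, x)
# Completeness check: attributions should sum to f(x) - f(baseline).
```

The completeness property is a handy sanity test: if the attributions don't sum to the prediction difference, `n_steps` is probably too small.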