Actually, your blog has been on fire lately!
06.11.2025 18:45

@spimescape.bsky.social
Building recommender systems @ Consumer Tech Co

I am biased, because I mainly use Python, but the BQ SDK is pretty nice, and you can read from SQLite with Python and ingest rows in bulk using the SDK.
30.10.2025 20:09
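A minimal sketch of what that workflow can look like, assuming a hypothetical local events.db SQLite file and a my-project.analytics.events destination table (both placeholders), with google-cloud-bigquery installed:

```python
# Hypothetical names: events.db and my-project.analytics.events are placeholders.
import sqlite3

from google.cloud import bigquery  # pip install google-cloud-bigquery

TABLE_ID = "my-project.analytics.events"  # project.dataset.table


def rows_from_sqlite(db_path: str, query: str):
    """Yield each SQLite row as a plain dict."""
    conn = sqlite3.connect(db_path)
    conn.row_factory = sqlite3.Row
    try:
        for row in conn.execute(query):
            yield dict(row)
    finally:
        conn.close()


def load_to_bigquery(db_path: str = "events.db") -> None:
    client = bigquery.Client()  # uses your default GCP credentials/project
    rows = list(rows_from_sqlite(db_path, "SELECT * FROM events"))

    job_config = bigquery.LoadJobConfig(
        autodetect=True,                   # infer the schema from the JSON rows
        write_disposition="WRITE_APPEND",  # append to the existing table
    )
    # One bulk load job rather than row-by-row streaming inserts.
    job = client.load_table_from_json(rows, TABLE_ID, job_config=job_config)
    job.result()  # wait for the load job to finish
    print(f"Loaded {job.output_rows} rows into {TABLE_ID}")


if __name__ == "__main__":
    load_to_bigquery()
```

For bulk backfills like this, a single load job tends to be friendlier than streaming inserts.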
This is how I learnt about GPUs without basically any background knowledge.
20.10.2025 19:12

modern LLM inference engines like vLLM & SGLang are becoming tough to dive into. To learn how these inference engines work, nano-vllm is a fantastic educational project: a complete PagedAttention implementation & LLM scheduler in <1k LOC.
flaneur2020.github.io/posts/2025-1...
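Not nano-vllm's actual code, but a toy sketch of the bookkeeping idea behind PagedAttention: the KV cache is split into fixed-size blocks, and each sequence keeps a block table mapping token positions to physical blocks, so memory grows page by page instead of as one contiguous slab. Block and pool sizes here are arbitrary:

```python
# Toy illustration of the PagedAttention bookkeeping idea only --
# not code from nano-vllm or vLLM.
BLOCK_SIZE = 16  # tokens per KV-cache block


class BlockAllocator:
    def __init__(self, num_blocks: int):
        self.free = list(range(num_blocks))  # physical block ids

    def allocate(self) -> int:
        return self.free.pop()

    def release(self, block_id: int) -> None:
        self.free.append(block_id)


class Sequence:
    """Each sequence owns a block table: logical block -> physical block."""

    def __init__(self):
        self.num_tokens = 0
        self.block_table: list[int] = []

    def append_token(self, allocator: BlockAllocator) -> None:
        # Grab a new physical block only when the previous one is full.
        if self.num_tokens % BLOCK_SIZE == 0:
            self.block_table.append(allocator.allocate())
        self.num_tokens += 1

    def kv_slot(self, position: int) -> tuple[int, int]:
        """Map a token position to (physical block, offset within block)."""
        return self.block_table[position // BLOCK_SIZE], position % BLOCK_SIZE


allocator = BlockAllocator(num_blocks=1024)
seq = Sequence()
for _ in range(40):
    seq.append_token(allocator)
print(seq.block_table, seq.kv_slot(37))  # 40 tokens -> 3 blocks
```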
TIL that junk journaling is a thing!
12.10.2025 11:24

Microblogging like it's the 2010s, with vibes.
12.10.2025 09:06

I was trying to research how prevalent remote Jupyter kernels are, but could only find a few open-source projects (i.e. something called Kernel Gateway - anyone using it?)
11.10.2025 17:58

I have a very poor understanding of concurrency in general, and more specifically a poor understanding of the kinds of Heisenbugs that will now be foisted on potentially unsuspecting Python users.
11.10.2025 17:56

Me after hearing that Python 3.14 can now run without the GIL (free-threaded build):
"ah, finally increased throughput of pulling data from Big Query into my Jupyter notebooks."
Also me: "ah, a new footgun to add to my repertoire"
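A sketch of the kind of workload I mean, with placeholder project/table names and assuming google-cloud-bigquery plus pandas are installed. On the free-threaded 3.14 build the CPU-bound result decoding in each thread can actually run in parallel; on a regular GIL build the threads mostly just overlap network I/O:

```python
# Sketch only: queries/table names are placeholders.
from concurrent.futures import ThreadPoolExecutor

from google.cloud import bigquery

QUERIES = {  # hypothetical queries
    "events": "SELECT * FROM `my-project.analytics.events` LIMIT 100000",
    "users": "SELECT * FROM `my-project.analytics.users` LIMIT 100000",
}


def fetch(client: bigquery.Client, name: str, sql: str):
    # Run the query and materialise the result as a DataFrame.
    return name, client.query(sql).to_dataframe()


client = bigquery.Client()
with ThreadPoolExecutor(max_workers=len(QUERIES)) as pool:
    futures = [pool.submit(fetch, client, n, q) for n, q in QUERIES.items()]
    frames = dict(f.result() for f in futures)

print({name: len(df) for name, df in frames.items()})
```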
This means that if you max out memory, say by loading a dataset that's larger than what you have capacity for, you crash the kernel and potentially also lose any code changes in your notebook that hadn't been written to disk.
11.10.2025 17:53

One of the design issues with Jupyter notebooks when it comes to heavy ML workloads is that the notebook server runs by default on the same machine as the kernel that executes the code.
11.10.2025 17:53
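One partial workaround (a sketch, not a fix for the underlying design): cap the kernel process's address space early in the session so an oversized load raises a catchable MemoryError instead of getting the kernel OOM-killed. Linux-specific, and the 16 GiB figure is an arbitrary example:

```python
# Sketch of a defensive memory cap for the kernel process (Linux).
# Run this early in the notebook; pick a limit below the node's RAM.
import resource

LIMIT_BYTES = 16 * 1024**3  # 16 GiB address-space cap (arbitrary example)

_, hard = resource.getrlimit(resource.RLIMIT_AS)
resource.setrlimit(resource.RLIMIT_AS, (LIMIT_BYTES, hard))

try:
    # An allocation that blows past the cap now raises MemoryError in-process
    # instead of the OOM killer taking down the kernel (and unsaved state).
    huge = bytearray(64 * 1024**3)
except MemoryError:
    print("Refusing to load: dataset larger than the configured memory cap.")
```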
NVIDIA seems to invest a lot of engineering effort into making higher-level libraries for writing efficient GPU code, and yet everyone is flexing by rolling their own CUDA kernels.
11.10.2025 15:13

A speech about what drives me, how science and open source are bitter victories, unable to improve the world if society does not embrace them for the better:
gael-varoquaux.info/personnal/a-...
I was looking for a solution to "migrate a container that is close to OOM" onto another node and found CRIU.
Still a bit unclear whether it is supported on Google's GKE or not.
Container image experts - is it possible to manually create a new layer by manipulating the files in the tar archive you get after running docker image save?
08.10.2025 20:17
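To make the question concrete: a sketch of inspecting the archive, assuming the classic docker save layout (a top-level manifest.json plus per-layer tarballs; newer Docker versions may emit an OCI layout with index.json and blobs/ instead). image.tar is a placeholder path:

```python
# Sketch: inspect the archive from `docker image save myimage -o image.tar`.
import json
import tarfile

ARCHIVE = "image.tar"  # placeholder path

with tarfile.open(ARCHIVE) as tar:
    manifest = json.load(tar.extractfile("manifest.json"))

for entry in manifest:
    print("config:", entry["Config"])
    print("tags:  ", entry.get("RepoTags"))
    for layer in entry["Layers"]:
        print("layer: ", layer)

# To hand-craft a new layer you would append another layer tarball to the
# archive, add it to "Layers" here, and update the image config's
# rootfs.diff_ids (and history) with the new layer's sha256 -- at which point
# the image ID changes, since it is a digest over that config.
```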
One of the major design flaws of many notebook environments like Jupyter is that the kernel that does computations is not separate from the machine that runs the notebook server itself.
08.10.2025 20:15

I am honestly curious to learn what the concrete end product in this vision/plan is.
05.10.2025 14:56

When folks say they are going to build AGI - what exactly does that look like?
05.10.2025 14:55

After my inevitable rejection, I asked for feedback from the interview and I wish I hadn't, because the letter that was sent stated that I simply did not have the abilities or talent to become a scientist.
I can laugh about it now, but at 15 and aspiring to become a chemist, this was devastating.
I suppose one could argue that if I was really into chemistry, I could have maybe come across this knowledge myself in my extracurricular studying, but the interview questions can be pretty much anything and the field is huuuuuge!
04.10.2025 12:31

I was baffled, because even though I was at a good local school, this kind of knowledge was in the first-year university curriculum in my country. Only later did I find out that many top schools that send students to top research unis have years of interview prep.
04.10.2025 12:31

Many years ago I was in this position and got an interview to study a science course at one of the top unis. When I went to the interview, I was given a picture of a molecule and asked to sketch a graph of the signals this molecule would give if run through a particular type of spectroscopy.
04.10.2025 12:31

I really miss London.
30.09.2025 20:51

Many k8s benefits have to do with horizontal scaling. With many services you can deal with increased load by having more replicas. Same with ML inference, but ML training doesn't scale horizontally in the same way.
30.09.2025 20:14

I've probably ranted about this elsewhere, but for many ML teams the container image is the wrong abstraction unit.
ML containers can be truly gargantuan in size, and restarting them is not as cheap as it is with more lightweight containers for, e.g., web services.
For example, a question we can answer with napkin math is how much of the model weights or data we could dump to disk in the x-second grace period that k8s gives to terminating containers.
30.09.2025 19:53
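As a worked example with assumed numbers (the bandwidth figures are illustrative, not measurements): k8s defaults terminationGracePeriodSeconds to 30, and a 70B-parameter model in fp16 is roughly 140 GB of weights:

```python
# Napkin math with assumed numbers.
GRACE_PERIOD_S = 30            # k8s terminationGracePeriodSeconds default
WRITE_BANDWIDTH_GB_S = {       # illustrative sustained write speeds
    "network disk": 0.25,
    "local SSD": 1.0,
    "local NVMe": 3.0,
}

MODEL_GB = 140                 # e.g. a 70B model in fp16 (~2 bytes/param)

for disk, bw in WRITE_BANDWIDTH_GB_S.items():
    dumpable = bw * GRACE_PERIOD_S
    verdict = "enough" if dumpable >= MODEL_GB else "not enough"
    print(f"{disk:>12}: ~{dumpable:.0f} GB dumpable, {verdict} for a {MODEL_GB} GB checkpoint")
```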
I am very much a beginner in this space, but napkin math for systems problems is quite a lot of fun!
30.09.2025 19:53

I use k8s extensively in the AI/ML space and would love to see a k8s that is more geared towards ML pipeline or batch job needs.
The reason we stick with k8s despite the shortcomings is the sheer number of cloud services and OSS platforms that integrate with k8s out of the box.
Interesting, but it looks like it also targets services, so it is perhaps not so suitable for ML batch jobs.
"Running a shared-nothing architecture at the edge, we needed a simple way to scale HTTP/TCP based containers without the overhead of complex infrastructure or additional dependencies. "
Is anyone working on a K8s alternative? Kinda curious.
24.06.2025 17:52