hmm, interesting, but it does seem like a bit of funny marketing ha.
L0 seems to be a renamed, more efficient L1
The logic was essentially, hey system fonts are pretty good now…why not just default to what’s native?
Apple gets Apple fonts. Windows gets Windows fonts.
There’s a great blog post from Mark Otto, GitHub’s director of design, about the switch:
markdotto.com/blog/github-...
Hence, the early web was very…Times New Roman-y.
GitHub was arguably one of the first major players to go *against* the custom-font / FOUT hell of the mid-2010s.
In mid-2017, they essentially re-adopted the 90s method of using system fonts directly!
FOUTs were essentially unheard of in the 90s.
The entire world basically defaulted to web-safe fonts.
In the rare instance someone got fancy with something non-standard, the browser would just fall back to a default.
This didn’t really change until 2010!
First, ignore JavaScript for a second.
Even with plain HTML+CSS, it’s quite common to get FOUTs these days.
FOUT = Flash of Unstyled Text.
AKA temporarily load a system-native font, then when the custom font finally rolls in, "snap" to the new font.
Websites today load wildly differently than in the 90s.
Arguably, worse.
The HTML spec was designed to be read sequentially, so text used to stream in, then display instantaneously. Basically, read -> paint.
A lot of today’s modern weirdness comes from…fonts.
From AMD, more on the performance side:
“Improving the Utilization of Micro-operation Caches in x86 Processors”
The other takes more of a security angle, plus some interesting timing attacks:
“UC-Check: Characterizing Micro-operation Caches in x86 Processors and Implications in Security and Performance”
The smaller pieces are thus able to fit entirely in the uOP cache, avoiding thrashing the decoder constantly.
There are quite a few papers on the subject, but these two give a really nice overview:
99% of programmers shouldn’t care, but those who squeeze the absolute last bit of performance out of x86 pay attention.
Loop Fission is an interesting technique, where you split up a complex loop into multiple smaller sequential ones.
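A minimal sketch of the transform in C (toy loop of my own, not from the papers above; real candidates have much fatter bodies than this):

```c
#include <stddef.h>

/* Before fission: one loop body doing two pieces of work.
   Stand-in for a body too large to fit in the uOP cache. */
void update_all(float *pos, float *vel, const float *acc, size_t n, float dt) {
    for (size_t i = 0; i < n; i++) {
        vel[i] += acc[i] * dt;   /* velocity integration */
        pos[i] += vel[i] * dt;   /* position integration */
    }
}

/* After fission: two small sequential loops, each more likely
   to fit entirely in the uOP cache. */
void update_all_fissioned(float *pos, float *vel, const float *acc, size_t n, float dt) {
    for (size_t i = 0; i < n; i++)
        vel[i] += acc[i] * dt;
    for (size_t i = 0; i < n; i++)
        pos[i] += vel[i] * dt;
}
```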
x86 “looks” CISC, but the engine underneath is essentially RISC.
You don’t *want* to wake up the decoder if you don’t have to. It wastes ~6 cycles + extra power.
Usually, the compiler aligns everything for you...as long as your loop is small enough.
There is one problem though.
You can’t see it.
Well, not directly at least. You’ll never find uOPs in the binary.
But! You can see the “shape” of it with performance tools…and there are subtle tells in the binary as well (hint, some nops).
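For example, on Skylake-era Intel cores (assuming your perf build exposes these PMU events), you can count how many uops were delivered from the DSB vs. the legacy decode path:

```
perf stat -e idq.dsb_uops,idq.mite_uops ./my_program
```

(./my_program is just a placeholder; a DSB-friendly hot loop should show idq.dsb_uops dominating)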
Most programmers are taught that L1 is the “top level” cache on x86.
It’s not quite true anymore!
Intel calls it the Decoded Stream Buffer (DSB), AMD the OpCache.
Only enough room for ~4,000 micro-ops, but there are interesting ways to take advantage of it.
hahaha
25.02.2026 22:13 — 👍 1 🔁 0 💬 0 📌 0
(side note: most rand() implementations moved on to other LCGs, Mersenne Twisters, and such…but it’s arguable that 16807 is still quite ubiquitous!)
Original Paper if you’d like to read:
dl.acm.org/doi/10.1145/...
It’s kind of funny that so few listened. FreeBSD was still using 16807 in rand() all the way until 2021!
So if you ever see that constant in disassembled code…now you know :)
That fits nicely in 32-bit hardware. Only a few instructions.
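If I remember right, the integer version in the paper uses Schrage’s decomposition to do exactly that; a sketch:

```c
#include <stdint.h>

/* (16807 * x) mod (2^31 - 1) without any 64-bit math: Schrage's decomposition.
   Writing m = a*q + r with q = m/a and r = m%a, r < q guarantees
   both partial terms below stay inside a signed 32-bit int. */
int32_t minstd_schrage(int32_t x) {
    const int32_t a = 16807;
    const int32_t m = 2147483647;   /* 2^31 - 1, a Mersenne prime */
    const int32_t q = 127773;       /* m / a */
    const int32_t r = 2836;         /* m % a */
    x = a * (x % q) - r * (x / q);
    if (x <= 0)
        x += m;                     /* fold a negative partial result back into range */
    return x;
}
```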
Apple put it in CarbonLib, and FreeBSD used it too; for a few decades it was kind of everywhere.
A few years later they discovered that 48271 was a little better.
Specifically, a bit more even on spectral tests up to 6 dimensions.
They weren’t really trying to make a perfect algorithm; it was more about being “reasonably good and efficient”.
Called the minimal standard, it’s a quick little multiplication routine, just one line:
x = (seed × 16807) mod (2^31 - 1)
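With 64-bit intermediates available, that one line is about as literal in C as it sounds (my sketch, not the paper’s listing):

```c
#include <stdint.h>

/* Park–Miller "minimal standard": x = (seed * 16807) mod (2^31 - 1).
   Seed with any value in [1, 2^31 - 2]; zero is a fixed point, so avoid it. */
uint32_t minstd_next(uint32_t x) {
    return (uint32_t)(((uint64_t)x * 16807u) % 2147483647u);
}
```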
Today it feels trivial, but for decades random number generation was REALLY bad. Mostly IBM's fault.
Two researchers, Park and Miller, got so sick of bad RNGs that they published a paper in the Communications of the ACM in 1988, titled:
"Random Number Generators: Good Ones Are Hard to Find."
16807 is a very special number in Computer Science.
You can find it in the PlayStation 5 (FreeBSD 11), almost every Mac Classic game, and even the C++11 standard!
Give it the right prime number and you can produce an evenly distributed sequence of over 2 BILLION values.
The original title of the paper if you want to search:
“Implications of the Turing completeness of reaction-diffusion models, informed by GPGPU simulations on an XBox 360: Cardiac arrhythmias, re-entry and the Halting problem”
Boom. Thousands of simulated cardiac cells running at high speed on a single box.
A fun benefit, you get visualizations for “free” by tacking on a little render code to the end of the sim.
It’s certainly an entertaining read, even if the utility is questionable.
So why the Xbox 360?
Mostly, computational bang for the buck…I also speculate they were trying to be funny. Consoles were somewhat lopsided in that era; you genuinely got a ton of compute if you knew how to use it.
The author writes some C++ for the simulation and ports some of it to HLSL shaders.
Now that you’ve proven cardiac tissue is Turing complete, uh oh, it’s vulnerable to the Halting problem.
Thus, there is no general algorithm that can look at the state of cardiac tissue and decide if it will ever stop.
Arrhythmias are, in that sense, fundamentally undecidable!
The author figured out you can build a NOR gate from heart cells.
NOR is a universal gate, so you can build all the other gates out of NORs.
Thus, arbitrary logic circuits, plus time…boom you have a computer.
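You can sanity-check the universality step in a few lines of C (plain truth functions of my own, nothing cardiac about them):

```c
#include <stdio.h>

/* NOR is functionally complete: every other gate falls out of it. */
int nor_(int a, int b) { return !(a || b); }
int not_(int a)        { return nor_(a, a); }          /* NOT(a) = NOR(a, a)       */
int or_ (int a, int b) { return not_(nor_(a, b)); }    /* OR = NOT(NOR)            */
int and_(int a, int b) { return nor_(not_(a), not_(b)); } /* AND = NOR(NOT, NOT)   */

int main(void) {
    /* Print the full truth table to confirm the derived gates behave. */
    for (int a = 0; a <= 1; a++)
        for (int b = 0; b <= 1; b++)
            printf("a=%d b=%d  NOR=%d NOT(a)=%d OR=%d AND=%d\n",
                   a, b, nor_(a, b), not_(a), or_(a, b), and_(a, b));
    return 0;
}
```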
But wait! Computers have interesting properties:
The human heart is a Turing Machine.
Researchers figured it out with an Xbox 360.
I realize how fake that sounds...but it’s real research published in Elsevier's Computational Biology and Chemistry journal in 2009.
Hearts are electrically excitable media.
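If you want to poke at an excitable medium yourself, a Greenberg–Hastings cellular automaton is a standard toy model (my sketch here, not the paper’s actual reaction-diffusion model):

```c
#include <stdio.h>
#include <string.h>

#define N 32
/* Greenberg–Hastings: each cell is resting (0), excited (1), or refractory (2).
   A resting cell fires when any neighbor is excited; excited -> refractory -> resting.
   Waves of excitation propagate, just like in electrically excitable tissue. */
static int grid[N][N], next[N][N];

static void step(void) {
    for (int y = 0; y < N; y++)
        for (int x = 0; x < N; x++) {
            if (grid[y][x] == 1)      next[y][x] = 2;  /* excited -> refractory */
            else if (grid[y][x] == 2) next[y][x] = 0;  /* refractory -> resting */
            else {
                int fire = 0;
                for (int dy = -1; dy <= 1; dy++)       /* scan the 8 neighbors (torus) */
                    for (int dx = -1; dx <= 1; dx++) {
                        int ny = (y + dy + N) % N, nx = (x + dx + N) % N;
                        if (grid[ny][nx] == 1) fire = 1;
                    }
                next[y][x] = fire;                     /* resting cell fires if a neighbor fired */
            }
        }
    memcpy(grid, next, sizeof grid);
}

int main(void) {
    grid[N / 2][N / 2] = 1;   /* a single stimulus; a wave spreads outward */
    for (int t = 0; t < 5; t++) step();
    /* Print the state: '.' resting, '*' excited, 'o' refractory. */
    for (int y = 0; y < N; y++, putchar('\n'))
        for (int x = 0; x < N; x++)
            putchar(".*o"[grid[y][x]]);
    return 0;
}
```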
god i wish there was an easier way
21.02.2026 08:21 — 👍 133 🔁 3 💬 13 📌 0
what do you mean by detector? like the actual image sensor?
I'm unfamiliar with the astrophotography world so I'm not sure how big they really get.
The physically largest image sensors I've seen are medium format...but I'm sure they go larger for other applications
Probably the most comprehensive paper I’ve seen on the overall subject of sensor noise is from MDPI, “The Geometry of Noise in Color and Spectral Image Sensors”
Check it out here:
www.mdpi.com/1424-8220/20...
Mark Shelley, an astrophotography enthusiast, has amazingly detailed reverse engineering writeups on his blog about various sensor issues.
Please check it out, it’s super cool:
www.markshelley.co.uk
When you prefer a “Canon color”, what you are technically valuing is whatever compromises were made in the processing pipeline.
I’ve just *barely* scratched the surface; haven’t even touched on codecs.
Fascinating how much diversity can still be squeezed out of…extremely similar “engines”.