From AMD, more on the performance side:
“Improving the Utilization of Micro-operation Caches in x86 Processors”
The other takes more of a security angle, with some interesting timing attacks:
“UC-Check: Characterizing Micro-operation Caches in x86 Processors and Implications in Security and Performance”
27.02.2026 19:42 —
👍 25
🔁 1
💬 2
📌 0
The smaller pieces can then fit entirely in the uOP cache, so you avoid constantly thrashing the decoder.
There are quite a few papers on the subject, but these two give a really nice overview:
27.02.2026 19:42 —
👍 21
🔁 0
💬 1
📌 0
99% of programmers shouldn’t care; but those who squeeze the last bit of performance out of x86 pay attention.
Loop Fission is an interesting technique, where you split a complex loop into multiple smaller, sequential ones.
27.02.2026 19:42 —
👍 20
🔁 0
💬 2
📌 0
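A minimal sketch of the loop fission idea, with hypothetical arrays and stages (the split is legal here because each small loop only reads values the earlier loops already finished producing):

```c
#include <stddef.h>

/* Fused form: one big body. If the loop's decoded uops exceed the uop
   cache's capacity, every iteration re-runs the legacy decoders. */
void stages_fused(float *a, float *b, float *c, size_t n) {
    for (size_t i = 0; i < n; i++) {
        a[i] = a[i] * 2.0f + 1.0f;
        b[i] = a[i] - c[i];
        c[i] = b[i] * b[i];
    }
}

/* Fissioned form: three small loops, each far more likely to fit
   entirely in the uop cache. */
void stages_fissioned(float *a, float *b, float *c, size_t n) {
    for (size_t i = 0; i < n; i++) a[i] = a[i] * 2.0f + 1.0f;
    for (size_t i = 0; i < n; i++) b[i] = a[i] - c[i];
    for (size_t i = 0; i < n; i++) c[i] = b[i] * b[i];
}
```

Both forms compute identical results; only the instruction footprint per loop changes. (The trade-off: the fissioned version walks the data arrays three times, so it only wins when the front-end, not the data cache, is the bottleneck.)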
x86 “looks” CISC, but the engine underneath is RISC.
You don’t *want* to wake up the decoder if you don’t have to. It costs roughly 6 cycles, plus extra power.
Usually, the compiler aligns everything for you...as long as your loop is small enough.
27.02.2026 19:42 —
👍 19
🔁 0
💬 1
📌 0
There is one problem though.
You can’t see it.
Well, not directly at least. You’ll never find uOPs in the binary.
But! You can see the “shape” of it with performance tools…and there are subtle tells in the binary as well (hint, some nops).
27.02.2026 19:42 —
👍 21
🔁 0
💬 3
📌 0
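One way to see the “shape”: on Intel, perf exposes counters for uops delivered from the uop cache (DSB) versus the legacy decode path (MITE). The binary name below is made up, and the coverage helper is just a sketch of the arithmetic you’d do on the two counts:

```shell
# Intel event names; ./hot_loop is a hypothetical binary:
#   perf stat -e idq.dsb_uops,idq.mite_uops ./hot_loop
#
# Rough DSB coverage (percent of uops served from the uop cache),
# given the two raw counts:
dsb_coverage() {
    echo $(( 100 * $1 / ($1 + $2) ))
}

dsb_coverage 950000 50000   # → 95
```

High MITE counts inside a hot loop are the tell that you’ve fallen out of the uop cache.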
Most programmers are taught that L1 is the “top level” cache on x86.
It’s not quite true anymore! Modern chips keep a small cache of already-decoded micro-ops that sits even closer to execution.
Intel calls it the Decoded Stream Buffer (DSB), AMD the Op Cache.
Only enough room for ~4,000 micro-ops, but there are interesting ways to take advantage of it.
27.02.2026 19:42 —
👍 149
🔁 18
💬 4
📌 1
hahaha
25.02.2026 22:13 —
👍 1
🔁 0
💬 0
📌 0
It’s kind of funny that so few listened. FreeBSD was still using 16807 in rand() all the way until 2021!
So if you ever see that constant in disassembled code…now you know :)
25.02.2026 22:13 —
👍 41
🔁 0
💬 2
📌 0
That fits nicely in 32-bit hardware. Only a few instructions.
Apple put it in CarbonLib, FreeBSD also used it; for a few decades it was kind of everywhere.
A few years later they discovered that 48271 was a little better.
Specifically, it scores a bit more evenly on spectral tests up to 6 dimensions.
25.02.2026 22:13 —
👍 24
🔁 0
💬 2
📌 0
They weren’t really trying to make a perfect algorithm; it was more about being “reasonably good and efficient”.
Called the minimal standard, it’s a quick little multiplication routine, just one line:
x = seed × 16807 mod (2^31 − 1)
25.02.2026 22:13 —
👍 25
🔁 0
💬 1
📌 0
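The whole generator really is that small. A sketch in C, using a 64-bit intermediate so the multiply never overflows before the reduction:

```c
#include <stdint.h>

/* Park-Miller "minimal standard" Lehmer generator:
   x' = x * 16807 mod (2^31 - 1). State must start in [1, 2^31 - 2]. */
uint32_t minstd_next(uint32_t x) {
    return (uint32_t)(((uint64_t)x * 16807u) % 2147483647u);
}
```

Seeded with 1, the sequence starts 16807, 282475249, 1622650073, … — the same values C++11’s `minstd_rand0` produces.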
Today it feels trivial, but for decades random number generation was REALLY bad. Mostly IBM's fault.
Two researchers, Park + Miller got so sick of bad RNGs, they released a paper to the ACM in 1988 titled:
"Random Number Generators: Good Ones Are Hard to Find."
25.02.2026 22:13 —
👍 29
🔁 0
💬 1
📌 0
16807 is a very special number in Computer Science.
You can find it in the Playstation 5 (freebsd 11), almost every Mac Classic game, and even the C++11 standard!
Give it the right prime modulus, and you can produce an evenly distributed sequence of over 2 BILLION values.
25.02.2026 22:13 —
👍 104
🔁 14
💬 2
📌 0
The original title of the paper if you want to search:
“Implications of the Turing completeness of reaction-diffusion models, informed by GPGPU simulations on an XBox 360: Cardiac arrhythmias, re-entry and the
Halting problem”
23.02.2026 19:27 —
👍 28
🔁 2
💬 0
📌 0
Boom. Thousands of simulated cardiac cells running at high speed on a single box.
A fun benefit, you get visualizations for “free” by tacking on a little render code to the end of the sim.
It’s certainly an entertaining read, even if the utility is questionable.
23.02.2026 19:27 —
👍 17
🔁 0
💬 1
📌 0
So why the Xbox 360?
Mostly, computational bang for the buck…I also speculate they were trying to be funny. Consoles were somewhat lopsided in that era, you genuinely got a ton of compute if you knew how to use it.
The author writes some C++ for the simulation, ports some of it to HLSL shaders.
23.02.2026 19:27 —
👍 15
🔁 0
💬 2
📌 0
Now that you’ve proven cardiac tissue is Turing complete, uh oh, it’s vulnerable to the Halting problem.
Thus, there is no general algorithm that can look at the state of cardiac tissue and decide if it will ever stop.
Arrhythmias are fundamentally uncomputable!
23.02.2026 19:27 —
👍 16
🔁 3
💬 1
📌 1
The author figured out you can build a NOR gate from heart cells.
NOR is a universal gate, so you can build all the other gates out of NORs.
Thus, arbitrary logic circuits, plus time…boom you have a computer.
But wait! Computers have interesting properties:
23.02.2026 19:27 —
👍 12
🔁 0
💬 2
📌 0
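The NOR-universality step, sketched as plain C on 0/1 values (the gate names are mine; the paper builds the physical NOR out of excitable tissue):

```c
/* NOR(a, b) = NOT (a OR b). Chain NORs to recover every other gate. */
int nor_gate(int a, int b) { return !(a || b); }
int not_gate(int a)        { return nor_gate(a, a); }
int or_gate(int a, int b)  { return not_gate(nor_gate(a, b)); }
int and_gate(int a, int b) { return nor_gate(not_gate(a), not_gate(b)); }
```

NOT, OR, AND out of nothing but NOR; add memory and timing and you can assemble arbitrary logic.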
The human heart is a Turing Machine.
Researchers figured it out with an Xbox 360.
I realize how fake that sounds...but it’s real research published in Elsevier's Computational Biology and Chemistry journal in 2009.
Hearts are electrically excitable media.
23.02.2026 19:27 —
👍 77
🔁 8
💬 5
📌 2
god i wish there was an easier way
21.02.2026 08:21 —
👍 133
🔁 3
💬 13
📌 0
what do you mean by detector? like the actual image sensor?
I'm unfamiliar with the astrophotography world so I'm not sure how big they really get.
The physically largest image sensors I've seen are medium format...but I'm sure they go larger for other applications
19.02.2026 20:11 —
👍 2
🔁 0
💬 1
📌 0
Probably the most comprehensive paper I’ve seen on the overall subject of sensor noise is from MDPI, “The Geometry of Noise in Color and Spectral Image Sensors”
Check it out here:
www.mdpi.com/1424-8220/20...
19.02.2026 19:27 —
👍 19
🔁 0
💬 1
📌 0
Mark Shelley, an astrophotography enthusiast, has amazingly detailed reverse engineering writeups on his blog about various sensor issues.
Please check it out, it’s super cool:
www.markshelley.co.uk
19.02.2026 19:27 —
👍 27
🔁 0
💬 1
📌 0
When you prefer a “Canon color”, what you are technically valuing is whatever compromises were made in the processing pipeline.
I’ve just *barely* scratched the surface; haven’t even touched on codecs.
Fascinating how much diversity can still be wrung out of…extremely similar “engines”.
19.02.2026 19:27 —
👍 19
🔁 1
💬 1
📌 0
You end up with a non-linear distribution of noisy data. Tricky.
Thus begins the NR pipeline…and this is where I start to have a real problem with Sony.
They bake spatial NR directly into the RAW path. It's a sneaky trick to cheat on dynamic range benchmarks.
19.02.2026 19:27 —
👍 20
🔁 0
💬 1
📌 0
Much of it comes down to company taste.
Sony produces the majority of sensors; ironically I think they do the worst job with the signal chain.
First, you start with the color correction matrix (CCM).
The catch is that punchy colors also mathematically multiply noise.
19.02.2026 19:27 —
👍 20
🔁 0
💬 1
📌 0
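The CCM step is just a 3×3 matrix applied to each linear RGB pixel. A sketch with an invented “punchy” matrix (real CCMs are calibrated per sensor); note each row sums to ~1 to preserve white, but the large off-diagonal negatives mean every output channel mixes in — and amplifies — the noise of all three inputs:

```c
/* Multiply one linear RGB pixel by a 3x3 color correction matrix. */
void apply_ccm(const float m[3][3], const float in[3], float out[3]) {
    for (int r = 0; r < 3; r++)
        out[r] = m[r][0] * in[0] + m[r][1] * in[1] + m[r][2] * in[2];
}
```

A pure-red input through a punchy matrix comes out with gain above 1 on the red channel and negative leakage on the others; the sensor noise riding on each channel gets scaled by those same coefficients.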
An open secret is that all cameras are basically the same. Just look at the sensor.
Leica SL2-S? IMX410
Sony a7 III? IMX410
Lumix S5II? IMX410
BMCC6k? IMX410
Same photosites…but they still manage different feels. The processing pipeline is where it gets interesting.
19.02.2026 19:27 —
👍 112
🔁 6
💬 3
📌 0
YouTube video by LaurieWired
2026 Computer Science Predictions
Full Video:
www.youtube.com/watch?v=cnX5...
18.02.2026 18:34 —
👍 75
🔁 9
💬 2
📌 3
ignore the rumors, computer science is still cool
here's my predictions for 2026
18.02.2026 18:34 —
👍 152
🔁 13
💬 6
📌 0
Rise of the Triforce
The funny part is, some cabinets used an optical drive exactly *once*.
You’d load the game into DRAM, which sat on battery backup, so hopefully you’d never need the drive again.
It’s a great lunchtime read, go check it out:
dolphin-emu.org/blog/2026/02...
17.02.2026 18:15 —
👍 43
🔁 4
💬 2
📌 0