
Pekka Väänänen

@pekkavaa.bsky.social

Avid reader, computer graphics fan and atmospheric jungle beats enjoyer. Demoscene: cce/Peisik. Blog at https://30fps.net/

1,125 Followers  |  352 Following  |  213 Posts  |  Joined: 14.10.2024  |  2.17

Latest posts by pekkavaa.bsky.social on Bluesky

Subpixel Zoo: A Catalog of Subpixel Geometry Display devices show color by illuminating small flat-color 'subpixels'—typically red, green, blue. They can be arranged in many ways. This page intends to be an exhaustive gallery of all subpixel…

A cool collection of monitor and sensor filter subpixel patterns geometrian.com/resources/su...

15.10.2025 17:56 — 👍 10    🔁 2    💬 0    📌 0

Perhaps in that case it's also possible to fit some kind of better weights of the filter kernel? At least if you focus on a selected corpus of images.

14.10.2025 09:01 — 👍 0    🔁 0    💬 1    📌 0
A four-pane comparison of different formulas for greyscale conversion.
Top row: Linear sRGB (the correct way to do it), Gamma sRGB (the somewhat wrong "luma" formula).
Bottom row: CIELAB L* (a high quality option), and the new alternative gamma-space version.

Careful inspection shows the "Gamma sRGB" case is slightly dimmer at points than the others.

A quote from “Principles of Digital Image Processing” by Wilhelm Burger and Mark J. Burge. It shows weights that are a better fit for gamma-space greyscale conversion.

New article on my site: Better sRGB to greyscale conversion

The commonly used greyscale formula is slightly off when computed in gamma space, but can it be fixed?

📜 30fps.net/pages/better...

13.10.2025 17:56 — 👍 37    🔁 9    💬 2    📌 0
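For anyone curious what "slightly off" means in practice, a minimal sketch (my own illustration, not the article's code) comparing the two approaches with the standard Rec. 709 weights:

```python
import numpy as np

def srgb_to_linear(c):
    # Inverse of the sRGB transfer function, applied per channel.
    c = np.asarray(c, dtype=np.float64)
    return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)

def linear_to_srgb(c):
    c = np.asarray(c, dtype=np.float64)
    return np.where(c <= 0.0031308, c * 12.92, 1.055 * c ** (1 / 2.4) - 0.055)

W = np.array([0.2126, 0.7152, 0.0722])  # Rec. 709 luminance weights

def grey_linear(rgb):
    # The correct way: weight in linear light, then re-encode.
    return linear_to_srgb(srgb_to_linear(rgb) @ W)

def grey_gamma(rgb):
    # The common shortcut: weight the gamma-encoded values directly.
    return np.asarray(rgb) @ W

mid = np.array([0.5, 0.2, 0.8])
print(grey_linear(mid), grey_gamma(mid))  # the gamma-space result is dimmer
```

For saturated colors the gamma-space shortcut comes out darker than the linear-light version, which matches the "slightly dimmer at points" observation in the comparison image.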

After these improvements the median split position works surprisingly well, and the only thing I've consistently seen improve the result (MSE-wise) is Celebi's 2012 "VCL" method, which splits at the mean but then runs k-means with two clusters to fine-tune the split plane in 3D.

09.10.2025 11:03 — 👍 2    🔁 0    💬 0    📌 0
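A sketch of my reading of that refinement (not Celebi's actual code): initialize the split at the mean along the highest-variance axis, then let two-cluster k-means tilt the split plane freely in 3D:

```python
import numpy as np

def split_mean_then_kmeans(colors, iters=10):
    """colors: (N, 3) float array of a cluster's colors.
    Returns a boolean mask selecting one of the two halves."""
    # Initial split: threshold at the mean along the highest-variance axis.
    axis = np.argmax(colors.var(axis=0))
    mask = colors[:, axis] > colors[:, axis].mean()
    # Refine with 2-means: the split plane is no longer axis-aligned.
    for _ in range(iters):
        if mask.all() or not mask.any():
            break  # degenerate split, keep as-is
        c0 = colors[~mask].mean(axis=0)
        c1 = colors[mask].mean(axis=0)
        # Reassign each color to the nearer of the two centroids.
        new_mask = ((colors - c1) ** 2).sum(1) < ((colors - c0) ** 2).sum(1)
        if np.array_equal(new_mask, mask):
            break
        mask = new_mask
    return mask
```

Since the boundary between two k-means centroids is the perpendicular bisector plane, a few iterations are enough to rotate the initial axis-aligned cut into a better orientation.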

Sure, I've tried a lot of things. These help: give more weight to green (RGB weights like (1, 1.2, 0.8) work OK), split the cluster with the largest variance (sum of squared errors), split along the axis with the most marginal variance, run 1-10 k-means iterations at the end, and pixel-map in Oklab space (maybe).

09.10.2025 11:01 — 👍 1    🔁 0    💬 1    📌 0
A top-down hierarchical color quantization routine shown as a binary tree. The algorithm starts from the top box 1 that contains all image colors and continues splitting the box with the greatest error (in this case just max volume) into color subsets. Each box is split at the median, shown in red. Splitting stops when the requested number of colors has been reached.

The mean colors of the leaf nodes at the bottom define the final 12-color palette.

The final image and its palette (top right). I chose a purposefully primitive algorithm as a demonstration and the colors are a bit greyish and dim because of it.

My new visualization for color quantization algorithm execution, though still unfinished. Attached is the result for a simple 12-color median cut. Each box contains a subset of image colors with the X-axis as a chosen sort axis; Y is PCA for plotting. See how e.g. box 3 has unwanted greens in it.

07.10.2025 07:32 — 👍 9    🔁 2    💬 1    📌 0
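The "Y is PCA for plotting" part can be sketched in a few lines (my own minimal version, assuming each box holds an (N, 3) color array):

```python
import numpy as np

def pca_plot_coords(colors, sort_axis):
    """Project a cluster's colors to 2D for plotting: X is the chosen
    sort axis, Y is the first principal component of the color cloud."""
    x = colors[:, sort_axis]
    centered = colors - colors.mean(axis=0)
    # Eigenvectors of the 3x3 covariance matrix; pick the dominant one.
    evals, evecs = np.linalg.eigh(np.cov(centered.T))
    principal = evecs[:, np.argmax(evals)]
    y = centered @ principal
    return x, y
```

Projecting onto the largest-eigenvalue eigenvector spreads the points out as much as possible on the Y axis, which is exactly what you want when eyeballing a box for outliers like those unwanted greens.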

Uneven or messy line art is another such detail, in illustrations I mean. When drawing with the target resolution in mind it's possible to adapt the style to the constraints. That's hard to do when converting.

03.10.2025 21:48 — 👍 4    🔁 0    💬 1    📌 0

New blog post! In "Billions of triangles in minutes" we'll walk through hierarchical cluster level of detail generation of, well, billions of triangles in minutes. Reposts welcome!

zeux.io/2025/09/30/b...

30.09.2025 17:40 — 👍 161    🔁 54    💬 1    📌 4

OK I see, perhaps "sensitivity" or "density" would fit. Sometimes it's surprisingly difficult to find names for simple things, for example the "black&white" intensity of a color we can call "luminosity" or "lightness" but what is its opponent? Colorfulness? Chromaticity?

03.10.2025 21:07 — 👍 1    🔁 0    💬 0    📌 0

Works super well! And doesn't sound like it's hard to implement after the ramps have been detected.

03.10.2025 21:01 — 👍 1    🔁 0    💬 1    📌 0

I wrote some simple box collision resolution code today and made the classic mistake of not checking that box_a != box_b before testing for overlap. This was the bug that made me post my first programming question online 20 years ago or so! I was unaware of Basic's <> "not equals" operator at the time.

03.10.2025 17:56 — 👍 6    🔁 0    💬 0    📌 0
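The fix is either an explicit identity check or, better, iterating pairs with i < j so a box is never tested against itself (every box trivially overlaps itself). A small sketch with a hypothetical Box type:

```python
from dataclasses import dataclass

@dataclass
class Box:
    x: float
    y: float
    w: float
    h: float

def overlaps(a, b):
    # Standard AABB test: intervals overlap on both axes.
    return (a.x < b.x + b.w and b.x < a.x + a.w and
            a.y < b.y + b.h and b.y < a.y + a.h)

def find_collisions(boxes):
    # i < j means no self-tests and each pair is tested exactly once.
    return [(i, j) for i in range(len(boxes))
            for j in range(i + 1, len(boxes))
            if overlaps(boxes[i], boxes[j])]
```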

I think here I had a 2D array, so x[i] returned a *view* of a row. And however the a, b = b, a syntactic sugar is implemented, it fails to make a temporary copy (how could it know?) and writes the same value twice instead. Advanced indexing is handled by NumPy internally, so there the copy happens before anything is overwritten.

28.09.2025 12:20 — 👍 0    🔁 0    💬 0    📌 0
Swap two values in a numpy array. Is there something more efficient than the following code to swap two values of a numpy 1D array? input_seq = arange(64) ix1 = randint(len(input_seq)) ixs2 = randint(len(input_seq)) temp = input...

Tried to be clever with some NumPy array partitioning code and exchange two values with the "x[i], x[j] = x[j], x[i]" idiom. Nope: got to use "x[[i,j]] = x[[j, i]]" instead, see stackoverflow.com/a/47951813

26.09.2025 11:52 — 👍 1    🔁 0    💬 1    📌 0
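The failure mode is easy to reproduce with a 2D array, where `x[i]` is a view rather than a copy:

```python
import numpy as np

a = np.arange(6).reshape(2, 3)
a[0], a[1] = a[1], a[0]   # a[1] and a[0] on the right are *views*...
print(a)                  # ...so both rows end up as [3, 4, 5]

b = np.arange(6).reshape(2, 3)
b[[0, 1]] = b[[1, 0]]     # fancy indexing on the RHS makes a copy first
print(b)                  # rows properly swapped: [[3, 4, 5], [0, 1, 2]]
```

In the broken version `a[0] = a[1]` copies row 1 into row 0, after which the saved view of row 0 also reads as row 1, so the second assignment just writes the same data back.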

I remember hearing that in aviation +X forward is standard. My theory is that an airplane is usually drawn on paper flying from left to right and when they went to 3D, they kept the convention.

24.09.2025 21:01 — 👍 1    🔁 0    💬 1    📌 0

Looks like it's working :) What do the "Control" sliders do?

24.09.2025 20:49 — 👍 1    🔁 0    💬 1    📌 0

The papers in question:
Paul Debevec, "A Median Cut Algorithm for Light Probe Sampling": vgl.ict.usc.edu/Research/Med...
F. Banterle et al., "A framework for inverse tone mapping" www.researchgate.net/publication/...

24.09.2025 17:56 — 👍 0    🔁 0    💬 0    📌 0
Figure 1: The Grace Cathedral light probe subdivided into 64 regions of equal light energy using the median cut algorithm. The small circles are the 64 light sources chosen as the energy centroids of each region; the lights are all approximately equal in energy.

From Paul Debevec, "A Median Cut Algorithm for Light Probe Sampling"

Comparison of median cut results using an HDRI and LDRI of memorial HDRI. a The memorial LDRI. b Median cut result, 1024 light sources generated starting from a LDRI. c Median cut result, 1024 light sources generated starting from a HDR

From F. Banterle et al. "A framework for inverse tone mapping"

A neat little paper from 2006 "A Median Cut Algorithm for Light Probe Sampling" where an environment map is compressed to a set of point lights using the median cut algorithm. Seems like a precursor to another 2007 paper where others cast the task as density estimation.

24.09.2025 17:56 — 👍 12    🔁 0    💬 1    📌 0

Got my flight sim taking keyboard input and using that to drive the aerodynamic control surfaces. One step closer to take-off! #gamedev #flightsim #physics 🛩️

18.09.2025 13:09 — 👍 36    🔁 4    💬 1    📌 0

N64brew user @boxingbruin.bsky.social continues to showcase more work that they've done for their Pickle64 project, their action-RPG platformer hybrid with Dark Souls-like boss fights.

14.09.2025 11:36 — 👍 124    🔁 36    💬 2    📌 0
Clang-format configurator: Interactive clang-format configuration tool. Create or modify a clang-format configuration file using a simple GUI interface while watching how the changes affect code formatting.

I recall using this preview tool clang-format-configurator.site but it was still hard to get if-else blocks to look right...

14.09.2025 11:35 — 👍 1    🔁 0    💬 0    📌 0

Looks plausible and fits the visual style, really neat!

10.09.2025 11:01 — 👍 2    🔁 0    💬 0    📌 0
A diagram showing three images: the Source, a crop from Big Buck Bunny; the target image, the PICO-8 palette; and the result image. The result image's color distribution has been moved to match the target.

Paper reference: "Color Transfer Between Images" Reinhard et al. 2001.

There's a classic color transfer technique from 2001: convert source and target image to a decorrelated color space (Oklab works fine), make target's mean and stddev match the source, convert back to RGB. Using a palette image as a target is no problem :)

Paper: home.cis.rit.edu/~cnspci/refe...

10.09.2025 07:32 — 👍 19    🔁 3    💬 0    📌 0
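The statistics-matching step itself is tiny; a sketch of the idea (my own illustration, not Reinhard et al.'s code), assuming both color sets have already been converted to a decorrelated space such as lαβ or Oklab:

```python
import numpy as np

def transfer_stats(source, target):
    """Impose target's per-channel mean and stddev on the source colors.
    Both arrays are (N, 3) colors in a decorrelated color space; working
    per channel is only valid because the channels are decorrelated."""
    s_mean, s_std = source.mean(axis=0), source.std(axis=0)
    t_mean, t_std = target.mean(axis=0), target.std(axis=0)
    # Normalize the source, then re-scale and re-center to the target stats.
    return (source - s_mean) / s_std * t_std + t_mean
```

After this you convert the result back to RGB; the decorrelated space is what lets a simple per-channel operation produce a plausible color transfer.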
A four-pane comparison of pixel mapping results in CAM16-UCS, Oklab, and the CIELAB colorspace (HyAB distance function). Top right is the input image.

A short article on mapping pixel colors to the PICO-8 palette in the CAM16-UCS color space. Surprisingly, it didn't do much better than Oklab, which is derived from CAM16.

📜 30fps.net/pages/percep...

08.09.2025 11:52 — 👍 10    🔁 0    💬 0    📌 0
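Nearest-neighbor palette mapping in Oklab is compact enough to sketch here (a generic sketch, not the article's code; matrices from Björn Ottosson's Oklab derivation, and input assumed to be linear sRGB):

```python
import numpy as np

M1 = np.array([[0.4122214708, 0.5363325363, 0.0514459929],
               [0.2119034982, 0.6806995451, 0.1073969566],
               [0.0883024619, 0.2817188376, 0.6299787005]])
M2 = np.array([[0.2104542553,  0.7936177850, -0.0040720468],
               [1.9779984951, -2.4285922050,  0.4505937099],
               [0.0259040371,  0.7827717662, -0.8086757660]])

def linear_srgb_to_oklab(rgb):
    # Linear sRGB -> LMS, nonlinearity, -> Lab-like coordinates.
    lms = rgb @ M1.T
    return np.cbrt(lms) @ M2.T

def map_to_palette(pixels, palette):
    """Replace each pixel with its nearest palette entry
    (Euclidean distance measured in Oklab)."""
    px = linear_srgb_to_oklab(pixels)    # (N, 3)
    pal = linear_srgb_to_oklab(palette)  # (K, 3)
    d = ((px[:, None, :] - pal[None, :, :]) ** 2).sum(axis=2)
    return palette[np.argmin(d, axis=1)]
```

Swapping in a different space (CAM16-UCS, CIELAB with HyAB) only changes the conversion and distance functions; the argmin mapping stays the same.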
# First deduplicate the colors
unique_colors, counts = deduplicate_colors(img.reshape(-1, 4))

# Create a structured array so that we can have float RGBA colors, per color weight and
# the original RGBA8 color accessible and sortable with a single index.

ColorRGBA = np.dtype([
('r', 'f4'),
('g', 'f4'),
('b', 'f4'),
('a', 'f4'),
('weight', 'f4'),
('original', 'u4'),
])

colors = np.rec.array(np.zeros(unique_colors.shape[0], dtype=ColorRGBA))
colors.r = unique_colors[:, 0] / 255.0
colors.g = unique_colors[:, 1] / 255.0
colors.b = unique_colors[:, 2] / 255.0
colors.a = unique_colors[:, 3] / 255.0
colors.weight = counts / num_total
colors.original = unique_colors.view(np.uint32).reshape(-1)

# Scale the channels to weight luminance more when picking a split-plane axis.
# This is why it's convenient to store the original RGBA8 color: the final step
# that averages colors for the palette doesn't need to know about this transform.

colors.r *= 0.30
colors.g *= 0.59
colors.b *= 0.11
colors.a *= 0.50


Got familiar with NumPy's structured arrays today. It's basically an array of structs, where the fields can of course be of different types. Could be useful with binary files, but I used it to organize code. Wrapping it in a "record array" allows the nice `colors.r` access syntax.

numpy.org/doc/stable/u...

03.09.2025 18:52 — 👍 0    🔁 0    💬 0    📌 0
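One nice consequence of the "sortable with a single index" property: `np.sort` with `order=` moves whole records, so every field stays consistent. A tiny sketch with a made-up three-field dtype:

```python
import numpy as np

dt = np.dtype([('r', 'f4'), ('g', 'f4'), ('weight', 'f4')])
colors = np.rec.array(np.array(
    [(0.9, 0.1, 0.2), (0.1, 0.8, 0.5), (0.5, 0.5, 0.3)], dtype=dt))

# Sorting by one field keeps each record intact: r, g, and weight
# all move together as a unit.
by_green = np.sort(colors, order='g')
print(by_green.r)  # rows reordered by g: r comes out as [0.9, 0.5, 0.1]
```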

Some fun with shader-based debug drawing: here I'm drawing an arrow for each path taken in the path tracer, starting with the pixel under the mouse cursor.

31.08.2025 23:10 — 👍 47    🔁 3    💬 1    📌 0

Now _that_ is an idea😀

30.08.2025 21:59 — 👍 0    🔁 0    💬 0    📌 0

Some day :)

30.08.2025 21:54 — 👍 1    🔁 0    💬 1    📌 0
The screenshot shows the "VariQuant" tool interface with a preview panel displaying an image named "boom.png"

The interface includes settings for the number of colors, cut order, split plane, split position ("Mean" is chosen), color space for palette design and pixel mapping, averaging, luminance-scale, and K-means iterations. File operation buttons for opening, exporting, canceling, and generating palettes are also visible.

I finally added alpha support to VariQuant. It took a lot of head-scratching to make it work with both RGB and L*a*b*😅 I premultiply the alpha and handle it like the other color channels. For RGBA color distance I'm using Kornel Lesiński's clever formula: stackoverflow.com/a/8796867

27.08.2025 17:56 — 👍 3    🔁 0    💬 0    📌 0
Mapping Pixel Art Palettes: While researching pixel art palettes, I noticed a lot of discussion about color ramps. I've come up with an alternative to crossword-style layouts.

By the way, here's a neat algorithm to find color ramps in a palette: eastfarthing.com/blog/2016-05... For each color, take its closest neighbors and add an edge to each one that is lighter.

27.08.2025 16:57 — 👍 1    🔁 0    💬 1    📌 0
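My reading of that heuristic as a sketch (not the linked implementation, and assuming palette colors with lightness in the first channel): for each color find its k nearest neighbors and add a directed edge to each lighter one; chains of such edges form the ramps.

```python
import numpy as np

def ramp_edges(palette_lab, k=2):
    """palette_lab: (N, 3) colors with lightness in column 0.
    Returns directed edges (i, j) meaning 'ramp step from i up to j'."""
    n = len(palette_lab)
    # Pairwise squared distances; a color is never its own neighbor.
    d = ((palette_lab[:, None] - palette_lab[None, :]) ** 2).sum(axis=2)
    np.fill_diagonal(d, np.inf)
    edges = []
    for i in range(n):
        for j in np.argsort(d[i])[:k]:                 # k nearest neighbors
            if palette_lab[j, 0] > palette_lab[i, 0]:  # only toward lighter
                edges.append((i, int(j)))
    return edges
```

Following the edges from a dark color upward walks a ramp; isolated accent colors simply end up with no outgoing edges.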

Doesn't sound easy! Is the pair mapping constructed as an optimization? I haven't implemented color dithering myself so the details are a bit unclear, but I thought a nearest-neighbor query into the palette would be enough for ordered dither. I don't know exactly how your constraints would fit in, though.

26.08.2025 18:59 — 👍 1    🔁 0    💬 1    📌 0
