One of my professors used to have a downwards-pointing clip art space shuttle printed out and posted on his door with the caption, “But it’s only a sign error!”
Post a pic you took, no context, to bring some zen to the feed
On the flip side, it makes AI-written code hard to maintain without AI. And yes, on average. I've definitely seen a strong tendency toward this sort of code in a couple of places I could name, places that predated AI by a while.
Do not like. 😑
Got a $25 Peet’s gift card as a result of one of those data breaches, and I’m feeling, basically, just really good about croissants right now.
Also apparently seems very highly trained on frequently-photographed locations.
Although, as I contemplate this, I guess one ought to be very careful in how this gets used, since depictions of real places could easily be misinterpreted as actual representations of those places.
Me: “I mean it’s not that— oh, actually that’s pretty— wait, what the…?”
Fixed the other issue: I was setting (x, y, z, w) = (0, 0, 0, 0), which on some GPUs is invalid and causes the triangle to get culled, while on others it renders at the origin.
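For context on why w = 0 misbehaves: the clip-space w component is the divisor in the perspective divide, so a zero w makes every NDC coordinate undefined. A toy sketch of the divide (a hypothetical helper, not the renderer's actual code):

```javascript
// Perspective divide: clip space (x, y, z, w) -> NDC (x/w, y/w, z/w).
// With w = 0 every component divides by zero, so the vertex position
// is undefined: some GPUs cull the triangle, others happen to place
// it at the origin. w = 1 is the usual choice for a plain point.
function clipToNDC([x, y, z, w]) {
  return [x / w, y / w, z / w];
}

clipToNDC([0, 0, 0, 0]); // [NaN, NaN, NaN] — undefined-behavior territory
clipToNDC([0, 0, 0, 1]); // [0, 0, 0] — a valid point at the origin
```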
As for the text… I think I didn't handle non-retina displays correctly. Not sure about the rest of the artifacts!
Elevations are encoded as images. Brave adds noise when decoding images, for fingerprinting prevention. So if Brave wants randomness, Brave gets randomness!
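For anyone curious how elevation-in-an-image works: terrain tiles typically pack height into the RGB channels. A minimal decoder sketch, assuming the Mapbox Terrain-RGB convention (the tiles used here may use a different scale):

```javascript
// Terrain-RGB convention: 24 bits of height across R, G, B, at 0.1 m
// resolution with a -10000 m offset. A single bit of noise injected
// into the low (blue) byte shifts the decoded elevation by 0.1 m,
// which is why per-pixel fingerprinting noise corrupts the terrain.
function decodeTerrainRGB(r, g, b) {
  return -10000 + 0.1 * ((r << 16) | (g << 8) | b);
}

console.log(decodeTerrainRGB(1, 134, 160)); // sea level: 0 m
console.log(decodeTerrainRGB(1, 134, 161)); // one noisy bit later: 0.1 m
```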
Will it crash your browser? Maybe! But you can test out the (WIP) viewer here (WebGPU required): rreusser.github.io/notebooks/de... 🏔️
In hindsight, seems dramatically low.
Starting to look alright. Added line rendering, pictured here.
Jack Kuenzle's FKT route (www.strava.com/activities/9...)
Somewhat happy with progress on my crappy ad hoc map renderer. Labels are hard.
Nice, figured out how to fetch some Arctic DEM data, fill in the gaps with Copernicus GLO-30, run it through rio-tiler, and connect a WebGPU viewer with some basic level of detail refinement.
My fullerene simulator for designing toilet paper tube constructions is still one of my favorites. Try it in its new home as an @observablehq.com Notebook Kit notebook! 🧻🧻🧻 rreusser.github.io/notebooks/ch...
Threw WebGPU at some data from the Sloan Digital Sky Survey. No real content here, but gosh it's satisfying to visualize. (Hope I didn't get any details wrong!) rreusser.github.io/notebooks/vi...
I miss fluids.
Are you able to connect images to your environment? I just had to pass back a base64 image URI and suddenly it could see things (though this isn’t terribly useful for this sort of simulation). I can imagine connecting proper shader debugging tools so that it could step through and query values.
Agreed. One should tread very carefully because it can be hard to tell which is which. I try to contemplate carefully the places where my close oversight strongly matters and those where it just doesn't. Extracting faces on a rando project? TBH if it looks good, it's not worth my time to dig further. 🤷‍♂️
Agreed. It was consistently able to one-shot the FFT. For more complicated things, especially with confounding bugs, I really had to break things down when we ran into problems: okay, set up ping-pong framebuffers and see if we can just move the pattern one pixel to the right each frame…
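The ping-pong debugging step above can be sketched with plain arrays standing in for the two GPU textures (a hypothetical toy, no WebGPU involved):

```javascript
// Ping-pong: two buffers alternate roles as read source and write
// destination. Each step reads from `src` and writes the pattern
// shifted one cell to the right into `dst`, then the roles swap.
function step(src, dst) {
  for (let i = 0; i < src.length; i++) {
    dst[i] = src[(i - 1 + src.length) % src.length];
  }
}

let buffers = [[1, 0, 0, 0], [0, 0, 0, 0]];
for (let frame = 0; frame < 3; frame++) {
  step(buffers[0], buffers[1]);
  buffers = [buffers[1], buffers[0]]; // swap read/write roles
}
console.log(buffers[0]); // the pattern has moved three cells right
```

If the pattern doesn't march across the buffer one pixel per frame, the ping-pong plumbing is broken before any FFT logic enters the picture.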
And it did *something* to extract faces and color them by edge count despite a bad data structure for it. After some finagling about methods, it seems pretty robust. I have a general sense of what it did, but I tried this a couple times and failed, so you win some, you lose some, I guess. 🤷‍♂️
It basically nailed the FFT first go, but TBH by far the biggest challenge was getting it to center figures with left-justified captions no wider than the figure.
Saw a site recently that prompts different LLMs to create an HTML/CSS clock once every minute, and obviously they suck and we all have a laugh, but… there's no feedback mechanism… *obviously* they're gonna suck.
Made it possible to debug WebGPU stuff so that I could translate and modernize a bunch of notebooks pretty quickly. TBH I just asked it, roughly speaking, "make a webgpu fft plz" and then gave it the tools to debug itself… rreusser.github.io/notebooks/mu...
I've had relatively good luck with similar things, TBH. The key for me was hooking up a strong feedback loop through MCP (which, naturally, I asked claude to do) so that it *can* capture/interpret images, query values, poke at the DOM, etc. via a WebSocket connection. github.com/rreusser/mcp...
Log entry: Sometime in January. 2026, I think. Still fighting instanced lines.