@pablode.bsky.social
rendering @ chaos. homepage: pablode.com
Since the renderer is used as a Hydra render delegate, this is the concern of the host application (usdview). I just pass it some CPU memory.
25.01.2026 12:09

In general, it was nice getting to know Metal a bit, and I can't wait to write some userland code on my MacBook Air! Btw: the Vulkan backend is ~3.8k LOC and the Metal one ~2.2k LOC.
18.01.2026 15:52

As these functions don't have access to intrinsics, we need to pass them along as part of the function signatures, which the function tables are parameterized with. Here's an example of how the ray generation shader signature ends up looking with two ray payloads:
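A minimal MSL sketch of what such a signature could look like; struct names, binding slots and the exact table signatures are illustrative, not Gatling's actual generated code:

```
#include <metal_stdlib>

using namespace metal;
using namespace raytracing;

struct RadiancePayload { float3 radiance; uint depth; };
struct ShadowPayload   { float visibility; };

// Hit attributes handed to the emulated closest-hit shaders.
struct HitInfo { uint primitiveId; uint instanceId; float2 barycentrics; float distance; };

kernel void rgenMain(
    uint2 tid [[thread_position_in_grid]],
    instance_acceleration_structure scene [[buffer(0)]],
    // Emulated any-hit: one intersection function table (IFT) per payload type.
    intersection_function_table<triangle_data, instancing> radianceIft [[buffer(1)]],
    intersection_function_table<triangle_data, instancing> shadowIft   [[buffer(2)]],
    // Emulated closest-hit and miss: two visible function tables (VFTs) per payload type.
    visible_function_table<RadiancePayload(RadiancePayload, HitInfo)> radianceHitVft  [[buffer(3)]],
    visible_function_table<RadiancePayload(RadiancePayload)>          radianceMissVft [[buffer(4)]],
    visible_function_table<ShadowPayload(ShadowPayload, HitInfo)>     shadowHitVft    [[buffer(5)]],
    visible_function_table<ShadowPayload(ShadowPayload)>              shadowMissVft   [[buffer(6)]],
    texture2d<float, access::write> outImage [[texture(0)]])
{
  // Trace rays here and dispatch hit/miss shading through the tables.
}
```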
18.01.2026 15:52

Closest-hit shaders and miss shaders need to be emulated. They are compiled with their entry points marked "visible" and are invoked from visible function tables (VFTs) based on the traversal result. For each payload type we have 1x IFT and 2x VFTs.
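For context, a rough metal-cpp sketch (not Gatling's actual code) of how such a visible function table can be created and filled on the host side; in the scheme above, one table would hold the closest-hit functions and another the miss functions for a given payload type:

```
#include <Metal/Metal.hpp>
#include <vector>

// The functions must already be part of the pipeline's linked functions,
// otherwise functionHandle() returns null.
MTL::VisibleFunctionTable* makeVft(MTL::ComputePipelineState* pipeline,
                                   const std::vector<MTL::Function*>& functions)
{
  MTL::VisibleFunctionTableDescriptor* desc =
    MTL::VisibleFunctionTableDescriptor::alloc()->init();
  desc->setFunctionCount(functions.size());

  MTL::VisibleFunctionTable* vft = pipeline->newVisibleFunctionTable(desc);

  for (size_t i = 0; i < functions.size(); i++)
  {
    // Function handles are specific to the pipeline the table is used with.
    vft->setFunction(pipeline->functionHandle(functions[i]), i);
  }

  desc->release();
  return vft;
}
```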
18.01.2026 15:52

An important difference between the graphics APIs is that Metal only has 'intersection' shaders that are invoked for non-opaque geometry. This is what I map any-hit shaders to. Their function addresses are stored in an intersection function table (IFT), similar to an SBT.
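A sketch of what such an emulated any-hit shader can look like as a Metal intersection function; the names and the opacity test are placeholders:

```
#include <metal_stdlib>

using namespace metal;
using namespace raytracing;

// Metal runs this for candidate hits on non-opaque geometry. Returning false
// rejects the hit, which is the Metal counterpart of GLSL's ignoreIntersectionEXT.
[[intersection(triangle, triangle_data, instancing)]]
bool anyHitAlphaTest(uint primitiveId    [[primitive_id]],
                     uint instanceId     [[instance_id]],
                     float2 barycentrics [[barycentric_coord]])
{
  bool opaqueEnough = true; // e.g. sample an opacity texture at the hit point
  return opaqueEnough;
}
```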
18.01.2026 15:52

On the API side I use metal-cpp, and for shaders I partially implement the GLSL_EXT_ray_tracing extension in SPIRV-Cross. It's a bit hacky and it only supports the features I need, but it's content agnostic overall. Ideally it's a transitional solution until Slang or KosmicKrisp's compiler can be used.
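To illustrate the idea (this is not the fork's actual output), a GLSL traceRayEXT call could be lowered to roughly the following inside the raygen kernel body from the earlier sketch; 'scene' and the 'radiance*' tables are its parameters, and the table slot selection is simplified:

```
// GLSL: traceRayEXT(topLevelAS, gl_RayFlagsNoneEXT, 0xFF, 0, 0, missIndex,
//                   origin, tMin, dir, tMax, 0 /* payload location */);

float3 origin    = float3(0.0f);
float3 dir       = float3(0.0f, 0.0f, 1.0f);
float  tMin      = 0.0f;
float  tMax      = INFINITY;
uint   missIndex = 0u;

RadiancePayload payload = {};

ray r(origin, dir, tMin, tMax);

intersector<triangle_data, instancing> isect;
isect.accept_any_intersection(false); // closest-hit semantics

// Traversal runs the IFT's intersection functions (emulated any-hit) on
// non-opaque geometry.
intersection_result<triangle_data, instancing> hit =
    isect.intersect(r, scene, 0xFFu, radianceIft);

if (hit.type == intersection_type::none)
{
  // Emulated miss shader.
  payload = radianceMissVft[missIndex](payload);
}
else
{
  // Emulated closest-hit shader. Indexing by instance id keeps the sketch
  // short; a real implementation derives an SBT-style slot from the hit info.
  HitInfo info = { hit.primitive_id, hit.instance_id,
                   hit.triangle_barycentric_coord, hit.distance };
  payload = radianceHitVft[hit.instance_id](payload, info);
}
```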
18.01.2026 15:52

Finally finished Gatling's Metal backend! Here's a teaser of NVIDIA's USD / MDL sample scene 'Attic' running on macOS. What's special about the backend is probably that it uses the same GLSL code as the Vulkan backend, complete with hardware ray tracing. How does that work?
18.01.2026 15:52

My "No Graphics API" blog post is live! Please repost :)
www.sebastianaaltonen.com/blog/no-grap...
I spent 1.5 years doing this. Full rewrite last summer and another partial rewrite last month. As Hemingway said: "First draft of everything is always shit".
Blog post on motion blur rendering (which is impressively thorough) by Alex Gauggel
gaukler.github.io/2025/12/09/n...
This scene from WireWheelsClub looks like this out of the box - no editing was required.
Performance is not a priority right now, but it's good enough with this simple lighting on an RTX 2060 @ WQHD.
There are still some minor issues to resolve, including getting rid of static state (for multiple viewports) and implementing support for orthographic cameras.
07.12.2025 22:47

Spent some time this weekend updating Gatling's Blender integration to the latest 5.0 release.
07.12.2025 22:47

A cloud rendered using jackknife transmittance estimation and the formula used to do so.
Ray marching is a common approach to GPU-accelerated volume rendering, but gives biased transmittance estimates. My new #SIGGRAPHAsia paper (+code) proposes an amazingly simple formula to eliminate this bias almost completely without using more samples.
momentsingraphics.de/SiggraphAsia...
What I'm working on right now is a Metal backend for gatling. The idea is to use metal-cpp with GLSL shaders, by forking SPIRV-Cross to support ray tracing pipelines :) (very WIP)
12.10.2025 13:50

As usual, you can find it on GitHub: github.com/pablode/usds...
12.10.2025 13:36

Btw, a few weeks ago I released a small plugin for usdview. It allows inspection of UsdShade networks using a custom Qt node graph. It's open source, lightweight and simple to install.
12.10.2025 13:36

The Pixar RenderMan team has a paper out at HPG 2025 this week about the architecture of RenderMan XPU. There are a lot of interesting details in the paper; definitely a worthwhile read!
diglib.eg.org/bitstreams/d...
Source code on GitHub: github.com/pablode/cg-a...
28.06.2025 16:49

Small weekend experiment: ported an LCD shader from Blender to #MaterialX (originally authored by PixlFX)
28.06.2025 16:49

We just posted a recording of the presentation I gave at DigiPro last year on Hyperion's many-lights sampling system. It's on the long side (30 min) but hopefully interesting if you like light transport! Check it out on Disney Animation's website:
www.disneyanimation.com/publications...
Would be nice if 2DGS were to win the race. Much simpler to render.
11.03.2025 17:55

Implemented USD point instancer primvar support in my toy renderer over the last few days. Can now render this 2D Gaussian Splatting (2DGS) scene with a single mesh & MaterialX material.
(Created by Oliver Markowski with Houdini, see link below.)
Thanks to a lot of colleagues' great work, happy to share Vulkan samples for RTX Mega Geometry. They should run on all RTX GPUs using today's new drivers
github.com/nvpro-sample...
github.com/nvpro-sample...
github.com/nvpro-sample...
github.com/nvpro-sample...
Three different examples of the Chiang Hair BSDF in MaterialX v1.39.2, rendered in NVIDIA RTX.
Highlighting one of the key contributions in MaterialX v1.39.2: Masuo Suzuki at NVIDIA contributed the Chiang Hair BSDF, seen below in NVIDIA RTX. It opens the door to authoring cross-platform, customizable hair shading models in MaterialX and OpenUSD.
github.com/AcademySoftw...
Wrote a blog post about my development process and the tools I've built to develop the Spark codecs:
ludicon.com/castano/blog...
Lastly, implemented proper support for AOVs, including those used for picking (primId, instanceId, elementId). (asset: standard shader ball) (4/4)
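For reference, a hypothetical sketch of how a Hydra render delegate can describe such picking AOVs through the public HdRenderDelegate interface; this is based on the USD API, not Gatling's actual code:

```
#include <pxr/imaging/hd/aov.h>
#include <pxr/imaging/hd/tokens.h>
#include <pxr/imaging/hd/types.h>
#include <pxr/base/gf/vec4f.h>
#include <pxr/base/tf/token.h>
#include <pxr/base/vt/value.h>

PXR_NAMESPACE_USING_DIRECTIVE

// In a real delegate this is the HdRenderDelegate::GetDefaultAovDescriptor
// override; shown as a free function for brevity.
HdAovDescriptor GetDefaultAovDescriptor(const TfToken& name)
{
  if (name == HdAovTokens->color)
  {
    return HdAovDescriptor(HdFormatFloat32Vec4, false, VtValue(GfVec4f(0.0f)));
  }

  // Integer id AOVs that usdview reads back for picking.
  if (name == HdAovTokens->primId ||
      name == HdAovTokens->instanceId ||
      name == HdAovTokens->elementId)
  {
    return HdAovDescriptor(HdFormatInt32, false, VtValue(-1));
  }

  return HdAovDescriptor(); // unknown AOV
}
```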
03.01.2025 11:15

To reduce the memory footprint, I first run the geometry through meshoptimizer, which culls and deduplicates vertices. Next, the data is compressed with blosc. The results are quite good. Decompression only happens when a BLAS is rebuilt. (3/4)
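A rough sketch of that dedup-then-compress path using the public meshoptimizer and c-blosc APIs (not Gatling's actual code; error handling omitted, blosc_init() assumed to have been called once):

```
#include <meshoptimizer.h>
#include <blosc.h>
#include <cstddef>
#include <vector>

// Deduplicates float3 positions and compresses them; the index buffer is
// remapped in place to reference the deduplicated vertices.
std::vector<char> compressPositions(const float* positions, size_t vertexCount,
                                    std::vector<unsigned int>& indices)
{
  // 1) Build a remap table that drops unreferenced vertices and merges duplicates.
  std::vector<unsigned int> remap(vertexCount);
  size_t uniqueCount = meshopt_generateVertexRemap(remap.data(), indices.data(),
                                                   indices.size(), positions,
                                                   vertexCount, sizeof(float) * 3);

  std::vector<float> deduped(uniqueCount * 3);
  meshopt_remapVertexBuffer(deduped.data(), positions, vertexCount,
                            sizeof(float) * 3, remap.data());
  meshopt_remapIndexBuffer(indices.data(), indices.data(), indices.size(), remap.data());

  // 2) Compress the deduplicated vertex data (byte shuffling helps float data).
  size_t srcSize = deduped.size() * sizeof(float);
  std::vector<char> compressed(srcSize + BLOSC_MAX_OVERHEAD);
  int csize = blosc_compress(/*clevel*/ 5, BLOSC_SHUFFLE, sizeof(float), srcSize,
                             deduped.data(), compressed.data(), compressed.size());
  compressed.resize(csize > 0 ? size_t(csize) : 0);

  // blosc_decompress() is called only when a BLAS needs to be (re)built.
  return compressed;
}
```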
03.01.2025 11:15

Furthermore, implemented #MaterialX geompropvalues (MDL scene data) and GeomSubsets. This required quite some code rewriting, as I now store a copy of the CPU geometry data in the renderer itself. Here's a render of stehrani3d's MaterialEggs, which make use of these features. (2/4)
03.01.2025 11:15