You can dump the PTX intermediate representation (see the documentation), but figuring out the calling convention of the kernel for your own use will be tricky. The system is not designed to be used in this way.
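If you do want to poke at a dumped kernel, a rough first step is to read the entry point and parameter declarations straight out of the PTX. The sketch below only lists the raw parameter layout ("kernel.ptx" is a placeholder path); it will not tell you how the framework packs its launch arguments.

```python
# Rough sketch: list kernel entry points and parameter declarations in a
# dumped PTX file ("kernel.ptx" is a placeholder name). The raw .param list
# still leaves you to reverse-engineer how arguments are packed at launch.
import re

with open("kernel.ptx") as f:
    ptx = f.read()

for m in re.finditer(r"(?:\.visible\s+)?\.entry\s+([^\s(]+)\s*\(([^)]*)\)", ptx, re.S):
    name, params = m.groups()
    print("entry:", name)
    for p in filter(None, (p.strip() for p in params.split(","))):
        print("  param:", " ".join(p.split()))
```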
22.08.2025 10:36
Just write your solver in plain CUDA. How hard can it be?
18.08.2025 19:33
This approach is restricted to software that only needs the CUDA driver. If your project uses cuSolver, you will likely need to have a dependency on the CUDA python package that ships this library on PyPI (similar to PyTorch et al.)
18.08.2025 19:09
Differentiable rendering has transformed graphics and 3D vision, but what about other fields? Our SIGGRAPH 2025 paper introduces misuka, the first fully differentiable path tracer for acoustics.
12.08.2025 19:26
Wasn't that something... Flocke (German for "flake") says hi!
15.08.2025 04:45
If you are fitting a NeRF and you want a surface out at the end, you should probably be using the idea in this paper.
12.08.2025 06:43
Given the focus on performance, I would suggest switching from pybind11 to nanobind. It should just be a tiny change.
11.08.2025 15:05
Mokume
For the paper and data, please check out the project page: mokumeproject.github.io
08.08.2025 11:53
YouTube video by Maria Larsson
The Mokume Dataset and Inverse Modeling of Solid Wood Textures (Technical Paper, SIGGRAPH 2025)
This video explains the process in more detail: www.youtube.com/watch?v=H6N-...
08.08.2025 11:53
The Mokume project is a massive collaborative effort led by Maria Larsson at the University of Tokyo (w/Hodaka Yamaguchi, Ehsan Pajouheshgar, I-Chao Shen, Kenji Tojo, Chia-Ming Chang, Lars Hansson, Olof Broman, Takashi Ijiri, Ariel Shamir, and Takeo Igarashi).
08.08.2025 11:53
To reconstruct their interior, we:
1️⃣ Localize annual rings on cube faces
2️⃣ Optimize a procedural growth field that assigns an age to every 3D point (when that wood formed during the tree's life)
3️⃣ Synthesize detailed textures via a procedural model or a neural cellular automaton
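For intuition only (this is my own simplification, not the actual Mokume growth-field parameterization), step 2️⃣ can be pictured as fitting a scalar field whose iso-surfaces play the role of annual rings:

```python
# Toy growth-field sketch (an illustration, not the Mokume pipeline): every
# 3D point gets an "age" that grows with radial distance from a straight
# vertical pith axis, so integer levels of the field act like annual rings.
import numpy as np

def growth_field(p, pith_xy=(0.0, 0.0), ring_width=0.005):
    """Map points p with shape (N, 3) to a scalar age (1 unit = 1 ring)."""
    d = np.linalg.norm(p[:, :2] - np.asarray(pith_xy), axis=1)  # radial distance
    return d / ring_width

# Which ring does a point on a cube face belong to?
pts = np.array([[0.031, 0.002, 0.05]])
print(np.floor(growth_field(pts)))  # e.g. [6.]
```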
08.08.2025 11:53
The Mokume dataset consists of 190 physical wood cubes from 17 species, each documented with:
- High-res photos of all 6 faces
- Annual ring annotations
- Photos of slanted cuts for validation
- CT scans revealing the true interior structure (for future use)
08.08.2025 11:53
Wood textures are everywhere in graphics, but realistic texturing requires knowing what wood looks like throughout its volume, not just on the surfaces.
The patterns depend on tree species, growth conditions, and where and how the wood was cut from the tree.
08.08.2025 11:53
How can one reconstruct the complete 3D interior of a wood block using only photos of its surfaces? 🪵
At SIGGRAPH'25 (Thursday!), Maria Larsson will present *Mokume*: a dataset of 190 diverse wood samples and a pipeline that solves this inverse texturing challenge. 🧵
08.08.2025 11:53
My lab will be recruiting at all levels. PhD students, postdocs, and a research engineering position (worldwide for PhD/postdoc, EU candidates only for the engineering position). If you're at SIGGRAPH, I'd love to talk to you if you are interested in any of these.
08.08.2025 08:09
The reason is that the "volume" of this paper is always rendered as a surface (without alpha blending) during the optimization. Think of it as an end-to-end optimization that accounts for meshing, without actually meshing the object at each step.
08.08.2025 08:01
To get a triangle mesh out at the end, you will still need a meshing step (e.g. marching cubes). The key difference is that NeRF requires additional optimization and heuristics to create a volume that will ultimately produce a high-quality surface. With this new method, it just works.
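As a concrete (hedged) illustration of that final meshing step, here is how one might run marching cubes on a sampled density grid with scikit-image; the sphere-like grid is only a stand-in for whatever field the optimization produced.

```python
# Minimal sketch of the post-optimization meshing step: marching cubes on a
# sampled density grid. The analytic sphere below is a stand-in for the
# reconstructed field.
import numpy as np
from skimage import measure

x = np.linspace(-0.5, 0.5, 64)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
density = 0.3 - np.sqrt(X**2 + Y**2 + Z**2)  # positive inside a radius-0.3 sphere

verts, faces, normals, _ = measure.marching_cubes(density, level=0.0)
print(verts.shape, faces.shape)  # vertex positions and triangle indices
```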
08.08.2025 08:01
Wow, this is such a cool paper! With a surprisingly small modification to existing NeRF optimization, it yields a really good direct surface reconstruction technique that avoids the usual mess involved in meshing a NeRF (ray marching, marching cubes, etc.).
08.08.2025 07:39
Check out our paper for more details at rgl.epfl.ch/publications...
07.08.2025 12:21
This is a joint work with @ziyizh.bsky.social, @njroussel.bsky.social, Thomas Müller, @tizian.bsky.social, @merlin.ninja, and Fabrice Rousselle.
07.08.2025 12:21
Our method minimizes the expected loss, whereas NeRF optimizes the loss of the expectation.
It generalizes deterministic surface evolution methods (e.g., NvDiffrec) and elegantly handles discontinuities. Future applications include physically based rendering and tomography.
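In symbols (my notation, not necessarily the paper's): with per-sample colors c_i and compositing weights w_i along a ray, and reference pixel color I,

```latex
% NeRF: loss of the expectation -- composite first, then compare
L_{\text{NeRF}} = \Big\| \sum_i w_i\, c_i - I \Big\|^2
% This work: expected loss -- compare each surface candidate, then average
L_{\text{ours}} = \sum_i w_i\, \big\| c_i - I \big\|^2
```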
07.08.2025 12:21
Instead of blending colors along rays and supervising the resulting images, we project the training images into the scene to supervise the radiance field.
Each point along a ray is treated as a surface candidate, independently optimized to match that ray's reference color.
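A schematic sketch of where the loss moves (toy numbers, not the actual Instant NGP patch shown in the next post):

```python
# Toy, single-ray contrast between the two objectives (illustrative numbers,
# not the real patch): c are candidate surface colors along the ray, w the
# usual compositing weights, ref the training pixel projected onto the ray.
import numpy as np

rng = np.random.default_rng(0)
c = rng.random((32, 3))            # per-sample candidate colors
w = rng.random(32); w /= w.sum()   # compositing weights (normalized for clarity)
ref = rng.random(3)                # reference color for this ray

loss_of_expectation = np.sum((w @ c - ref) ** 2)             # NeRF-style
expected_loss = np.sum(w * np.sum((c - ref) ** 2, axis=1))   # per-candidate supervision

print(loss_of_expectation, expected_loss)
```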
07.08.2025 12:21
By changing just a few lines of code, we can adapt existing NeRF frameworks for surface reconstruction.
This patch shows the necessary changes to Instant NGP, which was originally designed for volume reconstruction.
07.08.2025 12:21
Methods like NeRF and Gaussian Splats model the world as radioactive fog, rendered using alpha blending. This produces great results... but are volumes the only way to get there? 🤔 Our new SIGGRAPH'25 paper directly reconstructs surfaces without heuristics or regularizers.
07.08.2025 12:21
Release notes - Mitsuba 3
It also adds support for function freezing so that the process of rendering a scene can be captured and cheaply replayed. See mitsuba.readthedocs.io/en/latest/re... for details.
07.08.2025 11:15
Release notes - Mitsuba 3
Mitsuba significantly improves performance on the OptiX backend and adopts SER (Shader Execution Reordering) throughout various integrators. It adds special shapes and an integrator for Gaussian splatting, a sun-sky emitter, and fixes missing partial-derivative terms in differentiable integrators.
07.08.2025 11:15
Changelog -
Dr.Jit adds matrix operations that compile to tensor cores (CUDA) or vector instruction sets like AVX512, neural network abstractions, grid/permutohedral encodings, function freezing, and shader execution reordering (SER). Many improvements simplify development. drjit.readthedocs.io/en/stable/ch...
07.08.2025 11:15
Dr.Jit+Mitsuba just added support for fused neural networks, hash grids, and function freezing to eliminate tracing overheads. This significantly accelerates optimization & real-time workloads and enables custom Instant NGP and neural material/radiosity/path guiding projects. What will you do with it?
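As a rough idea of what function freezing looks like in user code (the @dr.freeze decorator name is taken from the Dr.Jit changelog; treat the exact API and this toy function as assumptions and check the docs linked above):

```python
# Hedged sketch of function freezing with Dr.Jit (verify the exact API in the
# official documentation). The first call traces and compiles the function;
# later calls with compatibly-shaped inputs replay the recorded kernels
# without re-tracing the Python code.
import drjit as dr
from drjit.cuda import Float

@dr.freeze
def step(x):
    # stand-in for an expensive differentiable computation
    return dr.sin(x) * dr.exp(-x)

x = dr.arange(Float, 1024) / 1024
y1 = step(x)      # traced and compiled on first use
y2 = step(x + 1)  # replayed cheaply afterwards
```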
07.08.2025 11:15
Overprinting with Tomographic Volumetric Additive Manufacturing
Imagine 3D printing directly onto an object that already exists.
With Tomographic Volumetric Additive Manufacturing (TVAM), we can now print over existing components, opening up a world of possibilities for multi-component devices.
21.07.2025 08:22
I think everyone just uses Ubuntu 🤷. If you want to be on the bleeding edge, the blood might be yours..
17.07.2025 11:29
Just for announcing articles published in the Journal of Computer Graphics Tools, a diamond open access (free for all) journal at https://jcgt.org
Computer Graphics PhD student at TU Berlin.
(differentiable) rendering, inverse graphics, GPGPU
mworchel.github.io
Senior at Disney Animation & Professor at Edinburgh Napier University SCEBE. Part-time CTO of Cobra Simulation, 3Finery & new DanceGraph spinout. Real-time Rendering, Animation, Games, Movies & Themeparks
Activision, previously at Unity, Bungie, AMD/ATI
all opinions my own.
Real-Time Rendering Enthusiast. But let's be honest - all rendering. and some ML.
I'm a researcher at AMD working on improving computer graphics with the help of deep learning.
Previously: Intel Labs. PhD from UCSD with Prof. Ravi Ramamoorthi
https://alexku.me/
Geometry processing postdoc at École Polytechnique
https://markjgillespie.com/
PhD student at graphics.cg.uni-saarland.de working on light transport simulation. Ex rendering researcher at Weta Digital.
PhD student at UTokyo, interested in graphics.
Professor at ETH Zürich, Research Scientist at Google
Assistant Professor of Computer Science at TU Darmstadt, Member of @ellis.eu, DFG #EmmyNoether Fellow, PhD @ETH
Computer Vision & Deep Learning
Research scientist at NVIDIA. Learned physics models, generative video & more.
Prof. at the CS department at the Technion. Intrigued by geometry, shapes, math and people, not necessarily in this order.
https://mirela.net.technion.ac.il/
Graphics researcher at TU Delft. Formerly Intel, KIT, NVIDIA, Uni Bonn. Known for moment shadow maps, MBOIT, blue noise, spectra, light sampling. Opinions are my own.
https://MomentsInGraphics.de
Sr. Distinguished Engineer @nvidia
Computer Graphics & Vision @ Adobe
Assistant Professor @uchicago @uchicagocs. PhD from @TelAvivUni. Interested in computer graphics, machine learning, & computer vision
Research Scientist @ Intel
PhD candidate @ Universidad de Zaragoza
Computer Graphics and Deep Learning - Material Appearance and Visual Perception
Women in Tech
Hakuna Matata