
Ben Beilharz

@ben.graphics.bsky.social

I do my PhD on physically based (differentiable) rendering and material appearance modeling/perception in @tsawallis.bsky.social's Perception Lab. I enjoy photography, animation/VFX/games, working on my renderer, languages, and contributing to open source.

254 Followers  |  497 Following  |  142 Posts  |  Joined: 29.12.2023

Posts by Ben Beilharz (@ben.graphics.bsky.social)

Also found something else, which is now included in v0.2.2. I forgot to allow the sensor to yield a film plugin.
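For context, in Mitsuba's plain dict interface the film is a plugin nested inside the sensor. A hand-written sketch of that shape (illustration only, not the package's generated output):

```python
# Plain Mitsuba scene-dict sketch (hand-written illustration, not the
# package's generated output): the film plugin nests inside the sensor.
sensor = {
    "type": "perspective",
    "fov": 45,
    "film": {  # this nested child is what the sensor previously couldn't yield
        "type": "hdrfilm",
        "width": 256,
        "height": 256,
    },
}
# mitsuba.load_dict({"type": "scene", "sensor": sensor, ...}) then builds
# the sensor together with its film.
```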

02.03.2026 13:05 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0
GitHub - pixelsandpointers/mitsuba-scene-description: An automatically generated Pythonic API for Mitsuba plugins to build your scenes programmatically.

New bugfix is available for: github.com/pixelsandpoi...

Now nested plugins serialize/resolve correctly in your scene, so you can start using BlendedBSDFs :)
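In plain Mitsuba dict terms, a blended BSDF is exactly such a nested plugin. A minimal sketch (dict form; the child key names are arbitrary illustrative labels):

```python
# Sketch of the nested-plugin case this fix enables: Mitsuba's blendbsdf
# mixes two child BSDFs by a weight. Plain dict form; the child keys
# ("matte", "metal") are arbitrary labels.
blended = {
    "type": "blendbsdf",
    "weight": 0.3,  # mixing weight between the two nested BSDFs
    "matte": {
        "type": "diffuse",
        "reflectance": {"type": "rgb", "value": [0.8, 0.2, 0.2]},
    },
    "metal": {"type": "roughconductor", "alpha": 0.1},
}
# With nested serialization fixed, an msd BSDF object placed inside another
# BSDF should resolve to a dict of this shape.
```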

02.03.2026 12:43 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

Are you talking about Johannes’ jo.dreggn.org/home/2015_mn...?

26.02.2026 17:43 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

Fly through of part of a Keeper level directly in the Unreal Engine editor. Part way through I turn on all of the game objects the player doesn't see -- it takes a lot to make games work.

25.02.2026 19:23 β€” πŸ‘ 364    πŸ” 96    πŸ’¬ 10    πŸ“Œ 2
GitHub - rlguy/Blender-FLIP-Fluids: The FLIP Fluids addon is a tool that helps you set up, run, and render high quality liquid fluid effects all within Blender, the free and open source 3D creation suite.

I was today years old: github.com/rlguy/Blende...

25.02.2026 17:21 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

A couple of great Slang Shader talks are ready for viewing:

24.02.2026 18:57 β€” πŸ‘ 6    πŸ” 3    πŸ’¬ 0    πŸ“Œ 0
Vulkan releases game engine tutorial

The Vulkan Working Group has published Building a Simple Game Engine, a new in-depth tutorial for developers ready to move beyond the basics and into professional-grade engine development.

Learn more: www.khronos.org/blog/new-vul...
#vulkan #tutorial #programming #gpu #gameengine

25.02.2026 14:34 β€” πŸ‘ 243    πŸ” 37    πŸ’¬ 10    πŸ“Œ 5

Late to the party, but…
I just had the most cinematic encounter in my entire video game career fighting against a primed Titan in #FFXVI.

Good job @square-enix-games.com, I’m flabbergasted.

24.02.2026 21:40 β€” πŸ‘ 3    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

This release mainly stabilizes the scraping process so the API can be generated automatically during package build. I also introduced a builder pattern for scene construction.

If you use Mitsuba and want to build scenes programmatically, give it a try and let me know if you find any rough edges! :)

Cheers!
3/3

23.02.2026 10:09 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

You can pass the MITSUBA_VERSION env var (also inline during your pip install mitsuba-scene-description) to specify the version you want to build against. If you do not set it, the package attempts to import Mitsuba and sources the plugin reference from that installation, falling back to v3.7.1.
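That resolution order can be sketched as follows (hypothetical helper name; the package's internals may differ):

```python
import os

def resolve_mitsuba_version(env=None):
    # Hypothetical sketch of the resolution order described above:
    # MITSUBA_VERSION env var -> installed mitsuba -> fallback v3.7.1.
    env = os.environ if env is None else env
    version = env.get("MITSUBA_VERSION")
    if version:
        return version
    try:
        import mitsuba
        return mitsuba.__version__
    except ImportError:
        return "3.7.1"
```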

2/x

23.02.2026 10:09 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
Image shows a code example for the Python package introduced in this post.

The code is as follows (for screen readers):
import mitsuba_scene_description as msd
import mitsuba as mi

mi.set_variant("llvm_ad_rgb")

# Define components
diffuse = msd.SmoothDiffuseMaterial(reflectance=msd.RGB([0.8, 0.2, 0.2]))
ball = msd.Sphere(
    radius=1.0,
    bsdf=diffuse,
    to_world=msd.Transform().translate(0, 0, 3).scale(0.4),
)
cam = msd.PerspectivePinholeCamera(
    fov=45,
    to_world=msd.Transform().look_at(
        origin=[0, 1, -6], target=[0, 0, 0], up=[0, 1, 0]
    ),
)
integrator = msd.PathTracer()
emitter = msd.ConstantEnvironmentEmitter()

# builder pattern
scene = (
    msd.SceneBuilder()
    .integrator(integrator)
    .sensor(cam)
    .shape("ball", ball)
    .emitter("sun", emitter)
    .build()
)

# or 
scene = msd.Scene(
    integrator=integrator,
    sensors=cam,  # also accepts a list for multi-sensor setups
    shapes={"ball": ball},
    emitters={"sun": emitter},
)

mi.load_dict(scene.to_dict())
# will return:
{'ball': {'bsdf': {'reflectance': {'type': 'rgb', 'value': [0.8, 0.2, 0.2]},
                   'type': 'diffuse'},
          'radius': 1.0,
          'to_world': Transform[
  matrix=[[0.4, 0, 0, 0],
          [0, 0.4, 0, 0],
          [0, 0, 0.4, 1.2],
          [0, 0, 0, 1]],
  ...
],
          'type': 'sphere'},
 'integrator': {'type': 'path'},
 'sensor': {'fov': 45,
            'to_world': Transform[...],
            'type': 'perspective'},
 'sun': {'type': 'constant'},
 'type': 'scene'}


G'day!
I've just published a new version of mitsuba-scene-description to GitHub and PyPI: github.com/pixelsandpoi...

I've changed the generation process, so you no longer need to manually clone and build the API yourself. The Mitsuba plugin API will now be generated during package build.

1/x

23.02.2026 10:09 β€” πŸ‘ 4    πŸ” 2    πŸ’¬ 1    πŸ“Œ 0

Thank you for the warm words!

It would already suffice to use git and check with a git diff before packaging the TeX project. Claude actually provides diffs and asks for permission. Maybe there was just one permission too many.

20.02.2026 11:53 β€” πŸ‘ 2    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

cover it up with the hope that no one will see the error. I'm an honest guy, and I'd like to keep it that way. I just feel sorry for the people whose reputations I may have hurt with this, i.e. Tom and the respective authors of the paper.

Here’s to new scientific integrity.

20.02.2026 11:32 β€” πŸ‘ 2    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

I just decided to let Claude do it for me instead. While it fixed some issues, it really did more harm than good. To be frank, I just messed up by not checking things meticulously after I let Claude do its thing. In the end, I had my learning, and I'd rather be open about my wrongdoing than try to...

20.02.2026 11:32 β€” πŸ‘ 3    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

Pretty much my reaction to this as well. I'm pretty anti-LLMs for the scientific process, but the paper I revised had tons of weird Tikz hacks in it, where I wanted to embed Tikz plots in a table which caused all sorts of problems that Arxiv didn't accept. After 2 hours of trying to fix it myself...

20.02.2026 11:32 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

What a banger set of contributions πŸ₯³ Ordered!

20.02.2026 08:41 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

Excited to share that GPU Zen 4: Advanced Rendering Techniques is officially out. This volume features work from some of the most visually impactful projects of recent years, including #AssassinsCreed , #DOOM , #StarwarsOutLaws and NVidia's Zorah. And a killer cover :)

20.02.2026 01:02 β€” πŸ‘ 87    πŸ” 19    πŸ’¬ 5    πŸ“Œ 2
MRD: Using Physically Based Differentiable Rendering to Probe Vision Models for 3D Scene Understanding While deep learning methods have achieved impressive success in many vision benchmarks, it remains difficult to understand and explain the representations and decisions of these models. Though vision ...

Summary:

arxiv.org/abs/2512.123...
Contains the correct citation (w/o appendix and revised table).

arxiv.org/abs/2512.123...
arxiv.org/abs/2512.123...
Contain the incorrect citation.

Version 4 is on its way with the fixes.

19.02.2026 11:36 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

100% agreed. I had everything in check with the first version I uploaded. I am usually more cautious about these things. I had split off the submission for another journal and didn't copy the git folder, so I did not see the applied changes in the end. Otherwise this would have been an easy find with git.

19.02.2026 10:55 β€” πŸ‘ 1    πŸ” 1    πŸ’¬ 1    πŸ“Œ 0

Fortunately, only one citation was affected. Still, kind of mad about the fact that this happened. Once more, sorry to all colleagues affected.

I also added the missing dates from a few citations and fixed the entries where PO Box was a co-author (sources resolved correctly).

19.02.2026 10:33 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

My sincere apologies to the authors whom I failed to cite correctly. And a note of caution: Never let LLMs touch your papers, even for small fixes.

19.02.2026 09:48 β€” πŸ‘ 3    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0
Beyond Pixels: A Differentiable Pipeline for Probing Neuronal Selectivity in 3D Visual perception relies on inference of 3D scene properties such as shape, pose, and lighting. To understand how visual sensory neurons enable robust perception, it is crucial to characterize their s...

[16] Sriram Guna Elumalai, Sree Harsha Nelaturu, Subha Nagarajan, Ines Rieger,
Bjoern Eskofier, and Andreas Maier. 2025. Beyond Texture: Generating
Interpretable Extremely-High Activation Images for Robust Vision Models.
arXiv:2501.07827 [cs.CV] should point to this: arxiv.org/abs/2510.13433

2/x

19.02.2026 09:48 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

Unfortunately, I let Claude fix arXiv compilation errors. Apparently, it changed a citation in the process (one that was correct in v1 of the paper). I am now re-reviewing all the citations by hand and preparing a new version that fixes this. I will also post the corrections here:
1/x

19.02.2026 09:48 β€” πŸ‘ 7    πŸ” 1    πŸ’¬ 3    πŸ“Œ 3

Just watched the talk. Thanks for sharing and great job on integrating the RT backend in Godot. Appreciate the code samples!

18.02.2026 17:40 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0
Introduction to Substrate Materials | Unreal Engine 5.7
YouTube video by Unreal Engine Introduction to Substrate Materials | Unreal Engine 5.7

Not gonna lie, I'm impressed by Unreal's new layered material system (Substrate): www.youtube.com/watch?v=d1nc...

Reminds me a lot of Wenzel's and Andrea's work on layered BSDFs, but for real-time graphics. Happy to see it in Unreal.

www.cg.tuwien.ac.at/research/pub...

rgl.epfl.ch/publications...

17.02.2026 13:35 β€” πŸ‘ 3    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

Cheers mate!

16.02.2026 22:55 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

Thanks for sharing our stuff!
Stay tuned for more 😬

16.02.2026 22:53 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

Sorry for the wait (I was in China for 4 weeks over the holidays), but I finally managed to update the pre-print with more results, and the code is now also available at: github.com/ag-perceptio...

If you run into issues, let me know.

16.02.2026 17:45 β€” πŸ‘ 5    πŸ” 2    πŸ’¬ 1    πŸ“Œ 1

The talk is scheduled for:
Talk Session: 3D Shape and Space Perception
Date/Time: Monday, May 18, 2026, 8:15 – 9:45 am
Location: Talk Room 2

10.02.2026 12:34 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0
MRD: Using Physically Based Differentiable Rendering to Probe Vision Models for 3D Scene Understanding While deep learning methods have achieved impressive success in many vision benchmarks, it remains difficult to understand and explain the representations and decisions of these models. Though vision ...

Happy to announce that I will be giving a remote talk @vssmtg.bsky.social this year. I will be presenting our recent pre-print MRD: Metamers rendered differentiably (arxiv.org/abs/2512.12307).

Happy to take questions before the presentation in May, or live during the online Q&A.

#VSS2026 #VSS26

10.02.2026 12:34 β€” πŸ‘ 4    πŸ” 1    πŸ’¬ 1    πŸ“Œ 0