I will be presenting in the 10:30 session in room 208-209, where I'll share thoughts about implicit/generative and explicit appearance representations. See you there if that sounds interesting!
13.08.2025 16:39
@mishok43.bsky.social · CG&AI PhD Student @KIT · Now at Reality Labs, Meta · ex-Eagle Dynamics, ex-WellDone Games · mishok43.com
In just less than an hour we'll present "Neural Two-Level Monte Carlo Real-Time Rendering" at SIGGRAPH 2025 in the "Best of Eurographics" session.
Please join us, it'll be fun!
Room 208-209
10:30-11:30
Physically-based differentiable rendering enables inverse rendering, but handling visibility is hard. Our SIGGRAPH 2025 paper uses quadrics to importance sample silhouette edges, outperforming all existing unidirectional differentiable path tracers.
momentsingraphics.de/Siggraph2025...
Done, thank you so much for the info!
28.07.2025 13:52
And it's a fantastic coincidence that I'm already next to Vancouver and don't need to suffer from jet lag a second time... a horrible thing
28.07.2025 04:53
Session time and location for your schedule:
s2025.conference-schedule.org/presentation...
Big news! The Eurographics Association invited us to present our work "Neural Two-Level Monte Carlo Real-Time Rendering" at #SIGGRAPH2025
Super honored - my first SIGGRAPH!
Let's discuss neural & real-time rendering, grab a coffee, or just hang out - feel free to send me a DM!
Just joined Reality Labs at Meta to do some real-time neural rendering research. Unfortunately without a 9-digit compensation package, but WIP
I'm in the Redmond office, but I've already visited Seattle. If you wanna grab a coffee, my DMs are open
Currently, the issue of efficient spatial encoding is more or less resolved (iNGP, or the new GATE by @boksajak.bsky.social).
But that's not the case for the directional domain at all, despite its importance for unbiased rendering techniques such as Cache-Based Resampling and the 2-Level Monte-Carlo Estimator.
To be completely honest, it's an equal-time comparison:
3 spp vs 1 spp + 25 neural resamples
I believe we should invest more resources into faster adaptation of neural caches and more aggressive quantization, so we can deliver this to production real-time rendering.
It's cool work that deserves a lot of attention from the real-time rendering community!
Previously I had a little bit of time to conduct similar experiments on top of our NIRC, since each additional neural sample costs just pure tensor FLOPs (~1.5 ms on a 4080)
1 spp vs 1 spp + 25 cache resamples
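For intuition, here's a tiny 1D toy of the cache-based resampling idea (RIS-style): many cheap cache evaluations steer where the single expensive sample goes. This is just my own illustration under simplified assumptions; all function names are made up and it's not the exact algorithm from either paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def expensive_integrand(x):
    # Stand-in for tracing a real path: the true (costly) integrand on [0, 1].
    return np.exp(-x) * (1.0 + 0.5 * np.sin(8.0 * x))

def cheap_cache(x):
    # Stand-in for a neural cache query: a cheap approximation of the integrand.
    return np.exp(-x)

def ris_estimate(num_candidates=25):
    # Resampled importance sampling: draw candidates from a simple source pdf,
    # weight them by the cache-driven target, pick one proportionally to its
    # weight, and evaluate the expensive integrand only for that candidate.
    x = rng.random(num_candidates)        # candidates from p(x) = 1 on [0, 1]
    target = cheap_cache(x)               # RIS target, proportional to the cache
    w = target / 1.0                      # w_i = target(x_i) / p(x_i)
    j = rng.choice(num_candidates, p=w / w.sum())
    # Unbiased as long as the target is positive wherever the integrand is.
    return expensive_integrand(x[j]) / target[j] * w.mean()

print(np.mean([ris_estimate() for _ in range(20000)]))  # approaches the true integral
```

In a renderer the candidates would be directions, the source pdf the BSDF, and the target something like cache radiance times BSDF; the point is just that the 25 cache lookups cost tensor FLOPs, not rays.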
I found it cool to hear the motivation for the High-Frequency Learnable Encoding for NRC/NIRC/Neural Ambient Occlusion from the perspective of Kernel Machines!
Classics
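For context, a rough gloss of that connection (mine, not the speaker's exact argument): a fixed high-frequency encoding feeding a linear or shallow model acts like a kernel machine with a shift-invariant kernel, e.g. for Fourier-style features:

```latex
% Frequency encoding as a feature map; the induced kernel is stationary,
% and the bandwidth of B controls how high-frequency the fit can be.
\gamma(\mathbf{v}) = \big[\cos(2\pi \mathbf{B}\mathbf{v}),\, \sin(2\pi \mathbf{B}\mathbf{v})\big], \qquad
\gamma(\mathbf{v}_1)^{\top}\gamma(\mathbf{v}_2) = \sum_{j} \cos\!\big(2\pi \mathbf{b}_j^{\top}(\mathbf{v}_1 - \mathbf{v}_2)\big)
```

Making the encoding learnable then amounts to adapting that kernel to the signal.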
Hmmm, 1 min sounds amazing, I didn't expect such speed!
06.06.2025 09:57
Congrats to you and to the whole team!
I've got a question: basically the volumetric blocks don't have any transparency, do they? And you rely on pruning to make them look right because of it?
Thanks!
17.05.2025 16:34
I was very honoured to receive one of the two Eurographics Young Researcher Awards 2025 yesterday!
This is the result of the work of many people: mentors, collaborators, students, and friends who trusted me and taught me so much along the way!
Huge congrats, Valentin!
100% Deserved
If you missed the paper, just check it out:
mishok43.github.io/nirc/
And feel free to reach out if you wanna help me push neural rendering forward for real-time applications
We received an Honorable Mention at Eurographics 2025 in London for our work "Neural Two-Level Monte Carlo Real-Time Rendering"!
Huge thanks to everyone who supported me along the way, and to the EG chairs, committee, and organizers for this recognition
That's what I mean by the lack of compute
15.05.2025 11:00
If you're interested in something like AlphaEvolve but focused on CG, GameDev, or offline rendering, feel free to reach out. I've been leading research in this space with strong results so far.
But we need support with compute, expertise, and maybe even engineering
And of course, huge thanks to my amazing co-authors Dmitrii Klepikov, Johannes Hanika, and Carsten Dachsbacher, and to industry friends @kaplanyan.bsky.social, Sebastian Herholz, and @momentsingraphics.bsky.social, who helped me along the way!
11.05.2025 15:55
But there are so many other cool questions that we tried to cover.
So please check out our webpage, demo videos, and the paper itself: mishok43.github.io/nirc/
I'll be at Eurographics next week in London. If you're around and want to talk rendering, AI, or just grab coffee, DM me!
In equal-time comparisons, NIRC achieves surprisingly cool results in both the biased and unbiased cases.
But yeah... variance may increase next to foliage, brush, and trees, still the eternal pain in CG
Using Two-Level Monte Carlo, we can debias NIRC while still cutting variance, thanks to fast cache sampling: dozens of cache samples for the cost of one real path.
It works like (N)CV, i.e. (neural) control variates, but doesn't introduce any architectural constraints! No need to train normalizing flows on the fly.
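Roughly, the estimator has the familiar control-variate split; here is a sketch of the idea as described in this thread (my notation: L is the true incident radiance, L-tilde the NIRC prediction, f_r the BSDF, p the sampling pdf; the paper's exact terms may differ):

```latex
% Two-level split: integrate the cheap cache with many samples,
% then correct with the expensive residual using only a few real paths.
L_o \approx
\underbrace{\frac{1}{N}\sum_{i=1}^{N}
  \frac{f_r(\omega_i)\,\tilde{L}(\omega_i)\,\cos\theta_i}{p(\omega_i)}}_{N\ \text{cheap neural samples}}
\;+\;
\underbrace{\frac{1}{M}\sum_{j=1}^{M}
  \frac{f_r(\omega_j)\,\big(L(\omega_j)-\tilde{L}(\omega_j)\big)\,\cos\theta_j}{p(\omega_j)}}_{M\ \text{real paths},\ M \ll N}
```

Both sums are unbiased for their respective integrals, so the cache's bias cancels out while most of the variance is absorbed by the cheap first term.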
Another positive scalability property: deeper MLPs do improve quality here
Downside: not all scenes benefit from it (especially with a high-variance MC estimator; this needs further research)
And we get basically classical Monte Carlo integration, but over the neural domain!
But it scales pretty well with the number of neural samples!
NIRC amortizes iNGP costs via a task reformulation: from outgoing to incident radiance.
1. Use a hash grid at the surface point → get a latent light representation
2. Sample incoming directions via the BSDF
3. Decode radiance with MLPs (per direction)
The more directions, the more we leverage GPU tensor FLOPs
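A toy PyTorch-style sketch of the amortization idea (my own illustration; the module names, sizes, and the stand-in spatial encoder are placeholders, not the paper's exact architecture):

```python
import torch
import torch.nn as nn

class ToyNIRC(nn.Module):
    """Toy incident-radiance cache: encode the shading point once,
    then decode many BSDF-sampled directions with a small shared MLP."""
    def __init__(self, latent_dim=32, hidden=64):
        super().__init__()
        # Placeholder for a multiresolution hash-grid encoding (e.g. iNGP);
        # a plain MLP stands in for it here.
        self.spatial_encoder = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(), nn.Linear(hidden, latent_dim))
        # Small decoder evaluated once per sampled direction.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3))  # RGB incident radiance

    def forward(self, position, directions):
        # position:   (B, 3)    one shading point per batch element
        # directions: (B, N, 3) N BSDF-sampled incoming directions per point
        latent = self.spatial_encoder(position)                 # paid once per point
        latent = latent.unsqueeze(1).expand(-1, directions.shape[1], -1)
        return self.decoder(torch.cat([latent, directions], dim=-1))  # (B, N, 3)

# The memory-bound spatial lookup is shared across all N directions;
# extra directions only add dense, tensor-core-friendly MLP FLOPs.
cache = ToyNIRC()
out = cache(torch.rand(1024, 3), torch.randn(1024, 25, 3))
print(out.shape)  # torch.Size([1024, 25, 3])
```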
Inspired by NRC + iNGP's adaptivity from the amazing Thomas Müller, Christoph Schied, Jan Novák, Alex Evans, et al., but we found key limits:
- Up to 70% of the time is spent on iNGP → memory-bound
- MLP depth brings no significant quality gains → poor FLOPs scaling
- Biased for specular & detailed BSDFs with normals
CG Paper, EG 2025
As scenes & lighting in games grow in complexity, we introduce the Neural Incident Radiance Cache (NIRC), a real-time, online-trainable cache that:
- Costs just ~1 ms per neural sample at 1080p
- Decreases MC variance
- Saves on bounces (sketch below)
www.youtube.com/watch?v=Y791...
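To illustrate "Saves on bounces": the usual way a radiance cache pays off in a path tracer is that paths get terminated early and the remaining transport is looked up in the cache. A deliberately tiny toy (my own sketch, not the paper's integrator; the "scene" is a single surface with constant emission and albedo, so the reference has a closed form):

```python
EMISSION, ALBEDO = 1.0, 0.7
TRUE_L = EMISSION / (1.0 - ALBEDO)   # closed-form transport for this toy

def cache_query():
    # Stand-in for a neural radiance-cache lookup (here: a slightly-off guess).
    return 0.95 * TRUE_L

def estimate(max_bounces=2, use_cache=True):
    # Toy "path": every bounce adds EMISSION and continues with throughput ALBEDO.
    radiance, throughput = 0.0, 1.0
    for _ in range(max_bounces):
        radiance += throughput * EMISSION
        throughput *= ALBEDO
    if use_cache:
        # Terminate into the cache: one lookup stands in for all remaining bounces.
        radiance += throughput * cache_query()
    # Without the cache, truncation simply loses the remaining energy.
    return radiance

print(TRUE_L, estimate(use_cache=True), estimate(use_cache=False))
# 3.33...  3.25...  1.7  -> two bounces plus one cache lookup recover most of it
```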