SniperJake945

@tearsofjake.bsky.social

Computer Graphics Investigator (CGI)

283 Followers  |  66 Following  |  65 Posts  |  Joined: 31.08.2024

Latest posts by tearsofjake.bsky.social on Bluesky

Digital Iris
YouTube video by Ancient Digital Iris

"Animated Bokeh" has been on my whiteboard for a long time, and this is the result.

I thought it would just be for cheesy novelty effects, but it also does lightfield manipulation that was cooler than I expected.

www.youtube.com/watch?v=Kg_2...

07.02.2026 17:34 — 👍 174    🔁 68    💬 12    📌 8
MOPs: Motion Operators for Houdini This course provides an overview of the MOPs workflow and how it integrates with the rest of Houdini, shows concrete examples of how MOPs can be used in both motion graphics and visual effects workflo...

My MOPs course on Houdini.School is now free! If you want a start-to-finish course on how (almost) everything works, plus the math background, you can watch it all here: www.houdini.school/courses/mops...

29.01.2026 20:41 — 👍 22    🔁 8    💬 0    📌 0
voronerf output

ground truth lego from the back

voronoi-based NERF training isn't going incredibly well... 😂 20k iterations with 80k voronoi sites take about 10 hours to train. And it still looks dog water.

Definitely need more sites and a better approach to accelerating sampling...

25.01.2026 21:08 — 👍 2    🔁 0    💬 0    📌 0

When I accidentally start to minimize my Dirichlet energy 😣

18.01.2026 18:22 — 👍 1    🔁 0    💬 0    📌 0
Video thumbnail

Voronoi implicit, but this time in 3D. It's not fully NERF mode yet, since I'm not doing any kind of sampling along rays; this is just trying to learn volumetric data.

30,000 Voronoi sites. Probably not enough for the pighead, but it's just a fun first test.

17.01.2026 01:34 — 👍 7    🔁 0    💬 0    📌 0
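A minimal sketch of the soft (differentiable) Voronoi volume idea described above, under stated assumptions: a query point blends per-site density values with a softmax over negative squared distances to the sites. The site count, temperature, and the idea of storing a single scalar density per site are all illustrative here, not the actual training setup.

```python
import math
import random

# Hypothetical soft-Voronoi volume: each site carries a position and a
# learnable density; a query point blends site densities with a softmax
# over negative squared distances (the "soft" nearest-site rule).
random.seed(0)
N_SITES = 64                      # toy count; the post uses 30,000
sites = [(random.random(), random.random(), random.random())
         for _ in range(N_SITES)]
densities = [random.random() for _ in range(N_SITES)]
TEMPERATURE = 50.0                # softmax sharpness (assumed)

def query_density(x, y, z):
    """Softmax-weighted blend of per-site densities at a 3D point."""
    logits = [-TEMPERATURE * ((x - sx) ** 2 + (y - sy) ** 2 + (z - sz) ** 2)
              for sx, sy, sz in sites]
    m = max(logits)                           # stabilize exp
    weights = [math.exp(l - m) for l in logits]
    total = sum(weights)
    return sum(w * d for w, d in zip(weights, densities)) / total

d = query_density(0.5, 0.5, 0.5)
```

As the temperature grows this converges to a hard nearest-site lookup; the output is always a convex combination of the site densities.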

So the hardliners, in my mind, are people trying to cash in on the idea that companies will choose code assistants over employees. And they need the AI companies to succeed in that vision. And then extremist POVs also gain a lot of traction on social media. So it's kind of this insane feedback loop.

14.01.2026 18:52 — 👍 1    🔁 0    💬 0    📌 0

My conspiracy is that in order for the investment these AI companies have made to pay off, they need it to be a full-on replacement. Otherwise, when costs normalize, the value for a given consumer won't be there. Especially if a company still has to pay for both an engineer and a code assistant.

14.01.2026 18:48 — 👍 0    🔁 0    💬 1    📌 0
Video thumbnail

Lebronsketball 2

10.01.2026 01:48 — 👍 1    🔁 0    💬 1    📌 0
Video thumbnail

Lebronsketball 1

10.01.2026 01:43 — 👍 0    🔁 0    💬 0    📌 0
Video thumbnail

*morphs your cat*

10.01.2026 00:34 — 👍 0    🔁 0    💬 0    📌 0
Gaboronoi (top) vs Anisotropic Voronoi (bottom)

Gaboronoi (top) vs Anisotropic Voronoi (bottom), with 5000 sites. I feel like the anisotropic voronoi's artifacts are more aesthetically pleasing. Gaboronoi is faster to train by a lot!

I've compiled all of my recent voronoi experiments into a Colab:
colab.research.google.com/github/jaker...

04.01.2026 06:47 — 👍 2    🔁 1    💬 0    📌 0
example gaboronoi

Assuming uniform weights and frequencies, and random colors and anisotropy directions, this is what an example gaboronoi diagram would look like.

04.01.2026 01:32 — 👍 8    🔁 2    💬 0    📌 0

Similar to this post: bsky.app/profile/tear...

We're using 3000 voronoi sites. It captures the high-frequency info worse than the anisotropic voronoi, which is interesting. But it's also one fewer parameter to learn.

04.01.2026 01:32 — 👍 0    🔁 0    💬 1    📌 0
cat neural implicit

gaboronoi tessellation

I'm once again making neural implicits of my cats. This time we're back to the voronoi. Gabor noise style.

We weight the result of the softmax at any site i by sin(F_i*(x_i-x_j)·u_i), where F_i is a learned frequency and u_i is a learned anisotropy direction. I call it Gaboronoi.

04.01.2026 01:31 — 👍 3    🔁 0    💬 1    📌 0
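A runnable sketch of that weighting, under assumptions: each site's softmax weight is multiplied by a sinusoid along its learned anisotropy direction (the offset in the sine is interpreted here as site-to-query-point; the post's exact indexing is ambiguous). The temperature and the per-site tuple layout are illustrative, not the Colab's actual code.

```python
import math

def gaboronoi_value(x, y, sites):
    """Soft-Voronoi blend where each site's softmax weight is modulated by
    sin(F_i * ((p - p_i) . u_i)) -- a Gabor-noise-style oscillation along a
    per-site anisotropy direction. Each site is a tuple
    (sx, sy, freq, ux, uy, color); all parameter names are illustrative."""
    TEMP = 40.0                                   # assumed softmax sharpness
    logits, mods, colors = [], [], []
    for (sx, sy, freq, ux, uy, color) in sites:
        dx, dy = x - sx, y - sy
        logits.append(-TEMP * (dx * dx + dy * dy))
        mods.append(math.sin(freq * (dx * ux + dy * uy)))
        colors.append(color)
    m = max(logits)                               # stabilize exp
    w = [math.exp(l - m) * s for l, s in zip(logits, mods)]
    total = sum(w) or 1e-9                        # guard the degenerate zero-sum case
    return sum(wi * c for wi, c in zip(w, colors)) / total
```

Unlike a plain softmax, the sine modulation can make weights negative, which is what produces the oscillating, Gabor-noise-like bands inside each cell.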
noise layers

These are the noise layers that are all summed up to get the predicted image. Each noise layer is colored with two colors, and each layer has a circular mask associated with it as well.

I scaled the values here by a factor of ten so that they're easier to see.

31.12.2025 06:59 — 👍 1    🔁 0    💬 0    📌 0
predicted cats

ground truth cats

Wanted to make a neural implicit that's just layers of anisotropic simplex noise. Turns out it works pretty well. With 24 layers of simplex noise, each with a 96x96 texture of anisotropy data, we can get this kind of result!

It's not at all a good compression method but it's fun and cool :)

31.12.2025 06:57 — 👍 1    🔁 0    💬 1    📌 0
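A sketch of the layered-noise idea from these two posts (two-color layers attenuated by circular masks, all summed). Cheap hash-based value noise stands in for simplex noise to keep it dependency-free, and every layer parameter (frequency, colors, mask, seed) is illustrative rather than the actual model's.

```python
import math

def value_noise(x, y, seed=0):
    """Hash-based value noise as a stand-in for simplex noise; returns
    a value in [0, 1]. Only here to keep the sketch self-contained."""
    def h(ix, iy):
        n = ix * 374761393 + iy * 668265263 + seed * 1442695040888963407
        n = (n ^ (n >> 13)) * 1274126177
        return ((n ^ (n >> 16)) & 0xFFFFFF) / 0xFFFFFF
    ix, iy = math.floor(x), math.floor(y)
    fx, fy = x - ix, y - iy
    # bilinear blend of the four corner hashes
    top = h(ix, iy) * (1 - fx) + h(ix + 1, iy) * fx
    bot = h(ix, iy + 1) * (1 - fx) + h(ix + 1, iy + 1) * fx
    return top * (1 - fy) + bot * fy

def layered_value(x, y, layers):
    """Sum of noise layers: each layer lerps between two colors by the
    noise value, then is attenuated by a circular mask centered at
    (cx, cy) with radius r. Layer tuples are (freq, c0, c1, cx, cy, r, seed)."""
    out = 0.0
    for (freq, c0, c1, cx, cy, r, seed) in layers:
        n = value_noise(x * freq, y * freq, seed)
        color = c0 * (1 - n) + c1 * n            # two-color layer
        d = math.hypot(x - cx, y - cy)
        mask = max(0.0, 1.0 - d / r)             # soft circular mask
        out += color * mask
    return out
```

The learned quantities in the real model would be the per-layer colors, masks, and the 96x96 anisotropy textures; here everything is fixed just to show the compositing.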
Post image

They like to hug now that they're older

27.12.2025 01:21 — 👍 2    🔁 0    💬 1    📌 0
Ground truth cats

Here's the aforementioned paper: sphericalvoronoi.github.io

All credit to Lucky Lyinbor for the idea to apply this to Euclidean voronoi.

Also here's the ground truth image

26.12.2025 22:52 — 👍 3    🔁 0    💬 0    📌 0
Learned anisotropic rep

Learned isotropic rep

Anisotropic voronoi

Isotropic voronoi

I had the idea to incorporate anisotropy into the recent Spherical Voronoi paper. And when we apply their ideas to Euclidean problems (not spherical), the results are pretty great when anisotropy is used. This is 3000 anisotropic sites vs 3000 isotropic sites, same learning rates for both.

26.12.2025 22:49 — 👍 5    🔁 0    💬 2    📌 1
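One common way to parametrize an anisotropic site is a unit direction plus a stretch factor, giving a quadratic-form distance. This sketch assumes that parametrization, which may differ from the paper's exact formulation:

```python
def aniso_dist2(px, py, site):
    """Squared anisotropic distance from point (px, py) to a site.
    Each site is (sx, sy, ux, uy, s): position, unit direction, stretch.
    Offsets along u are scaled by s, perpendicular offsets by 1/s, so
    cells stretch perpendicular to u. This parametrization is an
    assumption about what 'anisotropic site' means here."""
    sx, sy, ux, uy, s = site
    dx, dy = px - sx, py - sy
    along = dx * ux + dy * uy            # component along the direction
    perp = -dx * uy + dy * ux            # perpendicular component
    return (along * s) ** 2 + (perp / s) ** 2

def nearest_site(px, py, sites):
    """Hard Voronoi assignment under the anisotropic metric."""
    return min(range(len(sites)), key=lambda i: aniso_dist2(px, py, sites[i]))
```

With s = 1 everywhere this reduces to ordinary Euclidean Voronoi; with varying s, a point can be assigned to a Euclidean-farther site, which is exactly what makes the learned cells stretch along image features.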

These are technically two different problems, to be clear,* but both are more annoying than I expected.

25.12.2025 20:18 — 👍 0    🔁 0    💬 0    📌 0

Shockingly annoying to find the points of intersection between a plane and an AABB. Even when I know the plane passes through the center of the box.

If anyone has a simple method for finding the exact distance to a plane clipped by a bounding box I'd be eternally grateful.

25.12.2025 20:15 — 👍 0    🔁 0    💬 1    📌 0
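One standard approach to the plane/AABB intersection question above: walk the 12 box edges and linearly interpolate wherever the signed distance to the plane changes sign. This is a sketch of that general technique, not the thread's resolved answer:

```python
def plane_aabb_polygon(n, d, bmin, bmax):
    """Points where the plane {x : n.x + d = 0} crosses the edges of an
    axis-aligned box [bmin, bmax]. Returns the crossing points unordered;
    sort them by angle around their centroid if you need a polygon."""
    # Corner i takes x from bit 0, y from bit 1, z from bit 2 of i.
    corners = [(bmin[0] if i & 1 == 0 else bmax[0],
                bmin[1] if i & 2 == 0 else bmax[1],
                bmin[2] if i & 4 == 0 else bmax[2]) for i in range(8)]
    # The 12 edges connect corners differing in exactly one bit.
    edges = [(a, a | (1 << k)) for a in range(8) for k in range(3)
             if not a & (1 << k)]
    # Signed distance of every corner to the plane (n need not be unit
    # length; only the sign and the interpolation ratio matter).
    sd = [sum(ni * ci for ni, ci in zip(n, c)) + d for c in corners]
    pts = []
    for a, b in edges:
        if (sd[a] < 0) != (sd[b] < 0):           # edge straddles the plane
            t = sd[a] / (sd[a] - sd[b])
            pts.append(tuple(ca + t * (cb - ca)
                             for ca, cb in zip(corners[a], corners[b])))
    return pts
```

For the distance-to-clipped-plane query, one option is to build this polygon once and then take point-to-polygon distance, though that is a second (also annoying) subproblem.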
Video thumbnail

Computing the exact bijection of the optimal transport (OT) problem between very large point sets is completely intractable…

In our SIGGRAPH Asia 2025 paper: β€œBSP-OT: Sparse transport plans between discrete measures in log-linear time” we get one with typically 1% of error in a few seconds on CPU!

01.10.2025 13:55 — 👍 45    🔁 16    💬 1    📌 3

Obviously doing this through noise and no visibility tests would be far faster, but it would be fun to play with wind shaping the coverage of the grass.

12.12.2025 19:27 — 👍 1    🔁 0    💬 0    📌 0

I'd imagine snow-covered grass is relatively similar, but on its side. Each blade could have a wind-direction ray, and if the ray doesn't hit any other blade of grass in the wind direction (over a certain distance), then it could be snow-covered. And then combo that with some general snow coverage.

12.12.2025 19:26 — 👍 2    🔁 0    💬 1    📌 0

For the snow on the hedge maze in Zootopia 2, we cast rays upwards from each leaf and if it didn't collide with anything we instanced a snow chunk onto it. We made a few variants of the snow chunks to add extra variation.

12.12.2025 19:22 — 👍 8    🔁 0    💬 1    📌 0
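The described test can be sketched like this, with sphere occluders standing in for the actual hedge geometry; everything here is illustrative, not the studio tool:

```python
import math

def snow_covered_leaves(leaves, occluders):
    """For each leaf position, cast a ray straight up (+y); leaves whose
    ray hits no occluder get a snow chunk instanced onto them.
    `leaves` are (x, y, z) points; `occluders` are ((cx, cy, cz), radius)
    spheres standing in for whatever geometry production traced against."""
    def ray_up_hits_sphere(p, center, radius):
        # The upward ray is the vertical line x = p.x, z = p.z with y >= p.y.
        dx, dz = p[0] - center[0], p[2] - center[2]
        horiz2 = dx * dx + dz * dz
        if horiz2 > radius * radius:
            return False                    # vertical line misses the sphere
        half = math.sqrt(radius * radius - horiz2)
        return center[1] + half >= p[1]     # top of the chord is above the leaf
    return [leaf for leaf in leaves
            if not any(ray_up_hits_sphere(leaf, c, r) for c, r in occluders)]
```

In a real setup the occluders would be the hedge mesh itself (via a BVH or the DCC's ray-tracing op), and the surviving leaves would get one of the snow-chunk variants instanced on them.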

Right, but the idea of being able to eat it (in the future) is itself a form of satisfaction. And plenty of people hoard items they want to be able to indulge in at the time of greatest satisfaction. Like someone who doesn't use their consumables in an RPG because they might need them for the next boss.

12.12.2025 07:41 — 👍 0    🔁 0    💬 0    📌 0

my FX work alone required at least 1 quadrillion of the 3.7 quadrillion rays traced on Zootopia 2. This is FACTUAL and REAL. If you are a journalist, please cite this in your article.

25.11.2025 06:47 — 👍 12    🔁 1    💬 1    📌 0

In the surflet examples it feels like a trajectory/barycenter-based approach, almost like OT, and in these examples it feels much more like traditional SDF blending. If your source and target objects aren't right on top of each other, would this blending still look reasonable? Really neat stuff!!

20.11.2025 18:45 — 👍 1    🔁 0    💬 2    📌 0

my GOAT 🐐🐐🐐🐐

15.11.2025 01:27 — 👍 3    🔁 0    💬 0    📌 0

I appreciate the kind words!!! I really enjoyed hanging out with you and the JangaFX team!!! Y'all are absolutely wizards 🧙‍♂️🧙‍♂️🧙‍♂️🧙‍♂️

13.08.2025 03:40 — 👍 1    🔁 0    💬 0    📌 0