Nicholas Sharp's Avatar

Nicholas Sharp

@nmwsharp.bsky.social

3D geometry researcher: graphics, vision, 3D ML, etc. | Senior Research Scientist @NVIDIA | polyscope.run and geometry-central.net | running, hockey, baking, & cheesy sci-fi | opinions my own | he/him | personal website: nmwsharp.com

1,136 Followers  |  93 Following  |  26 Posts  |  Joined: 15.11.2024

Latest posts by nmwsharp.bsky.social on Bluesky

Code is now out! Try it for yourself here: github.com/abhimadan/st...

30.07.2025 21:42 — 👍 10    🔁 4    💬 0    📌 0

Also: this paper was recognized with a best paper award at SGP! Huge thanks to the organizers & congrats to the other awardees.

I was super lucky to work with Yousuf on this one, he's truly the mastermind behind it all!

04.07.2025 15:52 — 👍 12    🔁 0    💬 0    📌 0
Post image

Actually, Yousuf did a quick experiment which is related (though a different formulation), using @markgillespie64.bsky.social et al's Discrete Torsion Connection markjgillespie.com/Research/Dis.... You get fun spiraling log maps! (image attached)

02.07.2025 18:54 — 👍 1    🔁 0    💬 0    📌 0

Yeah! That diffused frame is "the most regular frame field in the sense of transport along geodesics from the source", so you get out a log map that is as-regular-as-possible, in the same sense.

You could definitely use another frame field, and you'd get "log maps" warped along that field.

02.07.2025 18:54 — 👍 1    🔁 0    💬 1    📌 0

💻 Website: www.yousufsoliman.com/projects/the...
📗 Paper: www.yousufsoliman.com/projects/dow...
🔬 Code (C++ library): geometry-central.net/surface/algo...
🐍 Code (python bindings): github.com/nmwsharp/pot...

(point cloud code not available yet, let us know if you're interested!)

02.07.2025 06:23 — 👍 8    🔁 1    💬 0    📌 0
Post image

We give two variants of the algorithm, and show use cases for many problems like averaging values on surfaces, decaling, and stroke-aligned parameterization. It even works on point clouds!

02.07.2025 06:23 — 👍 8    🔁 0    💬 1    📌 0
Post image

Instead of the usual VxV scalar Laplacian, or a 2Vx2V vector Laplacian, we build a 3Vx3V homogeneous "affine" Laplacian! Because it captures rotation and translation at once, this Laplacian enables simpler and more accurate algorithms for computing the logarithmic map.
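As a rough illustration of the block structure (a hypothetical sketch, not the paper's code: the edge weights and per-edge 3x3 homogeneous transport matrices R_ij here are placeholders, and a real implementation would use sparse storage):

```python
import numpy as np

def affine_laplacian(n_verts, edges, weights, transports):
    """Assemble a 3V x 3V 'affine' Laplacian from per-edge 3x3 homogeneous
    transport matrices R_ij. Dense for clarity only. Blocks: +w*I on the
    diagonal, -w*R_ij off-diagonal, so L @ x = 0 says each vertex agrees
    with its neighbors after transporting them into its frame."""
    L = np.zeros((3 * n_verts, 3 * n_verts))
    I3 = np.eye(3)
    for (i, j), w, R in zip(edges, weights, transports):
        si, sj = slice(3 * i, 3 * i + 3), slice(3 * j, 3 * j + 3)
        L[si, si] += w * I3
        L[sj, sj] += w * I3
        L[si, sj] -= w * R
        L[sj, si] -= w * np.linalg.inv(R)  # transport in the reverse direction
    return L

# Tiny sanity check: with identity transport, constant data lies in the
# null space of the Laplacian.
L = affine_laplacian(2, [(0, 1)], [1.0], [np.eye(3)])
v = np.tile([0.3, -0.7, 1.0], 2)
```

Note that the reverse-edge block uses the inverse transport, not the transpose: homogeneous transforms with a translation part are not orthogonal matrices.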

02.07.2025 06:23 — 👍 6    🔁 0    💬 1    📌 0
Post image

Previously in "The Vector Heat Method", we computed log maps with short-time heat flow, via a vector-valued Laplace matrix rotating between adjacent vertex tangent spaces.

The big new idea is to rotate **and translate** vectors, by working in homogeneous coordinates.
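The homogeneous-coordinates trick can be sketched in a few lines (a toy illustration, not the paper's implementation): one 3x3 matrix applies a 2D rotation and a translation at once, and the last homogeneous coordinate distinguishes points (1) from pure directions (0).

```python
import numpy as np

def homogeneous_transport(theta, t):
    """3x3 matrix that rotates by theta and translates by t (a 2-vector),
    acting on 2D data written in homogeneous coordinates [x, y, w]."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([
        [c, -s, t[0]],
        [s,  c, t[1]],
        [0.0, 0.0, 1.0],
    ])

M = homogeneous_transport(np.pi / 2, [1.0, 0.0])
point = np.array([1.0, 0.0, 1.0])      # a position: rotated and translated
direction = np.array([1.0, 0.0, 0.0])  # a pure direction: rotated only
```

This is why a single "affine" Laplacian can carry both the rotational and translational parts of transport that the log map needs.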

02.07.2025 06:23 — 👍 6    🔁 0    💬 1    📌 0
Video thumbnail

Logarithmic maps are incredibly useful for algorithms on surfaces--they're local 2D coordinates centered at a given source.

Yousuf Soliman and I found a better way to compute log maps w/ fast short-time heat flow in "The Affine Heat Method", presented @ SGP2025 today! 🧵

02.07.2025 06:23 — 👍 64    🔁 14    💬 2    📌 2
Preview
Open Letter to the SIGGRAPH Leadership RE: Call for SIGGRAPH Asia to relocate from Malaysia and commit to a venue selection process that safeguards LGBTQ+ and other at-risk communities. To the SIGGRAPH Leadership: SIGGRAPH Executive Commit...

Holding SIGGRAPH Asia 2026 in Malaysia is a slap in the face to the rights of LGBTQ+ people. Especially now, when underrepresented people need as much support as we can possibly give them! Angry like me? Sign this open letter to let them know. 🏳️‍⚧️🏳️‍🌈

docs.google.com/document/d/1...

18.06.2025 14:57 — 👍 29    🔁 15    💬 2    📌 1

Sampling points on an implicit surface is surprisingly tricky, but we know how to cast rays against implicit surfaces! There's a classic relationship between line-intersections and surface-sampling, which turns out to be quite useful for geometry processing.
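For intuition, here is a minimal sketch of the ray-casting side, using standard sphere tracing against a toy SDF (illustrative only: the names and the uniform ray distribution are my assumptions, and turning raw hit points into uniform surface samples requires additional care with how lines are distributed).

```python
import numpy as np

def sphere_trace(sdf, origin, direction, t_max=4.0, eps=1e-6, max_steps=128):
    """March a ray against a signed distance function; return the hit
    point, or None if the ray misses or fails to converge."""
    t = 0.0
    for _ in range(max_steps):
        p = origin + t * direction
        d = sdf(p)
        if d < eps:
            return p
        t += d
        if t > t_max:
            return None
    return None

def sample_implicit(sdf, n_rays, rng):
    """Cast random rays from a bounding sphere of radius 2 in random
    directions, collecting intersection points as surface samples."""
    pts = []
    for _ in range(n_rays):
        o = rng.normal(size=3); o *= 2.0 / np.linalg.norm(o)  # start outside
        d = rng.normal(size=3); d /= np.linalg.norm(d)
        hit = sphere_trace(sdf, o, d)
        if hit is not None:
            pts.append(hit)
    return np.array(pts)

unit_sphere = lambda p: np.linalg.norm(p) - 1.0  # toy implicit surface
samples = sample_implicit(unit_sphere, 200, rng=np.random.default_rng(0))
```

Most rays miss and are discarded; every returned point lies on the implicit surface to within the tracing tolerance.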

10.06.2025 18:01 — 👍 22    🔁 2    💬 1    📌 0

Thank you! There's definitely a low-frequency bias when stochastic preconditioning is enabled, but we only use it for the first ~half of training, then train as usual. The hypothesis is that the bias in the 1st half helps escape bad minima, then we fit high freqs in the 2nd half. Coarse to fine!
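The "first ~half, then anneal" idea could look like this hypothetical schedule (the function name and the linear ramp are illustrative assumptions, not the paper's exact schedule):

```python
def alpha_schedule(step, total_steps, alpha0=0.05):
    """Hypothetical coarse-to-fine schedule: hold the blur scale alpha0
    for the first half of training, then ramp linearly to zero."""
    half = total_steps // 2
    if step < half:
        return alpha0
    return alpha0 * max(0.0, 1.0 - (step - half) / (total_steps - half))
```

By the final step the blur is exactly zero, so the second half finishes as ordinary, unbiased training.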

08.06.2025 05:57 — 👍 0    🔁 0    💬 0    📌 0
Video thumbnail

My child's doll and tools, captured as 3D Gaussians and turned digital with collisions and dynamics. We are getting closer to bridging the gap between the world we can touch and digital 3D. Experience the bleeding edge at the #NVIDIA Kaolin hands-on lab, #CVPR2025! Wed, 8-noon. tinyurl.com/nv-kaolin-cv...

06.06.2025 15:05 — 👍 11    🔁 2    💬 3    📌 0

Check out Abhishek's research!

I was honestly surprised by this result: classic Barnes-Hut already builds a good spatial hierarchy for approximating kernel summations, but you can do even better by adding some stochastic sampling, for significant speedups on the GPU at matching average error.
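A toy version of the stochastic ingredient (not the actual GPU algorithm: this uses plain uniform subsampling rather than sampling within a Barnes-Hut hierarchy) is an unbiased Monte Carlo estimate of a kernel sum:

```python
import numpy as np

def kernel_sum_exact(q, points, kernel):
    """Direct O(n) summation of kernel contributions at query q."""
    return sum(kernel(q, p) for p in points)

def kernel_sum_sampled(q, points, kernel, m, rng):
    """Unbiased Monte Carlo estimate from m uniformly sampled points,
    reweighted by n/m. Uniform sampling is just the simplest stand-in
    for smarter hierarchical sampling."""
    n = len(points)
    idx = rng.integers(0, n, size=m)
    return (n / m) * sum(kernel(q, points[i]) for i in idx)

kernel = lambda q, p: 1.0 / (1.0 + np.sum((q - p) ** 2))  # smooth falloff
rng = np.random.default_rng(0)
points = rng.normal(size=(2000, 3))
q = np.zeros(3)
exact = kernel_sum_exact(q, points, kernel)
approx = kernel_sum_sampled(q, points, kernel, 200, rng)
```

With a smooth kernel, a 10x subsample already lands within a few percent of the exact sum on average, which is where the speedups come from.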

05.06.2025 21:58 — 👍 15    🔁 1    💬 0    📌 0

Ah yes absolutely. That's a great example, we totally should have cited it!

When we looked around, we found many "coarse-to-fine"-like schemes appearing in the context of particular problems or architectures. As you say, what most excited us here is having a simple + general option.

04.06.2025 22:14 — 👍 0    🔁 0    💬 1    📌 0

Thank you for the kind words :) The technique is very much in-the-vein of lots of related ideas in ML, graphics, and elsewhere, but hopefully directly studying it & sharing is useful to the community!

04.06.2025 22:10 — 👍 1    🔁 0    💬 0    📌 0

We did not try it w/ the Gaussians in this project (we really focused on the "query an Eulerian field" setting, which is not quite how Gaussian rendering works).

There are some very cool projects doing related things in that setting:
- ubc-vision.github.io/3dgs-mcmc/
- diglib.eg.org/items/b8ace7...

04.06.2025 22:06 — 👍 1    🔁 0    💬 0    📌 0

Tagging @selenaling.bsky.social and @merlin.ninja, who are both on here, it turns out! 😁

03.06.2025 01:19 — 👍 1    🔁 0    💬 1    📌 0
Stochastic Preconditioning for Neural Field Optimization

website: research.nvidia.com/labs/toronto...
arxiv: arxiv.org/abs/2505.20473
code: github.com/iszihan/stoc...

Kudos go to Selena Ling who is the lead author of this work, during her internship with us at NVIDIA. Reach out to Selena or myself if you have any questions!

03.06.2025 00:43 — 👍 2    🔁 0    💬 1    📌 0

Closing thought: In geometry, half our algorithms are "just" Laplacians/smoothness/heat flow under the hood. In ML, half our techniques are "just" adding noise in the right place. Unsurprisingly, these two tools work great together in this project. I think there's a lot more to do in this vein!

03.06.2025 00:43 — 👍 4    🔁 0    💬 2    📌 0
Post image

Geometric initialization is a commonly used technique to accelerate SDF field fitting, yet it often produces disastrous artifacts for non-object-centric scenes. Stochastic preconditioning also helps avoid floaters, both with and without geometric initialization.

03.06.2025 00:43 — 👍 0    🔁 0    💬 1    📌 0
Post image

Neural field training can be sensitive to changes to hyperparameters. Stochastic preconditioning makes training more robust to hyperparameter choices, shown here in a histogram of PSNRs from fitting preconditioned and non-preconditioned fields across a range of hyperparameters.

03.06.2025 00:43 — 👍 0    🔁 0    💬 1    📌 0
Post image

We argue that this is a quick and easy form of coarse-to-fine optimization, applicable to nearly any objective or field representation. It matches or outperforms custom-designed policies and staged coarse-to-fine schemes.

03.06.2025 00:43 — 👍 2    🔁 0    💬 1    📌 1
Post image

Surprisingly, optimizing this blurred field to fit the objective greatly improves convergence; in the end we anneal α to 0 and are left with an ordinary un-blurred field.

03.06.2025 00:43 — 👍 0    🔁 0    💬 1    📌 0
Post image

And implementing our method requires changing just a few lines of code!

03.06.2025 00:43 — 👍 0    🔁 0    💬 1    📌 0
Post image

It's as simple as perturbing query locations according to a normal distribution. This produces a stochastic estimate of the blurred neural field, with the level of blur proportional to a scale parameter α.
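That change might look like the following sketch (names are illustrative assumptions, not the paper's API; `field` stands in for any neural field evaluated at jittered query points):

```python
import numpy as np

def preconditioned_query(field, x, alpha, rng):
    """Evaluate the field at a query point jittered by Gaussian noise of
    scale alpha: a one-sample stochastic estimate of the blurred field."""
    return field(x + alpha * rng.normal(size=np.shape(x)))

# Toy high-frequency field; averaging many perturbed queries approaches
# the Gaussian-convolved field at x.
field = lambda x: np.sin(10.0 * x).sum()
rng = np.random.default_rng(0)
x = np.array([0.2, 0.4])
est = np.mean([preconditioned_query(field, x, 0.3, rng) for _ in range(5000)])

# Annealing alpha to 0 recovers the ordinary, un-blurred field exactly.
exact = preconditioned_query(field, x, 0.0, rng)
```

In a training loop this really is just `x + alpha * noise` before the field evaluation, with `alpha` annealed toward zero over training.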

03.06.2025 00:43 — 👍 2    🔁 0    💬 1    📌 0
Post image

Selena's #Siggraph25 work found a simple, nearly one-line change that greatly eases neural field optimization for a wide variety of existing representations.

โ€œStochastic Preconditioning for Neural Field Optimizationโ€ by Selena Ling, Merlin Nimier-David, Alec Jacobson, & me.

03.06.2025 00:43 — 👍 56    🔁 8    💬 3    📌 1
Post image

Fun new paper at #SIGGRAPH2025:

What if, instead of two 6-sided dice, you could roll a single "funky-shaped" die that gives the same statistics (e.g., 7 is twice as likely as 4 or 10)?

Or make fair dice in any shape — e.g., dragons rather than cubes?

That's exactly what we do! 1/n

21.05.2025 16:29 — 👍 152    🔁 46    💬 8    📌 6
SGP 2025 - Submit page

The Symposium on Geometry Processing is an amazing venue for geometry research: meshes, point clouds, neural fields, 3D ML, etc. Reviews are quick and high-quality.

The deadline is in ~10 days. Consider submitting your work; I'm planning to submit!

sgp2025.my.canva.site/submit-page-...

01.04.2025 18:42 — 👍 43    🔁 10    💬 0    📌 0

Hi BlueSky! I'm trying to get back to a "write" relationship with social media after hiding from it for a while.

I like geometry research, useful code, pierogies [sic], triangles, outdoorsy life, etc. I mainly post about research/software, but glad to chat about anything.

19.03.2025 23:35 — 👍 50    🔁 1    💬 5    📌 0
