Godspeed Michael.
05.11.2025 03:11
@keenancrane.bsky.social
Digital Geometer, Associate Professor of Computer Science & Robotics at Carnegie Mellon University. There are four lights. https://www.cs.cmu.edu/~kmcrane/
Fantastic video from @SciShow about our work that turns any shape into fair dice:
youtu.be/-gp7AbYD9NI?...
Get all the details on Hossein Baktash's page here: hbaktash.github.io
Hit me back in about 400k years*.
(*Approximate age of man-made fire.)
…point of a flow: it's obtained by minimizing the (square of the) “enclosed volume”, plus a regularity term that prevents self-intersections.
So, when gradient flows are concatenated, the eversion follows a “U” in the energy landscape rather than a “∩”.
I didn't look much into the history of midsurfaces for this eversion, but am curious to know what has been said by Gardner and others. This one is different in spirit from midsurfaces used for sphere eversion (like Kusner's halfway surface) in the sense that it's a stable rather than unstable…
22.10.2025 05:23
Very happy to see that NVIDIA is still making demos.
22.10.2025 05:06
Out of curiosity, did you consider (or try) GLB/GLTF? (Also supported by Finder viewer.)
21.10.2025 15:15
Holy crap. What? Why? Who did that…? Amazing.
11.10.2025 17:28
“Fair dice” might make you think of perfect cubes with equal frequencies (say, 1/6 on all sides) 🎲
But “fair” really just means you get the frequencies you expect (say, 1/4, 1/4 & 1/2)
We can now design fair dice with any frequencies, and any shape!
hbaktash.github.io/projects/put...
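To make that notion of fairness concrete, here's a hypothetical sanity check (not from the paper): simulate a three-sided “die” with target frequencies 1/4, 1/4, 1/2 and verify that the observed frequencies match the designed ones.

```python
import random

# Assumed target frequencies for a hypothetical 3-sided die.
# "Fair" here means observed frequencies match these designed ones,
# not that all faces are equally likely.
target = {1: 0.25, 2: 0.25, 3: 0.50}

random.seed(0)
n = 100_000
faces = list(target)
weights = [target[f] for f in faces]
rolls = random.choices(faces, weights=weights, k=n)

# Empirical frequencies should land close to the targets.
for f in faces:
    freq = rolls.count(f) / n
    print(f"face {f}: target {target[f]:.2f}, observed {freq:.3f}")
```

With 100k rolls, each observed frequency should be within about a percent of its target.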
Nicely produced clip by Matt Wein and Marylee Williams about our recent dice design project at @scsatcmu.bsky.social and @adobe.com
youtube.com/shorts/jD0ag...
🎲
Tangent-point energy works for (2).
To incorporate (1) I might (strongly) penalize the distance from each data point p to the *closest* point on the curve. This encourages at least one point of the curve to pass through each data point, without pulling on the whole curve.
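A minimal sketch of that penalty, under assumed setup (a polyline curve and 2D data points; `nearest_point_penalty` and its weight are hypothetical names, not from any paper): only the closest curve sample to each data point contributes, so the rest of the curve is free to move.

```python
import numpy as np

def nearest_point_penalty(curve_pts, data_pts, weight=100.0):
    """curve_pts: (m, 2) polyline samples; data_pts: (k, 2) points to hit.

    For each data point p, penalize only the squared distance to the
    *closest* curve sample, so the pull acts locally, not on the whole curve.
    """
    # Pairwise squared distances between data points and curve samples: (k, m)
    d2 = ((data_pts[:, None, :] - curve_pts[None, :, :]) ** 2).sum(axis=-1)
    # Keep, for each data point, only the distance to its nearest curve sample.
    return weight * d2.min(axis=1).sum()

curve = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
data = np.array([[1.0, 0.1]])
print(nearest_point_penalty(curve, data))  # ≈ 1.0: the middle sample nearly hits the point
```

In a full optimization this term would be summed with the tangent-point energy and minimized by gradient descent on the curve samples.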
Thanks for the thought-provoking example.
19.09.2025 13:29
Reminds me of the Kahneman and Tversky experiments (“Steve is more likely to be a librarian than a farmer.”) If LLMs are trained on human-generated text, it doesn't seem reasonable to expect them to be smarter than the average text-generating human. (Though they sometimes are anyway.)
19.09.2025 13:28
On the other hand, I was too dumb to recognize the subtlety on first glance. So maybe the model is “just as bad as a human?”
19.09.2025 13:27
So, in the absence of any priors or additional information, 1/3 is a reasonable-ish approximation. But I agree it would be far better if the model simply said “that's hard to answer because there are many ambiguous factors” (as I have).
19.09.2025 13:26
This one's not so clear cut: “baby” is an ambiguous age range, and a baby can be a twin or triplet, born in any order. Even a newborn could have younger step siblings in rare cases.
We're also presuming it's a human baby, whereas other species have different life spans.
Not seeing it. What's wrong with this answer? (There are six possible permutations, but the other two siblings are interchangeable…)
17.09.2025 21:47
I adapted Unicodeit! (See the acknowledgment section on GitHub; also meant to mention that in the footer.)
I had been using your website for years, but wanted something more integrated.
Thank you for contributing to open source. π
I got tired of mashing together tools to write long threads with 𝐫𝐢𝐜𝐡 𝐟𝐨𝐫𝐦𝐚𝐭𝐭𝐢𝐧𝐠 and △α ≠ ∇β, so I wrote La𝕋𝕨𝕖𝕖𝕥!
It converts Markdown and LaTeX to Unicode that can be used in “tweets”, and automatically splits long threads. Try it out!
keenancrane.github.io/LaTweet/
(More seriously: if the geometry of the apples was well-captured by the artist, and the color is unique to that geometry, I would be willing to bet the answer is “yes.”)
06.09.2025 23:09
If it began life as a drawing, is that question even well-posed?
06.09.2025 23:04
Oh, you wrote a book on this stuff. I guess I didn't need to be quite so didactic in my response! ;-)
06.09.2025 21:51
(But I take your point: it's hard to get all these different nuances across precisely in diagrams. That's why we also have mathematical notation to go along with the diagrams! :-) )
06.09.2025 21:50
Well, f maps *any* point of the data space to the latent space, and g maps *any* point of the latent space to the data space. I.e.,
f : ℝⁿ → ℝᵐ,
g : ℝᵐ → ℝⁿ.
The point x is just one example. So it might in fact be misleading to imply that f gets applied only to x, or that g ends only at x̂.
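As a toy illustration of that point (placeholder linear maps with random weights, not any particular network): f and g are defined on all of ℝⁿ and ℝᵐ, so g can just as well be applied to an arbitrary latent z as to f(x).

```python
import numpy as np

n, m = 4, 2  # assumed data and latent dimensions, for illustration only
rng = np.random.default_rng(0)
A = rng.standard_normal((m, n))  # placeholder encoder weights
B = rng.standard_normal((n, m))  # placeholder decoder weights

f = lambda x: A @ x  # encoder f : R^n -> R^m
g = lambda z: B @ z  # decoder g : R^m -> R^n

x = rng.standard_normal(n)      # one data point...
z_any = rng.standard_normal(m)  # ...but g accepts *any* latent, not just f(x)
x_hat = g(f(x))
print(x_hat.shape, g(z_any).shape)  # both outputs live in R^n
```

The same shapes come out whether or not the latent was produced by f, which is exactly the "any point of the latent space" claim above.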
P.S. I should also mention that these diagrams were significantly improved via feedback from many folks from here and elsewhere.
Hopefully they account for some of the gripes; if not, I'm ready for the next batch!
bsky.app/profile/keen...
Of course, there will be those who say that the representation diagram is “obvious,” and “that's what everyone has in their head anyway.”
If so… good for you! If not, I hope this alternative picture provides some useful insight as you hack in this space.
[End 🧵]
If you want to use or repurpose these diagrams, the source files (as PDF) can be found at
cs.cmu.edu/~kmcrane/Aut...
(Licensed under CC0 1.0 Universal)
Likewise, here's a simpler “implementation” diagram, that still retains the most important idea of an *auto*-encoder, namely, that you're comparing the output against *itself*.
06.09.2025 21:20
Personally, I find both of these diagrams a little bit crowded; here's a simpler “representation” diagram, with fewer annotations (that might anyway be better explained in accompanying text).
06.09.2025 21:20
Finally, a natural question raised by this picture is: how do I sample/generate new latents z? For a “vanilla” autoencoder, there's no simple a priori description of the high-density regions.
This situation motivates *variational* autoencoders (which are a whole other story…).