“Fair dice” might make you think of perfect cubes with equal frequencies (say, 1/6 on all sides) 🎲
But “fair” really just means you get the frequencies you expect (say, 1/4, 1/4 & 1/2)
We can now design fair dice with any frequencies—and any shape!
hbaktash.github.io/projects/put...
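The “fair = expected frequencies” notion can be sanity-checked in simulation. A quick illustrative sketch (not from the project; the target probabilities are made up):

```python
import random

random.seed(0)  # deterministic for reproducibility

# "Fair" as matching target frequencies: a hypothetical 3-sided die
# with target probabilities (1/4, 1/4, 1/2) rather than all-equal.
target = {1: 0.25, 2: 0.25, 3: 0.50}
rolls = random.choices(list(target), weights=list(target.values()), k=100_000)

for face, p in target.items():
    observed = rolls.count(face) / len(rolls)
    print(face, round(observed, 3))  # each observed frequency hovers near p
```

With 100k rolls the observed frequencies land within a percent or so of the targets, which is all “fair” asks for here.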
25.09.2025 13:39
YouTube video by CMU School of Computer Science
Rethinking Fair Dice
Nicely produced clip by Matt Wein and Marylee Williams about our recent dice design project at @scsatcmu.bsky.social and @adobe.com
youtube.com/shorts/jD0ag...
🎲 🎥 💪
24.09.2025 15:39
Tangent-point energy works for (2).
To incorporate (1) I might (strongly) penalize the distance from each data point p to the *closest* point on the curve. This encourages at least one point of the curve to pass through each data point, without pulling on the whole curve.
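A minimal sketch of that penalty idea, with illustrative names (this is not code from any library — just the closest-point distance summed over data points, on a polyline approximation of the curve):

```python
# Hypothetical sketch: for each data point p, find the *closest* sample
# on the curve and strongly penalize that single distance, so the rest
# of the curve isn't pulled toward p.
def closest_point_penalty(curve_pts, data_pts, weight=100.0):
    """Weighted sum of squared distances from each data point to its nearest curve sample."""
    total = 0.0
    for px, py in data_pts:
        d2 = min((px - cx) ** 2 + (py - cy) ** 2 for cx, cy in curve_pts)
        total += d2
    return weight * total

# A coarse polyline plus two data points: one on the curve, one off it.
curve = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
data = [(1.0, 0.0), (0.5, 1.0)]
print(closest_point_penalty(curve, data, weight=1.0))  # 0.0 + 1.25 = 1.25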
24.09.2025 00:33
Thanks for the thought-provoking example.
19.09.2025 13:29
Reminds me of the Kahneman and Tversky experiments (“Steve is more likely to be a librarian than a farmer.”) If LLMs are trained on human-generated text, it doesn't seem reasonable to expect them to be smarter than the average text-generating human. (Though they sometimes are anyway.)
19.09.2025 13:28
On the other hand, I was too dumb to recognize the subtlety on first glance. So maybe the model is “just as bad as a human?”
19.09.2025 13:27
So, in the absence of any priors or additional information, 1/3 is a reasonable-ish approximation. But I agree it would be far better if the model simply said “that's hard to answer because there are many ambiguous factors” (as I have).
19.09.2025 13:26
This one's not so clear cut: “baby” is an ambiguous age range, and a baby can be a twin or triplet, born in any order. Even a newborn could have younger step siblings in rare cases.
We're also presuming it's a human baby, whereas other species have different life spans.
19.09.2025 13:26
Not seeing it. What's wrong with this answer? (There are six possible permutations, but the other two siblings are interchangeable…)
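The counting behind a “1/3”-type answer can be checked by brute force (illustrative only — my reading of the setup, not the exact puzzle wording):

```python
from itertools import permutations
from fractions import Fraction

# Among the 3! = 6 age orderings of three siblings, a designated child
# ("baby") lands in any one fixed position (say, youngest) in exactly 2
# of them -- the other two siblings are interchangeable.
orders = list(permutations(["baby", "sib1", "sib2"]))
hits = [o for o in orders if o[-1] == "baby"]  # baby is youngest

print(len(orders))                       # 6 permutations total
print(Fraction(len(hits), len(orders)))  # 1/3
```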
17.09.2025 21:47
I adapted Unicodeit! (See the acknowledgment section on GitHub; also meant to mention that in the footer).
I had been using your website for years, but wanted something more integrated.
Thank you for contributing to open source.
11.09.2025 13:57
I got tired of mashing together tools to write long threads with 𝐫𝐢𝐜𝐡 𝐟𝐨𝐫𝐦𝐚𝐭𝐭𝐢𝐧𝐠 and ℳα𝑡ℎ—so I wrote La𝕋𝕨𝕖𝕖𝕥!
It converts Markdown and LaTeX to Unicode that can be used in “tweets”, and automatically splits long threads. Try it out!
keenancrane.github.io/LaTweet/
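The core trick is that many LaTeX symbols have direct Unicode equivalents. An illustrative sketch of that substitution idea (not LaTweet's actual code; the symbol table and function name are made up):

```python
# Hypothetical symbol table mapping LaTeX commands to Unicode characters.
SYMBOLS = {
    r"\alpha": "α",
    r"\beta": "β",
    r"\to": "→",
    r"\in": "∈",
}

def latex_to_unicode(s: str) -> str:
    # Substitute longest commands first, so e.g. a hypothetical "\int"
    # wouldn't be clobbered by "\in" (relevant for a bigger table).
    for cmd in sorted(SYMBOLS, key=len, reverse=True):
        s = s.replace(cmd, SYMBOLS[cmd])
    return s

print(latex_to_unicode(r"f : \alpha \to \beta"))  # f : α → β
```

The real tool also handles styled alphabets (bold, italic, blackboard) via the Mathematical Alphanumeric Symbols block, which is just a larger character mapping of the same flavor.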
11.09.2025 13:28
(More seriously: if the geometry of the apples was well-captured by the artist, and the color is unique to that geometry, I would be willing to bet the answer is “yes.”)
06.09.2025 23:09
If it began life as a drawing, is that question even well-posed?
06.09.2025 23:04
Oh, you wrote a book on this stuff. I guess I didn't need to be quite so didactic in my response! ;-)
06.09.2025 21:51
(But I take your point: it's hard to get all these different nuances across precisely in diagrams. That's why we also have mathematical notation to go along with the diagrams! :-) )
06.09.2025 21:50
Well, f maps *any* point of the data space to the latent space, and g maps *any* point of the latent space to the data space. I.e.,
f : ℝⁿ → ℝᵏ,
g : ℝᵏ → ℝⁿ.
The point x is just one example. So it might in fact be misleading to imply that f gets applied only to x, or that the process ends only at x̂.
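A toy version of this point, with made-up linear maps (n = 3, k = 2): f and g are defined on *all* of ℝⁿ and ℝᵏ, not just on the training examples.

```python
def f(x):  # encoder f : R^3 -> R^2, here simply dropping the last coordinate
    return (x[0], x[1])

def g(z):  # decoder g : R^2 -> R^3, embedding into the plane x3 = 0
    return (z[0], z[1], 0.0)

x = (1.0, 2.0, 3.0)
x_hat = g(f(x))        # reconstruction; lies on the image manifold M
print(x_hat)           # (1.0, 2.0, 0.0) -- close to x only if x is near M
print(g((5.0, -5.0)))  # g happily maps latents no training point ever produced
```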
06.09.2025 21:49
P.S. I should also mention that these diagrams were significantly improved via feedback from many folks from here and elsewhere.
Hopefully they account for some of the gripes—if not, I'm ready for the next batch!
bsky.app/profile/keen...
06.09.2025 21:20
Of course, there will be those who say that the representation diagram is “obvious,” and “that's what everyone has in their head anyway.”
If so… good for you! If not, I hope this alternative picture provides some useful insight as you hack in this space.
[End 🧵]
06.09.2025 21:20
If you want to use or repurpose these diagrams, the source files (as PDF) can be found at
cs.cmu.edu/~kmcrane/Aut...
(Licensed under CC0 1.0 Universal)
06.09.2025 21:20
Likewise, here's a simpler “implementation” diagram, that still retains the most important idea of an *auto*-encoder, namely, that you're comparing the output against *itself*.
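In code, “comparing the output against itself” just means the training target for input x is x. A minimal sketch (the model that produced x_hat is a stand-in, not any particular architecture):

```python
def reconstruction_loss(x, x_hat):
    """Squared error between an input and its own reconstruction g(f(x))."""
    return sum((a - b) ** 2 for a, b in zip(x, x_hat))

x = [1.0, 2.0]
x_hat = [0.5, 2.0]  # stand-in for g(f(x)) from some hypothetical model
print(reconstruction_loss(x, x_hat))  # 0.25
```

No labels appear anywhere — that is the whole point of the *auto* in autoencoder.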
06.09.2025 21:20
Personally, I find both of these diagrams a little bit crowded—here's a simpler “representation” diagram, with fewer annotations (that might anyway be better explained in accompanying text).
06.09.2025 21:20
Finally, a natural question raised by this picture is: how do I sample/generate new latents z? For a “vanilla” autoencoder, there's no simple a priori description of the high-density regions.
This situation motivates *variational* autoencoders (which are a whole other story…).
06.09.2025 21:20
It should also be clear that, unless the reconstruction loss is exactly zero, the learned manifold M only approximates (rather than interpolates) the given data. For instance, x does not sit on M, even though x̂ does.
(If M does interpolate all xα΅’, you're probably overfitting)
06.09.2025 21:20
Another thing made clear by this picture is that, no matter what the true dimension of the data might be, the manifold M predicted by the decoder generically has the same dimension as the latent space: it's the image of ℝᵏ under g.
So, the latent dimension is itself a prior.
06.09.2025 21:20
In regions where we don't have many samples, the decoder g isn't reliable: we're basically extrapolating (i.e., guessing) what the true data manifold looks like.
The diagram suggests this idea by “cutting off” the manifold—but in reality there's no clear, hard cutoff.
06.09.2025 21:20
Here's a way of visualizing the maps *defined by* an autoencoder.
The encoder f maps high-dimensional data x to low-dimensional latents z. The decoder tries to map z back to x. We *always* learn a k-dimensional submanifold M, which is reliable only where we have many samples z.
06.09.2025 21:20
This picture is great if you want to simply close your eyes and implement something.
But suppose your implementation doesn't workβor you want to squeeze more performance out of your data.
Is there another picture that helps you think about what's going on?
(Obviously: yes!)
06.09.2025 21:20
With autoencoders, the first (and last) picture we see often looks like this one: a network architecture diagram, where inputs get “compressed”, then decoded.
If we're lucky, someone bothers to draw arrows that illustrate the main point: we want the output to look like the input!
06.09.2025 21:20
A similar thing happens when (many) people learn linear algebra:
They confuse the representation (matrices) with the objects represented by those matrices (linear maps… or is it a quadratic form?)
06.09.2025 21:20
“Everyone knows” what an autoencoder is… but there's an important complementary picture missing from most introductory material.
In short: we emphasize how autoencoders are implemented—but not always what they represent (and some of the implications of that representation). 🧵
06.09.2025 21:20
I write, draw, and make music. Also computer and math stuff. Seeker of wisdom, creativity, kindness, and roasted garlic 🏳️‍🌈 🇨🇦
Associate Professor in EECS at MIT. Neural nets, generative models, representation learning, computer vision, robotics, cog sci, AI.
https://web.mit.edu/phillipi/
she/her 🏳️‍⚧️🏳️‍🌈
Mathematician in metro Vancouver, working on VFX software in simulation/numerics/geometry. You may know me as the inventor of FLIP for incompressible flow, curl-noise, a very simple fast Poisson disc algorithm, my fluids book, cloth stuff…
math & brains @EPFL & MPI
interested in intelligence
tries to make microscopes smarter · bioimage analysis, optogenetics, ml, 3d printing, open science · phd student in cellular signalling dynamics @PertzLab
https://www.mlb.com/giants
San Francisco, CA
The official account of the San Francisco Giants
Associate Professor @brownvc.bsky.social
Affiliate Faculty @uwcse.bsky.social
Chair @wigraph.bsky.social
Artist, Teacher, Co-Founder of Tile Farm — let me know if you want to use it in your classroom or just for fun!
https://tilefarm.com
Theoretical and computational biophysicist at Carnegie Mellon University. Loves lipid membranes, music, and art. (he/him) 🇩🇪🇺🇸
Mathematician and Computer Scientist, Smith College, USA.
https://cs.smith.edu/~jorourke/
Polyhedron displayed in banner has max volume of all foldings from a square.
Assistant Professor, MIT | Co-founder & Chair, Climate Change AI | MIT TR35, TIME100 AI | she/they
Associate Prof @HarvardMed. Microbial evolution, antibiotic resistance, mobile genetic elements, algorithms, phages, molecular biotech, etc. Basic research is the engine of progress.
baymlab.hms.harvard.edu
Tech journalist and author, who increasingly also talks on TV and radio. Interested in the sparks that happen when the online and offline worlds collide
@stokel on the other place. Buy my book: How AI Ate the World!
Kānaka maoli & Black. Mathematician. Usually tired (not sleepy). Forever dreaming of home (Moku o Keawe).
Cozy random passages from Arnold Lobel's Frog and Toad books. Posts auto-delete.
R&D Scientist at Ubisoft La Forge focusing on light transport and fluid simulations. I like mathematics, physics, climbing, progressive musics and reading horror / fantasy / sci-fi books!
AI professor. Director, Foundations of Cooperative AI Lab at Carnegie Mellon. Head of Technical AI Engagement, Institute for Ethics in AI (Oxford). Author, "Moral AI - And How We Get There."
https://www.cs.cmu.edu/~conitzer/