Russ Tedrake's recent talk about the robotics/AI work at TRI is soooo good www.youtube.com/watch?v=TN1M...
06.07.2025 01:35 · @jessemichel.bsky.social
It would be so nice if there was more competition for WiFi.
Unfortunately, there's a monopoly in Cambridge. Xfinity is the only fast WiFi. They have these random price hikes (a high baseline price with "promotions" that randomly expire). Their customer support is terrible.
Today marks the second time that Faà di Bruno's formula was super useful to me.
en.wikipedia.org/wiki/Fa%C3%A...
It expresses a higher-order derivative of a composition of functions and uses the Bell polynomials. It came up in a distribution theory proof for my dissertation.
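For reference, one common statement of the formula in terms of the incomplete (partial) Bell polynomials $B_{n,k}$ is:

```latex
\frac{d^n}{dx^n} f\big(g(x)\big)
  = \sum_{k=1}^{n} f^{(k)}\big(g(x)\big)\,
    B_{n,k}\!\big(g'(x),\, g''(x),\, \dots,\, g^{(n-k+1)}(x)\big)
```

i.e., the $n$-th derivative of a composition expands over all ways of partitioning the $n$ differentiations among the inner function's derivatives.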
Cursor tab needs to stop repeatedly trying to modify the same line. If I rejected the last three recommendations, the fourth isn't going to change my mind...
03.06.2025 15:23

Screw theory, differential kinematics, and singularities.
These are all topics that I'm interested in.
Cursor repeatedly tries to delete everything in my dissertation. It really understands me.
25.04.2025 22:47

LLMs are excellent at choosing notation for math. They tend to be very concise and elegant.
Unfortunately, they also commonly make many mistakes in distribution theory :(.
Not to brag, but Elon has lost orders of magnitude more money than I have.
23.04.2025 21:20

The audio and video aren't great early on, but improve later in the video. It has a pretty pedagogical introduction to distribution theory and why it's relevant to differentiable rendering.
18.04.2025 02:05

I found a recording of my talk on differentiating parametric discontinuities from OOPSLA:
www.youtube.com/live/ltA6hQA...
I explain some of the math techniques (e.g., distribution theory) and PL tools (e.g., formal semantics) behind developing a language for differentiable rendering.
gpt-4o makes for a solid technical editor. It finds missing words, repeated words, and issues with parallelism. I don't know how many mistakes it failed to find, and I'm sure there were some false positives.
05.04.2025 04:27

Why is accepting/rejecting AI-generated edits to a file so buggy in VSCode? There are so many cases where it duplicates lines or text disappears.
05.04.2025 03:14

The biggest upgrade to MIT ever: baby bananas in the banana lounge.
04.04.2025 18:35

I put calls to an LLM in a loop, and if it gives an answer (rather than a tool call) in too few iterations, I tell it to analyze what it did and then improve the results.
Does that mean I turned a regular LLM into an ✨ LLM with reasoning ✨?
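A minimal sketch of that loop (the `call_llm` interface is hypothetical; real code would call an actual chat API with tool definitions):

```python
def reflective_loop(call_llm, prompt, min_iters=3, max_iters=10):
    """Run an LLM in a tool-use loop. If it commits to a final answer
    in too few iterations, push back and ask it to analyze and improve.

    `call_llm` is a hypothetical stand-in: it takes the message history
    and returns ("tool", result) or ("answer", text).
    """
    history = [prompt]
    answer = None
    for i in range(max_iters):
        kind, payload = call_llm(history)
        if kind == "tool":
            # Record the tool result and keep going.
            history.append(f"tool result: {payload}")
            continue
        answer = payload
        if i + 1 < min_iters:
            # Answered too early: ask for self-critique and retry.
            history.append("Analyze your answer above and improve it.")
            continue
        break
    return answer
```

With an eager model that answers immediately, the loop forces two rounds of self-critique before accepting the third draft.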
There are two ways to interpret this job listing:
Postdoctoral Appointee - High Pressure Research
Life hack: use a travel backpack. Flights seem to only require checking carry-ons.
11.03.2025 13:16

I'll be in the Bay Area from March 11th-21st to catch up with friends and to look for post-PhD employment! DM me if you're interested!
09.03.2025 00:50

I updated an API and then propagated the changes by hand to a single file. I then wanted a model to make a similar update to two other files. o3-mini didn't do super well on this multi-file editing task, but Claude 3.7 Sonnet Thinking did a great job (though it still didn't 1-shot it).
06.03.2025 20:55

E.g., JAX's `ravel_pytree` function breaks down a pytree (basically any reasonable data structure) into a flat 1D array and a function that lets you recover the original pytree from that array.
docs.jax.dev/en/latest/_a...
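A minimal sketch of what that looks like in practice (example pytree is my own, not from the docs):

```python
import jax.numpy as jnp
from jax.flatten_util import ravel_pytree

# A small pytree of parameters: a dict of arrays.
params = {"w": jnp.ones((2, 2)), "b": jnp.zeros(3)}

# ravel_pytree returns the flattened 1D array plus a function that
# rebuilds the original structure from it.
flat, unravel = ravel_pytree(params)
print(flat.shape)           # (7,) -- 4 entries from "w" plus 3 from "b"

restored = unravel(flat)
print(restored["w"].shape)  # (2, 2)
```

This is exactly the bridge you need to hand structured Python data to flat-array APIs (optimizers, ODE solvers, etc.) and get the structure back afterwards.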
JAX just casually solves some really nasty problems that seem to be far more general than automatic differentiation and are extremely useful for turning imperative code into functional code in Python.
06.03.2025 07:18

In my experience writing JAX code, o3-mini consistently produces high-quality output. Much better than Sonnet 3.7, Sonnet 3.7 Thinking, o1, etc.
I'm doing custom AD for a primitive, which I think is neither a standard thing nor an extremely unusual thing to do.
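Not my actual primitive, but a minimal sketch of one common way to do custom AD in JAX: attach a hand-written derivative rule to a function with `jax.custom_jvp` (the function here is illustrative):

```python
import jax
import jax.numpy as jnp

# Illustrative example: override JAX's AD for this function with a
# hand-written JVP rule instead of tracing through its internals.
@jax.custom_jvp
def softplus(x):
    return jnp.log1p(jnp.exp(x))

@softplus.defjvp
def softplus_jvp(primals, tangents):
    (x,), (dx,) = primals, tangents
    # d/dx softplus(x) = sigmoid(x)
    return softplus(x), jax.nn.sigmoid(x) * dx

print(jax.grad(softplus)(0.0))  # 0.5
```

For genuinely new primitives (rather than compositions of existing ops) JAX also supports registering a `jax.core.Primitive` with its own rules, which is more involved.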
I haven't heard much about ChatGPT-Wolfram online, but it's actually quite good. I sometimes have to prod it to show more work, but it can do quite intricate calculations:
chatgpt.com/g/g-0S5FXLyF...
It's sad though because it means that research tends to run deeper rather than wider. Prioritizing exploit over explore.
28.02.2025 14:15

Graduate students seem to consistently undervalue section 1 and 2 level contributions that establish a new research direction.
This makes sense because the large majority of their day-to-day work is in the interior of the paper and PIs often write those sections.
Excited to share our work with friends from MIT/Google on Learned Asynchronous Decoding! LLM responses often contain chunks of tokens that are semantically independent. What if we can train LLMs to identify such chunks and decode them in parallel, thereby speeding up inference? 1/N
27.02.2025 00:38

Which do you prefer, Stokes' theorem or Stokes flow?
I'm personally a fan of Stokes' theorem, but different Stokes for different folks.
I've learned that there are more instances in the real world where I should be programming (or where, if I knew the right libraries and were 100x faster at coding, I'd be more efficient).
20.02.2025 20:36

A lesson I've learned from using LLMs is that the ability to quickly write little scripts and execute them is such a superpower. Like, if you ask o3-mini to count the number of "r"s in "strawberry", it just writes a little script and runs it.
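The counting trick really is a one-liner:

```python
# Tokenized LLMs famously struggle to count letters directly,
# but a script gets it right every time.
print("strawberry".count("r"))  # prints 3
```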
20.02.2025 20:36

o3-mini seems to have a reasonable grasp of differential geometry! Genuinely a huge leap forward over 4o from my experience
12.02.2025 19:06

"I do feel seen, but not in the way I want to be seen." --a friend
11.02.2025 15:58