The "kombucha girl" meme, where the left panel (with her frowning) stating "According to the proposition" and the right one (showing her interested face) stating "Lemma tell you"
Mathematical writing is my passion.
11.01.2026 07:13
like this?
12.11.2025 10:09
Added a new symbols menu - let me know if I missed any of your favourite LaTeX commands!
11.11.2025 00:02
I can't tell how much interest there is. But messages like this definitely encourage me to continue it!
24.04.2025 06:29
LaTeX to Image
Effortlessly convert LaTeX math equations into high-quality images (PNG, JPEG, SVG).
I needed an easy way to make high-resolution equations to post on Bluesky, so I made this: thomasahle.com/latex2png
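(Not the same implementation, but if you want something similar locally: matplotlib's built-in mathtext can render a subset of LaTeX straight to PNG without a TeX install. The helper name below is my own.)

```python
# A minimal local alternative to a LaTeX-to-PNG web tool, using matplotlib's
# built-in mathtext renderer (no TeX installation required).
import matplotlib.pyplot as plt

def latex_to_png(tex: str, path: str = "equation.png", fontsize: int = 28):
    """Render a mathtext-compatible LaTeX string to a tightly cropped, transparent PNG."""
    fig = plt.figure()
    fig.text(0, 0, f"${tex}$", fontsize=fontsize)
    fig.savefig(path, dpi=300, bbox_inches="tight", transparent=True)
    plt.close(fig)

latex_to_png(r"\sqrt{2\pi n}\,(n/e)^n \le n!")
```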
16.03.2025 10:15
> If NATO hadn't been trying to expand there, there would have been no war.
There would.
> If NATO stops trying to expand into Ukraine, the war ends.
It wouldn't.
> If the US stops sending weapons and fomenting anti-Russian sentiment, the war ends.
This war is about territory not sentiment.
19.02.2025 21:19
Isserlis' (or Wick's) theorem is one of the strongest tools for handling high-dimensional Gaussians.
Turns out it generalizes to _every distribution_ using cumulant tensors!
Those are the higher-order analogues of variance, skewness, kurtosis, etc.
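For reference, here is the Gaussian statement, and the moment-cumulant formula the general version rests on (sums over all set partitions, with each block contributing a cumulant tensor):

```latex
% Isserlis / Wick, for zero-mean jointly Gaussian X with covariance \Sigma:
\mathbb{E}[X_{i_1} X_{i_2} \cdots X_{i_{2n}}]
  \;=\; \sum_{\text{pair partitions } p}\;\prod_{\{a,b\} \in p} \Sigma_{i_a i_b},
\qquad\text{e.g.}\quad
\mathbb{E}[X_1 X_2 X_3 X_4] = \Sigma_{12}\Sigma_{34} + \Sigma_{13}\Sigma_{24} + \Sigma_{14}\Sigma_{23}.

% General distributions: sum over *all* set partitions, each block giving a cumulant:
\mathbb{E}[X_{i_1} \cdots X_{i_n}]
  \;=\; \sum_{\pi \,\vdash\, \{1,\dots,n\}} \;\prod_{B \in \pi} \kappa\bigl(X_{i_j} : j \in B\bigr).
```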
19.02.2025 21:14
I added a Playground to tensorcookbook.com for when you need that Matrix or Tensor Derivative in a hurry.
Hopefully it can also be a way to help people become familiar with tensor diagrams.
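Not the Playground or tensorgrad itself, but here is one way to sanity-check a matrix derivative with plain numpy: finite differences against the textbook identity d tr(AX)/dX = Aᵀ.

```python
# Finite-difference check of  d tr(AX)/dX = A^T  (a sketch, not the playground).
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
X = rng.standard_normal((4, 4))

def f(M):
    return np.trace(A @ M)

eps = 1e-6
grad = np.zeros_like(X)
for i in range(4):
    for j in range(4):
        E = np.zeros_like(X)
        E[i, j] = eps
        grad[i, j] = (f(X + E) - f(X - E)) / (2 * eps)

assert np.allclose(grad, A.T, atol=1e-4)  # matches the closed form
```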
18.02.2025 08:01
Now we're just waiting for a ZkiT model
12.02.2025 08:57
Now live in a new Functions chapter in tensorcookbook.com
09.02.2025 11:24
Some sketches for the next chapter
06.02.2025 10:28
I added code execution to tensorcookbook.com so you can try tensorgrad's automatic tensor algebra without installing anything.
04.02.2025 15:58
Tensor Product Attention illustrated with Tensor Diagrams
18.01.2025 14:00
Neat one-page proof of "Stirling's bound"
(n/e)ⁿ·√(2πn) ≤ n! ≤ (n/e)ⁿ·(√(2πn) + 1)
Inspired by the discussion on mathoverflow.net/a/458011/5429. Just had to keep hitting it with logarithmic inequalities...
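A quick numeric sanity check of the bound (not the one-page proof itself):

```python
# Check  (n/e)^n * sqrt(2*pi*n)  <=  n!  <=  (n/e)^n * (sqrt(2*pi*n) + 1)  for small n.
import math

for n in range(1, 21):
    lo = (n / math.e) ** n * math.sqrt(2 * math.pi * n)
    hi = (n / math.e) ** n * (math.sqrt(2 * math.pi * n) + 1)
    assert lo <= math.factorial(n) <= hi
    print(n, lo, math.factorial(n), hi)
```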
18.12.2024 12:37
Yes please!
13.12.2024 07:43
Poisson Probability Puzzle:
Let X ~ Poisson(λ); Z = (X − λ)/√λ; Y ~ Normal(0, 1).
How close is E[|Z|^k] to E[|Y|^k]?
Say we connect λ and k by λ = c·k³; what is now the limit of E[|Z|^k]/E[|Y|^k] as k → ∞?
This was harder to solve than expected, but the answer was surprisingly pretty.
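If you want to poke at it numerically before working out the answer, here is a sketch: it computes E[|Z|^k] exactly by summing the Poisson pmf over a window around λ, and uses the closed form E[|Y|^k] = 2^(k/2)·Γ((k+1)/2)/√π.

```python
# Numerical exploration of the puzzle (it does not give away the limit).
import math

def abs_moment_normal(k):
    # E[|Y|^k] for Y ~ N(0, 1).
    return 2 ** (k / 2) * math.gamma((k + 1) / 2) / math.sqrt(math.pi)

def abs_moment_poisson_standardized(lam, k, tail=12):
    # E[|Z|^k] with Z = (X - lam)/sqrt(lam), summing the pmf over +-tail std devs.
    lo = max(0, int(lam - tail * math.sqrt(lam)))
    hi = int(lam + tail * math.sqrt(lam)) + 1
    total = 0.0
    for x in range(lo, hi):
        logp = -lam + x * math.log(lam) - math.lgamma(x + 1)
        z = (x - lam) / math.sqrt(lam)
        total += math.exp(logp) * abs(z) ** k
    return total

c = 1.0
for k in range(2, 21, 2):
    lam = c * k ** 3
    print(k, abs_moment_poisson_standardized(lam, k) / abs_moment_normal(k))
```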
12.12.2024 23:41
"Central Limit Theorem" for the Poisson Distribution
11.12.2024 10:45
Can you refer me to the OpenAI forum?
03.12.2024 06:28
History Heuristic - Chessprogramming wiki
For more information on history heuristics in chess, see www.chessprogramming.org/History_Heur...
03.12.2024 05:42
near future.
Time will tell if they'll update the entire network, or a smaller LoRA or side network.
Even chatbots like o1 could use TTT as an alternative to in-context learning.
5/5
03.12.2024 05:42
while searching. If two subtrees are conceptually similar, it has to do all the work twice.
Test Time Training fixes this!
If AlphaZero updated its weights while searching, it could transfer learnings between the subtrees!
I'm sure we'll start seeing a lot of TTT architectures in the near...
4/5
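A toy sketch of what "updating weights while searching" could look like (not AlphaZero's actual mechanism; the 64-feature board encoding and the tiny value network are placeholders):

```python
# Toy test-time update: after resolving a subtree, take one gradient step on the
# values the search just backed up, so similar subtrees later start from a
# better evaluation.
import torch

value_net = torch.nn.Sequential(
    torch.nn.Linear(64, 128), torch.nn.ReLU(),
    torch.nn.Linear(128, 1), torch.nn.Tanh(),
)
opt = torch.optim.SGD(value_net.parameters(), lr=1e-3)

def test_time_update(positions, backed_up_values):
    """positions: (B, 64) encoded boards; backed_up_values: (B,) minimax values."""
    pred = value_net(positions).squeeze(-1)
    loss = torch.nn.functional.mse_loss(pred, backed_up_values)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Inside the search loop one might call:
#   test_time_update(subtree_positions, subtree_values)
# each time a subtree is resolved.
```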
03.12.2024 05:42
Obviously having a pretrained cnt[from][to] array wouldn't be helpful at all in chess, as moves may be good or bad depending entirely on the position.
But because the butterfly table is reset at every search, it encodes "local information".
AlphaZero meanwhile doesn't learn anything while...
3/5
03.12.2024 05:42
Chess engines like Stockfish will keep a so-called butterfly board, keeping track of how often a move was chosen in the search tree. _Independently of the position_.
This data is consulted elsewhere in the search tree to decide how much time to spend considering the move.
Why do this?
2/5
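A minimal sketch of the butterfly/history idea in code (real engines like Stockfish use a more elaborate scheme, e.g. depth-scaled bonuses and "history gravity"; the names here are illustrative):

```python
from collections import namedtuple

Move = namedtuple("Move", "from_sq to_sq")  # illustrative move type

# cnt[from][to], reset at the start of every search: position-independent,
# but it only accumulates information from the *current* search tree.
history = [[0] * 64 for _ in range(64)]

def on_beta_cutoff(move, depth):
    # A quiet move that refuted a line is likely good in sibling subtrees too.
    history[move.from_sq][move.to_sq] += depth * depth

def order_moves(moves):
    # Search historically successful moves first.
    return sorted(moves, key=lambda m: history[m.from_sq][m.to_sq], reverse=True)
```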
03.12.2024 05:42
Test Time Training promises to finally unify learning and search. As always, chess is a good place to study such ideas:
AlphaZero generalized and simplified most of the tricks in chess engines like Stockfish, but one category is missing: history heuristics...
1/5
03.12.2024 05:42
Your o1 supports images?
30.11.2024 19:40
Making a wiki-style website is a good way to do this, while encouraging others from the community to contribute and keep it updated.
In fact, writing good Wikipedia articles for your field might be the best way to spread this knowledge.
30.11.2024 15:51
Clever use of the KV-cache: Writing in the Margins (arxiv.org/abs/2408.14906) at NeurIPS next week.
By "taking notes" as you read, you reduce the complexity from N^3 (N tokens at N^2 cost) to N^3/3 (1+4+9+...+N^2).
29.11.2024 16:27
Professor at NYU; Chief AI Scientist at Meta.
Researcher in AI, Machine Learning, Robotics, etc.
ACM Turing Award Laureate.
http://yann.lecun.com
Research Scientist at Nvidia. Board Gamer. Austin, TX. My Bluesky focuses on board games. https://www.linkedin.com/in/npfet/
Work: quantum learning and control. Play: writing words and code.
#1 best-selling science author for kids.
https://csferrie.com
AI Dialogue Facilitator
Ph.D. in Physics and Mathematics
Data Scientist since 1978.
Chicago, IL, since 2003, US Citizen.
"ΠάνΟΟΞ½ ΟΟΞ·ΞΌΞ¬ΟΟΞ½ ΞΌΞΟΟΞΏΞ½ αΌΞ½ΞΈΟΟΟΞΏΟ, ΟαΏΆΞ½ ΞΌα½²Ξ½ α½Ξ½ΟΟΞ½ α½‘Ο αΌΟΟΞΉ, ΟαΏΆΞ½ Ξ΄α½² ΞΏα½ΞΊ α½Ξ½ΟΟΞ½ α½‘Ο ΞΏα½ΞΊ αΌΟΟΞΉ."
In the past Twitter - @alxfed
- Director @adalovelaceinst.bsky.social: ensuring data & AI work for ppl & society
- Stint in government - led #NationalDataStrategy; roles in Cabinet Office, ONS & MHCLG
- Charity roles inc. Samaritans Trustee; staff @ The RSA, Centrepoint, ParkinsonsUK
Princeton computer science prof. I write about the societal impact of AI, tech ethics, & social media platforms. https://www.cs.princeton.edu/~arvindn/
BOOK: AI Snake Oil. https://www.aisnakeoil.com/
Independent AI researcher, creator of datasette.io and llm.datasette.io, building open source tools for data journalism, writing about a lot of stuff at https://simonwillison.net/
Econ PhD Student @MIT | Prev writer @TheEconomist | Find me at #wadecounty | arjunramani.com
Chief Scientist at the UK AI Security Institute (AISI). Previously DeepMind, OpenAI, Google Brain, etc.
Head of Sci/cofounder at futurehouse.org. Prof of chem eng at UofR (on sabbatical). Automating science with AI and robots in biology. Corvid enthusiast
Globally ranked top 20 forecaster, former data scientist
As seen on TV! The Daily Show, Good Morning America
Protecting liberty and prosperity in the age of superintelligence
how shall we live together?
societal impacts researcher at Anthropic
saffronhuang.com
Cofounder CEO, Perplexity.ai
I build & teach AI stuff. Building @TakeoffAI. Learn to code & build apps with AI in our new Cursor & app courses on http://JoinTakeoff.com/courses.
friendly deep sea dweller