How do tokens evolve as they are processed by a deep Transformer?
With José A. Carrillo, @gabrielpeyre.bsky.social and @pierreablin.bsky.social, we tackle this in our new preprint: A Unified Perspective on the Dynamics of Deep Transformers arxiv.org/abs/2501.18322
ML and PDE lovers, check it out!
31.01.2025 16:56 · 👍 96 🔁 16 💬 2 📌 0
Softmax is also the exact formula for a label distribution p(y|x) under Bayes rule if class distributions p(x|y) have exponential family form (equal covariance if Gaussian), so it can have a deeper rationale in a probabilistic model of the data (than a one-hot relaxation).
17.01.2025 09:57 · 👍 5 🔁 0 💬 0 📌 0
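To spell out the exp-fam claim above (a standard derivation, with notation of my choosing: natural parameters η_k, sufficient statistic T(x), log-partition A, class priors π_k):

% Bayes rule with exponential-family class-conditionals
% p(x|y=k) = h(x) exp( η_k^T T(x) − A(η_k) ), prior p(y=k) = π_k:
\begin{align}
p(y{=}k \mid x)
  = \frac{p(x \mid y{=}k)\,\pi_k}{\sum_j p(x \mid y{=}j)\,\pi_j}
  = \frac{\exp\big(\eta_k^\top T(x) - A(\eta_k) + \log \pi_k\big)}
         {\sum_j \exp\big(\eta_j^\top T(x) - A(\eta_j) + \log \pi_j\big)}
  = \mathrm{softmax}_k\big(z(x)\big),
\end{align}
% with logits z_j(x) = η_j^T T(x) − A(η_j) + log π_j; the base measure h(x) cancels.
% For Gaussians with equal covariance, the logits are affine in x, as in a linear classifier.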
Sorry, more a question re the OP. Just looking to understand the context.
29.12.2024 04:38 · 👍 2 🔁 0 💬 0 📌 0
Can you give some examples of the kind of papers you're referring to?
29.12.2024 00:44 · 👍 1 🔁 0 💬 1 📌 0
And of course this all builds on the seminal work of @wellingmax.bsky.social, @dpkingma.bsky.social, Irina Higgins, Chris Burgess et al.
19.12.2024 15:03 · 👍 1 🔁 0 💬 0 📌 0
sorry, @benmpoole.bsky.social (fat fingers..)
18.12.2024 17:07 · 👍 0 🔁 0 💬 0 📌 0
Any constructive feedback, discussion or future collaboration more than welcome!
Full paper: arxiv.org/pdf/2410.22559
18.12.2024 16:57 · 👍 2 🔁 0 💬 1 📌 0
Building on this, we clarify the connection between diagonal covariance and Jacobian orthogonality and explain how disentanglement follows, ultimately defining disentanglement as factorising the data distribution into statistically independent components.
18.12.2024 16:57 · 👍 0 🔁 0 💬 1 📌 0
We focus on VAEs, used as building blocks of SOTA diffusion models. Recent works by Rolinek et al. and Kumar & @benmpoole.bsky.social suggest that disentanglement arises because diagonal posterior covariance matrices promote column-orthogonality in the decoder's Jacobian matrix.
18.12.2024 16:57 · 👍 0 🔁 0 💬 2 📌 0
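As a toy numerical illustration of what column-orthogonality of the decoder Jacobian means (my sketch with a made-up decoder, not code from the paper):

import numpy as np

def decoder(z):
    # Hypothetical stand-in for a trained VAE decoder, R^2 -> R^3.
    return np.array([np.sin(z[0]), z[0] * z[1], np.cos(z[1])])

def jacobian(f, z, eps=1e-6):
    # Finite-difference Jacobian: J[i, j] ~ d f_i / d z_j.
    f0 = f(z)
    J = np.zeros((f0.size, z.size))
    for j in range(z.size):
        dz = np.zeros_like(z)
        dz[j] = eps
        J[:, j] = (f(z + dz) - f0) / eps
    return J

z = np.array([0.3, -0.7])
J = jacobian(decoder, z)
G = J.T @ J  # Gram matrix of the Jacobian's columns
# Columns are orthogonal iff G is diagonal, i.e. off-diagonal mass is ~0.
print("J^T J:\n", G)
print("off-diagonal mass:", np.abs(G - np.diag(np.diag(G))).sum())

Each Jacobian column is the direction the output moves when one latent coordinate is perturbed, so orthogonal columns mean the latents move the output along non-interfering directions.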
While disentanglement is often linked to different models whose popularity may ebb & flow, we show that the phenomenon itself relates to the data's latent structure and is more fundamental than any model that may expose it.
18.12.2024 16:57 · 👍 0 🔁 0 💬 1 📌 0
Machine learning has made incredible breakthroughs, but our theoretical understanding lags behind.
We take a step towards unravelling its mystery by explaining why the phenomenon of disentanglement arises in generative latent variable models.
Blog post: carl-allen.github.io/theory/2024/...
18.12.2024 16:57 · 👍 18 🔁 4 💬 1 📌 1
Maybe give it time. Rome, a day, etc..
18.12.2024 10:33 · 👍 1 🔁 0 💬 0 📌 0
Yup sure, the curve has to kick in at some point. I guess “law” sounds cooler than linear-ish graph. Maybe it started out as an acronym “Linear for A While”.. 🤷‍♂️
15.12.2024 13:57 · 👍 5 🔁 0 💬 1 📌 1
I guess as complexity increases math -> phys -> chem -> bio -> … it's inevitable that “theory-driven” tends to “theory-inspired”. ML seems a bit tangential tho, since experimenting is relatively consequence-free and you don't need to deeply theorise, more iterate. So theory is deprioritised and lags, for now.
15.12.2024 08:16 · 👍 3 🔁 0 💬 1 📌 0
But doesn't theory follow empirics in all of science.. until it doesn't? Except that in most sciences you can't endlessly experiment for cost/risk/melting-your-face-off reasons. But ML keeps going, making it a tricky moving/expanding target to try to explain/get ahead of.. I think it'll happen tho.
14.12.2024 18:47 · 👍 1 🔁 0 💬 2 📌 0
The last KL is nice as it's clear that the objective is optimised when the model and posteriors match as well as possible. The earlier KL is nice as it contains the data distribution and all explicitly modelled distributions, so maximising the ELBO can be seen intuitively as bringing them all “in line”.
05.12.2024 15:41 · 👍 1 🔁 0 💬 0 📌 0
I think an intuitive view is that:
- max likelihood minimises KL[p(x) || pθ(x)] (pθ(x) = model)
- max ELBO minimises KL[p(x)q(z|x) || pθ(x|z)pθ(z)]
So it brings together 2 models of the joint (where pθ(x) = ∫ pθ(x|z)pθ(z) dz).
Can rearrange in diff ways, e.g. as KL[p(x)q(z|x) || pθ(x)pθ(z|x)] (or as in the VAE).
05.12.2024 15:36 · 👍 1 🔁 0 💬 1 📌 0
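Writing out how those KLs relate (standard identities, in the notation of the post above):

\begin{align}
\mathrm{KL}\big[p(x)\,q(z|x)\,\big\|\,p_\theta(x|z)\,p_\theta(z)\big]
  &= \mathbb{E}_{p(x)}\,\mathbb{E}_{q(z|x)}\log\frac{p(x)\,q(z|x)}{p_\theta(x|z)\,p_\theta(z)}
   = -\,\mathbb{E}_{p(x)}\big[\mathrm{ELBO}(x)\big] - \mathrm{H}[p(x)], \\
\mathrm{KL}\big[p(x)\,q(z|x)\,\big\|\,p_\theta(x)\,p_\theta(z|x)\big]
  &= \mathrm{KL}\big[p(x)\,\big\|\,p_\theta(x)\big]
   + \mathbb{E}_{p(x)}\,\mathrm{KL}\big[q(z|x)\,\big\|\,p_\theta(z|x)\big],
\end{align}
% where ELBO(x) = E_{q(z|x)} log[ p_θ(x|z) p_θ(z) / q(z|x) ], and the two joint KLs are
% equal because p_θ(x|z) p_θ(z) = p_θ(x) p_θ(z|x).

Since H[p(x)] is a constant of the data, maximising the ELBO is exactly minimising the joint KL; the second form splits it into the max-likelihood term plus a posterior-matching term, i.e. the “earlier KL” vs “last KL” contrast in the reply above.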
Ha me too, exactly that..
03.12.2024 22:36 · 👍 0 🔁 0 💬 0 📌 0
In the binary case, both look the same: sigmoid might be a good model of how y becomes more likely (in future) as x increases. But sigmoid is also the 2-case softmax, so it models Bayes rule for 2 classes of (exp-fam) x|y. The causality between x and y is very different in the two cases, which "p(y|x)" doesn't capture.
02.12.2024 08:26 · 👍 1 🔁 0 💬 1 📌 0
I think this comes down to the model behind p(x,y). If features of x cause y, e.g. aspects of a website (x) -> clicks (y); age/health -> disease, then p(y|x) is a (regression) fn of x. But if x|y is a distrib'n of different y's (e.g. cats) then p(y|x) is given by Bayes rule (squint at softmax).
02.12.2024 08:20 · 👍 7 🔁 1 💬 1 📌 0
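The two-class Bayes-rule calculation behind the "squint at softmax" remark (standard algebra, nothing paper-specific):

\begin{align}
p(y{=}1 \mid x)
  = \frac{p(x \mid y{=}1)\,p(y{=}1)}{p(x \mid y{=}1)\,p(y{=}1) + p(x \mid y{=}0)\,p(y{=}0)}
  = \sigma\!\left(\log\frac{p(x \mid y{=}1)\,p(y{=}1)}{p(x \mid y{=}0)\,p(y{=}0)}\right),
\end{align}
% with σ(a) = 1/(1 + e^{−a}). For exponential-family x|y (e.g. shared-covariance
% Gaussians) the log-odds is affine in the sufficient statistic, recovering
% logistic regression as the Bayes posterior.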
Pls add me thanks!
29.11.2024 15:53 · 👍 1 🔁 0 💬 0 📌 0
If few-shot transfer is ur thing!
28.11.2024 17:07 · 👍 1 🔁 0 💬 0 📌 0
Could you pls add me? Thanks!
26.11.2024 07:13 · 👍 1 🔁 0 💬 1 📌 0
Yep, could maybe work. The accepted-to-RR bar would need to be high to maintain value, but the “shininess” test cld be deferred. Think there's still a separate issue of “highly irresponsible” reviews that needs addressing either way (as at #CVPR2025). We can't just whinge & do absolutely nothing!
24.11.2024 23:00 · 👍 1 🔁 0 💬 0 📌 0
Definitely something to be said for RR, as main confs are effectively a lumpy version. But if acceptance to main confs is still the metric for recruiters etc, RR acceptance may not mean so much and the issue of the subjective criteria for what gets accepted to confs remains…?
24.11.2024 20:40 · 👍 1 🔁 0 💬 1 📌 0
You'd think that just the threat of it (& the occasional pointed reminder from an AC) would be enough to rarely, if ever, need to enforce it. If so, how much it's used wouldn't reflect success.
You'd maybe need a survey of review quality from authors/ACs… or an analysis of the anger on here! 🤬🤣
24.11.2024 20:11 · 👍 2 🔁 0 💬 0 📌 0
go.bsky.app/PFpnqeM
23.11.2024 11:08 · 👍 34 🔁 17 💬 7 📌 0
Some conferences already give free entry to top reviewers. But this prob just rewards those who would anyway give good reviews.
23.11.2024 21:01 · 👍 1 🔁 0 💬 0 📌 0
CVPR 2025 Changes
This is happening at #CVPR2025:
“If a reviewer is flagged by an AC as ‘highly irresponsible’, their paper submissions will be desk rejected per discretion of the PCs”.
Can't see why all confs don't do this, especially if making all authors review.
(pt 2) cvpr.thecvf.com/Conferences/...
23.11.2024 20:56 · 👍 15 🔁 1 💬 1 📌 1
Media platform covering global conflict zones. Focus on the Ukraine-Russia war. Consider supporting us once via buymeacoffee.com/noelreports or by becoming a member via patreon.com/NOELREPORTS.
Professor at NYU; Chief AI Scientist at Meta.
Researcher in AI, Machine Learning, Robotics, etc.
ACM Turing Award Laureate.
http://yann.lecun.com
PhD student at University of Alberta. Interested in reinforcement learning, imitation learning, machine learning theory, and robotics
https://chanb.github.io/
Penn CS PhD student and IBM PhD Fellow studying strategic algorithmic interaction. Calibration, commitment, collusion, collaboration. She/her. Nataliecollina.com
Machine Learning @ University of Edinburgh | AI4Science | optimization | numerics | networks | co-founder @ MiniML.ai | ftudisco.gitlab.io
PhD student at @cmurobotics.bsky.social working on interactive algorithms for agentic alignment (e.g. imitation/RLHF). no model is an island. https://gokul.dev/.
Organic machine turning tea into theorems ☕️
AI @ Microsoft Research. Goal: Teach models (and humans) to reason better
Let's connect re: AI for social good, graphs & network dynamics, discrete math, logic 🧩, 🥾, 🎨
Organizing for democracy. 🗽
www.rlaw.me
doing a phd in RL/online learning on questions related to exploration and adaptivity
> https://antoine-moulin.github.io/
Assistant Professor @Dept. of Computer Science, University of Copenhagen, Ex Postdoc @MPI-IS, ETHZ, PhD @University of Oxford, B.Tech @CSE, IITK.
ML & Privacy Prof at the University of Melbourne, Australia. Deputy Dean Research. Prev Microsoft Research, Berkeley EECS PhD. @bipr on the X bird site. He/him.
Postdoc at UW CSE. Differential privacy, memorization in ML, and learning theory.
Postdoc researcher at IDEAL Institute in Chicago, hosted by UIC and TTIC.
My research interests are in machine learning theory, data-driven sequential decision-making, and theoretical computer science.
https://www.idanattias.com/
Computational Statistics and Machine Learning (CSML) Lab | PI: Massimiliano Pontil | Webpage: csml.iit.it | Active research lines: Learning theory, ML for dynamical systems, ML for science, and optimization.
Researcher @PontilGroup.bsky.social| Ph.D. Student @ellis.eu, @Polytechnique, and @UniGenova.
Interested in (deep) learning theory and others.
Post-doctoral Fellow @ Vector Institute, Toronto
Researcher at Google. Improving LLM factuality, RAG and multimodal alignment and evaluation. San Diego. he/him. Prev UCSD, MSR, UW, UIUC.
CS prof at Penn, Amazon Scholar in AWS. Interested in ML theory and related topics, as well as photography and Gilbert and Sullivan. Website: www.cis.upenn.edu/~mkearns
PhD at Machine Learning Department, Carnegie Mellon University | Interactive Decision Making | https://yudasong.github.io