This is joint work with wonderful collaborators @leenacvankadara.bsky.social, @cevherlions.bsky.social and Jin Xu during our time at Amazon.
🧵 10/10
How do we achieve correct scaling with arbitrary gradient-based perturbation rules? 🤔
✨In µP, scale perturbations like updates in every layer.✨
💡Gradients and incoming activations generally scale LLN-like (like n rather than √n), as they are correlated.
➡️ Perturbations and updates therefore have similar scaling properties (rough sketch after this post).
🧵 9/10
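A minimal PyTorch sketch of what "scale perturbations like updates in every layer" could look like inside a SAM step. The names `base_rho` and `layer_scale` are hypothetical, the per-layer gradient normalization is a simplification (standard SAM normalizes by the global gradient norm), and the concrete width-dependent factors per layer are the ones µP² prescribes in the paper, not anything defined here.

```python
import torch

def sam_step_layerwise(model, loss_fn, batch, optimizer, base_rho, layer_scale):
    """One SAM-like step with layerwise perturbation scaling (illustrative sketch).

    `layer_scale[name]` is a width-dependent factor per parameter tensor; the idea
    from the thread is to scale each layer's perturbation the same way its update
    is scaled. The concrete factors are defined by muP^2 in the paper, not here.
    """
    x, y = batch

    # 1st forward/backward pass: gradients that define the ascent (perturbation) direction.
    loss_fn(model(x), y).backward()

    eps = {}
    with torch.no_grad():
        for name, p in model.named_parameters():
            if p.grad is None:
                continue
            rho_l = base_rho * layer_scale[name]          # layerwise perturbation radius
            e = rho_l * p.grad / (p.grad.norm() + 1e-12)  # per-layer normalized ascent step
            p.add_(e)
            eps[name] = e
    optimizer.zero_grad()

    # 2nd forward/backward pass: gradients at the perturbed weights.
    loss_fn(model(x), y).backward()

    with torch.no_grad():
        for name, p in model.named_parameters():
            if name in eps:
                p.sub_(eps[name])  # undo the perturbation before applying the update
    optimizer.step()               # update with the sharpness-aware gradients
    optimizer.zero_grad()
```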
In experiments across MLPs and ResNets on CIFAR-10 and ViTs on ImageNet-1k, we show that µP² indeed jointly transfers the optimal learning rate and perturbation radius across model scales and can improve training stability and generalization.
🧵 8/10
... there exists a ✨unique✨ parameterization with layerwise perturbation scaling that fulfills all of our constraints:
(1) stability,
(2) feature learning in all layers,
(3) effective perturbations in all layers.
We call it the ✨Maximal Update and Perturbation Parameterization (µP²)✨.
🧵 7/10
Hence, we study arbitrary layerwise learning rate, initialization variance, and perturbation scaling.
For us, an ideal parametrization should fulfill: updates and perturbations of all weights should have a non-vanishing and non-exploding effect on the output function (formalized just below). 💡
We show that ...
🧵 6/10
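In the style of abc-parametrizations (Yang & Hu), one way to write such a layerwise scaling with width $n$ is (the notation here is illustrative, not necessarily the paper's):

$$W_l = n^{-a_l}\, w_l, \qquad w_l \sim \mathcal{N}\big(0,\; n^{-2 b_l}\big), \qquad \eta_l = \eta\, n^{-c_l}, \qquad \rho_l = \rho\, n^{-d_l},$$

and the constraints above ask that the activations, the effect of each layer's update on the output, and the effect of each layer's perturbation on the output all remain $\Theta(1)$ as $n \to \infty$.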
... we show that µP is not able to consistently improve generalization or to transfer SAM's perturbation radius, because it effectively only perturbs the last layer. ❌
💡So we need to allow layerwise perturbation scaling! (Toy diagnostic after this post.)
🧵 5/10
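A toy diagnostic (mine, not from the paper) for the "effectively only perturbs the last layer" claim: apply each layer's share of the globally normalized SAM perturbation in isolation and measure how much the output moves as width grows. Note that this uses PyTorch's default parametrization rather than a faithful µP setup, so it only illustrates the kind of measurement involved.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def per_layer_sam_effect(width, rho=0.05, d_in=16, n_batch=64, seed=0):
    """For each layer, apply only that layer's share of the globally normalized
    SAM perturbation and measure the mean absolute change of the output."""
    torch.manual_seed(seed)
    model = nn.Sequential(nn.Linear(d_in, width), nn.ReLU(),
                          nn.Linear(width, width), nn.ReLU(),
                          nn.Linear(width, 1))
    x, y = torch.randn(n_batch, d_in), torch.randn(n_batch, 1)

    loss = F.mse_loss(model(x), y)
    grads = torch.autograd.grad(loss, list(model.parameters()))
    global_norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))

    base_out = model(x).detach()
    effects = {}
    for (name, p), g in zip(model.named_parameters(), grads):
        eps = rho * g / (global_norm + 1e-12)  # this layer's share of the SAM ascent step
        with torch.no_grad():
            p.add_(eps)
            effects[name] = (model(x) - base_out).abs().mean().item()
            p.sub_(eps)                        # restore the weights
    return effects

for w in (128, 512, 2048):
    print(w, per_layer_sam_effect(w))
```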
The maximal update parametrization (µP) by arxiv.org/abs/2011.14522 is a layerwise scaling rule for learning rates and initialization variances that yields width-independent dynamics and learning-rate transfer for SGD and Adam in common architectures. But for standard SAM, ...
🧵 4/10
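For background, µP is implemented in Microsoft's open-source `mup` package (github.com/microsoft/mup). The snippet below is a rough usage sketch from memory of its README; treat the exact API, argument names, and initialization details as assumptions and check the repository.

```python
import torch.nn as nn
from mup import MuAdam, MuReadout, set_base_shapes

class MLP(nn.Module):
    def __init__(self, width, d_in=32, d_out=10):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(d_in, width), nn.ReLU(),
                                  nn.Linear(width, width), nn.ReLU())
        # The output layer is replaced by MuReadout so it gets muP's output scaling.
        self.readout = MuReadout(width, d_out)

    def forward(self, x):
        return self.readout(self.body(x))

model = MLP(width=4096)
# Infer which dimensions scale with width by comparing against smaller models,
# so per-layer init and learning-rate multipliers can be set accordingly.
set_base_shapes(model, MLP(width=64), delta=MLP(width=128))
# (The repo additionally recommends re-initializing weights with its muP-aware init helpers.)
opt = MuAdam(model.parameters(), lr=1e-3)  # the tuned lr should then transfer across widths
```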
SAM and model scale are widely observed to improve generalization across datasets and architectures. But can we understand how to scale optimally in a principled way? 🤔
🧵 3/10
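For readers new to SAM (Foret et al., 2021): it minimizes a worst-case loss in a small neighborhood of the weights and, in practice, approximates the inner maximization with a single normalized gradient ascent step,

$$\min_w \max_{\|\epsilon\|_2 \le \rho} L(w + \epsilon), \qquad \hat{\epsilon}(w) = \rho\, \frac{\nabla L(w)}{\|\nabla L(w)\|_2},$$

so the descent step uses the gradient evaluated at $w + \hat{\epsilon}(w)$. The question in this thread is how the radius $\rho$ (together with the rest of the parametrization) should scale with width.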
A short thread on effective SAM scaling follows here:
arxiv.org/pdf/2411.00075
A thread on our Mamba scaling will be coming soon from @leenacvankadara.bsky.social
🧵 2/10
Stable model scaling with width-independent dynamics?
Thrilled to present 2 papers at #NeurIPS that study width scaling in Sharpness-Aware Minimization (SAM) (Thu 16:30, #2104) and in Mamba (Fri 11:00, #7110). Our scaling rules stabilize training and transfer optimal hyperparameters across scales.
🧵 1/10