me trying to cut my ICML rebuttal down to <5000 characters
31.03.2025 12:17
couldn't agree more!
23.11.2024 17:05
ππππππ
20.11.2024 22:03
If people knew how much of my PhD has consisted of reading about something new, referencing back to Elements of Statistical Learning, and simply writing down what I learned…
It feels like a cheat code!
Part 2: Why do boosted trees outperform deep learning on tabular data??
@alanjeffares.bsky.social & I suspected that answers to this are obfuscated by the 2 being considered very different algs 🤔
Instead we show they are more similar than you'd think – making their diffs smaller but predictive! 🧵1/n
language is always evolving but if we ever reach a definition of βcoolβ that includes regression smoothers we will know our species has lost its way
20.11.2024 08:40
i'm too lazy to make a thinly-veiled self-promotion "starter pack", so if you could all add me anyway that would be great…
19.11.2024 16:27
i'm starting a meta grumpy list for people that are grumpy about this list
18.11.2024 22:12
From double descent to grokking, deep learning sometimes works in unpredictable ways.. or does it?
For NeurIPS (my final PhD paper!), @alanjeffares.bsky.social & I explored if & how smart linearisation can help us better understand & predict numerous odd deep learning phenomena – and learned a lot.. 🧵1/n
and all of a sudden, my feed changed from musk and outrage to matrices and optimisers…
17.11.2024 17:27