
Leon Rofagha

@leonrofa.bsky.social

σ-algebra hobbyist

116 Followers  |  157 Following  |  88 Posts  |  Joined: 04.11.2024

Latest posts by leonrofa.bsky.social on Bluesky

@shubhendu.bsky.social I just thought of 2 :-) Bombastic claims as a negative signal? And mentions of simplicity or naturalness as a positive one? What do you reckon?

09.10.2025 19:44 — 👍 0    🔁 0    💬 0    📌 0
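The idea in the post above — bombastic claims as a negative signal, mentions of simplicity or naturalness as a positive one — can be sketched as a crude keyword scorer for arXiv titles. This is a hypothetical illustration, not anything from the thread; the keyword lists are made-up examples.

```python
def title_score(title: str, positive: set[str], negative: set[str]) -> int:
    """Score a paper title: +1 per positive keyword present, -1 per negative one."""
    t = title.lower()
    return sum(kw in t for kw in positive) - sum(kw in t for kw in negative)

# Example keyword lists (purely illustrative)
POSITIVE = {"simple", "minimax", "natural"}
NEGATIVE = {"revolutionary", "unprecedented", "breakthrough"}
```

A real filter would of course need tuning and something smarter than substring matching, but even this sorts a feed of titles into a rough reading order.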

The keywords strategy works reasonably well for stat.ML and stat.ME but seemingly less so for math.ST because I find that the titles are a lot more “obscure”

09.10.2025 05:31 — 👍 0    🔁 0    💬 1    📌 0

Are there nice heuristics you can recommend for arXiv? :-) I’ve mostly been relying on name recognition and keywords related to topics I’m interested in which feels unfortunate

09.10.2025 05:28 — 👍 0    🔁 0    💬 2    📌 0

It's a great discussion because we all have different interpretations of what this means and what the consequences are.

Myself, I have absolutely no doubt that scaling works. If you have all the videos in the world and are able to train a model that can recall and merge any of them, then for sure...

03.10.2025 10:01 — 👍 12    🔁 4    💬 1    📌 6
IT WAS JUST AN ACCIDENT - Official Trailer - In Theaters October 15
YouTube video by NEON

Beautiful film!

02.10.2025 07:24 — 👍 0    🔁 0    💬 0    📌 0
The desperate search for superstar talent: Too much potential goes to waste

Otherwise an interesting article

27.09.2025 08:53 — 👍 0    🔁 0    💬 0    📌 0
Post image

So what? There are 50 of the former versus 1100 of the latter

27.09.2025 08:51 — 👍 0    🔁 0    💬 1    📌 0

Buy one five-year presidential term, get one free

25.09.2025 12:50 — 👍 38    🔁 10    💬 0    📌 0
Post image

The American mind cannot comprehend

25.09.2025 11:11 — 👍 24    🔁 3    💬 0    📌 2

“Instead, it seems likely that the way that model class is parametrized leads to a loss landscape that is challenging for our standard gradient-based learning algorithms or that they do not profit from phenomena that lead to good generalisation in over-parametrized models.” !

22.09.2025 17:49 — 👍 2    🔁 0    💬 0    📌 0

I’ll take a look, many thanks!

21.09.2025 20:55 — 👍 0    🔁 0    💬 0    📌 0

Interesting, thanks! Do you have any rules of thumb to gauge how useful something will actually be to your research? Or is the best way to proceed to just dive in and stop if the payoff seems too distant?

21.09.2025 11:23 — 👍 0    🔁 0    💬 1    📌 0

Many somewhat related questions to which I guess the answer will invariably be “it depends” but I’d be happy to hear any thoughts :-)

21.09.2025 11:05 — 👍 0    🔁 0    💬 0    📌 0

Also, how am I supposed to allocate my time between learning (1) new tools versus (2) new areas of application?

21.09.2025 10:51 — 👍 0    🔁 0    💬 1    📌 0

Along the “purity continuum”, where will my efforts be most rewarded? A second course in measure theory or getting better at inequalities?

21.09.2025 10:45 — 👍 0    🔁 0    💬 1    📌 0

Relatedly, is it better to gain real depth in one or a few areas or am I better off getting an overview of many areas?

21.09.2025 10:41 — 👍 0    🔁 0    💬 1    📌 0

Surely I must choose my battles given the breadth and depth of developments in many areas relevant to statistics research? It also seems that certain toolkits are just more useful than others? Or are they just more widely known or used?

21.09.2025 10:39 — 👍 0    🔁 0    💬 1    📌 0

It’s a question that’s been vexing me for some time

21.09.2025 10:39 — 👍 0    🔁 0    💬 1    📌 0

When learning new mathematics as tools for applied mathematics research, how am I supposed to choose what to learn?

21.09.2025 10:35 — 👍 1    🔁 0    💬 2    📌 0

Can you recommend some papers in this area? Seems interesting!

21.09.2025 08:14 — 👍 1    🔁 0    💬 1    📌 0

Can someone shed light on how reasonable (and common) this scenario is? It’s only one outlier but the authors don’t mention its magnitude

17.09.2025 19:27 — 👍 0    🔁 0    💬 0    📌 0
Robust machine learning by median-of-means: Theory and practice
Median-of-means (MOM) based procedures have been recently introduced in learning theory (Lugosi and Mendelson (2019); Lecué and Lerasle (2017)). These estimators outperform classical least-squares estimators when data are heavy-tailed and/or are corrupted. None of these procedures can be implemented, which is the major issue of current MOM procedures (Ann. Statist. 47 (2019) 783–794). In this paper, we introduce minmax MOM estimators and show that they achieve the same sub-Gaussian deviation bounds as the alternatives (Lugosi and Mendelson (2019); Lecué and Lerasle (2017)), both in small and high-dimensional statistics. In particular, these estimators are efficient under moments assumptions on data that may have been corrupted by a few outliers. Besides these theoretical guarantees, the definition of minmax MOM estimators suggests simple and systematic modifications of standard algorithms used to approximate least-squares estimators and their regularized versions. As a proof of concept, we perform an extensive simulation study of these algorithms for robust versions of the LASSO.

doi.org/10.1214/19-A...

17.09.2025 15:59 — 👍 2    🔁 1    💬 1    📌 1
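The basic median-of-means idea behind the paper linked above is simple to sketch: split the sample into disjoint blocks, average each block, and take the median of the block means, which tames a few outliers that would wreck the plain mean. A minimal sketch (the paper's minmax MOM estimators for regression are far more involved than this):

```python
import numpy as np

def median_of_means(x, n_blocks: int) -> float:
    """Median-of-means estimate of the mean: median of disjoint block averages."""
    blocks = np.array_split(np.asarray(x, dtype=float), n_blocks)
    return float(np.median([b.mean() for b in blocks]))
```

With nine values of 1.0 and a single outlier of 1000.0, the empirical mean is 100.9, while the median-of-means over five blocks confines the outlier to one block and returns 1.0.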
Post image

😵‍💫

17.09.2025 15:58 — 👍 1    🔁 0    💬 1    📌 0

Two-column papers are the bane of my existence

17.09.2025 13:17 — 👍 0    🔁 0    💬 0    📌 0

You’ll be glad to know it’s a Belgian chain ;)

15.09.2025 21:02 — 👍 0    🔁 0    💬 0    📌 0

1/18 Is China's economy collapsing or dominating? Meg Rithmire's new piece in Current History argues it's doing both simultaneously—and that's the problem.

02.09.2025 19:58 — 👍 35    🔁 16    💬 3    📌 2

it's crazy that so many people go through life experiencing math as one big pissing contest (god knows I used to feel that way in math class) when it can be such a collaborative and generous endeavour

29.08.2025 16:22 — 👍 5    🔁 1    💬 0    📌 0
Zuckerberg’s AI hires disrupt Meta with swift exits and threats to leave: Longtime acolytes are sidelined as Big Tech chief directs biggest leadership reorganisation in two decades

“Another […] former OpenAI researcher went through Meta’s onboarding process but never showed up for his first day”. Too good

29.08.2025 15:19 — 👍 1    🔁 0    💬 0    📌 0
The wrong way to end a war: Dark lessons from history that explain Vladimir Putin’s “peacemaking”

Ending conflicts is hard, especially when belligerents use a peace process to advance war aims by other means. My column, The Telegram, on historical lessons from Korea and Bosnia for the Ukraine war - and how Putin grasps them better than Trump

economist.com/internationa...
from The Economist

27.08.2025 11:43 — 👍 9    🔁 7    💬 1    📌 0

I reckon this isn’t the “thinking” version?

21.08.2025 20:47 — 👍 0    🔁 0    💬 1    📌 0
