..actually, not only standard notation, but also to be able to speak about the loss (=log-loss) used to train today's LLMs.
10.07.2025 15:19 · @skiandsolve.bsky.social
ML Theorist carving equations and mountain trails | Biker, Climber, Adventurer | Reinforcement Learning: Always seeking higher peaks, steeper walls and better policies. https://ualberta.ca/~szepesva
No, it is not information retrieval. It is deducing new things from old things. You can do this by running a blind breadth-first (unintelligent) search producing all proofs of all possible statements. You just don't want errors. But this is not retrieval. It is computation.
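To make the "blind search" point concrete, here is a toy sketch of my own (the facts, the rule format, and all names are made up for illustration, not taken from any paper): breadth-first forward chaining with modus ponens derives every consequence of the axioms, with no intelligence involved and no errors.

```python
# Toy illustration: "deduce new things from old things" by blind breadth-first search.
from collections import deque

axioms = {"A", "B"}                            # statements taken as given
rules = {("A", "C"), ("B", "D"), ("C", "E")}   # (p, q) reads "p implies q"

derived = set(axioms)
queue = deque(axioms)
while queue:                                   # exhaustively apply every applicable rule
    p = queue.popleft()
    for antecedent, consequent in rules:
        if antecedent == p and consequent not in derived:
            derived.add(consequent)            # a new, provably correct statement
            queue.append(consequent)

print(sorted(derived))                         # ['A', 'B', 'C', 'D', 'E'] -- no errors, just computation
```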
10.07.2025 05:21
Of course approximations are useful. The paper is narrowly focused on deductive reasoning, which seems to require the exactness we talk about. The point is that regardless of whether you use quantum mechanics or Newtonian mechanics, you don't want your derivations to be mistake-ridden.
10.07.2025 05:19
Worst-case vs. average-case: yes!
But I would not necessarily connect these to minimax vs. Bayes.
Yeah, admittedly, not a focus point of the paper. How about this: if the model produces a single response, the loss is the zero-one loss. Then the model had better choose the label with the highest probability, which is OK. Point of having mu: not much point, just matching standard notation.
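For completeness, the one-line calculation behind "choose the label with the highest probability" (my summary of a standard fact, not a quote from the paper): if the model outputs a single response and incurs the zero-one loss, the expected loss under the label distribution mu(.|x) is

```latex
\mathbb{E}\bigl[\mathbf{1}\{Y \neq \hat{y}\} \mid X = x\bigr] \;=\; 1 - \mu(\hat{y} \mid x),
\qquad \text{minimized by} \qquad \hat{y}(x) \in \arg\max_{y} \mu(y \mid x).
```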
10.07.2025 05:13
I am curious about these examples.. (and yes, I can construct a few, too, but I want to add more)
10.07.2025 04:51
No, this is not correct: learning 1[A>B] interestingly has the same complexity (provably). This is because 1[A>B] is in the "orbit" of 1[A>=B]. So a symmetric learner that is being taught 1[A>B] needs to figure out that it is not being taught 1[A>=B].
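One way to see the "orbit" relation (my gloss of the standard identity, not a statement from the paper): the strict comparison is obtained from the non-strict one by swapping the arguments and negating,

```latex
\mathbf{1}[A > B] \;=\; 1 - \mathbf{1}[B \ge A],
```

so under the argument-swapping symmetry the two target functions are tied to each other rather than being unrelated.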
10.07.2025 04:50
Maybe. I am asking for much less here from the machines. I am asking for them just to be correct (or stay silent). No intelligence, just good old fashioned computation.
09.07.2025 02:44
the solution is found..
09.07.2025 02:42
Yes, transformers do not have "working memory". Also, I don't believe that using them in AR mode is powerful enough for challenging problems. In a way, without "working memory" or an external "loop", we say the model should solve problems by free association ad infinitum, or at least until
09.07.2025 02:42
On the paper: interesting, but indeed there is little in common. On the problem studied in the paper: would not a slightly more general statistical framework solve your problem? I.e., measure error differently than through the prediction loss (AR models: parameters, spectral measure, etc.).
09.07.2025 02:39
Yeah, I don't see the exactness happening that much on its own through statistical learning. Neither experimentally, nor theoretically. We have an example illustrating this: use the uniform distribution for good coverage and teach transformers to compare m-bit integers using GD. You need 2^m examples.
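For readers who want the task pinned down, here is a minimal sketch of the comparison problem as I would set it up (the details are my own simplification; the paper's exact experimental setup may differ):

```python
# Compare two m-bit integers, with examples drawn from the uniform distribution.
import random

def sample_example(m: int):
    """Draw a, b uniformly from {0, ..., 2^m - 1}; the label is 1[a >= b]."""
    a = random.getrandbits(m)
    b = random.getrandbits(m)
    x = (format(a, f"0{m}b"), format(b, f"0{m}b"))  # inputs as m-bit strings
    y = int(a >= b)                                  # exact target: non-strict comparison
    return x, y

m = 8
# The claim above: GD-trained transformers need on the order of 2^m such examples.
train = [sample_example(m) for _ in range(2 ** m)]
print(train[:3])
```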
09.07.2025 02:39
Yeah, we cite this, and this was a paper that got me started on this project!
09.07.2025 02:32
First position paper I ever wrote: "Beyond Statistical Learning: Exact Learning Is Essential for General Intelligence" arxiv.org/abs/2506.23908 Background: I'd like LLMs to help me do math, but statistical learning seems inadequate to make this happen. What do you all think?
08.07.2025 02:21
Our seminars are back. If you missed Max's talk, it is on YouTube, and today I will host Jeongyeol from UWM, who will talk about the curious case of why latent MDPs, though scary at first sight, might be tractable! Link to the seminar homepage:
sites.google.com/view/rltheor...
Glad to see someone remembers these:)
04.04.2025 02:05
should be distinguished. The reason they should not be is that they are indistinguishable. So at least those need to be collapsed. So yes, one can start with redundant models, where it will appear you could have epistemic uncertainty, but this is easy to rule out. 2/2
20.03.2025 22:49
I guess with a worst-case hat on, we just all die :) In other words, indeed, the distinction is useful inasmuch as the modelling assumptions are valid. And there, the mixture of two Diracs over 0 and 1 is actually a bad example, because that says that two models that are identical as distributions 1/x
20.03.2025 22:47
I guess I'll stop here :) 5/5
20.03.2025 22:43
Well, yes, to the degree that the model you use correctly reflects what's going on. Example: drug trials with randomized patient allocation. The result is effectiveness. The meaning of aleatoric and epistemic uncertainty should be clear, and they help with explaining the outcomes of the trial. 4/x
20.03.2025 22:41
If one observes 1, there is epistemic uncertainty (the model could be the first or the second). Of course, nothing is ever black and white like this. And we are talking about models here. Models are.. made up.. The usual blurb about the usefulness of models applies. Should you care about this distinction? 3/x
20.03.2025 22:35
Epistemic uncertainty refers to whether, given the data (and prior information), we can surely identify the data-generating model. Example: the model class has two distributions; one has support {0,1}, the other has support {1}. One observes 0. There is no epistemic uncertainty. 2/X
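A minimal sketch of this two-model example (my own code; the 0.5/0.5 probabilities for the first model are an arbitrary choice, since only the supports matter here):

```python
# Model 1 puts mass on both 0 and 1; model 2 puts all mass on 1.
model_class = {
    "model_1": {0: 0.5, 1: 0.5},  # support {0, 1}
    "model_2": {0: 0.0, 1: 1.0},  # support {1}
}

def consistent_models(observation: int):
    """Return the models that assign positive probability to the observation."""
    return [name for name, p in model_class.items() if p[observation] > 0]

print(consistent_models(0))  # ['model_1']            -> model identified, no epistemic uncertainty
print(consistent_models(1))  # ['model_1', 'model_2'] -> both survive, epistemic uncertainty remains
```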
20.03.2025 22:33
I don't get this:
In the context of this terminology, data comes from a model. Aleatoric uncertainty refers to whether this model is a Dirac: when the model is a Dirac, there is no aleatoric uncertainty. In the second case, the model is a mixture of two Diracs. This is not a Dirac. Hence, there is aleatoric uncertainty. 1/X
This is a very significant development - more fellowships, harmonized and typically higher stipends, and international students can apply
#CanPoli
www.nserc-crsng.gc.ca/NewsDetail-D...
Dylan J. Foster, Zakaria Mhammedi, Dhruv Rohatgi: Is a Good Foundation Necessary for Efficient Reinforcement Learning? The Computational Role of the Base Model in Exploration https://arxiv.org/abs/2503.07453 https://arxiv.org/pdf/2503.07453 https://arxiv.org/html/2503.07453
11.03.2025 07:26
But also, we are how we act! So it's up to us all to behave so as to make the statement true.
11.03.2025 16:37
Who says mountain car is a toy problem? www.reddit.com/r/nonononoye...
09.03.2025 17:46
Yes, another gem from Rich!
07.03.2025 02:58
www.youtube.com/watch?v=9_Pe... An interview with Rich. The humility of Rich is truly inspiring: "There are no authorities in science." I wish people would listen and live by this.
06.03.2025 20:50
That's all good: Bubbles join when they get up high into the blue sky :)
06.03.2025 20:42