On the other hand, I feel vindicated w.r.t. a past confusion, whereby an Adaptive MCMC paper was more-or-less celebrating replacing "adaptation by stochastic approximation" with "adaptation by reinforcement learning". Even then, my heart told me that no replacement had really occurred.
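For concreteness, the classical "adaptation by stochastic approximation" in question might look like the following: a Robbins–Monro recursion tuning the log proposal scale of Random Walk Metropolis towards a target acceptance rate. (All specifics here — N(0,1) target, 0.234 goal, i^{-0.6} gains — are my illustrative choices, not drawn from the paper being alluded to.)

```python
import numpy as np

rng = np.random.default_rng(0)

# "Adaptation by stochastic approximation": tune the log proposal scale of
# Random Walk Metropolis via a Robbins-Monro recursion, steering the
# empirical acceptance rate towards a target value.
log_density = lambda t: -0.5 * t * t   # standard Gaussian target
target_rate = 0.234
x, log_scale, n_accept = 0.0, 0.0, 0
samples = []
for i in range(1, 20_001):
    y = x + np.exp(log_scale) * rng.normal()
    accepted = np.log(rng.uniform()) < log_density(y) - log_density(x)
    if accepted:
        x, n_accept = y, n_accept + 1
    # Robbins-Monro step: decaying gains i^{-0.6} make the adaptation vanish,
    # so the chain is asymptotically a fixed-scale Metropolis chain.
    log_scale += i ** -0.6 * (float(accepted) - target_rate)
    samples.append(x)
```

The same recursion can of course be read as a bandit-style / policy-gradient update on the scale parameter, which is presumably why relabelling it as "reinforcement learning" changes nothing of substance.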
04.12.2025 14:06
It certainly makes "highly-offline" learning seem rather quaint by comparison.
04.12.2025 14:04
Reassuring to read this (www.argmin.net/p/defining-r...) presentation, since I had confused myself a bit in the past w.r.t. i) just how general the attached is (in principle), and ii) the observation that "what people mean when they say they're doing RL" has (historically, anyway) been rather narrower.
04.12.2025 14:03
(There are indeed some solutions which are o(N), though not necessarily super simple ones; I have yet to see anything which is O(1). In any case, the answer depends a bit on how well you know Q.)
04.12.2025 08:50
A fun problem, which I came across recently:
Suppose that you can generate random variates X ~ P by rejection sampling from some Q.
For a given integer N, let X_(N) denote the sample maximum of N independent draws from P.
Can you simulate realisations of X_(N) at a cost of o(N)? Or even O(1)?
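For calibration, recall the classical O(1) route when one has full quantile access to P (which the puzzle deliberately withholds, granting only rejection-sampling access): since P(X_(N) ≤ x) = F(x)^N, one may set X_(N) = F^{-1}(U^{1/N}) for U ~ Uniform(0, 1). A sketch, using P = Exp(1) as a stand-in:

```python
import numpy as np

rng = np.random.default_rng(1)

def max_naive(N):
    # O(N) baseline: draw N variates from P = Exp(1) and take the maximum.
    return rng.exponential(size=N).max()

def max_O1(N):
    # O(1): P(X_(N) <= x) = F(x)^N, so F^{-1}(U^{1/N}) has the law of X_(N).
    # For Exp(1), F^{-1}(u) = -log(1 - u); log1p keeps this stable for large N.
    u = rng.uniform()
    return -np.log1p(-(u ** (1.0 / N)))
```

Both routes target the same distribution (for Exp(1), E[X_(N)] is the harmonic number H_N ≈ log N + γ); the challenge posed above is to get near the second cost while only being able to *simulate* from P.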
04.12.2025 08:49
Quite fun
01.12.2025 04:28
It's a fair point! Will keep it in mind.
28.11.2025 06:43
another day volunteering at the topology museum.
27.11.2025 15:24
This project was initiated during our residency at the 2024 INI programme on "Stochastic Systems for Anomalous Diffusion" (www.newton.ac.uk/event/ssd/), where many of the presented works treated specific approaches to the 'robustness' issue. It seemed an opportune time to take stock of things.
27.11.2025 10:35
Giorgos Vasdekis and I have written a manuscript - arxiv.org/abs/2511.21563 - which surveys the state of affairs within this literature, outlining signals for anticipating non-robustness, principles for improving robustness, and examples of contemporary methods which confront these issues.
27.11.2025 10:35
In response to this, there have been a range of proposed MCMC strategies which aim to
i) perform acceptably when these conditions hold, but
ii) degrade gracefully when these conditions start to break down,
collectively giving rise to a burgeoning literature on 'robust MCMC'.
27.11.2025 10:35
Standard MCMC algorithms are typically guaranteed to work well when used to sample from target distributions for which
i) mass is reasonably well-concentrated in the centre of the state space, and
ii) the log-density is smooth and of moderate growth.
Outside of this setting, things can go poorly.
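As a concrete instance, here is a minimal Random Walk Metropolis sketch on a target satisfying both conditions (the Gaussian target and step size 2.4 are illustrative choices of mine, not prescriptions); swapping in a heavy-tailed log-density is exactly the kind of setting where things can go poorly.

```python
import numpy as np

rng = np.random.default_rng(2)

def rwm(log_density, x0, n_steps, step):
    """Random Walk Metropolis with Gaussian proposals."""
    x, lp = x0, log_density(x0)
    chain = np.empty(n_steps)
    for i in range(n_steps):
        y = x + step * rng.normal()
        lq = log_density(y)
        if np.log(rng.uniform()) < lq - lp:  # Metropolis accept/reject
            x, lp = y, lq
        chain[i] = x
    return chain

# Conditions (i) and (ii) hold for a standard Gaussian: mass concentrated
# near the origin, smooth log-density of (moderate) quadratic growth.
chain = rwm(lambda t: -0.5 * t * t, 0.0, 50_000, step=2.4)

# For a heavy-tailed target, try log_density = lambda t: -np.log1p(t * t)
# (Cauchy-like): the chain's excursions into the tails become painfully slow.
```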
27.11.2025 10:35
I'll ask around!
26.11.2025 18:08
On behalf of some friends, let me quickly advertise an event taking place in London, January 12-13 2026 (sites.google.com/view/lpd-tnn), with an overall focus on 'Geometric methods in probability'. Registration is free but required, and closes on December 1 (i.e. next Monday) - exciting stuff!
26.11.2025 10:57
humans in the 1960s / potentially slightly earlier
25.11.2025 10:09
(The pictured set is conjecturally optimal for the problem described)
24.11.2025 13:29
I mean, lol
24.11.2025 13:29
I'm also slightly reminded of the Beta function, but without any particular conclusions for now.
21.11.2025 18:04
I like it! It hadn't occurred to me that it is somehow 'really' a product of two (rather than three) factors, but I think that I'm on board. A fan of falling / rising factorials, in any case.
21.11.2025 18:00
cf. bsky.app/profile/scie...
21.11.2025 16:47
Tour coming to an end, as I settle in for a five-hour train journey! Had lots of fun talking about Random Walk Metropolis, Gradient Flows, and Skill Rating in Sports (among other chats). Slides from all talks are saved at github.com/sampower88/t....
21.11.2025 16:46
π
21.11.2025 11:18
splendid
20.11.2025 22:46
Super gripping (and fun!) lectures here:
youtu.be/OHDYdmuLMW0?...
"Fourier Analysis & Beyond I" - Mini-course
- Stefan Steinerberger
19.11.2025 23:52
great stuff, right up my alley:
arxiv.org/abs/2511.11497
'A Recursive Theory of Variational State Estimation: The Dynamic Programming Approach'
- Filip Tronarp
19.11.2025 23:04
Very nice! Some of the latter part seemed natural to think of in terms of Markov kernels / channels. A friend had some recent work on epidemiological models where this conjugacy between Poisson / Multinomial and 'colouring' channels was quite computationally useful.
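The conjugacy in question is, I take it, the classical Poisson colouring/thinning theorem: if N ~ Poisson(λ) points are independently coloured according to probabilities (p_1, …, p_K), the colour counts are independent Poisson(λ p_k). A quick numerical check (the values of λ and p here are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(3)
lam, p = 10.0, np.array([0.5, 0.3, 0.2])
reps = 20_000

# Draw N ~ Poisson(lam), then colour the N points via a Multinomial(N, p).
N = rng.poisson(lam, size=reps)
counts = np.array([rng.multinomial(n, p) for n in N])

# Colouring theorem: column k is Poisson(lam * p_k), and the columns are
# mutually independent, so the sample covariances should vanish.
col_means = counts.mean(axis=0)
col_cov = np.cov(counts.T)
```

The computational usefulness mentioned above comes from reading this the other way: a Poisson prior pushed through a multinomial "colouring" channel stays in product-Poisson form.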
19.11.2025 12:02
Prompted by glancing at arxiv.org/abs/2511.14200.
19.11.2025 10:18
One application of this would be to compare the features of different biased random walks, since the convex ordering is preserved under independent summation, and allows for the control of { variance, large deviations, etc. }.
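A generic numerical illustration of the preservation property (with stand-in variables of my choosing, not the Z(p), Z(q) of these posts): if X ≤_cx Y in the convex order, then independent sums of copies remain ordered, so E f(ΣX_i) ≤ E f(ΣY_i) for convex f.

```python
import numpy as np

rng = np.random.default_rng(4)

# X ~ Uniform{-1,+1} and Y ~ Uniform{-2,+2}: equal means, Y more spread out,
# so X <=_cx Y.  The convex order is preserved under independent summation.
reps, n_terms = 100_000, 5
SX = rng.choice([-1.0, 1.0], size=(reps, n_terms)).sum(axis=1)
SY = rng.choice([-2.0, 2.0], size=(reps, n_terms)).sum(axis=1)

# Check E f(S) against a few convex test functions f: the Y-sums should
# dominate, which controls variance, tail probabilities, etc.
convex_tests = [np.abs, np.square, lambda t: np.maximum(t - 1.0, 0.0)]
gaps = [float(f(SY).mean() - f(SX).mean()) for f in convex_tests]
```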
19.11.2025 10:16
However, this changes if you allow for a dilation factor. In particular, there is always some ρ = ρ(p, q) such that Z(p) is dominated by ρ·Z(q). After a bit of book-keeping and changing variables for convenience, one can deduce the attached formula.
19.11.2025 10:16