This was a nice project in collaboration with @loreloc.bsky.social and @nolovedeeplearning.bsky.social
PS: As a bonus, I wrote a small summary of the paper in my blog: adrianjav.github.io/blog/2026/os...
See you in Rio! 🌴
More importantly, we show that this comes at no cost in performance, and we can even train non-structured-decomposable squared circuits*
(* That is, circuits which cannot be efficiently squared)
As a result, we can train really large squared circuits while saving both time and memory!
At 357M parameters, we use:
- 12 GiB vs 18 GiB (33% reduction!)
- 0.29ms vs 0.52ms per iteration (44% faster!)
Yes, we can!
💡 We generalize both ideas and propose to use orthogonality constraints to parametrize *already normalized* squared circuits
That way, we completely avoid squaring them during training!
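To see the idea on a toy case (my own illustration, not the paper's code; all names are hypothetical): if the component functions are orthonormal over the domain and the mixture weights have unit norm, the squared model sums to 1 by construction, with no renormalization step at all.

```python
import numpy as np

rng = np.random.default_rng(0)
K, D = 4, 10                      # K components over a discrete domain of D states

# Orthonormal components: the columns of Q from a QR decomposition satisfy
# sum_x f_k(x) f_l(x) = delta_kl. This is a toy stand-in for an
# orthogonality-constrained parametrization.
Q, _ = np.linalg.qr(rng.standard_normal((D, K)))
f = Q.T                           # f[k, x]; rows are orthonormal

v = rng.standard_normal(K)
w = v / np.linalg.norm(v)         # unit-norm mixture weights

c = w @ f                         # c(x) = sum_k w_k f_k(x), shape (D,)
p = c ** 2                        # squared model, with NO explicit renormalization

# Z = sum_{k,l} w_k w_l <f_k, f_l> = ||w||^2 = 1, so p is already a distribution
assert np.isclose(p.sum(), 1.0)
```

Note that the components themselves may take negative values; squaring keeps the model nonnegative while the orthogonality constraint keeps it normalized.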
In the tensor network community, a similar issue can be avoided for specific cases using canonical forms
And in the circuit community, determinism (i.e. non-overlapping supports) makes the square tractable, although it is too restrictive...
🤔 Can we expand on these ideas?
One way of increasing the expressiveness of probabilistic circuits is to square them (multiply a circuit with itself).
However, this imposes a quadratic cost in the circuit size, as we need to re-normalize the squared circuit to ensure that it encodes a valid probability distribution.
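A toy sketch of where the quadratic cost comes from (my own illustration, not the paper's code; all names are hypothetical): squaring a K-component mixture produces K² pairwise cross terms, and the normalizer must account for all of them.

```python
import numpy as np

rng = np.random.default_rng(0)
K, D = 4, 10                      # K components over a discrete domain of D states
w = rng.random(K)                 # mixture weights
f = rng.random((K, D))            # f[k, x] = value of component k at state x
f /= f.sum(axis=1, keepdims=True) # each component is a distribution over x

c = w @ f                         # c(x) = sum_k w_k f_k(x), shape (D,)

# Brute-force normalizer of the squared model: Z = sum_x c(x)^2
Z_brute = np.sum(c ** 2)

# The "squared circuit" view: Z = sum_{k,l} w_k w_l <f_k, f_l>,
# i.e. K^2 pairwise terms -- this is the quadratic blow-up.
G = f @ f.T                       # Gram matrix of the components, shape (K, K)
Z_squared = w @ G @ w

assert np.isclose(Z_brute, Z_squared)
p = c ** 2 / Z_squared            # valid distribution: p >= 0, sums to 1
assert np.isclose(p.sum(), 1.0)
```

On a tiny discrete domain we can sum Z exactly; for real circuits the same K² interaction structure appears at every product unit, which is what makes the squared circuit quadratically larger.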
I am a bit late to the party, but I am happy to share that our latest work was accepted to #ICLR2026 🥳🥳
How to Square Tensor Networks and Circuits Without Squaring Them
arxiv.org/abs/2512.17090
Want to use your favourite #NeSy model but afraid of the reasoning shortcuts? 🫣
Fear not 💪🏻 In our #NeurIPS2025 paper we show that you just need to equip your favourite NeSy model with prototypical networks, and reasoning shortcuts will be a problem of the past!
Tenerife Norte breaks its temperature record for November: 33 °C on the 4th.
➡️ This far exceeds the previous maximum of 31 °C. Tenerife Norte has an 85-year data series.
Convince me that Dagstuhl seminars are real and not AI generated
28.10.2025 18:23
To: Reviewer 2
My name is Inigo Montoya
You killed my paper
Prepare to die
Does a smaller latent space lead to worse generation in latent diffusion models? Not necessarily! We show that LDMs are extremely robust to a wide range of compression rates (10-1000x) in the context of physics emulation.
We got lost in latent space. Join us!
Friday afternoon! Finally time to look back at a busy week, and ask oneself: "wait, what did I do, again?"
29.08.2025 07:27
We are excited to bring #EurIPS 2025 to Copenhagen in December.
Consider becoming a sponsor and support us in making this inaugural event a success! Sponsorship packages are available and can be further customized if necessary.
Reach out if you have any questions!
Info: eurips.cc/become-spons...
It's been a while, but I am happy to share that my PhD dissertation is finally available online!
Not only does it contain most of my work, but there is also plenty of brand-new content:
publikationen.sulb.uni-saarland.de/handle/20.50...
🧵 1/4
PS: If nothing else, check it out for the aesthetics (I will release the LaTeX template soon)
publikationen.sulb.uni-saarland.de/handle/20.50...
🧵 4/4
Funnily enough, I later found my perspective on soft constraints to be quite similar to that of soft inductive biases by @andrewgwils.bsky.social in one of his latest works:
arxiv.org/abs/2503.02113
🧵 3/4
I also put considerable effort into framing everything under a common question:
> What biases can we add to DL optimization so that the outcome of the model is what we expected from the beginning?
🧵 2/4
This is not caused primarily by "screen time".
It's been caused by the mass eradication of third places for young people in basically every city over the past three decades - and a society that's hostile to the concept of teens socialising in public spaces.
www.theguardian.com/politics/liv...
What a nice experience! Thank you to everyone who attended TPM!
Especially those who engaged in the poster sessions; rarely have I had so much fun discussing my poster!
likely one of the best editions of #TPM ever!
big thanks to @poorvagarg.bsky.social @jsleland.bsky.social @javaloyml.bsky.social @zzhe.bsky.social @lennertds.bsky.social Lingyun Yao and Christoph Staudt for organizing it
and to everyone who attended it!
My maternity-leave project is now (somewhat) out: I made a Jupyter Book about the basics of ML that I teach at TUE. You can check it out here:
sibylse.github.io/TUEML/intro....
The linear algebra part is not fully written out yet and there are other todos, but maybe it helps someone with their own course design
Last talk of the day for TPM!
@auai.org #TPM2025
we almost end the day (banquet incoming) with an extremely lively poster session!
23.07.2025 21:09

the conference cannot officially start without a proper reception 🍹
21.07.2025 23:55
Next week I will be in 🌴 Rio 🌴 attending (and helping organize) UAI 2025 and TPM 2025.
If you are also attending and want to talk science (e.g. probabilistic generative models, causality, multi-objective machine learning) or just hang out (not everything is work!), feel free to reach out to me!
Agreed... 🥲
18.07.2025 09:02
That would be awesome!
FYI The rule was different two months ago, requiring only virtual attendance, and it was changed during the reviewing period.
web.archive.org/web/20250514...
Unrelatedly, the rule of mandatory physical attendance for authors was changed *after* the submission deadline: web.archive.org/web/20250514...
People submitted their work under certain premises; altering those premises after the fact is questionable at best...