So nice! Where did you get it?
29.05.2025 23:33 — @andre-t-martins.bsky.social
Applications for the 2025 Lisbon Machine Learning Summer School (LxMLS) are open, with @andre-t-martins.bsky.social as one of the organizers.
LxMLS is a great opportunity to learn from top speakers and to interact with other students. You can apply for a scholarship.
Apply here:
lxmls.it.pt/2025/
Visit our poster soon at #AISTATS2025! 🚀 
@istecnico.bsky.social @itnewspt.bsky.social @ellisunitlisbon.bsky.social 
 
Made in @sardine-lab-it.bsky.social!
Our experiments show competitive or superior results in terms of coverage, efficiency, and adaptiveness compared to standard softmax-based non-conformity scores such as InvProb and RAPS. 7/N
As a bonus, for softmax (which is not sparse) we also obtain a new “log-margin” non-conformity score, which is the log-odds ratio between the most probable class and the true one. 6/N
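In code, reading “log-odds ratio” as log(p_best / p_true) (my interpretation of the post, not a formula copied from the paper), the score collapses to a plain logit margin under softmax. Function name and toy values are mine:

```python
import numpy as np

def log_margin_score(z, y):
    """Log-margin non-conformity score for softmax (sketch): log of the
    ratio between the highest softmax probability and the probability of
    the true class y, which equals a simple logit margin."""
    p = np.exp(z - z.max())
    p /= p.sum()                        # softmax probabilities
    ratio = np.log(p.max() / p[y])      # log-odds ratio, per the post
    margin = z.max() - z[y]             # same number, computed from logits
    assert np.isclose(ratio, margin)    # softmax is monotone in the logits
    return margin

z = np.array([1.6, 1.0, 0.1, -1.0])
print(log_margin_score(z, y=1))         # 1.6 - 1.0 = 0.6
```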
For γ-entmax (which recovers sparsemax with γ=2 and softmax with γ=1), the non-conformity scores use the L_δ-norm (with δ = 1 / (γ - 1)) instead of the L_1 norm. 5/N
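To make the interpolation concrete, here is the same sketch with the L_1 accumulation swapped for an L_δ norm, again under my reading of this post rather than the paper's exact formula (normalization constants may differ):

```python
import numpy as np

def entmax_score(z, y, gamma=2.0):
    """gamma-entmax non-conformity score (sketch): L_delta norm of the
    positive logit gaps between the target y and higher-scoring classes,
    with delta = 1 / (gamma - 1). gamma=2 (delta=1) recovers the sparsemax
    L1 accumulation."""
    delta = 1.0 / (gamma - 1.0)
    gaps = z - z[y]
    gaps = gaps[gaps > 0]
    return (gaps ** delta).sum() ** (1.0 / delta) if gaps.size else 0.0

z = np.array([1.6, 1.0, 0.1, -1.0])
print(entmax_score(z, y=2, gamma=2.0))   # L1: 1.5 + 0.9 = 2.4
print(entmax_score(z, y=2, gamma=1.5))   # L2: sqrt(1.5**2 + 0.9**2) ~ 1.75
```

As γ → 1 here, δ → ∞ and the score tends to the single largest gap, matching the log-margin score from 6/N above; that consistency falls out of this sketch, though it is my derivation, not a claim quoted from the paper.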
The answer is yes! For sparsemax, this corresponds to a new non-conformity score which accumulates the absolute differences of logits up to the target label. 4/N
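One literal reading of that description, as a sketch (the function name and toy values are my assumptions, not the paper's exact formula): sum the logit gaps between the target label and every class scored above it.

```python
import numpy as np

def sparsemax_score(z, y):
    """Sparsemax non-conformity score (sketch, my reading of the post):
    accumulate the logit gaps between the target label y and every class
    scored above it. One can check this is also the smallest temperature
    beta at which y enters the support of sparsemax(z / beta)."""
    gaps = z - z[y]
    return gaps[gaps > 0].sum()

z = np.array([1.6, 1.0, 0.1, -1.0])
print(sparsemax_score(z, y=0))   # 0.0: the top class always conforms
print(sparsemax_score(z, y=2))   # (1.6 - 0.1) + (1.0 - 0.1) = 2.4
```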
These sparse transformations have a temperature parameter which controls the amount of sparsity. Can we use split conformal prediction to calibrate this temperature parameter and return sparse sets with coverage guarantees? 3/N
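For context, this is the generic split-conformal recipe the question builds on (the textbook version, not the paper's specific procedure; function names and toy data are mine): calibrate a corrected quantile of non-conformity scores on held-out data, then keep every label that scores within it.

```python
import numpy as np

def conformal_threshold(cal_scores, alpha=0.1):
    """Finite-sample-corrected (1 - alpha) quantile of the calibration
    scores; gives >= 1 - alpha marginal coverage for exchangeable data."""
    n = len(cal_scores)
    level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    return np.quantile(cal_scores, level, method="higher")

def prediction_set(scores_per_label, q_hat):
    """Keep every label whose non-conformity score is within the threshold."""
    return np.where(scores_per_label <= q_hat)[0]

rng = np.random.default_rng(0)
cal_scores = rng.exponential(size=1000)    # scores of true labels (toy data)
q_hat = conformal_threshold(cal_scores, alpha=0.1)
test_scores = rng.exponential(size=5)      # score of each candidate label
print(prediction_set(test_scores, q_hat))  # set with ~90% coverage guarantee
```

Under the 4/N sketch above, thresholding at q_hat coincides with running sparsemax at temperature q_hat and returning its support, which is one way to read how a calibrated temperature yields sparse sets with coverage.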
Conformal prediction quantifies uncertainty by predicting *sets* instead of points, offering coverage guarantees. Sparse transformations (sparsemax, entmax, etc.) are softmax alternatives that return sparse probability vectors, useful for selecting a subset of relevant labels. 2/N
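For readers meeting sparsemax for the first time, a minimal self-contained NumPy sketch of the standard sort-based algorithm of Martins & Astudillo (2016); the toy logits are my own, not from the paper.

```python
import numpy as np

def sparsemax(z):
    """Sparsemax: Euclidean projection of the logits onto the probability
    simplex. Unlike softmax, low-scoring classes get probability exactly 0."""
    z_sorted = np.sort(z)[::-1]               # logits in descending order
    k = np.arange(1, len(z) + 1)
    cumsum = np.cumsum(z_sorted)
    support = k[1 + k * z_sorted > cumsum]    # support-size condition
    k_max = support[-1]
    tau = (cumsum[k_max - 1] - 1) / k_max     # threshold subtracted from logits
    return np.maximum(z - tau, 0.0)

z = np.array([1.6, 1.0, 0.1, -1.0])
print(sparsemax(z))   # [0.8 0.2 0.  0. ] -- only two labels get mass
```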
Our upcoming #AISTATS2025 paper is out: “Sparse Activations as Conformal Predictors”, with Margarida Campos, João Calém, Sophia Sklaviadis, and @marfig.bsky.social:
arxiv.org/abs/2502.14773
This paper brings together two lines of research: dynamic sparsity and conformal prediction. 🧵
09.03.2025 21:31
@andre-t-martins.bsky.social talking about multilingual LLMs at the @priberam.bsky.social Machine Learning Lunch Seminar at @istecnico.bsky.social, @ellisunitlisbon.bsky.social
🎉 New paper by Saul Santos in collaboration with @tozefarinhas.bsky.social and @andre-t-martins.bsky.social!! 🎉
∞-Video: A Training-Free Approach to Long Video Understanding via Continuous-Time Memory Consolidation
Paper: arxiv.org/abs/2501.19098