@peeyoushh.bsky.social
PhD student, percussionist
piyushmishra12.github.io
Congratulations Alice 🎉
11.10.2025 15:53 — 👍 0 🔁 0 💬 0 📌 0

Scaffolded hiPSC liver organoids recapitulating bile duct tubulogenesis and periportal architecture. https://www.biorxiv.org/content/10.1101/2025.08.28.672864v1
02.09.2025 12:30 — 👍 1 🔁 1 💬 0 📌 0

Very excited by this paper by @peeyoushh.bsky.social in my lab. We demonstrated the increased robustness of LLMs/transformers for tracking at scale, but also their pitfalls: when the combinatorics get simpler (and not just the modeling!), stats work best. Tons of potential for hybrid approaches :).
17.01.2025 16:08 — 👍 6 🔁 3 💬 0 📌 0

🚂 "An ordinary workplace for extraordinary people." Young people with intellectual disabilities are training for hospitality jobs at the "Gare des Étoiles" in Niolon.
26.12.2024 08:46 — 👍 10 🔁 4 💬 0 📌 0

"A request made through ChatGPT, an AI-based virtual assistant, consumes 10 times the electricity of a Google Search"
This isn't a game. This is accelerated destruction via huge usage of power, water, resources etc. Stop using it - especially for messing about!!
www.unep.org/news-and-sto...
So the Bayesian approach is great for the actual smoothing, but transformers are remarkable at pruning the hypothesis set. Can we hybridise to get the best of both worlds? Stay tuned :)
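
A minimal sketch of that hybrid recipe, assuming a permutation hypothesis space and a hypothetical `score_with_transformer` as a noisy stand-in for a trained model (none of these names come from the paper): a cheap learned ranking prunes the hypothesis set, then exact Bayesian normalisation runs over the survivors only.

```python
# Toy sketch of the hybrid idea: a learned model prunes the combinatorial
# hypothesis set, then exact Bayesian inference runs on what is left.
# `score_with_transformer` is a hypothetical stand-in for a trained model.
from itertools import permutations

import numpy as np

rng = np.random.default_rng(0)

def gaussian_log_likelihood(prev, curr, assignment, sigma=1.0):
    """Log-likelihood of linking prev[i] -> curr[assignment[i]] under
    Gaussian displacement noise."""
    d = curr[list(assignment)] - prev
    return -0.5 * np.sum(d ** 2) / sigma ** 2

def score_with_transformer(prev, curr, assignment):
    # Stand-in for a learned scorer: a noisy proxy of the true likelihood,
    # mimicking a cheap but imperfect model.
    return gaussian_log_likelihood(prev, curr, assignment) + rng.normal(0, 0.5)

def hybrid_posterior(prev, curr, k=3):
    """Prune with the (stand-in) transformer, then compute the exact
    Bayesian posterior over the surviving hypotheses only."""
    hypotheses = list(permutations(range(len(prev))))
    pruned = sorted(hypotheses,
                    key=lambda a: score_with_transformer(prev, curr, a),
                    reverse=True)[:k]
    log_l = np.array([gaussian_log_likelihood(prev, curr, a) for a in pruned])
    post = np.exp(log_l - log_l.max())
    post /= post.sum()
    return list(zip(pruned, post))

prev = rng.normal(size=(4, 2))            # 4 particles in 2-D at time t
curr = prev + rng.normal(0, 0.3, (4, 2))  # the same particles at time t+1
for assignment, p in hybrid_posterior(prev, curr):
    print(assignment, f"{p:.3f}")
```

With the pruning in place, the exact posterior step touches only k hypotheses instead of all N! of them.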
23.12.2024 16:08 — 👍 0 🔁 0 💬 0 📌 0

We thus see the emergence of two regimes: one with a smaller number of hypotheses, where the Bayesian approach is unmatched, and one with a larger number of hypotheses, where transformers take the lead.
23.12.2024 16:08 — 👍 0 🔁 0 💬 1 📌 0

While the transformer is heavier at low lookback, the compute of the Bayesian method increases super-exponentially as the lookback grows! This is a perfect illustration of the combinatorial challenge of tracking, and of how transformers could help resolve it.
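
To put rough numbers on that blow-up, take the simplest association model (my simplification, not the paper's counting): N conserved particles linked frame to frame by a permutation, so an exhaustive tracker must weigh (N!)^L joint hypotheses over a lookback of L transitions. The full hypothesis space considered in tracking is richer still.

```python
# Rough illustration of the combinatorial blow-up: with N particles linked
# frame to frame by a permutation, an exhaustive Bayesian tracker weighs
# (N!)**L joint hypotheses over a lookback of L transitions. A transformer's
# attention cost, by contrast, grows only polynomially with L.
from math import factorial

def num_hypotheses(n_particles: int, lookback: int) -> int:
    return factorial(n_particles) ** lookback

for L in (1, 2, 4, 8, 16):
    print(f"lookback {L:>2}: {num_hypotheses(5, L):.3e} hypotheses for 5 particles")
```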
23.12.2024 16:08 — 👍 0 🔁 0 💬 1 📌 0

Not only is the transformer suboptimal, it remains suboptimal even when the Bayesian method is optimal (hint: the AI alignment problem). Increasing the amount of data starts decreasing the accuracy!
23.12.2024 16:08 — 👍 2 🔁 0 💬 1 📌 0

But what if we had a world where this was possible (i.e., short sequences of 8 time steps, hence fewer hypotheses)? No matter how much we train the transformer, it never matches the optimal performance!
23.12.2024 16:08 — 👍 0 🔁 0 💬 1 📌 0

This suggests that giving either strategy more of the sequence's past information makes it more robust. So if the Bayesian approach could access the entire past of the sequence, it would be optimal! But doing that is intractable in realistic scenarios!
23.12.2024 16:08 — 👍 0 🔁 0 💬 1 📌 0

Transformers are robust when dealing with large amounts of information. On increasing the noise (for 2 particles undergoing Brownian motion for 150 time steps), the point at which accuracy breaks down is pushed further out in all cases. Increasing the sequence lookback pushes it further still!
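
A minimal data-generation sketch of that setup (illustrative parameters, not the paper's simulator): two 2-D Brownian trajectories over 150 time steps, observed with Gaussian localization noise whose scale is the knob being turned.

```python
# Minimal sketch of the experimental setup described above (illustrative,
# not the paper's simulator): two particles undergoing Brownian motion for
# 150 time steps, observed with additive Gaussian localization noise.
# Raising `noise_sigma` is the knob whose effect the post discusses.
import numpy as np

def simulate(n_particles=2, n_steps=150, diffusion_sigma=1.0,
             noise_sigma=0.5, seed=0):
    rng = np.random.default_rng(seed)
    # Brownian motion: cumulative sum of i.i.d. Gaussian displacements in 2-D.
    steps = rng.normal(0, diffusion_sigma, (n_steps, n_particles, 2))
    true_positions = np.cumsum(steps, axis=0)
    # Detections: true positions corrupted by localization noise.
    detections = true_positions + rng.normal(0, noise_sigma, true_positions.shape)
    return true_positions, detections

truth, dets = simulate()
print(truth.shape, dets.shape)  # (150, 2, 2) each
```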
23.12.2024 16:08 — 👍 0 🔁 0 💬 1 📌 0

The Bayesian multiple hypothesis tracking approach is the theoretically optimal solution, but it can only handle a certain number of hypotheses before it becomes intractable. We look at where the switch happens and what we can do about it.
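
A minimal sketch of what "exhaustive" means here, under simplifying assumptions of mine (permutation-only associations, Gaussian displacements) rather than the paper's full model: enumerate every joint assignment sequence and keep the most probable one. Feasible only for tiny problems.

```python
# Exhaustive multiple hypothesis tracking by brute force, under simplifying
# assumptions (permutation-only associations, Gaussian displacements):
# enumerate all (N!)**(T-1) per-transition assignment sequences and return
# the maximum a posteriori one. Optimal by exhaustion, but intractable fast.
from itertools import permutations, product

import numpy as np

def exhaustive_map_tracking(detections, sigma=1.0):
    """detections: (T, N, 2) array of per-frame detections."""
    T, N, _ = detections.shape
    perms = list(permutations(range(N)))
    best_seq, best_logp = None, -np.inf
    for seq in product(perms, repeat=T - 1):
        logp = 0.0
        for t, perm in enumerate(seq):
            # Link detection i at frame t to detection perm[i] at frame t+1.
            d = detections[t + 1, list(perm)] - detections[t]
            logp += -0.5 * np.sum(d ** 2) / sigma ** 2
        if logp > best_logp:
            best_seq, best_logp = seq, logp
    return best_seq, best_logp

rng = np.random.default_rng(1)
# 2 particles, 4 frames: (2!)**3 = 8 hypotheses. At 150 frames it is 2**149.
dets = np.cumsum(rng.normal(0, 1.0, (4, 2, 2)), axis=0)
print(exhaustive_map_tracking(dets))
```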
23.12.2024 16:08 — 👍 0 🔁 0 💬 1 📌 0

We know that transformers work well. But should we just replace all our previous techniques with transformers and call it a day? (Spoiler: no)
23.12.2024 16:08 — 👍 0 🔁 0 💬 1 📌 0

This year, we (Philippe Roudot and I) published my first PhD paper, on the usability of transformers for tracking problems, at EUSIPCO! Find the open-access version here and/or continue reading the thread :)
hal.science/hal-04619330/