Academia's version of a vision board.
04.02.2026 03:32 — 👍 1 🔁 0 💬 0 📌 0
@agopal42.bsky.social
Postdoc at Harvard with @yilundu.bsky.social and @gershbrain.bsky.social. PhD from IDSIA with Jürgen Schmidhuber. Previously: Apple MLR, Amazon AWS AI Lab. agopal42.github.io
With some trepidation, I'm putting this out into the world:
gershmanlab.com/textbook.html
It's a textbook called Computational Foundations of Cognitive Neuroscience, which I wrote for my class.
My hope is that this will be a living document, continuously improved as I get feedback.
Goal selection through the lens of subjective functions:
arxiv.org/abs/2512.15948
I welcome any feedback on these preliminary ideas.
Paper: arxiv.org/abs/2405.17283
Code: github.com/agopal42/syncx
Joint work with Aleksandar Stanic, Jürgen Schmidhuber and Michael Mozer.
Hope to see you all at our poster at #NeurIPS2024! 10/x
SynCx's phase synchronization toward objects is more robust than that of the baselines. It successfully separates similarly colored objects, a common failure mode of other synchrony models, which rely on color as a shortcut feature for grouping. 9/x
04.12.2024 18:49 — 👍 0 🔁 0 💬 1 📌 0
SynCx outperforms current state-of-the-art unsupervised synchrony-based models on standard multi-object datasets while using 6–23× fewer parameters than the baseline models. 8/x
04.12.2024 18:49 — 👍 0 🔁 0 💬 1 📌 0
Our model does not need the additional inductive biases (gating mechanisms), strong supervision (depth masks), or contrastive training used by current state-of-the-art synchrony models; it achieves phase synchronization toward objects in a fully unsupervised way. 7/x
04.12.2024 18:49 — 👍 0 🔁 0 💬 1 📌 0
SynCx processes complex-valued inputs at every layer using complex-valued weights. It is trained to reconstruct the input image at every iteration from the output magnitudes. Output phases are fed back as inputs to the next step, with input magnitudes clamped to the image. 6/x
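The loop described above can be sketched in a few lines. This is a toy NumPy illustration only, under stated assumptions: dense weights on a flattened "image" (the real model is convolutional), and hypothetical names (`W_enc`, `W_dec`, `phase`) not taken from the paper; training (minimizing the magnitude reconstruction error) is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: a flattened "image" of D pixels, H hidden units, T iterations.
D, H, T = 16, 8, 5
x = rng.random(D)  # input magnitudes (the image), kept fixed

# Complex-valued weights (hypothetical dense stand-ins for the conv layers).
W_enc = (rng.standard_normal((H, D)) + 1j * rng.standard_normal((H, D))) / np.sqrt(D)
W_dec = (rng.standard_normal((D, H)) + 1j * rng.standard_normal((D, H))) / np.sqrt(H)

# Start from random phases; magnitudes are clamped to the image at every step.
phase = rng.uniform(0.0, 2.0 * np.pi, D)
for t in range(T):
    z_in = x * np.exp(1j * phase)   # complex input: clamped magnitude, current phase
    h = W_enc @ z_in                # complex-valued hidden activations
    z_out = W_dec @ h               # complex reconstruction
    # Training would minimize ||abs(z_out) - x||^2; here we only read out phases.
    phase = np.angle(z_out)         # feed output phases back as next input phases
```

After iteration, the per-pixel phases serve as the grouping signal: pixels whose phases have synchronized are assigned to the same object.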
04.12.2024 18:49 — 👍 0 🔁 0 💬 1 📌 0
Hidden units in such a system must activate not only on the presence of features (magnitudes) but also on their relative phases. Matrix-vector products between complex-valued weights and complex-valued activations are a natural way to implement this. 5/x
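A minimal sketch of why complex matrix-vector products give phase sensitivity: two inputs with identical magnitudes but different relative phases produce very different response magnitudes, via constructive vs. destructive interference. The unit and weight values here are illustrative, not from the paper.

```python
import numpy as np

# A single hidden unit with two complex weights of equal magnitude.
w = np.array([1.0 + 0.0j, 1.0 + 0.0j])

# Two inputs with identical magnitudes (both 1) but different relative phases.
aligned = np.array([np.exp(1j * 0.0), np.exp(1j * 0.0)])    # in phase
opposed = np.array([np.exp(1j * 0.0), np.exp(1j * np.pi)])  # out of phase

# The unit's response magnitude depends on phase agreement,
# not just on feature presence.
print(abs(w @ aligned))  # 2.0 (constructive interference)
print(abs(w @ opposed))  # ~0  (destructive interference)
```

A purely real-valued unit given only the magnitudes would respond identically to both inputs, which is exactly the ambiguity the complex weights resolve.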
04.12.2024 18:49 — 👍 0 🔁 0 💬 1 📌 0
This is a conceptual flaw in current synchrony models, all of which use feedforward convolutional nets, but we can solve it with iteration. Starting from random phases, hidden units compute phase updates that propagate local constraints until a stable configuration is reached. 4/x
04.12.2024 18:49 — 👍 0 🔁 0 💬 1 📌 0
Green and red circles highlight junctions that belong to the same and different objects, respectively. Using only local features, we cannot decide which junctions belong to which object, since the features are indistinguishable in the two cases. 3/x
04.12.2024 18:49 — 👍 0 🔁 0 💬 1 📌 0
We argue for the importance of iterative computation (recurrence) and complex-valued weights to achieve phase synchronization in activations. To build intuition, look at the three shapes (T, H, and overlapping squares) made of horizontal and vertical bars. 2/x
04.12.2024 18:49 — 👍 0 🔁 0 💬 1 📌 0
Excited to present "Recurrent Complex-Weighted Autoencoders for Unsupervised Object Discovery" at #NeurIPS2024! Poster #3707, 4:30pm on Thursday.
TL;DR: Our model, SynCx, greatly simplifies the inductive biases and training procedures of current state-of-the-art synchrony models. Thread 👇 1/x.
Env.reset()
17.11.2024 22:38 — 👍 2 🔁 0 💬 0 📌 0