Trying to train RNNs in a biologically plausible (local) way? Try our new method using predictive alignment. Paper just out in Nature Communications. Toshitake Asabuki deserves all the credit!
www.nature.com/articles/s41...
Excited to be presenting this work at ICML!
Our Bayesian method allows for causal discovery using more flexible assumptions that better reflect real-world data.
Come chat to us: Tues 4:30pm East Exhibition Hall A-B E-1912
Paper: arxiv.org/abs/2411.10154
(8/8) Check out the paper here: arxiv.org/abs/2402.09122
It was a pleasure working with James Odgers, Chrysoula Kappatou, Ruth Misener and Sarah Filippi on this project! If you are interested in knowing more, we'd love to hear from you.
(7/8) We demonstrate this approach on a synthetic test case, a spectroscopy dataset and oil flow data. Compared to baselines such as the inverse linear model of coregionalisation, classical least squares and partial least squares, WS-GPLVM achieves competitive or better performance.
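As a point of reference for the baselines mentioned, here is a minimal numpy sketch (not the paper's code) of classical least-squares unmixing, which assumes the pure spectra are fixed and known; all matrices here are synthetic stand-ins:

```python
import numpy as np

# Classical least-squares baseline: given fixed pure spectra S (k x c) and
# observed mixtures X (n x c), estimate weights W by solving X ~ W S.
rng = np.random.default_rng(2)
k, c, n = 3, 40, 10                       # components, channels, samples
S = rng.random((k, c))                    # assumed-known pure spectra
W_true = rng.dirichlet(np.ones(k), size=n)
X = W_true @ S                            # noiseless synthetic mixtures

W_hat = X @ np.linalg.pinv(S)             # least-squares weight estimate
```

With no noise and fixed pure spectra this recovers the weights exactly; the point of WS-GPLVM is precisely that this assumption breaks when the pure signals vary between samples.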
05.05.2025 16:05 — (6/8) This means not only do we get predictions of the component weights, but also a measure of uncertainty in these values. The Bayesian component weights and latent variables make the calculation of the evidence lower bound more challenging, and we show how this can be done.
(5/8) At the core of this approach is the idea that each pure signal depends on a latent variable, and these signals combine linearly. We also treat the component weights in a Bayesian way, allowing for the inclusion of useful priors, such as summing-to-one.
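A toy numpy sketch of this generative structure, with a Dirichlet draw standing in for the sum-to-one prior on weights and a simple parametric curve standing in for the GP-modelled pure signals (everything here is illustrative, not the actual model):

```python
import numpy as np

rng = np.random.default_rng(1)
n_samples, n_components, n_channels = 5, 3, 50

# Sum-to-one prior on component weights: one Dirichlet draw per sample.
weights = rng.dirichlet(np.ones(n_components), size=n_samples)

# Each pure signal depends on a per-sample latent variable z; here a toy
# stand-in for the GP: a peak whose location shifts with z.
z = rng.normal(size=n_samples)
grid = np.linspace(0.0, 1.0, n_channels)
centres = 0.3 + 0.1 * z[:, None, None] + 0.2 * np.arange(n_components)[None, :, None]
pure = np.exp(-((grid[None, None, :] - centres) ** 2) / 0.01)   # (n, k, c)

# Observations: linear mixture of the sample-specific pure signals.
X = np.einsum('nk,nkc->nc', weights, pure)
```

The key point the sketch mirrors: the pure signals are sample-specific (they depend on z), so the mixing weights alone do not determine the observation.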
(4/8) This variability makes the separation task much harder. Most existing methods assume fixed pure signals. We introduce WS-GPLVM, a Bayesian nonparametric model that relaxes those assumptions.
(3/8) Take spectroscopy, for example: following Beer-Lambert's law, the observed spectrum is a linear combination of the spectra of the pure components, but these pure component spectra vary depending on experimental conditions.
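A minimal numpy sketch of that linear combination, with made-up pure spectra and weight fractions just to show the shape of the problem:

```python
import numpy as np

# Beer-Lambert-style mixing: the observed spectrum is a weighted sum of
# pure-component spectra, with weights given by the component fractions.
rng = np.random.default_rng(0)

n_channels = 100                      # wavelengths measured
pure = rng.random((3, n_channels))    # 3 pure-component spectra (toy values)
weights = np.array([0.2, 0.5, 0.3])   # component fractions, summing to one

observed = weights @ pure             # one mixed spectrum of length 100
```

In practice the pure spectra themselves drift with experimental conditions, which is the complication the next post describes.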
(2/8) In many real-world datasets, each observation is a mixture of underlying signals, with no observations of the pure signals. Think: chemical spectra, audio sources, hyperspectral images.
But what happens when the pure signals vary between samples due to some unobserved variables?
Illustrative example of WS-GPLVM. This image shows how the model can retrieve the component mixtures, latent variables and pure spectra from a data set where some spectra have known weight fractions and some don't.
New paper at #AISTATS2025: Weighted-Sum of Gaussian Process Latent Variable Models (WS-GPLVM)
We tackle a core challenge in signal separation where pure components vary nonlinearly across samples using latent variable Gaussian processes. (1/8)