
Ali Behrouz

@alibehrouz.bsky.social

Intern @Google, Ph.D. student @Cornell_CS. Interested in machine learning, LLMs, the brain, and healthcare. abehrouz.github.io

636 Followers  |  524 Following  |  21 Posts  |  Joined: 19.11.2024

Latest posts by alibehrouz.bsky.social on Bluesky


Google's Titans: a new architecture with attention and a meta in-context memory that learns how to memorize at test time, as presented by one of the authors - @alibehrouz.bsky.social

13.01.2025 19:53 | 👍 70    🔁 18    💬 4    📌 5

I'll be at #NeurIPS2024 next week and would be happy to chat about deep learning architectures, sequence modeling, reasoning, neuroscience, world models, graphs, and almost anything :)

I'll also be presenting Chimera (a 2D SSM) on Wednesday at 4:30 PM.

05.12.2024 20:43 | 👍 2    🔁 0    💬 0    📌 0

Paper: arxiv.org/pdf/2411.15671

03.12.2024 22:05 | 👍 3    🔁 0    💬 0    📌 0

Are you interested in developing your own graph sequence model on top of a specific sequence model? Stay tuned for the implementation of our framework, which lets you experiment with any combination of sequence models and tokenizations. Here is a sample of our results:

03.12.2024 22:05 | 👍 3    🔁 0    💬 1    📌 0

What about hybrid models? We show that combining recurrent models with Transformers can make the model more efficient for some tasks:

03.12.2024 22:05 | 👍 1    🔁 0    💬 1    📌 0

Finally, building on HAC and what we learned from the theoretical results, we present GSM++, a model that uses HAC to tokenize the graph, a GCN to encode local neighborhoods, and a hybrid SSM+Transformer architecture to learn global dependencies.
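
A rough sketch of how these pieces could compose (all component names here are hypothetical, not the actual GSM++ implementation):

```python
import torch

def gsm_plus_plus_forward(x, edge_index, hac_order, gcn, ssm, transformer, head):
    """Hypothetical composition of the GSM++ recipe:
    HAC tokenization -> GCN local encoding -> SSM + Transformer global encoding."""
    order = hac_order(edge_index, num_nodes=x.size(0))  # (1) HAC-based node ordering
    h = gcn(x, edge_index)                              # (2) local neighborhood encoding
    seq = h[order].unsqueeze(0)                         # lay nodes out as one sequence
    seq = ssm(seq)                                      # (3a) cheap recurrent/SSM pass
    seq = transformer(seq)                              # (3b) attention pass on top
    out = torch.empty_like(h)
    out[order] = seq.squeeze(0)                         # map back to the original node order
    return head(out)
```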

03.12.2024 22:04 | 👍 0    🔁 0    💬 1    📌 0

Based on the linear sensitivity of linear RNNs, this means that with HAC we get high sensitivity to relevant nodes and low sensitivity to irrelevant nodes.

03.12.2024 22:04 | 👍 0    🔁 0    💬 1    📌 0

Interestingly, these results indicate that properly ordering nodes, and thereby giving up permutation equivariance, is not bad for some graph tasks. But how can we properly order nodes? We present a tokenization based on hierarchical affinity clustering (HAC) that places highly dependent nodes close to each other.
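
One way to compute such an ordering, as a sketch: agglomeratively cluster an assumed node-affinity matrix and read off the dendrogram's leaf order (the paper's HAC tokenizer may use a different affinity measure and linkage):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, leaves_list

def hac_node_order(affinity: np.ndarray) -> np.ndarray:
    """Order nodes so that highly dependent (high-affinity) pairs end up adjacent."""
    dist = affinity.max() - affinity                    # turn affinity into a distance
    condensed = dist[np.triu_indices_from(dist, k=1)]   # condensed upper-triangle distances
    Z = linkage(condensed, method="average")            # hierarchical (agglomerative) clustering
    return leaves_list(Z)                               # leaf order keeps similar nodes together

# Two tightly connected pairs of nodes end up in contiguous positions:
A = np.array([[1.0, 0.9, 0.1, 0.0],
              [0.9, 1.0, 0.0, 0.1],
              [0.1, 0.0, 1.0, 0.8],
              [0.0, 0.1, 0.8, 1.0]])
print(hac_node_order(A))  # e.g. [0 1 2 3]: nodes 0,1 adjacent and nodes 2,3 adjacent
```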

03.12.2024 22:04 | 👍 0    🔁 0    💬 1    📌 0

What about connectivity tasks? Attempts to avoid the quadratic cost by using recurrent models typically give up parameter-efficient solutions for connectivity. Again, with a proper ordering, recurrent models are efficient for these tasks! What is a proper ordering? The graph needs to have small node locality:

03.12.2024 22:03 | 👍 0    🔁 0    💬 1    📌 0

We motivate this by the sensitivity of SSMs, which is linear with respect to token distance: similar to causal Transformers, SSMs suffer from representational collapse as the number of layers increases. So how can we have a model with good sensitivity that is also robust to representational collapse?

03.12.2024 22:01 | 👍 0    🔁 0    💬 1    📌 0

This recurrent nature, however, is usually considered an undesirable property for graph tasks. We show that ordering nodes is not that bad for some graph tasks, and that with a proper ordering you can even improve performance.

03.12.2024 22:00 | 👍 0    🔁 0    💬 1    📌 0

This framework allows us to answer: What sequence model is right for your task? We find that the answer depends on the type of problem. For counting tasks, Transformers are unable to count without a proper positional encoding, but we show that this is a simple task for recurrent models!
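
A toy illustration of the recurrent side (not the paper's construction): a single recurrent state with an identity transition counts occurrences exactly, with parameters independent of sequence length.

```python
def recurrent_count(tokens, target):
    """1-D linear recurrence h_t = A*h_{t-1} + B*u_t with A = 1, B = 1,
    where u_t indicates whether token t matches the target. The final state is the count."""
    h = 0.0
    for x in tokens:
        h = 1.0 * h + (1.0 if x == target else 0.0)
    return h

print(recurrent_count(["a", "b", "a", "a", "c"], "a"))  # 3.0
```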

03.12.2024 21:59 | 👍 0    🔁 0    💬 1    📌 0

We present a simple framework that covers many existing graph sequence models via three simple steps: (1) tokenize the graph into a set of sequences, (2) a local encoding that encodes each token, and (3) a global encoding that uses a sequence model to learn long-range dependencies.
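
A minimal skeleton of this three-step recipe (component names are hypothetical; any tokenizer, local encoder, and sequence model can be plugged in):

```python
import torch.nn as nn

class GraphSequenceModel(nn.Module):
    """Sketch of the three-step GSM recipe, not a specific published implementation."""

    def __init__(self, tokenizer, local_encoder, global_encoder, head):
        super().__init__()
        self.tokenizer = tokenizer            # (1) graph -> node ordering / sequences
        self.local_encoder = local_encoder    # (2) e.g. a small GNN encoding each token
        self.global_encoder = global_encoder  # (3) any sequence model: RNN, SSM, Transformer
        self.head = head                      # task-specific readout

    def forward(self, x, edge_index):
        order = self.tokenizer(x, edge_index)             # (1) tokenization
        h = self.local_encoder(x, edge_index)             # (2) local encoding
        seq = self.global_encoder(h[order].unsqueeze(0))  # (3) global long-range encoding
        return self.head(seq.squeeze(0))
```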

03.12.2024 21:57 | 👍 0    🔁 0    💬 1    📌 0

🌟 Best of Both Worlds!

❓ Have you ever wondered why hybrid models (RNNs + Transformers) are so powerful? We answer this through the lens of circuit complexity and graphs!

Excited to share our work on understanding Graph Sequence Models (GSMs), a framework that allows the use of any sequence model for graphs.

03.12.2024 21:57 | 👍 7    🔁 1    💬 1    📌 0
Preview: "Chimera: Effectively Modeling Multivariate Time Series with..." Modeling multivariate time series is a well-established problem with a wide range of applications from healthcare to financial markets. It, however, is challenging as it requires methods to (1)...

The main focus of this paper was on time series data, but the 2D inductive bias in our selective 2D SSM is potentially useful for images, videos, and speech, avoiding the need for different scan orders.

Paper: openreview.net/forum?id=ncY...

20.11.2024 01:49 | 👍 1    🔁 0    💬 0    📌 0

We show the importance of data dependency in Chimera with a case study on image classification based on a subject's brain responses. We further show that the selection mechanism of the S6 block in Mamba also appears in Chimera, but along both the time and variate dimensions: (7/8)

20.11.2024 01:47 | 👍 1    🔁 0    💬 1    📌 0

Using this 2D SSM, we present Chimera, a three-headed architecture capable of learning both long-term progression and seasonal patterns by using different discretization processes: (6/8)

20.11.2024 01:46 | 👍 1    🔁 0    💬 1    📌 0

We further discuss how S4ND-like extensions of Mamba and Mamba-2 are special cases of our 2D SSMs when the transition matrices are restricted: (5/8)

20.11.2024 01:46 | 👍 1    🔁 0    💬 1    📌 0

Similar to S6, to enhance the power of the 2D SSM we let its parameters be functions of the input, which means the convolutional form is lost. We show that this 2D recurrence can still be computed with a parallel 2D scan based on a new associative operator, resulting in fast, parallelizable training.
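
For intuition, this is the standard associative operator that makes a 1D input-dependent linear recurrence h_t = a_t * h_{t-1} + b_t parallelizable; Chimera's 2D operator is new and defined in the paper, so treat this only as a sketch of the underlying idea:

```python
from functools import reduce

def combine(left, right):
    """Associative operator: composing (a1, b1) then (a2, b2) gives (a2*a1, a2*b1 + b2)."""
    a1, b1 = left
    a2, b2 = right
    return (a2 * a1, a2 * b1 + b2)

# Because `combine` is associative, this sequential fold can be replaced by a
# parallel (Blelloch-style) scan, which is what gives fast parallelizable training.
elems = [(0.9, 1.0), (0.5, 2.0), (1.1, -0.5), (0.7, 0.3)]  # per-step (a_t, b_t)
_, h = reduce(combine, elems)  # h is the final state starting from h_0 = 0
print(h)                       # 1.875, identical to running the recurrence step by step
```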

20.11.2024 01:45 | 👍 1    🔁 0    💬 1    📌 0

Intuitively, the first and second hidden states carry information along the 'time' and 'variate' dimensions. A_2 and A_3 control how much information from the other dimension should be combined, providing full control over the flow of information from other variates when they are informative. (3/8)

20.11.2024 01:44 | 👍 1    🔁 0    💬 1    📌 0

2D SSMs are based on a linear partial differential equation (PDE) with two variables (here, time and the time series' variates). We discretize the PDE using zero-order hold (ZOH), resulting in a 2D recurrence formula: (2/8)
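
As a rough illustration of the shape such a ZOH-discretized 2D recurrence takes (a Roesser-style form with one hidden state per axis, not necessarily the paper's exact parameterization):

h^{(1)}_{t,v} = A_1 h^{(1)}_{t-1,v} + A_2 h^{(2)}_{t,v-1} + B_1 x_{t,v}
h^{(2)}_{t,v} = A_3 h^{(1)}_{t-1,v} + A_4 h^{(2)}_{t,v-1} + B_2 x_{t,v}
y_{t,v} = C_1 h^{(1)}_{t,v} + C_2 h^{(2)}_{t,v}

where t indexes time and v indexes variates, so h^{(1)} propagates information along time and h^{(2)} along the variate axis.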

20.11.2024 01:44 | 👍 1    🔁 0    💬 1    📌 0

Are univariate SSMs effective when there are 2D dependencies?

✨ In our #NeurIPS paper we show how to effectively model multivariate time series with input-dependent (selective) 2-dimensional state space models, with fast training via a 2D parallel scan. (1/8)

20.11.2024 01:43 | 👍 10    🔁 0    💬 1    📌 0
