Paul Soulos

@paulsoulos.bsky.social

Computational Cognitive Science @JhuCogsci researching neurosymbolic methods. Previously wearable engineering @fitbit and @Google.

1,096 Followers  |  383 Following  |  21 Posts  |  Joined: 25.09.2023

Latest posts by paulsoulos.bsky.social on Bluesky

While both robotics and LMs can be cast as next-token prediction, the token distribution for computer agents looks more like abstract motor programs (robotics) than like language. This puts computer use on the trajectory of robotics, which has progressed more slowly than LLMs. 2/2

30.05.2025 15:21 — 👍 2    🔁 0    💬 0    📌 0

Intriguing prediction from Trenton Bricken & @sholto-douglas.bsky.social on @dwarkesh.bsky.social's podcast: computer-use agents "solved" in ~10 months 🖱️⌨️. This feels highly optimistic. I think that computer use is closer to robotics than language modeling. 1/2

30.05.2025 15:21 — 👍 1    🔁 0    💬 1    📌 0

I'm presenting this work at 11a PT today in East Exhibit Hall at poster #4009. Come by and chat!

11.12.2024 18:01 — 👍 1    🔁 0    💬 0    📌 0
Compositional Generalization Across Distributional Shifts with... Neural networks continue to struggle with compositional generalization, and this issue is exacerbated by a lack of massive pre-training. One successful approach for developing neural systems which...

📜 Check out our paper for all of the details and results. openreview.net/forum?id=fOQ....

09.12.2024 15:06 — 👍 0    🔁 0    💬 0    📌 0

📅 You can find me at the following presentations:

- Poster Session 1 East #4009 on Wednesday, December 11, from 11a-2p PST.
- System 2 Reasoning Workshop Spotlight Oral Talk on Sunday, December 15, from 9:30-10a PST.
- System 2 Reasoning Workshop poster sessions on Sunday, December 15.

09.12.2024 15:06 — 👍 0    🔁 0    💬 1    📌 0

📈 DTM and sDTM operate on trees, and we introduce a very simple, dataset-independent method to embed sequence inputs and outputs as trees. Across a variety of datasets and test-time distributional shifts, sDTM outperforms fully neural and hybrid neurosymbolic models.

09.12.2024 15:06 — 👍 0    🔁 0    💬 1    📌 0
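One simple, dataset-independent way to embed a sequence as a tree is a right-branching cons-list encoding. This is only an illustrative sketch under that assumption, not necessarily the exact scheme used in the paper:

```python
# Illustrative sketch: embed a flat token sequence as a right-branching
# binary tree (a cons-list), one dataset-independent option. Leaves hold
# tokens; internal nodes are unlabeled pairs.

def sequence_to_tree(tokens):
    """Encode [a, b, c] as (a, (b, c))."""
    if len(tokens) == 1:
        return tokens[0]
    return (tokens[0], sequence_to_tree(tokens[1:]))

def tree_to_sequence(tree):
    """Invert the embedding by walking down the right spine."""
    if not isinstance(tree, tuple):
        return [tree]
    head, tail = tree
    return [head] + tree_to_sequence(tail)

tree = sequence_to_tree(["jump", "twice", "and", "walk"])
assert tree == ("jump", ("twice", ("and", "walk")))
assert tree_to_sequence(tree) == ["jump", "twice", "and", "walk"]
```

Because the encoding is invertible and makes no reference to the dataset's grammar, the same tree machine can consume and emit plain sequences.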

🌳 We introduce the Sparse Differentiable Tree Machine (sDTM), an extension of the DTM that introduces a new way to represent trees in vector space. Sparse Coordinate Trees (SCT) reduce the parameter count and memory usage over the previous DTM by an order of magnitude and lead to a 30x speedup!

09.12.2024 15:06 — 👍 0    🔁 0    💬 1    📌 0

Our previous work introducing the Differentiable Tree Machine (DTM) is an example of a unified neurosymbolic system where trees are represented and operated over in vector space.

09.12.2024 15:06 — 👍 0    🔁 0    💬 1    📌 0

Hybrid systems use neural networks to parameterize symbolic components and can struggle with the same pitfalls as fully symbolic systems. In unified neurosymbolic systems, operations can simultaneously be viewed as either neural or symbolic, which provides a fully neural path through the network.

09.12.2024 15:06 — 👍 0    🔁 0    💬 1    📌 0

🧠 Neural networks struggle with compositionality, and symbolic methods struggle with flexibility and scalability. Neurosymbolic methods promise to combine the benefits of both, but there is a distinction between *hybrid* neurosymbolic methods and *unified* neurosymbolic methods.

09.12.2024 15:06 — 👍 0    🔁 0    💬 1    📌 0

🚨 Thrilled to share that Compositional Generalization Across Distributional Shifts with Sparse Tree Operations received a spotlight award at #NeurIPS2024! 🌟 I'll present a poster on Tuesday and give an invited lightning talk at the System 2 Reasoning Workshop on Sunday. 🧵👇

09.12.2024 15:06 — 👍 10    🔁 4    💬 1    📌 1

Hi Melanie, I'll be there presenting some neurosymbolic work at the main conference and the System 2 workshop, as well as some other early work on Transformers and computational linguistics at the System 2 workshop!

openreview.net/forum?id=fOQ...

openreview.net/forum?id=6Pj...

01.12.2024 14:44 — 👍 0    🔁 0    💬 1    📌 0

Applied AGI scientist is a wild job title, considering people have no idea how to even define AGI, let alone what we should apply to create it.

18.11.2024 11:27 — 👍 5    🔁 0    💬 0    📌 0
OpenAI and others seek new path to smarter AI as current methods hit limitations Artificial intelligence companies like OpenAI are seeking to overcome unexpected delays and challenges in the pursuit of ever-bigger large language models by developing training techniques that use more human-like ways for algorithms to "think".

An important distinction that Sutskever makes in this article is that scale is not dead, but "Scaling the right thing matters more now than ever." Vector symbolic architectures are a promising direction to scale symbolic methods in a fully differentiable manner.

www.reuters.com/technology/a...

13.11.2024 08:56 — 👍 4    🔁 0    💬 0    📌 0

Okay, the people requested one, so here is an attempt at a Computational Cognitive Science starter pack -- with apologies to everyone I've missed! LMK if there's anyone I should add!

go.bsky.app/KDTg6pv

11.11.2024 17:27 — 👍 223    🔁 92    💬 71    📌 3
Toward Compositional Behavior in Neural Models: A Survey of Current Views Kate McCurdy, Paul Soulos, Paul Smolensky, Roland Fernandez, Jianfeng Gao. Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing. 2024.

Want to learn more? Speak to the wonderful Kate McCurdy at #EMNLP2024 poster session 2, poster 1053, tomorrow from 11-12:30p ET.

Check out the paper if you want the full details! aclanthology.org/2024.emnlp-m...

11.11.2024 20:40 — 👍 1    🔁 0    💬 0    📌 0

Researchers are split on HOW to achieve compositional behavior. Some propose data interventions, others argue we need entirely new model architectures, and some suggest we need to integrate symbolic paradigms.

11.11.2024 20:40 — 👍 0    🔁 0    💬 1    📌 0

Key finding: ~75% of researchers agree that CURRENT neural models do NOT demonstrate true compositional behavior. Scale alone won't solve this: we need fundamental breakthroughs.

11.11.2024 20:40 — 👍 0    🔁 0    💬 1    📌 0

We surveyed 79 top AI researchers about compositional behavior. Our goal? Map out the field's consensus and disagreements on how neural models process language to illuminate promising paths forward. Inspired by Dennett's logical geography, we cluster participants by responses 🗺️

11.11.2024 20:40 — 👍 0    🔁 0    💬 1    📌 0
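Clustering participants by their survey responses can be sketched with a plain k-means over response vectors. This is only an illustrative toy, assuming made-up Likert-style data; it is not the paper's actual clustering method or dataset:

```python
# Toy sketch: group survey participants by 2-means clustering of their
# response vectors. The responses below are invented for illustration.
import random

def kmeans(points, k=2, iters=20, seed=0):
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        # Assign each point to its nearest center (squared Euclidean).
        clusters = [[] for _ in range(k)]
        for p in points:
            dists = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centers]
            clusters[dists.index(min(dists))].append(p)
        # Recompute each center as the coordinate-wise mean of its cluster.
        centers = [
            tuple(sum(col) / len(col) for col in zip(*cl)) if cl else centers[i]
            for i, cl in enumerate(clusters)
        ]
    return clusters

# Each row: one participant's agreement ratings (1-5) on three claims.
responses = [(5, 5, 1), (4, 5, 2), (1, 2, 5), (2, 1, 4)]
clusters = kmeans(responses, k=2)
assert sorted(map(len, clusters)) == [2, 2]
```

With clearly polarized responses like these, the two recovered clusters correspond to the two camps of respondents, which is the spirit of mapping a "logical geography" of positions.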

Compositionality is fundamental to language: the ability to understand complex expressions by combining simpler parts. But do current AI models REALLY understand this? Spoiler: Most researchers say NO.

11.11.2024 20:40 — 👍 0    🔁 0    💬 1    📌 0

I'm excited to share our survey investigating the current challenges and debates around achieving compositional behavior (CB) in language models, to be presented at #EMNLP2024! What makes language understanding truly intelligent? A thread unpacking our latest research 🤖📊🧵

11.11.2024 20:40 — 👍 1    🔁 0    💬 1    📌 0

Besides being ergonomically beneficial, a split keyboard can prevent this from happening!

19.09.2024 07:46 — 👍 0    🔁 0    💬 1    📌 0
