
Andrei Manolache

@amanolache.bsky.social

ELLIS PhD Student @ IMPRS-IS/Uni Stuttgart; ML Research @ Bitdefender | https://andreimano.github.io/

253 Followers  |  186 Following  |  11 Posts  |  Joined: 12.11.2024

Latest posts by amanolache.bsky.social on Bluesky


EEML'25, our yearly machine learning summer school event, will be organised next summer in the beautiful city of Sarajevo - the place where East meets West 🇧🇦🇧🇦🇧🇦.

More details coming soon; please see the link in the thread!

15.12.2024 18:22 — 👍 18    🔁 6    💬 1    📌 1

Catch my poster tomorrow at the NeurIPS MLSB Workshop! We present a simple (yet effective) multimodal Transformer for molecules, supporting multiple 3D conformations & showing promise for transfer learning.

Interested in molecular representation learning? Let's chat 👋!

15.12.2024 00:31 — 👍 10    🔁 2    💬 0    📌 0

* in the afternoon session 😅

11.12.2024 22:27 — 👍 0    🔁 0    💬 0    📌 0

Happening today, East Exhibit Hall A-C, poster #3110. Come say "Hi!" 👋

11.12.2024 18:21 — 👍 1    🔁 0    💬 1    📌 0

6/6 Interested in learning more? Check out our preprint here: arxiv.org/pdf/2405.17311.
If you'd like to discuss, I'd be very happy to chat during the poster session in Vancouver! :)

07.12.2024 17:52 — 👍 1    🔁 0    💬 2    📌 0

5/6 How it works: Probabilistic sampling connects original nodes to virtual ones, enhancing connectivity without explicit pairwise computations. The result is a framework that achieves both higher WL expressiveness and efficiency in graph-based learning.

07.12.2024 17:52 — 👍 0    🔁 0    💬 1    📌 0
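(A rough illustration of the idea in the 5/6 post, not the authors' code: the sketch below learns per-node probabilities over a small pool of virtual nodes, samples a hard assignment, and exchanges messages node → virtual → node in plain PyTorch. The class and parameter names are made up, and the straight-through Gumbel-softmax is a stand-in assumption for the paper's actual gradient estimator.)

# Hedged sketch: one round of probabilistic rewiring through virtual nodes.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VirtualNodeRewiring(nn.Module):
    def __init__(self, dim: int, n_virtual: int = 4):
        super().__init__()
        self.assign = nn.Linear(dim, n_virtual)   # per-node logits over virtual nodes
        self.up = nn.Linear(dim, dim)             # node -> virtual message
        self.down = nn.Linear(dim, dim)           # virtual -> node message

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: [num_nodes, dim] node features of a single graph
        logits = self.assign(x)                                  # [n, V]
        # hard sample; gradients flow via straight-through Gumbel-softmax
        a = F.gumbel_softmax(logits, tau=1.0, hard=True)         # [n, V], one-hot rows
        # each virtual node averages the features of the nodes assigned to it
        counts = a.sum(dim=0).clamp(min=1.0)                     # [V]
        virtual = (a.t() @ self.up(x)) / counts.unsqueeze(-1)    # [V, dim]
        # every node reads back from its sampled virtual node
        return x + a @ self.down(virtual)                        # [n, dim]

x = torch.randn(7, 16)                  # toy graph with 7 nodes
layer = VirtualNodeRewiring(dim=16)
print(layer(x).shape)                   # torch.Size([7, 16])

Because the assignment is sampled rather than fixed by a heuristic, the extra connectivity can adapt to the task, which is the flexibility the thread emphasizes.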

4/6 We demonstrate SOTA results on various benchmarks, effectively addressing over-squashing and under-reaching. IPR-MPNNs also surpass standard MPNNs in expressiveness, distinguishing complex graph structures, all while being faster and more memory-efficient than GTs. 🚀

07.12.2024 17:51 — 👍 0    🔁 0    💬 1    📌 0

3/6 Enter IPR-MPNNs: Our approach learns to rewire graphs probabilistically by adding virtual nodes. This eliminates the need for heuristics, making the method more flexible and task-adaptive, while maintaining computational efficiency. 🎯

07.12.2024 17:51 — 👍 1    🔁 0    💬 1    📌 0

2/6 Standard MPNNs struggle with long-range interactions, making them less effective for large, complex graphs. Transformers help but come with quadratic complexity, which is computationally expensive. Rewiring heuristics? Often brittle and task-specific.

07.12.2024 17:50 — 👍 0    🔁 0    💬 1    📌 0
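(A back-of-the-envelope illustration of the complexity point in the 2/6 post, with made-up numbers rather than figures from the paper: on a graph with n = 10,000 nodes, full self-attention scores n² = 100,000,000 node pairs per layer, whereas linking every node to a small pool of, say, k = 4 virtual nodes adds only n·k = 40,000 extra edges, so message passing over the rewired graph stays linear in n.)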

1/6 We're excited to share our #NeurIPS2024 paper: Probabilistic Graph Rewiring via Virtual Nodes! It addresses key challenges in GNNs, such as over-squashing and under-reaching, while reducing reliance on heuristic rewiring. w/ Chendi Qian, @christophermorris.bsky.social @mniepert.bsky.social 🧵

07.12.2024 17:50 — 👍 30    🔁 7    💬 1    📌 0

Genuine question: why do captions go above tables? I've always assumed it's to make tables visually distinct from figures, but it seems to be just a convention.

29.11.2024 01:20 — 👍 2    🔁 0    💬 1    📌 1

👀👋

18.11.2024 15:35 — 👍 1    🔁 0    💬 1    📌 0
