
Ece Takmaz

@ecekt.bsky.social

Postdoc at Utrecht University, previously PhD candidate at the University of Amsterdam. Multimodal NLP, Vision and Language, Cognitively Inspired NLP. https://ecekt.github.io/

599 Followers  |  496 Following  |  45 Posts  |  Joined: 10.11.2024

Latest posts by ecekt.bsky.social on Bluesky


The submission deadline for CMCL is coming up in less than a month! (Feb 25) CMCL will be co-located with LREC and take place on May 16! 🌴 https://sites.google.com/view/cmclworkshop/cfp

28.01.2026 19:34 — 👍 3    🔁 2    💬 0    📌 0

I had so many inspiring conversations with lovely colleagues and I am already looking forward to visiting again in the future! Many thanks to @simeonjunker.bsky.social, @bbunzeck.bsky.social, @manarali.bsky.social, @hbuschme.bsky.social, Clara Lachenmaier, Lisa Gottschalk, Emilie Sitter, Yu Wang ✨

18.01.2026 14:43 — 👍 2    🔁 0    💬 0    📌 0

I have just returned from a week-long visit to Bielefeld University! Thank you very much for hosting me Sina Zarrieß and @ozgealacam.bsky.social 😊 @clausebielefeld.bsky.social

18.01.2026 14:43 — 👍 8    🔁 2    💬 1    📌 0

This week we’re having @ecekt.bsky.social as our guest in Bielefeld. She gave a highly timely talk on language+vision models, how they process images under noisy conditions, and how to train a highly effective multimodal BabyLM with model merging. 🗣️👀💻

13.01.2026 10:42 — 👍 12    🔁 1    💬 0    📌 1

Photos from the Computational Psycholinguistics Meeting in Utrecht, many thanks to everyone who joined us in making this a memorable event! ✨

21.12.2025 19:47 — 👍 6    🔁 0    💬 1    📌 0

The CfP for CMCL is out!🌴 We are looking forward to receiving many interesting submissions! ✨ (Deadline: February 25, 2026) sites.google.com/view/cmclwor...

15.12.2025 09:20 — 👍 7    🔁 2    💬 0    📌 1

which song did they use? money for nothing?

09.12.2025 12:49 — 👍 2    🔁 0    💬 0    📌 0

Many thanks to @dnliu.bsky.social for inviting me, and to the members of the group for their insightful questions! 😊✨

09.12.2025 12:40 — 👍 3    🔁 0    💬 0    📌 0

The program of the Computational Psycholinguistics Meeting 2025 at Utrecht University is out, packed with many interesting talks! Registration is full, but there is a waiting list if you would like to attend. ✨ cpl2025.sites.uu.nl/schedule/

04.12.2025 10:20 — 👍 0    🔁 0    💬 0    📌 0

The Cognitive Modeling and Computational Linguistics (CMCL) workshop will be co-located with LREC 2026 in Palma, Mallorca! 🌴 Stay tuned for more details! ✨
@byungdoh.bsky.social Tatsuki Kuribayashi @grambelli.bsky.social Philipp Wicke, Jixing Li, Ryo Yoshida @cmclworkshop.bsky.social

01.12.2025 10:01 — 👍 13    🔁 4    💬 0    📌 2

I was in Sweden this week! 🇸🇪❄️ Many thanks to Nikolai Ilinykh for inviting me to give a talk at the University of Gothenburg. I enjoyed having inspiring chats and delicious food with Sharid Loáiciga, @asayeed.bsky.social, Simon Dobnik, Hyewon Jang and Chris Howes at CLASP. Much appreciated! 😊🎄

22.11.2025 10:27 — 👍 7    🔁 1    💬 0    📌 0
Model Merging to Maintain Language-Only Performance in Developmentally Plausible Multimodal Models Ece Takmaz, Lisa Bylinina, Jakub Dotlacil. Proceedings of the First BabyLM Workshop. 2025.

I hope our findings will be helpful to future contributors to the multimodal track of the BabyLM challenge! aclanthology.org/2025.babylm-...

01.11.2025 15:52 — 👍 0    🔁 0    💬 0    📌 0

Instead of using the data provided in the BabyLM challenge, I opted to obtain it from its original sources, which added extra layers of filtering and complexity and revealed some discrepancies in the multimodal BabyLM data. I mention these in the paper.

01.11.2025 15:52 — 👍 1    🔁 0    💬 1    📌 0

Unfortunately, we had limited time and resources to modify the whole evaluation pipeline for our specific multimodal architecture. As a result, we tested our models on a subset of the benchmarks.

01.11.2025 15:52 — 👍 0    🔁 0    💬 1    📌 0

The report on the Findings of the Third BabyLM Challenge indicates that the multimodal track received only one full submission this year. We submitted our paper to the workshop track instead of the challenge track.

01.11.2025 15:52 — 👍 0    🔁 0    💬 1    📌 0

We experiment with weighted linear interpolation of language-only and multimodal model weights. Model merging with language-only checkpoints helps alleviate the issue to some extent, benefiting performance on language-only benchmarks without heavily disrupting accuracy on multimodal tasks.

01.11.2025 15:52 — 👍 0    🔁 0    💬 1    📌 0
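The weighted linear interpolation mentioned above can be sketched roughly as follows. This is an illustrative toy, not the paper's actual code: the function name, the toy one-parameter "state dicts", and the `alpha` value are all made up for the example.

```python
# Illustrative sketch of merging two checkpoints by weighted linear
# interpolation of their parameters. Toy scalars stand in for real weight
# tensors; the alpha value is a made-up example, not the one from the paper.

def merge_state_dicts(lang_only, multimodal, alpha=0.5):
    """Interpolate parameter-wise: alpha * lang_only + (1 - alpha) * multimodal.

    Assumes both state dicts share the same keys and shapes, which holds
    when the multimodal model was initialized from the language-only one.
    """
    return {
        name: alpha * lang_only[name] + (1 - alpha) * multimodal[name]
        for name in lang_only
    }

# Toy example with scalars standing in for weight tensors:
merged = merge_state_dicts({"w": 1.0}, {"w": 3.0}, alpha=0.25)
print(merged["w"])  # 0.25 * 1.0 + 0.75 * 3.0 = 2.5
```

With real PyTorch models, the same interpolation would be applied over the tensors in `model.state_dict()` and the merged result loaded back with `load_state_dict`.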

How can we mitigate this issue in developmentally plausible multimodal models and maintain language-only performance? We explored model merging, a technique that has been shown to benefit multi-task and multi-language models, reducing the effects of catastrophic forgetting.

01.11.2025 15:52 — 👍 0    🔁 0    💬 1    📌 0

Our multimodal BabyLM model surpasses previous multimodal baselines and submissions on the leaderboard. Yet, compared to language-only models, it underperforms on grammar-oriented benchmarks, despite being exposed to the same language-only data as the language-only models (plus multimodal data).

01.11.2025 15:52 — 👍 0    🔁 0    💬 1    📌 0

Previous work, including BabyLM contributions, indicates that multimodal data has limited or no benefits in text-only benchmarks. We reach similar conclusions in our low-resource multimodal scenario.

01.11.2025 15:52 — 👍 0    🔁 0    💬 1    📌 0

I will be attending EMNLP in China to present our paper with @bylinina.bsky.social (who will be in China, too) and Jakub Dotlacil in the BabyLM workshop! Looking forward to meeting people there! ✨ 😊 #EMNLP2025 @emnlpmeeting.bsky.social

lnkd.in/e-Bzz6De

01.11.2025 15:52 — 👍 12    🔁 3    💬 1    📌 0
Traces of Image Memorability in Vision Encoders: Activations, Attention Distributions and Autoencoder Losses Images vary in how memorable they are to humans. Inspired by findings from cognitive science and computer vision, this paper explores the correlates of image memorability in pretrained vision encoders...

I felt very much at home at #ICCV2025! Here is the paper: arxiv.org/abs/2509.01453

27.10.2025 21:13 — 👍 1    🔁 0    💬 0    📌 0

Just got back from Hawaii, where I presented a workshop paper on image memorability at @iccv.bsky.social 🌺 Coming from multimodal NLP, it was my first time attending a CV conference. Everywhere I looked, there were talks and posters that were incredibly interesting!

27.10.2025 21:13 — 👍 4    🔁 0    💬 1    📌 0

🌍Introducing BabyBabelLM: A Multilingual Benchmark of Developmentally Plausible Training Data!

LLMs learn from vastly more data than humans ever experience. BabyLM challenges this paradigm by focusing on developmentally plausible data

We extend this effort to 45 new languages!

15.10.2025 10:53 — 👍 44    🔁 16    💬 1    📌 4

I will be presenting this work at the @iccv.bsky.social 2025 workshop MemVis: The 1st Workshop on Memory and Vision! 🌺 Work done with Albert Gatt & Jakub Dotlacil arxiv.org/abs/2509.01453

15.10.2025 09:10 — 👍 0    🔁 0    💬 0    📌 0

What makes an image memorable? And can we predict image memorability using pretrained vision encoders? We explored activations, attention distributions, image patch uniformity and sparse autoencoder losses using image representations across the layers of CLIP, DINOv2 and SigLIP2.

15.10.2025 09:10 — 👍 11    🔁 4    💬 1    📌 0

taking the NS train, I do that multiple times a week :)

13.10.2025 18:24 — 👍 1    🔁 0    💬 1    📌 0
Release of the massive HPLT v3.0 multilingual dataset - Corpora - ELRA lists

Could it be the HPLT v3.0 multilingual dataset? list.elra.info/mailman3/hyp...

08.10.2025 11:04 — 👍 1    🔁 0    💬 1    📌 0
Preview
Postdoctoral Researcher in Memory access in language Help uncover how memory shapes language use. As a postdoctoral researcher at the Institute for Language Sciences, you will join the ERC-funded MEMLANG project.

Hi all, there is a postdoc position open in the group I'm currently based in! ✨ Let me know if you are interested or have questions. 🙂 Please share if you know someone who might be interested: www.uu.nl/en/organisat...

08.10.2025 07:55 — 👍 3    🔁 3    💬 0    📌 0
