Ece Takmaz

@ecekt.bsky.social

Postdoc at Utrecht University; previously PhD candidate at the University of Amsterdam. Multimodal NLP, Vision and Language, Cognitively Inspired NLP. https://ecekt.github.io/

584 Followers  |  486 Following  |  38 Posts  |  Joined: 10.11.2024

Latest posts by ecekt.bsky.social on Bluesky

Which song did they use? Money for Nothing?

09.12.2025 12:49 — 👍 2    🔁 0    💬 0    📌 0

Many thanks to @dnliu.bsky.social for inviting me, and to the members of the group for their insightful questions! 😊✨

09.12.2025 12:40 — 👍 1    🔁 0    💬 0    📌 0

The program of the Computational Psycholinguistics Meeting 2025 at Utrecht University is out, packed with very interesting talks! Registration is full, but there is a waiting list if you would like to attend ✨ cpl2025.sites.uu.nl/schedule/

04.12.2025 10:20 — 👍 0    🔁 0    💬 0    📌 0

The Cognitive Modeling and Computational Linguistics (CMCL) workshop will be co-located with LREC 2026 in Palma, Mallorca! 🌴 Stay tuned for more details! ✨
@byungdoh.bsky.social Tatsuki Kuribayashi @grambelli.bsky.social Philipp Wicke, Jixing Li, Ryo Yoshida @cmclworkshop.bsky.social

01.12.2025 10:01 — 👍 12    🔁 4    💬 0    📌 0

I was in Sweden this week! 🇸🇪❄️ Many thanks to Nikolai Ilinykh for inviting me to give a talk at the University of Gothenburg. I enjoyed having inspiring chats and delicious food with Sharid Loáiciga, @asayeed.bsky.social, Simon Dobnik, Hyewon Jang and Chris Howes at CLASP. Much appreciated! 😊🎄

22.11.2025 10:27 — 👍 7    🔁 1    💬 0    📌 0
Model Merging to Maintain Language-Only Performance in Developmentally Plausible Multimodal Models Ece Takmaz, Lisa Bylinina, Jakub Dotlacil. Proceedings of the First BabyLM Workshop. 2025.

I hope our findings will be helpful to future contributors to the multimodal track of the BabyLM challenge! aclanthology.org/2025.babylm-...

01.11.2025 15:52 — 👍 0    🔁 0    💬 0    📌 0

Instead of using the data provided in the BabyLM challenge, I opted to obtain them from their original sources, which added extra layers of filtering and complexity and revealed some discrepancies in the multimodal BabyLM data. I discuss these in the paper.

01.11.2025 15:52 — 👍 1    🔁 0    💬 1    📌 0

Unfortunately, we had limited time and resources to modify the whole evaluation pipeline for our specific multimodal architecture. As a result, we tested our models on a subset of the benchmarks.

01.11.2025 15:52 — 👍 0    🔁 0    💬 1    📌 0

The report on the Findings of the Third BabyLM Challenge indicates that the multimodal track received only one full submission this year. We submitted our paper to the workshop track instead of the challenge.

01.11.2025 15:52 — 👍 0    🔁 0    💬 1    📌 0

We experiment with weighted linear interpolation of language-only and multimodal model weights. Merging with language-only checkpoints alleviates the issue to some extent, improving performance on language-only benchmarks without heavily disrupting accuracy on multimodal tasks.

01.11.2025 15:52 — 👍 0    🔁 0    💬 1    📌 0
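The merging recipe described in this thread, weighted linear interpolation of checkpoint weights, can be sketched in a few lines. This is a minimal illustration with plain Python floats standing in for weight tensors; the parameter names and the alpha value are assumptions, not the configuration used in the paper.

```python
def merge_state_dicts(multimodal, language_only, alpha=0.5):
    """Weighted linear interpolation of two checkpoints that share the same
    parameter names: alpha * multimodal + (1 - alpha) * language-only."""
    assert multimodal.keys() == language_only.keys()
    return {
        name: alpha * multimodal[name] + (1 - alpha) * language_only[name]
        for name in multimodal
    }

# Toy "checkpoints": scalars stand in for weight tensors,
# and the parameter names are purely illustrative.
mm = {"embed.weight": 1.0, "lm_head.weight": 3.0}
lo = {"embed.weight": 3.0, "lm_head.weight": 1.0}

merged = merge_state_dicts(mm, lo, alpha=0.25)
# For "embed.weight": 0.25 * 1.0 + 0.75 * 3.0 == 2.5
```

With real models the same interpolation would be applied tensor-wise to the checkpoints' state dicts; sweeping alpha trades off language-only against multimodal performance.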

How can we mitigate this issue in developmentally plausible multimodal models and maintain language-only performance? We explored model merging, a technique that has been shown to benefit multi-task and multilingual models, reducing the effects of catastrophic forgetting.

01.11.2025 15:52 — 👍 0    🔁 0    💬 1    📌 0

Our multimodal BabyLM model surpasses previous multimodal baselines and submissions on the leaderboard. Yet, compared to language-only models, it underperforms on grammar-oriented benchmarks, despite being exposed to the same language-only data as the language-only models (plus multimodal data).

01.11.2025 15:52 — 👍 0    🔁 0    💬 1    📌 0

Previous work, including BabyLM contributions, indicates that multimodal data has limited or no benefit on text-only benchmarks. We reach similar conclusions in our low-resource multimodal scenario.

01.11.2025 15:52 — 👍 0    🔁 0    💬 1    📌 0

I will be attending EMNLP in China to present our paper with @bylinina.bsky.social (who will be in China, too) and Jakub Dotlacil at the BabyLM workshop! Looking forward to meeting people there! ✨😊 #EMNLP2025 @emnlpmeeting.bsky.social

lnkd.in/e-Bzz6De

01.11.2025 15:52 — 👍 12    🔁 3    💬 1    📌 0
Traces of Image Memorability in Vision Encoders: Activations, Attention Distributions and Autoencoder Losses Images vary in how memorable they are to humans. Inspired by findings from cognitive science and computer vision, this paper explores the correlates of image memorability in pretrained vision encoders...

I felt very much at home at #ICCV2025! Here is the paper: arxiv.org/abs/2509.01453

27.10.2025 21:13 — 👍 1    🔁 0    💬 0    📌 0

Just got back from Hawaii, where I presented a workshop paper on image memorability at @iccv.bsky.social 🌺 Coming from multimodal NLP, it was my first time attending a CV conference. Everywhere I looked, there were talks and posters that were incredibly interesting!

27.10.2025 21:13 — 👍 4    🔁 0    💬 1    📌 0

🌍 Introducing BabyBabelLM: A Multilingual Benchmark of Developmentally Plausible Training Data!

LLMs learn from vastly more data than humans ever experience. BabyLM challenges this paradigm by focusing on developmentally plausible data

We extend this effort to 45 new languages!

15.10.2025 10:53 — 👍 43    🔁 16    💬 1    📌 3

I will be presenting this work at the @iccv.bsky.social 2025 workshop MemVis: The 1st Workshop on Memory and Vision! 🌺 Work done with Albert Gatt & Jakub Dotlacil arxiv.org/abs/2509.01453

15.10.2025 09:10 — 👍 0    🔁 0    💬 0    📌 0

What makes an image memorable? And can we predict image memorability using pretrained vision encoders? We explored activations, attention distributions, image patch uniformity and sparse autoencoder losses using image representations across the layers of CLIP, DINOv2 and SigLIP2.

15.10.2025 09:10 — 👍 11    🔁 4    💬 1    📌 0
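As a hedged illustration of this kind of probing, the sketch below correlates one simple statistic of an image representation (mean activation magnitude) with a memorability score. Everything here is synthetic: the arrays, the noise level and the `pearson` helper are stand-ins for real encoder activations (CLIP, DINOv2, SigLIP2) and human memorability annotations.

```python
import numpy as np

def pearson(x, y):
    """Pearson correlation of two 1-D arrays."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    return float((x * y).mean())

rng = np.random.default_rng(0)

# Synthetic stand-in for encoder activations: one 64-d vector per image.
activations = rng.normal(size=(100, 64))

# Candidate correlate: mean activation magnitude per image.
magnitude = np.abs(activations).mean(axis=1)

# Synthetic memorability scores, loosely tied to magnitude plus noise,
# so the probe has a signal to recover.
memorability = magnitude + 0.05 * rng.normal(size=100)

r = pearson(magnitude, memorability)  # positive on this toy data
```

In the actual study, one would extract activations layer by layer from a pretrained encoder and correlate each candidate statistic with human memorability scores.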

taking the NS train, I do that multiple times a week :)

13.10.2025 18:24 — 👍 1    🔁 0    💬 1    📌 0
Release of the massive HPLT v3.0 multilingual dataset - Corpora - ELRA lists

Could it be the HPLT v3.0 multilingual dataset? list.elra.info/mailman3/hyp...

08.10.2025 11:04 — 👍 1    🔁 0    💬 1    📌 0
Postdoctoral Researcher in Memory Access in Language. Help uncover how memory shapes language use: as a postdoctoral researcher at the Institute for Language Sciences, you will join the ERC-funded MEMLANG project.

Hi all, there is a postdoc position open in the group I'm currently based in! ✨ Let me know if you are interested or have questions 🙂 Please share it with anyone who might be interested: www.uu.nl/en/organisat...

08.10.2025 07:55 — 👍 3    🔁 3    💬 0    📌 0

Had such a great time presenting our tutorial on Interpretability Techniques for Speech Models at #Interspeech2025! 🔍

For anyone looking for an introduction to the topic, we've now uploaded all materials to the website: interpretingdl.github.io/speech-inter...

19.08.2025 21:23 — 👍 40    🔁 14    💬 2    📌 1

Some amazing @amsterdamnlp.bsky.social people in Vienna 💫 #acl2025 Raquel Fernández Sandro Pezzelle Katia Shutova @esamghaleb.bsky.social @veraneplenbroek.bsky.social @annabavaresco.bsky.social + @leobertolazzi.bsky.social

03.08.2025 08:16 — 👍 8    🔁 2    💬 0    📌 0

With @duyguislakoglu.bsky.social and Ozge Alacam at the ACL 2025 social event ✨ #acl2025

03.08.2025 08:00 — 👍 4    🔁 0    💬 0    📌 0

CoNLL audience today ✨ @conll-conf.bsky.social

01.08.2025 19:30 — 👍 2    🔁 0    💬 0    📌 0

Together with some Utrecht NLP people at ACL 2025! #acl2025 #acl2025NLP

27.07.2025 19:48 — 👍 7    🔁 1    💬 0    📌 0

I'll be attending ACL 2025 in Vienna! Looking forward to seeing people there! 😊🇦🇹 We are going to present 'LLMs instead of Human Judges? A Large Scale Empirical Study across 20 NLP Evaluation Tasks' aclanthology.org/2025.acl-sho... #acl2025 #acl2025nlp

24.07.2025 12:34 — 👍 10    🔁 2    💬 0    📌 0
Interpretability Techniques for Speech Models — Tutorial @ Interspeech 2025

The @interspeech.bsky.social early registration deadline is coming up in a few days!

Want to learn how to analyze the inner workings of speech processing models? 🔍 Check out the programme for our tutorial:
interpretingdl.github.io/speech-inter... & sign up through the conference registration form!

13.06.2025 05:18 — 👍 27    🔁 10    💬 1    📌 2

Abstract submission deadline extended to June 29!

10.06.2025 12:27 — 👍 1    🔁 1    💬 0    📌 0
