
Yuki Asano

@yukimasano.bsky.social

Professor at University of Technology Nuremberg | Head of Fundamental AI Lab

1,262 Followers  |  56 Following  |  19 Posts  |  Joined: 18.11.2024

Latest posts by yukimasano.bsky.social on Bluesky


Pretrained ViTs usually come in rigid sizes (S, B, L, H). But your hardware constraints don't.

We built a way to make DINO or CLIP fully elastic in <5 mins without any retraining ⚡️

Get the exact model size you need, not just what was released

Find Walter at #NeurIPS Poster 4709 | Thu 4:30-7:30 PM
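The post doesn't spell out the mechanism, so purely as an illustration of what width-elasticity can mean for a pretrained ViT, here is a slimmable-networks-style slice of a single pretrained linear layer; `slice_linear`, `keep_in`, and `keep_out` are hypothetical names, not necessarily this work's approach:

```python
import torch.nn as nn

def slice_linear(layer: nn.Linear, keep_in: int, keep_out: int) -> nn.Linear:
    """Hypothetical sketch: keep the first keep_in input and keep_out
    output dimensions of a pretrained linear layer (slimmable-style)."""
    sliced = nn.Linear(keep_in, keep_out, bias=layer.bias is not None)
    sliced.weight.data = layer.weight.data[:keep_out, :keep_in].clone()
    if layer.bias is not None:
        sliced.bias.data = layer.bias.data[:keep_out].clone()
    return sliced
```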

03.12.2025 13:38 — 👍 2    🔁 1    💬 1    📌 0

On the occasion of the 1000th citation of our Sinkhorn-Knopp self-supervised representation learning paper, I've written a whole post about the history and the key bits of this method that powers the state-of-the-art SSL vision models.

Read it here :): docs.google.com/document/d/1...
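For readers who don't click through: the heart of the method is a few Sinkhorn-Knopp iterations that turn a batch-by-prototypes score matrix into balanced soft cluster assignments. A minimal sketch in the style of the public SeLa/SwAV implementations; `eps` and `n_iters` are illustrative defaults:

```python
import torch

@torch.no_grad()
def sinkhorn_knopp(scores: torch.Tensor, eps: float = 0.05, n_iters: int = 3) -> torch.Tensor:
    """Balanced soft assignments from a (batch x prototypes) score matrix,
    via alternating row/column normalization of exp(scores / eps)."""
    Q = torch.exp(scores / eps).T   # (prototypes, batch)
    Q /= Q.sum()                    # normalize into a joint distribution
    K, B = Q.shape
    for _ in range(n_iters):
        Q /= Q.sum(dim=1, keepdim=True)  # equal total mass per prototype...
        Q /= K
        Q /= Q.sum(dim=0, keepdim=True)  # ...and a distribution per sample
        Q /= B
    return (Q * B).T                # (batch, prototypes), rows sum to 1
```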

15.10.2025 10:00 — 👍 18    🔁 4    💬 1    📌 0

Today, we release Franca, a new vision Foundation Model that matches and often outperforms DINOv2.
The data, the training code and the model weights are open-source.

This is the result of a close and fun collaboration between
@valeoai.bsky.social (in France) and @funailab.bsky.social (in Franconia) 🚀

21.07.2025 14:58 — 👍 21    🔁 4    💬 0    📌 0

Agreed, very interesting! Future engines that run on information? 🤯

23.12.2024 18:24 — 👍 7    🔁 0    💬 0    📌 0

Our Lab is now also on bsky! 🥳

10.12.2024 09:37 — 👍 21    🔁 1    💬 2    📌 0

🚀🚀 PaliGemma 2 is our updated and improved PaliGemma release using the Gemma 2 models and providing new pre-trained checkpoints for the full cross product of {224px, 448px, 896px} resolutions and {3B, 10B, 28B} model sizes.

1/7

05.12.2024 18:16 — 👍 68    🔁 21    💬 1    📌 5
https://tinyurl.com/BristolCVLectureship

Pls RT
Permanent Assistant Professor (Lecturer) position in Computer Vision @bristoluni.bsky.social [DL 6 Jan 2025]
This is a permanent research+teaching post within the MaVi group (uob-mavi.github.io) in Computer Science. Suitable for strong postdocs or exceptional PhD graduates.
t.co/k7sRRyfx9o
1/2

04.12.2024 17:22 — 👍 23    🔁 14    💬 1    📌 1

Today we had a joint workshop between our FunAI Lab, UTN and AIST Japan. 13 talks, 1 cake and lots of Bavarian food really get research discussions going!
Towards more collaborations in AI between 🇩🇪 & 🇯🇵.
@hirokatukataoka.bsky.social

02.12.2024 19:36 — 👍 6    🔁 1    💬 0    📌 2
NV-Embed: Improved Techniques for Training LLMs as Generalist Embedding Models Decoder-only large language model (LLM)-based embedding models are beginning to outperform BERT or T5-based embedding models in general-purpose text embedding tasks, including dense vector-based retri...

Thanks for tagging. In addition, have a look at the NV-Embed paper (arxiv.org/abs/2405.17428): they do contrastive finetuning after turning on the bidirectional attention mask.
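Conceptually, "turning on the bidirectional attention mask" is a one-line change: the decoder-only LM simply stops masking future positions during the embedding finetune. A toy single-head sketch of the difference (illustrative, not NV-Embed's actual code):

```python
import torch

def attention(q, k, v, causal: bool = True):
    """Scaled dot-product attention; causal=False is the 'bidirectional
    mask' setting used when finetuning an LLM as an embedding model."""
    scores = (q @ k.transpose(-2, -1)) / q.shape[-1] ** 0.5
    if causal:  # the decoder-only LM default
        T = scores.shape[-1]
        future = torch.triu(torch.ones(T, T, dtype=torch.bool), diagonal=1)
        scores = scores.masked_fill(future, float("-inf"))
    return scores.softmax(dim=-1) @ v
```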

28.11.2024 17:53 — 👍 3    🔁 0    💬 1    📌 0

Also @phdcomics.bsky.social is on 🦋 👍. Slowly nesting here.

27.11.2024 08:25 — 👍 4    🔁 0    💬 0    📌 0

Yay, @xkcd.com is on 🦋

27.11.2024 07:54 — 👍 2    🔁 0    💬 0    📌 0

Nice 👍! We love small (M)LLMs :) Will the training code also be released?

26.11.2024 19:13 — 👍 3    🔁 1    💬 0    📌 0
Do better language models have crisper vision? How well do text-only Large Language Models (LLMs) grasp the visual world? As LLMs are increasingly used in computer vision, addressing this question becomes both fundamental and pertinent. However, e...

And also perhaps interesting for you: probing text representations of LLMs for CLIP-like zero-shot classification: arxiv.org/abs/2410.07173

26.11.2024 13:21 — 👍 3    🔁 0    💬 0    📌 0

Sam next to his poster; I'm still very impressed he did all this for his MSc thesis! #BMVC2024

26.11.2024 10:25 — 👍 5    🔁 1    💬 0    📌 0

Exactly, hence the new post-(pre)training term perhaps? Post-training seems to be a good generic term for the RLHF/preference tuning etc. in NLP (allenai.org/papers/tulu-...), so by saying post-pretraining we could emphasize the fact that it's unsupervised.

26.11.2024 08:30 — 👍 1    🔁 0    💬 0    📌 0

"Post-pretraining", "unsupervised domain adaptation" fits, but I think is used for different tasks

26.11.2024 08:01 — 👍 2    🔁 0    💬 2    📌 0
Prompt Generation Networks for Input-Space Adaptation of Frozen Vision Transformers With the introduction of the transformer architecture in computer vision, increasing model scale has been demonstrated as a clear path to achieving performance and robustness gains. However, with mode...

This work was led by Jochem Loedeman during his MSc and supervised by Maarten Stol, Tengda Han, and myself.
📓: arxiv.org/abs/2210.06466

Visit BMVC poster 532 at 10am today!

26.11.2024 07:28 — 👍 8    🔁 1    💬 0    📌 0

This means we can simply send an adapted RGB image to the server to get a personalised output.
We also show that the gains don't just come from adding a new learnable model, but instead from the interplay between the pretrained one and the PGN.

26.11.2024 07:28 — 👍 2    🔁 0    💬 1    📌 0

This CNN (e.g. running on a phone) outputs a softmax over a set of learned tokens. These are then combined and used for the adaptation. This allows for efficient learning, but also for moving the signal back into pixel space via a pseudo-inverse.
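A rough sketch of that mechanism; all sizes and layer choices here are illustrative guesses, not the paper's exact configuration:

```python
import torch
import torch.nn as nn

class PromptGenerationNetwork(nn.Module):
    """Sketch: a small CNN predicts softmax weights over a learned token
    library; the weighted combinations become input-dependent prompt
    tokens for a frozen pretrained model."""
    def __init__(self, n_library: int = 256, n_prompts: int = 16, dim: int = 768):
        super().__init__()
        self.cnn = nn.Sequential(  # lightweight, phone-sized backbone
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, n_prompts * n_library),
        )
        self.library = nn.Parameter(0.02 * torch.randn(n_library, dim))
        self.n_prompts, self.n_library = n_prompts, n_library

    def forward(self, img: torch.Tensor) -> torch.Tensor:
        logits = self.cnn(img).view(-1, self.n_prompts, self.n_library)
        weights = logits.softmax(dim=-1)  # softmax over the token library
        return weights @ self.library     # (B, n_prompts, dim) prompt tokens
```

Only the CNN and the token library are trained; the pretrained model stays frozen and receives the generated prompts alongside its usual input tokens.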

26.11.2024 07:28 — 👍 1    🔁 0    💬 1    📌 0

Also known as reprogramming: work from @phillipisola.bsky.social showed that even adjusting individual pixels can adapt a model. We take this one step further and make the input-only adaptation signal dependent on the image itself: we introduce a lightweight CNN, the Prompt Generation Network.

26.11.2024 07:28 — 👍 4    🔁 0    💬 2    📌 0

LoRA is great, but one disadvantage is that serving 1000s of these adapters efficiently is very difficult: GPUs are inefficient when, e.g., one adapter applies to only one sample in a large batch. The solution is to adapt the model strictly in input space.

26.11.2024 07:28 — 👍 3    🔁 0    💬 1    📌 0

LoRA et al. enable personalised model generation and serving, which is crucial as finetuned models still outperform general ones in many tasks. However, serving a base model with many LoRAs is very inefficient! Now there's a better way: enter Prompt Generation Networks, presented today at #BMVC.

26.11.2024 07:28 — 👍 31    🔁 5    💬 1    📌 0

Hello world!
Is there any tool to sync Twitter and Bluesky posting?

20.11.2024 20:53 — 👍 3    🔁 0    💬 1    📌 0

My growing list of #computervision researchers on Bsky.

Missed you? Let me know.

go.bsky.app/M7HGC3Y

19.11.2024 23:00 — 👍 131    🔁 42    💬 88    📌 9
Sky Follower Bridge - Chrome Web Store Instantly find and follow the same users from your Twitter follows on Bluesky.

The thingie that brings over your Twitter followers worked jolly well for me. Very cool! I am following another 500 people now thanks to that…
chromewebstore.google.com/detail/sky-f...

19.11.2024 10:28 — 👍 150    🔁 10    💬 13    📌 4
