
Matteo Dunnhofer

@mdunnhofer.bsky.social

MSCA Postdoctoral Fellow at University of Udine 🇮🇹 and York University 🇨🇦 - interested in computer vision 👁️🤖 https://matteo-dunnhofer.github.io

676 Followers  |  237 Following  |  29 Posts  |  Joined: 18.11.2024

Latest posts by mdunnhofer.bsky.social on Bluesky

As a Highlight ✨ in the main conference, we will present the results of a new investigation of object tracking in first-person vision, compared against third-person videos. Work done at the MLP lab of the University of Udine, led by Christian Micheloni

📆 Oct 21st, 15:00 - 17:00
📍 Poster #542

3/3

18.10.2025 15:59 · 👍 1  🔁 0  💬 0  📌 0

At the Human-inspired Computer Vision #HiCV2025 workshop, I will present a poster with recent results on comparing video-based ANNs and the primate visual system. Ongoing project at the ViTA lab @yorkuniversity.bsky.social led by @kohitij.bsky.social

📆 Oct 20th, 8:30 - 12:30
📍 Room 309

2/3

18.10.2025 15:59 · 👍 2  🔁 1  💬 1  📌 0

On my way to #ICCV2025 in Honolulu 🏝️
@iccv.bsky.social

I will share results on ongoing projects I am working on at
@yorkuniversity.bsky.social
and at the University of Udine

Looking forward to discussions!

1/3

18.10.2025 15:59 · 👍 1  🔁 0  💬 1  📌 0

Our #ICCV2025 paper will be presented as a Highlight ✨

24.07.2025 17:40 · 👍 0  🔁 0  💬 0  📌 0

Joint work with @zairamanigrasso.bsky.social and Christian Micheloni

Funded by PRIN 2022 PNRR and MSCA Actions

(7/7)

23.07.2025 14:51 · 👍 0  🔁 0  💬 0  📌 0

All details can be found in our paper.

📄 arXiv: arxiv.org/abs/2507.16015
🌐 Webpage: machinelearning.uniud.it/datasets/vista

The VISTA benchmark will be released soon. Stay tuned!

(6/7)

23.07.2025 14:51 · 👍 0  🔁 0  💬 1  📌 0

These FPV-specific challenges include:
- Frequent object disappearances
- Continuous camera motion altering object appearance
- Object distractors
- Wide field-of-view distortions near frame edges
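As a toy illustration (not the benchmark's actual protocol), the first of these challenges can be quantified directly from per-frame visibility annotations; the annotation format below is hypothetical:

```python
def disappearance_stats(visible_flags):
    """Fraction of frames where the target is absent, plus the number of
    disappearance events (visible -> not-visible transitions)."""
    absent = sum(1 for v in visible_flags if not v)
    events = sum(1 for prev, cur in zip(visible_flags, visible_flags[1:])
                 if prev and not cur)
    return absent / len(visible_flags), events

# Hypothetical per-frame visibility of a tracked object in an FPV clip
rate, events = disappearance_stats([True, True, False, False, True, False])
# rate = 0.5, events = 2
```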

(5/7)

23.07.2025 14:51 · 👍 0  🔁 0  💬 1  📌 0

- Trackers learn viewpoint biases and perform best on the viewpoint used during training.
- FPV tracking presents its specific challenges.

(4/7)

23.07.2025 14:51 · 👍 0  🔁 0  💬 1  📌 0

Key takeaways from our study:

- FPV is challenging for state-of-the-art generalist trackers.
- Tracking objects in human-object interaction videos is difficult across both first- and third-person viewpoints.

(3/7)

23.07.2025 14:51 · 👍 0  🔁 0  💬 1  📌 0

We specifically examined whether these drops are due to FPV itself or to the complexity of human-object interaction scenarios.

To do this, we designed VISTA, a benchmark built on synchronized first- and third-person recordings of the same activities.
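Synchronized two-viewpoint recordings make per-sequence comparison straightforward. A minimal sketch (hypothetical scores and function, not VISTA's actual evaluation code) of how a viewpoint gap could be measured:

```python
def viewpoint_gap(scores_fpv, scores_tpv):
    """Mean per-sequence drop from third-person to first-person viewpoint.
    Each dict maps a sequence id to a tracker's success score on the
    corresponding synchronized clip."""
    common = scores_fpv.keys() & scores_tpv.keys()
    return sum(scores_tpv[s] - scores_fpv[s] for s in common) / len(common)

# Hypothetical success scores for the same tracker on paired clips
gap = viewpoint_gap({"seq1": 0.42, "seq2": 0.50},
                    {"seq1": 0.60, "seq2": 0.58})
# gap ≈ 0.13: the tracker loses ~13 points when switching to FPV
```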

(2/7)

23.07.2025 14:51 · 👍 0  🔁 0  💬 1  📌 0

Is Tracking really more challenging in First Person Egocentric Vision?

Our new #ICCV2025 paper follows up our IJCV 2023 study (bit.ly/4nVRJw9).

We further investigate the causes of performance drops in visual object tracking and segmentation under egocentric (FPV) settings.

🧵 (1/7)

23.07.2025 14:51 · 👍 1  🔁 0  💬 1  📌 1

In the past weeks we have been at NETI at @utaustin.bsky.social, @vssmtg.bsky.social 2025, and the CVR-CIAN conference at @yorkuniversity.bsky.social, to discuss early findings on modeling object motion in the macaque visual system with deep neural networks. Details to appear soon! @kohitij.bsky.social

20.06.2025 22:27 · 👍 2  🔁 0  💬 0  📌 0

Details: sciencedirect.com/special-issu...

Submissions open: June 1, 2025
Deadline: September 30, 2025

2/2

29.05.2025 15:50 · 👍 0  🔁 0  💬 0  📌 0

The teams behind the workshops on Computer Vision for Winter Sports at @wacvconference.bsky.social and on Computer Vision in Sports at @cvprconference.bsky.social have joined forces to organise a special issue of CVIU on computer vision applications in sports.

1/2

29.05.2025 15:50 · 👍 1  🔁 1  💬 1  📌 0

Honored to be on the list this year!

10.05.2025 23:27 · 👍 1  🔁 0  💬 0  📌 0

We are now live with the 3rd Workshop on Computer Vision for Winter Sports at @wacvconference.bsky.social #WACV2025.

Make sure to attend if you are around!

04.03.2025 15:37 · 👍 2  🔁 1  💬 0  📌 0

This paper contributes to our projects PRIN 2022 EXTRA EYE and PRIN 2022 PNRR TEAM, funded by the European Union - NextGenerationEU.

6/6

03.03.2025 17:49 · 👍 0  🔁 0  💬 0  📌 0

This work was led by Moritz Nottebaum (stop by his poster!) at the Machine Learning and Perception Lab of the University of Udine

5/6

03.03.2025 17:49 · 👍 0  🔁 0  💬 1  📌 0

LowFormer achieves significant speedups in image throughput and latency on various hardware platforms, while maintaining or surpassing the accuracy of current state-of-the-art models across image recognition, object detection, and semantic segmentation.

4/6

03.03.2025 17:49 · 👍 0  🔁 0  💬 1  📌 0

We used insights from this analysis to enhance the hardware efficiency of backbones at the macro level, and introduced a slimmed-down version of multi-head self-attention to improve efficiency in the micro design.
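One common way to slim down multi-head self-attention is to subsample the key/value tokens so the attention matrix shrinks. The sketch below is a generic illustration of that idea in NumPy, not LowFormer's actual design:

```python
import numpy as np

def downsampled_mhsa(x, w_qkv, num_heads=4, stride=2):
    """Multi-head self-attention where keys and values are spatially
    subsampled by `stride`, reducing the quadratic attention cost.
    x: (n_tokens, dim); w_qkv: (dim, 3*dim) joint Q/K/V projection."""
    n, d = x.shape
    hd = d // num_heads
    q, k, v = np.split(x @ w_qkv, 3, axis=-1)
    k, v = k[::stride], v[::stride]          # fewer key/value tokens
    out = np.empty_like(q)
    for h in range(num_heads):
        sl = slice(h * hd, (h + 1) * hd)
        att = q[:, sl] @ k[:, sl].T / np.sqrt(hd)
        att = np.exp(att - att.max(axis=-1, keepdims=True))
        att /= att.sum(axis=-1, keepdims=True)   # row-wise softmax
        out[:, sl] = att @ v[:, sl]
    return out

rng = np.random.default_rng(0)
y = downsampled_mhsa(rng.normal(size=(16, 32)), rng.normal(size=(32, 96)))
# output keeps the token/channel shape (16, 32), but each attention map
# is 16 x 8 instead of 16 x 16
```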

3/6

03.03.2025 17:49 · 👍 0  🔁 0  💬 1  📌 0

We empirically found that MACs alone do not accurately predict inference speed.
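For intuition (generic textbook figures, not the paper's measurements): a depthwise 3x3 convolution has roughly c_in times fewer MACs than its dense counterpart, yet on many accelerators it is memory-bound and far from proportionally faster, which is why MAC counts alone mislead:

```python
def conv_macs(h, w, c_in, c_out, k, groups=1):
    """Multiply-accumulate count of a k x k convolution on an h x w map."""
    return h * w * c_out * (c_in // groups) * k * k

dense = conv_macs(56, 56, 64, 64, 3)                 # standard 3x3 conv
depthwise = conv_macs(56, 56, 64, 64, 3, groups=64)  # depthwise variant
ratio = dense // depthwise
# ratio = 64: 64x fewer MACs, but rarely 64x lower latency in practice
```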

2/6

03.03.2025 17:49 · 👍 0  🔁 0  💬 1  📌 0

Today at @wacvconference.bsky.social #WACV2025, we present LowFormer, a new family of convolutional-transformer architectures for computer vision that improve efficiency by optimizing running time on hardware rather than minimizing MAC operations.

arXiv: arxiv.org/abs/2409.03460

1/6

03.03.2025 17:49 · 👍 3  🔁 1  💬 1  📌 0

Is attendance open to YorkU researchers (e.g. postdocs)? I would love to learn from your teaching style!

23.12.2024 01:25 · 👍 3  🔁 0  💬 1  📌 0

Did the same a few weeks ago in Toronto. I think this is the best pizza flavor you can get in Canada 😂

15.12.2024 12:40 · 👍 1  🔁 0  💬 1  📌 0
CV4WS@WACV2025 UPDATE (11/19/24): DEADLINE EXTENDED

The top-performing teams will be invited to present their solution at the 3rd Workshop on Computer Vision for Winter Sports at #WACV2025!

📄 sites.google.com/unitn.it/cv4...

3/3

01.12.2024 14:50 · 👍 1  🔁 0  💬 0  📌 0

The challenge platform is hosted on CodaLab, where you can find all the submission instructions.

The submission deadline is January 31st, 2025.

🏆 codalab.lisn.upsaclay.fr/competitions...

2/3

01.12.2024 14:50 · 👍 1  🔁 0  💬 1  📌 0
🎥 YouTube video by Matteo Dunnhofer: [WACV 2024] Tracking Skiers from the Top to the Bottom - qualitative examples

The #SkiTB Visual Tracking Challenge at #WACV2025 is open for submissions!

The goal is to track a skier across multiple video cameras capturing their full performance, and it is based on our recently released SkiTB dataset.

🎥 youtu.be/Aos5iKrYM5o

1/3

01.12.2024 14:50 · 👍 5  🔁 1  💬 1  📌 1

Would be happy to be added as well :) I am working on visual tracking, currently at YorkU ;)

24.11.2024 17:12 · 👍 0  🔁 0  💬 0  📌 0

The outgoing phase of my MSCA project PRINNEVOT started a few days ago. I am now at the #CentreforVisionResearch of #YorkUniversity.

I am looking forward to the next two years of research at the intersection of computer vision and visual neuroscience! 🤖🧠

18.11.2024 13:30 · 👍 2  🔁 0  💬 1  📌 0
