
Pierre-François DP

@pfdeplaen.bsky.social

Final-year PhD student in computer vision at KU Leuven, Belgium. Minimizing entropy only to realize my level of surprise increased. gh.io/pf

32 Followers  |  144 Following  |  27 Posts  |  Joined: 27.11.2024

Latest posts by pfdeplaen.bsky.social on Bluesky

ViT-5.0, way larger but the size is kept private by OpenAI

22.08.2025 15:21 — 👍 2    🔁 0    💬 0    📌 0

Call for Papers update - ILR+G workshop @iccv.bsky.social

We will now feature a single submission track with new submission dates.

📅 New submission deadline: June 21, 2025
🔗 Submit here: cmt3.research.microsoft.com/ILRnG2025
🌐 More details: ilr-workshop.github.io/ICCVW2025/

#ICCV2025

24.05.2025 08:27 — 👍 10    🔁 8    💬 0    📌 0

Some universities give monetary rewards to scientists when they publish, so these researchers may be incentivised to slice their work into multiple papers to earn more

18.05.2025 09:47 — 👍 1    🔁 0    💬 1    📌 0

Update: #ICML sent an email asking reviewers to update reviews and add an "update after rebuttal" section.
Although the review process is far from perfect in ML and CV conferences, I welcome the fact that ICML is trying to improve it.

09.04.2025 18:04 — 👍 2    🔁 0    💬 0    📌 0

I completely agree! The issue, however, is that authors can't engage in the discussion unless reviewers respond or ask for clarification, and most reviewers don't

06.04.2025 20:59 — 👍 1    🔁 0    💬 1    📌 0

It's stated that "the reviewer is required to acknowledge the response and agree to update the review in light of the response if necessary"

06.04.2025 20:55 — 👍 0    🔁 0    💬 0    📌 0

Yes, the discussion is until April 8th. Still, reviewers had to acknowledge and update reviews by April 4th at the latest

06.04.2025 11:15 — 👍 0    🔁 0    💬 1    📌 0

ICML introduced a button for reviewers to acknowledge that they have read rebuttals and will take them into consideration.

The idea sounds nice, but in practice most reviewers (around 75% in my batch of assigned papers) just clicked the button without leaving a comment or updating their scores...

04.04.2025 16:43 — 👍 1    🔁 0    💬 1    📌 1

We're still waiting to hear back from the conference, but I have low expectations at this stage...

24.03.2025 17:05 — 👍 0    🔁 0    💬 1    📌 0

It is unfortunately not even discussed so far... I'm in favour of the motion!

11.03.2025 09:37 — 👍 2    🔁 0    💬 0    📌 0

Ok, thank you for the answer!

03.03.2025 15:00 — 👍 1    🔁 0    💬 0    📌 0

Are there plans to organize a CVPR conference outside of North America?

03.03.2025 14:44 — 👍 1    🔁 0    💬 1    📌 0
Adversarial Dependence Minimization Many machine learning techniques rely on minimizing the covariance between output feature dimensions to extract minimally redundant representations from data. However, these methods do not eliminate a...

n/n
Paper: arxiv.org/abs/2502.03227
Code: github.com/pfdp0/min_de... (coming soon)

21.02.2025 11:19 — 👍 0    🔁 0    💬 0    📌 0
Results: the method generalizes beyond label supervision on classification and reaches high accuracy on SSL

4/n
We investigate various applications:
- extending the PCA algorithm to non-linear decorrelation
- learning minimally redundant representations for SSL
- learning features that generalize beyond label supervision in supervised learning

21.02.2025 11:18 — 👍 1    🔁 0    💬 0    📌 0
Algorithm overview: dependency predictors minimize the reconstruction error by learning how dimensions relate, while the encoder maximizes the error by reducing dependencies

3/n
Our method employs an adversarial game where small networks identify dependencies among feature dimensions, while the main network exploits this information to reduce dependencies.
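The adversarial game can be illustrated with a toy sketch (an illustrative reduction, not the paper's implementation: the 2-D data, the rotation encoder, and the closed-form linear predictor are all assumptions made for this example). The predictor fits the best linear map from one feature dimension to the other, while the encoder rotates the features to maximize the predictor's reconstruction error:

```python
import numpy as np

# Toy sketch of the adversarial game (illustrative only): the encoder is a
# 2-D rotation, the dependency predictor is a closed-form linear regression
# of the second feature dimension on the first.
rng = np.random.default_rng(1)
x = rng.normal(size=(5000, 2)) @ np.array([[1.0, 0.8], [0.0, 0.6]])  # correlated data

def predictor_error(theta):
    """Reconstruction error of the best linear predictor at encoder angle theta."""
    c, s = np.cos(theta), np.sin(theta)
    z = x @ np.array([[c, -s], [s, c]])                   # encoded features
    a = np.cov(z[:, 0], z[:, 1])[0, 1] / np.var(z[:, 0])  # predictor fit (closed form)
    return np.mean((z[:, 1] - a * z[:, 0]) ** 2)

# The encoder ascends the predictor's error (finite-difference gradient ascent)
theta, lr, eps = 0.0, 0.02, 1e-4
for _ in range(1000):
    grad = (predictor_error(theta + eps) - predictor_error(theta - eps)) / (2 * eps)
    theta += lr * grad
```

With a purely linear predictor this game reduces to linear decorrelation: at convergence the rotated features are empirically uncorrelated. The paper's point is that replacing the closed-form predictor with small networks also penalizes nonlinear dependencies, which covariance-based objectives miss.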

21.02.2025 11:17 — 👍 0    🔁 0    💬 0    📌 0
Example of uncorrelated random variables that are not independent: x_2 = (x_1)^2 with x_1 uniformly distributed on [-1,1]

2/n
Currently, most ML techniques rely on minimizing the covariance between output feature dimensions to extract minimally redundant representations.
Still, this is not sufficient as linearly uncorrelated variables can still exhibit nonlinear relationships.
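The post's example can be checked numerically (the sample size and seed are arbitrary choices for the illustration):

```python
import numpy as np

# x2 = x1^2 with x1 ~ Uniform[-1, 1]: zero covariance, yet full dependence
rng = np.random.default_rng(0)
x1 = rng.uniform(-1.0, 1.0, size=100_000)
x2 = x1 ** 2

# Linear statistics see nothing: Cov(x1, x1^2) = E[x1^3] = 0 by symmetry
print(f"covariance: {np.cov(x1, x2)[0, 1]:.4f}")

# Yet x2 is a deterministic function of x1, so the variables are maximally
# dependent; a nonlinear predictor of x2 from x1 reaches zero error
print(f"nonlinear reconstruction error: {np.mean((x2 - x1**2) ** 2):.4f}")
```

The empirical covariance is indistinguishable from zero, while the nonlinear relationship is exact, which is precisely the gap that covariance-minimizing objectives leave open.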

21.02.2025 11:16 — 👍 0    🔁 0    💬 0    📌 0
Post image

Did you know that PCA decomposition and SSL decorrelation techniques (e.g. Barlow Twins) don't necessarily extract minimally redundant/dependent features?
Our paper explains why and introduces an algorithm for general dependence minimization.
🧵

21.02.2025 11:13 — 👍 0    🔁 0    💬 4    📌 0

I miss the video explanation 🎶

14.02.2025 19:24 — 👍 2    🔁 0    💬 0    📌 0

Reinforcement learning: read the "popular with friends" feed and follow new accounts.

12.01.2025 18:36 — 👍 1    🔁 0    💬 0    📌 0

As a reviewer, it's difficult to check for potential plagiarism (e.g. from an arXiv preprint), as we don't have access to the authors' names and should avoid breaking anonymity.
Should conferences introduce a new "role" dedicated to spotting plagiarism?
@cvprconference.bsky.social @iclr-conf.bsky.social

10.01.2025 11:59 — 👍 0    🔁 0    💬 0    📌 0

The only real new contribution in our opinion is an evaluation in combination with newer variants of DETR. It's highly unlikely the paper would have been accepted if the reviewers were aware of our earlier work.

10.01.2025 11:57 — 👍 0    🔁 0    💬 0    📌 0

Looking closer into the paper, it becomes obvious that the claimed contributions are all rephrasings of ours. For any of the remaining (minor) differences, the two methods are not explicitly compared in the paper, neither experimentally nor in the discussion, although that's what one would expect...

10.01.2025 11:57 — 👍 0    🔁 0    💬 1    📌 0

Deliberately concealing the similarities between the two works and reusing our illustrations without proper citation are clear scientific integrity violations that need to be addressed. We reported the case to the PCs and hope the conference will take proper action.

10.01.2025 11:57 — 👍 0    🔁 0    💬 2    📌 0

Our CVPR 2023 paper: arxiv.org/pdf/2307.02402
The ACMMM'24 paper's open review: openreview.net/forum?id=N3y...

10.01.2025 11:53 — 👍 0    🔁 0    💬 1    📌 0

🚨 A peer-reviewed publication from MM'24 copied our CVPR 2023 paper! #plagiarism
The authors rephrased our method, but their approach is not different from ours.
Surprisingly, they cited us for general observations but did everything they could to hide our contributions from the readers/reviewers.

10.01.2025 11:52 — 👍 4    🔁 0    💬 2    📌 1

Very nice slides, thank you !

17.12.2024 10:57 — 👍 1    🔁 0    💬 0    📌 0

Everyone should know you can't see if you put a hat over the i's 👀

04.12.2024 17:32 — 👍 2    🔁 0    💬 0    📌 0

x[-(k % len(x))]
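A quick demonstration of what this one-liner does (the sequence and the values of k are hypothetical): it indexes the k-th element from the end of a sequence, with the modulo wrapping k back into range once it exceeds the length.

```python
x = ["a", "b", "c", "d", "e"]

# x[-(k % len(x))] picks the k-th element from the end, wrapping cyclically;
# note that k % len(x) == 0 yields x[-0] == x[0], i.e. the first element
for k in (1, 2, 5, 6):
    print(k, x[-(k % len(x))])
```

The edge case is deliberate: negative zero is just zero, so multiples of `len(x)` land on the first element, which is exactly the k-th-from-the-end position modulo the length.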

03.12.2024 08:07 — 👍 0    🔁 0    💬 0    📌 0
