ViT-5.0, way larger but the size is kept private by OpenAI
22.08.2025 15:21 — 👍 2 🔁 0 💬 0 📌 0
@pfdeplaen.bsky.social
Final-year PhD student in computer vision at KU Leuven, Belgium. Minimizing entropy only to realize my level of surprise increased gh.io/pf
Call for Papers update - ILR+G workshop @iccv.bsky.social
We will now feature a single submission track with new submission dates.
📅 New submission deadline: June 21, 2025
🔗 Submit here: cmt3.research.microsoft.com/ILRnG2025
🌐 More details: ilr-workshop.github.io/ICCVW2025/
#ICCV2025
Some universities give monetary rewards to scientists when they publish, so these researchers may be incentivised to slice one piece of work into several papers to earn more
18.05.2025 09:47 — 👍 1 🔁 0 💬 1 📌 0
Update: #ICML sent an email asking reviewers to update their reviews and add an "update after rebuttal" section.
Although the review process is far from perfect in ML and CV conferences, I welcome the fact that ICML is trying to improve it.
I completely agree! The issue, however, is that authors can't engage in the discussion unless reviewers respond or ask for a clarification, and most reviewers don't.
06.04.2025 20:59 — 👍 1 🔁 0 💬 1 📌 0
It's stated that "the reviewer is required to acknowledge the response and agree to update the review in light of the response if necessary"
06.04.2025 20:55 — 👍 0 🔁 0 💬 0 📌 0
Yes, the discussion runs until April 8th. Still, reviewers had to acknowledge and update their reviews by April 4th at the latest
06.04.2025 11:15 — 👍 0 🔁 0 💬 1 📌 0
ICML introduced a button for reviewers to acknowledge that they have read rebuttals and will take them into consideration.
The idea sounds nice, but in practice most reviewers (around 75% in my batch of papers as a reviewer) just clicked the button without leaving a comment or updating their scores...
We're still waiting to hear back from the conference, but I have low expectations at this stage...
24.03.2025 17:05 — 👍 0 🔁 0 💬 1 📌 0
It has unfortunately not even been discussed so far... I'm in favour of the motion!
11.03.2025 09:37 — 👍 2 🔁 0 💬 0 📌 0
Ok, thank you for the answer!
03.03.2025 15:00 — 👍 1 🔁 0 💬 0 📌 0
Are there plans to organize a CVPR conference outside of North America?
03.03.2025 14:44 — 👍 1 🔁 0 💬 1 📌 0
n/n
Paper: arxiv.org/abs/2502.03227
Code: github.com/pfdp0/min_de... (coming soon)
Results: the method generalizes beyond label supervision on classification and reaches high accuracy on SSL
4/n
We investigate various applications:
- extending the PCA algorithm to non-linear decorrelation
- learning minimally redundant representations for SSL
- learning features that generalize beyond label supervision in supervised learning
Algorithm overview: dependency predictors minimize the reconstruction error by learning how dimensions relate, while the encoder maximizes the error by reducing dependencies
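To make the game above concrete, here is a hypothetical sketch (all names, including dependency_loss, are ours, and nothing is taken from the paper's code): each per-dimension dependency predictor is reduced to a closed-form ridge regressor, whereas the paper uses small neural networks that can also capture nonlinear dependencies. The encoder would be trained to maximize this loss.

```python
import numpy as np

def dependency_loss(z: np.ndarray, ridge: float = 1e-3) -> float:
    """Mean error when reconstructing each dimension of z from the others.

    Low values mean the dimensions are predictable from one another
    (redundant); an adversarially trained encoder would push this value up.
    Linear ridge predictors only capture linear dependence; the paper's
    predictors are small networks, so they can capture nonlinear dependence."""
    n, d = z.shape
    z = z - z.mean(axis=0)  # center each dimension
    total = 0.0
    for i in range(d):
        rest = np.delete(z, i, axis=1)          # all dimensions except i
        # Ridge solution w = (X^T X + lambda I)^{-1} X^T y for predictor i.
        w = np.linalg.solve(rest.T @ rest + ridge * np.eye(d - 1),
                            rest.T @ z[:, i])
        total += np.mean((rest @ w - z[:, i]) ** 2)
    return total / d

rng = np.random.default_rng(0)
redundant = rng.normal(size=(10_000, 1)) @ np.ones((1, 4))  # 4 copies of one dim
independent = rng.normal(size=(10_000, 4))                  # 4 independent dims

print(dependency_loss(redundant))    # near 0: each dim predictable from the rest
print(dependency_loss(independent))  # near 1: nothing to predict beyond variance
```

On perfectly redundant features the predictors reconstruct each dimension almost exactly (loss near 0); on independent unit-variance features nothing is predictable and the loss approaches the per-dimension variance.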
3/n
Our method employs an adversarial game where small networks identify dependencies among feature dimensions, while the main network exploits this information to reduce dependencies.
Example of uncorrelated random variables that are not independent: x_2 = (x_1)^2 with x_1 uniformly distributed on [-1,1]
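The example above is easy to check numerically; a quick verification (ours, not from the paper):

```python
import numpy as np

# x2 = x1^2 with x1 ~ Uniform[-1, 1]: uncorrelated yet fully dependent.
rng = np.random.default_rng(0)
x1 = rng.uniform(-1.0, 1.0, size=1_000_000)
x2 = x1 ** 2

# Covariance is ~0 because E[x1 * x1^2] = E[x1^3] = 0 by symmetry.
cov = np.cov(x1, x2)[0, 1]
print(f"covariance: {cov:.4f}")

# Yet x2 is a deterministic function of x1: a simple nonlinear probe
# (correlating x1^2 with x2) recovers the relationship perfectly.
corr_nonlinear = np.corrcoef(x1 ** 2, x2)[0, 1]
print(f"corr(x1^2, x2): {corr_nonlinear:.4f}")  # 1.0
```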
2/n
Currently, most ML techniques rely on minimizing the covariance between output feature dimensions to extract minimally redundant representations.
However, this is not sufficient: linearly uncorrelated variables can still exhibit nonlinear dependencies.
Did you know that a PCA decomposition or SSL decorrelation techniques (e.g. Barlow Twins) don't necessarily extract minimally redundant/dependent features?
Our paper explains why and introduces an algorithm for general dependence minimization.
🧵
I miss the video explanation 🎶
14.02.2025 19:24 — 👍 2 🔁 0 💬 0 📌 0
Reinforcement learning: read the "popular with friends" feed and follow new accounts.
12.01.2025 18:36 — 👍 1 🔁 0 💬 0 📌 0
As a reviewer, it's difficult to check for potential plagiarism (e.g. from an arXiv preprint) as we don't have access to the authors' names and should avoid breaking anonymity.
Should conferences implement a new "role" dedicated to spotting plagiarism?
@cvprconference.bsky.social @iclr-conf.bsky.social
The only real new contribution in our opinion is an evaluation in combination with newer variants of DETR. It's highly unlikely the paper would have been accepted if the reviewers were aware of our earlier work.
10.01.2025 11:57 — 👍 0 🔁 0 💬 0 📌 0
Looking more closely at the paper, it becomes obvious that the claimed contributions are all rephrasings of ours. As for the remaining (minor) differences, the two methods are never explicitly compared in the paper, neither experimentally nor in the discussion, although that's what one would expect...
10.01.2025 11:57 — 👍 0 🔁 0 💬 1 📌 0
Deliberately concealing the similarities between the two works and reusing our illustrations without proper attribution are clear scientific-integrity violations that need to be addressed. We reported the case to the PCs and hope the conference will take proper action.
10.01.2025 11:57 — 👍 0 🔁 0 💬 2 📌 0
Our CVPR 2023 paper: arxiv.org/pdf/2307.02402
The ACMMM'24 paper's open review: openreview.net/forum?id=N3y...
🚨 A peer-reviewed publication from MM'24 copied our CVPR 2023 paper! #plagiarism
The authors rephrased our method, but their approach is not different from ours.
Surprisingly, they cited us for general observations but did everything they could to hide our contributions from the readers/reviewers.
Very nice slides, thank you !
17.12.2024 10:57 — 👍 1 🔁 0 💬 0 📌 0
Everyone should know you can't see if you put a hat over the i's 👀
04.12.2024 17:32 — 👍 2 🔁 0 💬 0 📌 0
x[-(k % len(x))]
03.12.2024 08:07 — 👍 0 🔁 0 💬 0 📌 0
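In case it's useful, a quick demo of what the snippet above does (our reading of it, without the original context): x[-(k % len(x))] picks the k-th element from the end of a sequence with wraparound, so k may exceed len(x). Note the k = 0 edge case, which indexes the first element rather than the last.

```python
# Demo of the indexing trick: k-th element from the end, with wraparound.
x = ["a", "b", "c", "d"]

print(x[-(1 % len(x))])  # 'd'  (1st from the end)
print(x[-(4 % len(x))])  # 'a'  (wraps around to the first element)
print(x[-(5 % len(x))])  # 'd'  (5 % 4 == 1, so 1st from the end again)
print(x[-(0 % len(x))])  # 'a'  (edge case: index 0 is the FIRST element)
```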