Bibliography cleanup is known to be harder than AGI. Same with successfully connecting a laptop to a projector. It's what we humans will be doing long after the robot takeover.
04.02.2026 22:56 · 52 · 9 · 3 · 1
@jvgemert.bsky.social
Head of the Computer Vision lab, TU Delft.
- Fundamental empirical Deep Learning research
- Visual inductive priors for data efficiency
Web: https://jvgemert.github.io/
I also like them. I think it's because they tickle my brain into seeing meaning (where there is none).
Sometimes reminiscent of reviewing for CVPR.
I found The Knowledge Machine a solid read on what science is:
www.strevens.org/scientia/
Sorry for the lack of clarity.
I was wondering whether a textbook counts as a "review" (it has limited novelty, I hope). And, as such, whether it's allowed on arXiv.
I think I have seen them on arXiv in the past...
What about textbooks?
29.01.2026 18:56 · 0 · 0 · 1 · 0
Well. At least they validate that deep learning methods might suffer from overfitting.
27.01.2026 15:46 · 3 · 0 · 0 · 0
With some trepidation, I'm putting this out into the world:
gershmanlab.com/textbook.html
It's a textbook called Computational Foundations of Cognitive Neuroscience, which I wrote for my class.
My hope is that this will be a living document, continuously improved as I get feedback.
But they are investing in this:
rightsforum.org/abp-investee...
Looks great! Wonderful work. Is this tied to a research field, or can it be used generally?
22.01.2026 20:39 · 1 · 0 · 1 · 0
But more on topic: our university uses Microsoft everywhere. To change that would require a fight, and I am not sure I want to take it on.
20.01.2026 09:43 · 1 · 0 · 1 · 0
Our wonderful support staff person is about to retire, and I am afraid I will be relegated to "central IT".
20.01.2026 09:41 · 0 · 0 · 1 · 1
Paywall?
16.01.2026 18:56 · 0 · 0 · 1 · 0
Haha, well, I was also thinking about CoordConv, the paper that added position info to convnets :)
(Which, interestingly, seems to go against the finding that convnets can already encode absolute position.)
I've also been thinking about position embeddings :). I might investigate this with an MSc student, although it will be a while before they start...
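For context, the CoordConv idea mentioned in this thread is simple to sketch: concatenate normalized coordinate channels to the input before convolving, so a subsequent convolution can access absolute position. A minimal NumPy illustration; the function name, the (batch, channels, height, width) layout, and the [-1, 1] normalization are my own illustrative choices, not details taken from the paper or the posts:

```python
import numpy as np

def add_coord_channels(x):
    # CoordConv-style preprocessing: append two extra channels that hold
    # normalized (y, x) coordinates in [-1, 1], giving a following
    # convolution direct access to absolute position.
    # x: array of shape (batch, channels, height, width).
    b, c, h, w = x.shape
    ys = np.linspace(-1.0, 1.0, h).reshape(1, 1, h, 1)
    xs = np.linspace(-1.0, 1.0, w).reshape(1, 1, 1, w)
    y_chan = np.broadcast_to(ys, (b, 1, h, w))
    x_chan = np.broadcast_to(xs, (b, 1, h, w))
    return np.concatenate([x, y_chan, x_chan], axis=1)

# A 2-image batch of 3-channel 8x8 inputs gains two coordinate channels.
img = np.zeros((2, 3, 8, 8))
out = add_coord_channels(img)
print(out.shape)  # (2, 5, 8, 8)
```

The coordinate channels are fixed, not learned, which is what distinguishes this trick from learned position embeddings.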
12.01.2026 08:03 · 1 · 0 · 1 · 0
Ahaa! "Negative impact" is a continuous term, not a discrete one.
Now I understand why they were even threatening to desk reject one of my papers that had already been desk rejected ;)
(We didn't make it in time, so it was already out.)
Oh, I found the "Urgent! Risk of desk reject when co-author reviews not done" message quite exciting ;)
09.01.2026 11:37 · 9 · 0 · 2 · 0
Interesting! Even at a time when the last bastion of "knowledge-based visual feature engineering" (i.e., geometry) is being replaced by learning (e.g., VGGT)?
07.01.2026 20:27 · 0 · 0 · 0 · 0
Does it have to be 3 years?
For a failure, I can recommend my own TPAMI 2010 paper on "visual word ambiguity", exemplified by:
- not end-to-end (feature engineering)
- grayscale only
- convexity assumption
- small datasets
- ... (?)
I misread it as "chess cake".
Or should it be "check, cake"? (With the bishop.)
You taught the 17-year-old to play well, nice position.
I'd like to propose the following norm for peer review of papers. If a paper shows clear signs of LLM-generated errors that were not detected by the author, the paper should be immediately rejected. My reasoning: 1/ #ResearchIntegrity
28.12.2025 06:23 · 115 · 27 · 4 · 6
I'm catching up during the hiking holidays.
23.12.2025 20:09 · 1 · 0 · 0 · 0
Did we truly understand this ourselves?
21.12.2025 21:41 · 1 · 0 · 1 · 0
Congratulations!
15.12.2025 18:14 · 1 · 0 · 0 · 0
@davidpicard.bsky.social is doing it in Paris, with CVPR pronounced in a French accent.
13.12.2025 17:47 · 5 · 0 · 1 · 0
Incognito mode
09.12.2025 13:52 · 2 · 0 · 0 · 0
Dutch nationalistic reading?
08.12.2025 20:04 · 3 · 0 · 0 · 0
Wonderful! I like nearly all of his work.
As I also do for most of Neal Stephenson's work :)
Very interesting! I will have a look at those sources. Thank you.
I'm in machine learning (but not "AI") myself, and shortcut learning is one of the unsolved (practical?) problems in our field.
Definitely a language issue, my apologies.
I guess his point is: there's a conflict between gains for science and gains for the scientist.
And that aligning them is an unsolved problem; in machine learning there's Bostrom's paperclip example. Also Goodhart's law.
Incentives are intended as rewards, but having principles can hurt you personally.
I.e.: having principles and not "just going along with shady business" might hurt your career.
E.g.: criticizing the misconduct of famous or powerful people might get your next grant proposal rejected.