
Adriano D'Alessandro

@adrian-dalessandro.bsky.social

| Computer vision researcher | Computer science PhD candidate @ SFU | More: https://dalessandro.dev/ I like to count things and periodically I work on applications in plant agriculture + ecology. Follow for stale political hot takes. Free Palestine πŸ‡΅πŸ‡Έ

78 Followers  |  109 Following  |  252 Posts  |  Joined: 19.11.2024

Latest posts by adrian-dalessandro.bsky.social on Bluesky

This is a VERY important point, and I wonder how much knowledge we have lost because an interesting research path was cut off after early results failed to outperform big models on some benchmark. We are "expecting skyscrapers to learn to fly if we build them tall enough", while punishing other ideas.

03.10.2025 16:52 β€” πŸ‘ 2    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

Video essays are just easier to listen to in the background. Travel vlogs are more visual and experiential and require more attentiveness to get the full experience.

24.09.2025 16:52 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0
[Post images]

That's interesting! I sometimes use this puzzle as a litmus test for mental rotation in multimodal language models. I didn't expect that they could learn this skill spontaneously but it's interesting to see the information is there in some models!

22.09.2025 14:53 β€” πŸ‘ 2    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

I just reviewed for AAAI and I wasn't sold on the AI reviewer. It provides a decent breadth but it's a bit shallow. What I think would be superior is an LLM that audits the reviewer's actual review, attempts to identify weak arguments by the reviewer, and tries to get the reviewer to correct that.

22.09.2025 08:17 β€” πŸ‘ 1    πŸ” 1    πŸ’¬ 1    πŸ“Œ 0

questions specific to the review, with the goal of improving the overall quality of the review. Perhaps when the reviewer submits, the LLM generates a list of questions that the reviewer should answer before finalizing.

22.09.2025 07:46 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

One idea is having the reviewers work with an LLM. For example, a reviewer on a paper claimed the work wasn't novel because prior work existed. What prior work? They never said! I ended up pressing them on it because I wanted to champion the paper. An LLM in the loop could probe reviewers with key

22.09.2025 07:46 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

The total number of submissions could increase while still representing a reduction in the rate of submission growth. So I'm curious to see if EMNLP has such a drastic jump. But this might also just be a new normal, with LLMs accelerating the speed of research so that there are simply more papers.

22.09.2025 06:53 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

I disagree with the idea that accepting more papers inherently leads to more papers being submitted. If I submit a paper, I'm not just sitting on my hands. I'm polishing the submitted paper and starting a new paper. If the submitted paper is rejected, I'm sending 2 papers to the next conference.

21.09.2025 22:04 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

Just accept more papers? I'm curious what % of the total submission volume is borderline/accept papers that are resubmitted with minimal changes.

21.09.2025 17:02 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

I don't know if it's just my batch, but the other reviews for papers I reviewed are quite possibly the lowest quality reviews I've ever seen. I legitimately wonder if they just tossed the paper to ChatGPT and said "find any reason to justify rejecting this paper". #AAAI is going to be noisy.

15.09.2025 01:57 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0
LookAlikes Dataset LookAlikes is a fine-grained object-counting dataset for evaluating the generalization of class-agnostic counting methods. Each image contains visually similar objects, from which only a subcategory m...

Want to evaluate open-world object counting in a fine-grained setting? We’re excited to release the LookAlikes dataset, which is a test-set-only benchmark where images contain objects from multiple visually or semantically similar categories! #ComputerVision #DeepLearning #AI #Counting #OpenDataset

11.09.2025 20:52 β€” πŸ‘ 3    πŸ” 1    πŸ’¬ 0    πŸ“Œ 0

On the other hand, I suppose having access to it would just lead to authors gaming the system.

10.09.2025 19:38 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

Do they plan on releasing the model? If I can get quick feedback on a paper BEFORE submitting, then I would know whether the paper is at a stage where it's worth accepting at all, and I could iterate on it to improve the quality before submitting (rather than waiting 2 or 3 months before I get that feedback).

10.09.2025 19:38 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

might be very useful for multi-modal search, but I'm still not quite sure how to distill that knowledge.

03.09.2025 18:38 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

Something I've been wondering about -- image generative models seem able to visually represent categories that CLIP struggles to fully resolve. SDXL, for example, can draw Greylag Geese, but CLIP struggles to separate Greylag Geese from Canada Geese. So there's something in SDXL features that

03.09.2025 18:38 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
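One standard way to check whether such fine-grained information is decodable from a model's features is a linear probe. The sketch below uses synthetic arrays as stand-ins for real SDXL/CLIP embeddings (the class names, dimensions, and separation are all assumptions for illustration); if a least-squares probe separates the two classes, the distinction is linearly present in the feature space.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for image features of two visually similar
# classes (e.g. Greylag vs Canada Geese). The classes differ only along
# a narrow direction, mimicking a subtle but real fine-grained signal.
d, n = 64, 300
base = rng.normal(size=d)            # shared "goose-ness" component
sep = 0.5 * rng.normal(size=d)       # small class-specific offset
feats_a = base + rng.normal(size=(n, d))        # class A features
feats_b = base + sep + rng.normal(size=(n, d))  # class B features

X = np.vstack([feats_a, feats_b])
y = np.concatenate([-np.ones(n), np.ones(n)])

# Linear probe via least squares: high accuracy means the class
# distinction is linearly decodable from the features.
w, *_ = np.linalg.lstsq(X, y, rcond=None)
acc = np.mean(np.sign(X @ w) == y)
print(f"probe accuracy: {acc:.2f}")
```

In practice one would extract intermediate U-Net activations from SDXL and CLIP image embeddings for the same photos, then compare probe accuracy in each feature space; distillation would only be worthwhile where the generative features probe much better.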
Wet–dry cycles cause nucleic acid monomers to polymerize into long chains | PNAS The key first step in the oligomerization of monomers is to find an initiator, which is usually done by thermolysis or photolysis. We present a mar...

It's possible that tides are a precondition for life! Wet-dry cycles can lead to interesting chemistry or can concentrate materials in tide pools!

www.pnas.org/doi/10.1073/...

www.nature.com/articles/s41...

02.09.2025 21:13 β€” πŸ‘ 29    πŸ” 1    πŸ’¬ 2    πŸ“Œ 0

still accepting that paper. Why send it back into the wild for such minor things? If the work is good, the community will use it and sort out the rest. We tried to get away from the long review times of journals but we recreated it with this reject/resubmit cycle in highly selective conferences.

01.09.2025 00:32 β€” πŸ‘ 2    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

Our field having a conference-first approach is very atypical, and we shifted to it for faster dissemination of our work. I think immature but good ideas are exactly what conferences are for. When I'm reviewing, if an idea is good but there's a few experiments that could make it more complete, I'm

01.09.2025 00:32 β€” πŸ‘ 2    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

Is there an induced demand argument for conferences? I suppose the firehose of papers just expands to hit as many lottery tickets as possible, but it also reduces the pressure on students to throw immature but good ideas into ECCV|ICCV just because the next option is several months away.

31.08.2025 21:42 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

CV papers submitted to AAAI: 10K
CV papers submitted to ICCV: 11K

So AAAI is now processing as many CV papers as ICCV. Given ICCV rejection was about a month before the AAAI deadline, presumably some of the 7000 rejected CV papers are ending up at AAAI.

31.08.2025 21:34 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

The problem is, AAAI is simply seen as more prestigious than either ACCV or WACV. CV researchers will still preferentially submit to AAAI over the other two. I know PIs who have submitted to WACV and found the experience frustrating and amateurish. We need something with effort and names behind it.

31.08.2025 18:37 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

hard to establish as top tier. If CV researchers miss the CVPR+ECCV|ICCV window, which ends around May/June with rebuttals/decisions for ECCV|ICCV, then they have to wait until November to resubmit to another top-tier CV conference. Those papers are getting shifted to NeurIPS/AAAI.

29.08.2025 21:58 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

AAAI is undergoing a similar phenomenon. Nearly 30,000 papers submitted; 10,000 were CV papers (for comparison, ICCV had 11,000 submissions). But CV researchers only have 2 top-tier deadlines (CVPR + ECCV|ICCV) to submit to annually. We need another CV deadline in July or August, which we all work

29.08.2025 21:58 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 2    πŸ“Œ 0

Why are there only 2 major CV conferences a year? If I want to submit a deep learning paper, there is NeurIPS/ICLR/ICML/AAAI. For a CV paper, I only have CVPR and ECCV|ICCV (Nov/Mar deadlines). This year there were 10K CV papers submitted to AAAI and 11K to ICCV. There should be a July CV deadline

28.08.2025 18:21 β€” πŸ‘ 2    πŸ” 1    πŸ’¬ 0    πŸ“Œ 0
[Post image]

20.08.2025 06:34 β€” πŸ‘ 2    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0
[Post images]

GPT5 still can't solve this visual puzzle, which is interesting!

20.08.2025 06:32 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
[Post image]

I guess we know what's left for GPT6

08.08.2025 01:13 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

misogyny, the only narrative available to them as they try to desperately make their lives coherent.

02.08.2025 06:48 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

the sort of data that is most lucrative to the apps and we spend our lives interacting with a torrent of data on those apps. And, because of this we find ourselves living in this perpetual and flattening present, without any connection to the past or future. It is not surprising that men turn to

02.08.2025 06:48 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

I've been reading a bit of Byung-Chul Han recently, and I'm starting to think the male loneliness epidemic is just a byproduct of digital ecosystems destroying our ability to form coherent narratives about our lives. Dating is reduced to a listing of algorithm-friendly facts. We convert ourselves to

02.08.2025 06:48 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
