wymmmm
21.02.2026 21:04 · @namer.bsky.social
(He/Him). Previously at LARA Lab @UMBC, Mohsin Lab @BRACU | Accessibility, Explainability and Multimodal DL. My opinions are mine. I'm on the PhD application cycle for Fall '26! www.shadabchy.com
wymmmm
21.02.2026 21:04
Me in the Intro of a short paper in a Track that calls for "A 'prequel' to motivate or provoke novel conversations or future work":
> "Broadly, we offer a provocation to-"
Reviewer 2:
> "Weaknesses are that the contribution is primarily a provocation/synthesis (limited empirical validation"
That's not remotely the issue here though. LLMs can already generate stories/code/designs. World models aren't the same thing as persistent memory.
It can't generate *good* stories, for reasons of verifiability and un-RL-ability, as said above, and world models won't change that at all.
The neatest thing is just how much it looks like mold growing on the fruits. A nicely picked example.
06.02.2026 22:25
A Twitter post: Kyunghyun Cho @kchonyc: i was made aware of miscitations thanks to the GPTZero team (cc @alexcdot). ji won and i quickly checked them ourselves and have posted what happened on openreview: https://openreview.net/forum?id=IiEtQPGVyV&noteId=W66rrM5XPk. we have already notified NeurIPS'25 PC's about this issue. i truly thank the GPTZero team for bringing this to our attention as well as raising the awareness of this serious issue (https://gptzero.me/news/neurips/), and at the same time i sincerely apologize to all for our error.
There is an image inside the post: We identify the causes of miscitations in this work and provide fixes for them. These miscitations were identified and reported by the GPTZero team. The full report can be found at: Nazar Shmatko, Alex Adam and Paul Esau. GPTZero finds 100 new hallucinations in NeurIPS 2025 accepted papers. January 2026. https://gptzero.me/news/neurips/. We (Park & Cho) used a large-scale language model (LLM; specifically ChatGPT) to generate the citations after giving it author-year in-text citations, titles, or their paraphrases. Most, if not all, of these miscitations, either hallucinated, typoed or misattributed, are incorrect or non-existent bibtex entries fetched by the LLM. We categorize and analyze these citations below. We are submitting an updated version to arXiv (https://arxiv.org/abs/2510.21310) and have already uploaded the updated version at https://tinyurl.com/ymb99d7s. We will shortly reach out to the program chairs of NeurIPS'25 as well. We sincerely apologize to the whole community, especially the authors affected, for this grave mistake and thank the GPTZero team for bringing this issue to our attention.
(1) Prior work directly relevant to the paper. Below are miscitations for papers. Methods in these papers were implemented, or considered for implementation, in our paper as baseline uncertainty quantification methods, baseline sampling methods, or as the base LMs.
Like, if you don't know how, I'm happy to show you. It's not hard or anything, and it just speeds up your workflow.
Copying and pasting each of your citations into ChatGPT is just silly.
(img source: the other site)
Why on earth would you even do this in the first place?
Pasting the details into ChatGPT, then asking it to generate the citation is a way bigger hassle than simply having Scholar + Zotero (or another bib manager) extensions set up correctly to grab metadata and generate citations.
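And the thing is, rendering a citation from metadata is a deterministic string-formatting job, not a generative one. A minimal sketch of the idea (the `to_bibtex` helper and the placeholder entry are my own illustration, not any particular tool's API):

```python
def to_bibtex(key: str, entry_type: str, fields: dict) -> str:
    """Render a BibTeX entry from a dict of field -> value.

    Deterministic: the same metadata always yields the same entry,
    unlike asking an LLM to 'recall' the right citation.
    """
    lines = [f"@{entry_type}{{{key},"]
    for field, value in fields.items():
        lines.append(f"  {field} = {{{value}}},")
    lines.append("}")
    return "\n".join(lines)

# Hypothetical placeholder metadata, as a reference manager would scrape it.
entry = to_bibtex(
    "doe2026example",
    "inproceedings",
    {
        "title": "An Example Paper",
        "author": "Doe, Jane and Roe, Richard",
        "booktitle": "Proceedings of an Example Conference",
        "year": "2026",
    },
)
print(entry)
```

This is essentially all a bib manager's export step does; there's nothing for a language model to hallucinate.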
Lowkey, I think this is intentional, and it seems like the kind of thing I would do if I was making Claude more attractive to students or other people who don't want their code to be easily recognized as LLM-generated.
08.01.2026 21:26
Poster titled "Launching HCCS: Inauguration and research dialogue with the HCI Pioneers of Bangladesh". On the bottom left, there are 3 portraits, named Dr. Syed Ishtiaque Ahmed, Dr. Hasan Shahid Ferdous and Dr. Sharifa Sultana.
A stage with 4 people sitting on chairs in the middle. A poster in the background shows the previous image's poster, as well as a QR code to the panel discussion question submission.
It was great listening to Dr's Ishtiaque, Ferdous, and Sultana talk about their experiences!
There's a *lot* of HCI work to be done in the scope of Bangladesh, and IMO not enough folks working on them, so HCCS should be a great addition. Looking forward to the work soon to come out of the lab.
lowkey I appreciate folks aren't posting linkedinisms like "In 2025 I achieved X, Y, Z" this time around.
It's fine to celebrate your wins, but I imagine for most people, 2025 was...
A Year.
...and I think that's all that needs to be said.
Nahhh my high school friends would've also found that name funny as fuck 10 years ago
01.01.2026 09:21
Seen on LinkedIn.
How does someone raise $5 million and then write job ads like this?
smh...
I don't think "money" is the simple answer, since every frontier lab is a black hole of money rn, and gene editing could've also been ludicrously profitable.
Was it the political climate? The everyday accessibility of GenAI? Lower levels of scruples in the community (not to accuse anyone directly)?
In the mid-2010s, biology folks figured out how to do human gene editing. The community took one look, realized the consequences would be so dire for humanity, and put a hard stop to it. People like Jiankui He were excoriated for illegally editing embryos.
Why wasn't this the case with GenAI?
And it's not ML, it's "GenAI" that invokes certain concerns.
People don't mind when ML's used to detect cancer or study whale speech. Those models aren't trained on human inputs and used to take human jobs. GenAI specifically, however, is trained on human inputs and used to take human jobs.
Nah. That article's 8 months old and says they're "not there yet". Vincke's recent comment, however, explicitly mentions "flesh out PowerPoint presentations, develop concept art", and those tasks are explicitly creative jobs being lost.
17.12.2025 14:31
by perchance is it specifically like these 5 companies?
16.12.2025 22:45
I hope the Peer Reviewer Recognition Policy actually puts the sub-$30 per paper reviewed (based on 3 reviewers per $100 paper, minus waived papers) to good use.
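To make the back-of-the-envelope math explicit (the 3-reviews-per-paper figure follows the post; the waived fraction is purely my assumption, not an official number):

```python
# Rough per-review payout under the IJCAI-ECAI 2026 fee scheme.
fee_per_paper = 100          # USD, from the announcement
reviews_per_paper = 3        # typical review load, as the post assumes
waived_fraction = 0.2        # ASSUMED share of fee-waived "primary papers"

# Waived papers still need reviews, so average revenue per reviewed
# paper is the fee discounted by the waived share.
revenue_per_reviewed_paper = fee_per_paper * (1 - waived_fraction)
payout_per_review = revenue_per_reviewed_paper / reviews_per_paper
print(round(payout_per_review, 2))  # under these assumptions: 26.67
```

Any realistic waived fraction keeps the pool below the naive $33-per-review ceiling, which is where the "sub-$30" figure comes from.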
04.12.2025 19:25
Primary Paper Initiative: IJCAI-ECAI 2026 is launching the Primary Paper Initiative in response to the international AI research community's call to address challenges and to revitalize the peer review process, while strengthening the reviewers and authors in the process. Under the IJCAI-ECAI 2026 Primary Paper Initiative, every submission is subject to a fee of USD 100. That paper submission fee is waived for primary papers, i.e., papers for which none of the authors appear as an author on any other submission to IJCAI-ECAI 2026. The initiative applies to the main track, Survey Track, and all special tracks, excluding the Journal Track, the Sister Conferences Track, Early Career Highlights, Competitions, Demos, and the Doctoral Consortium. All proceeds generated from the Primary Paper Initiative will be exclusively directed toward the support of the reviewing community of IJCAI-ECAI 2026. To recognize the reviewers' contributions, the initiative introduces the Peer Reviewer Recognition Policy with clearly defined standards (which will be published on the conference web site). The initiative aims to enhance review quality, strengthen accountability, and uphold the scientific excellence of the conference. Details and the FAQ will be published on the IJCAI-ECAI 2026 website.
How was no one talking about this?* IJCAI-ECAI 2026 @ijcai.org levying a $100 fee per submission unless every author on the paper is only on that one submitted paper.
* rhetorical question. I assume the ICLR drama drowned it out.
Basically, if your Altmetric score is higher than your Accesses, you just got ratioed.
29.11.2025 13:33
Case in point.
Good grief.
Took me about 5 minutes to dig out the identity of the 40-questions reviewer, after someone posted it on the other site; I dunno how to search Xiaohongshu directly.
Honestly, I don't think western academics are going to feel a fraction of the shitstorm that the Chinese ML community's probably in.
This is the site I unfortunately have to be professional on.
20.11.2025 18:17
many such cases
Simpsons-tier prescience
I understand the sentiment behind this, but I'm just extremely bearish on rankings that do fancy mathematical tricks or use black-box algorithms because the more of that you do, the more you can bias it towards a specific outcome.
The closer the metric is to the raw data instead, the better.
Two issues: first, this is not transparent. There's NO way to tell *which* papers were counted, nor how 'most important papers to this paper' is computed. The papers contributing to the ranking should be listed.
Second, CSRankings is licensed CC BY-NC-ND 4.0, so I'm pretty sure this is copyright infringement.
I can't believe I spent the evening skimming through every Visual Reasoning paper at ICLR instead of finishing my SoP.
15.11.2025 14:58
It may be in bad taste to call out a reviewer like this, but I don't believe anyone who gives weaknesses like this is acting in good faith; it's empirical common sense for anyone working with MLLMs that larger models give better outcomes when inference speed isn't a concern.
15.11.2025 14:48
There's a reviewer at ICLR who apparently always writes *exactly* 40 weaknesses and comments no matter what paper he's reviewing.
Exhibit A: openreview.net/forum?id=8qk...
Exhibit B: openreview.net/forum?id=GlX...
Exhibit C: openreview.net/forum?id=kDh...
Tbh, it's better to go through Yann LeCun's lectures on JEPA first before trying to comprehend the mathematical foundations of LeJEPA, since LeJEPA is a specific optimization on top of the original JEPA theory.
The lecture notes were easier to work out initially.