Siddarth Venkatraman @ NeurIPS 2024

@hyperpotatoneo.bsky.social

PhD student at Mila | Diffusion models and reinforcement learning 🧐 | hyperpotatoneo.github.io

318 Followers  |  446 Following  |  31 Posts  |  Joined: 19.12.2023

Latest posts by hyperpotatoneo.bsky.social on Bluesky

Honestly, it feels like as an AI researcher it might actually be worth it to throw your dignity aside and pay Elon for Twitter Blue to advertise your papers. Getting papers famous is literally just a social media clout game now.

28.01.2025 15:33 — 👍 4    🔁 0    💬 1    📌 0

See the second part of my post - yes, they are likely using explicit search to improve performance at test time. But the focus should be on the search through reasoning chains itself, which the model has been trained to do with RL. Even for the explicit search, you still need the RL-trained value functions.

23.12.2024 01:02 — 👍 0    🔁 0    💬 0    📌 0

Few fields reward quick pivoting as much as AI, or conversely punish the very thing a PhD is usually meant to be: sticking with one research direction for 5 years no matter what, going really deep, becoming a niche expert.

for your research to be relevant in AI, you might wanna pivot every 1-2 years

22.12.2024 06:10 — 👍 15    🔁 2    💬 3    📌 0

I think the overlap between builders and researchers is larger in machine learning than in other disciplines.

22.12.2024 05:13 — 👍 1    🔁 0    💬 0    📌 0

You could still wrap this with explicit search techniques like MCTS if you have value functions for partial sequences (which would also be a product of the RL training). This could further improve performance, similar to fast vs slow policy in AlphaZero.

22.12.2024 04:09 — 👍 3    🔁 0    💬 0    📌 0
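A minimal sketch of what such value-guided search over partial sequences could look like. This is best-first search rather than full MCTS, and `expand`, `value_fn`, and `is_terminal` are hypothetical stand-ins for the model's proposal distribution, an RL-trained value head, and a completion check:

```python
import heapq

def value_guided_search(root, expand, value_fn, is_terminal, budget=100):
    """Best-first search over partial sequences, scored by a learned value.

    `expand`, `value_fn`, and `is_terminal` are placeholders for the
    policy's proposals, an RL-trained value function, and a completion
    check -- none of these names come from a specific implementation.
    """
    # Max-heap via negated values; each entry is (-value, sequence).
    frontier = [(-value_fn(root), root)]
    best, best_v = root, value_fn(root)
    for _ in range(budget):
        if not frontier:
            break
        neg_v, seq = heapq.heappop(frontier)
        if is_terminal(seq):
            # Keep the best-valued complete sequence seen so far.
            if -neg_v > best_v:
                best, best_v = seq, -neg_v
            continue
        for child in expand(seq):
            heapq.heappush(frontier, (-value_fn(child), child))
    return best
```

On a toy problem where sequences are bit tuples, children append a 0 or 1, and the value is the bit sum, the search recovers the all-ones sequence.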

Saying o3 is just a “more principled search technique” is quite reductive. The o series of models don’t require “explicit search” strategies in the form of tree search, wrapped in loops etc. Instead, RL is used to train the model to “learn to search” using long CoT chains.

22.12.2024 04:07 — 👍 2    🔁 0    💬 3    📌 0

You’re correct, there are plenty of simulated environments we can’t solve yet. But is solving one with PPO by running 1 million parallel instances of the environment sped up 100x, just to keep wall-clock time low, really a desirable solution?

22.12.2024 02:31 — 👍 0    🔁 0    💬 0    📌 0

This isn’t a general solution to RL. The point is to make learning algorithms sample efficient. If the environment you are doing RL on is the real world, you can’t make the “environment go fast”.

With “infinite samples”, you can randomly sample policies till you stumble on one with high reward.

21.12.2024 15:51 — 👍 5    🔁 0    💬 1    📌 0
GitHub - GFNOrg/diffusion-samplers

Come check out our NeurIPS poster today! We will be at West Ballroom #7101 from 4:30pm - 7:30pm.

Website: github.com/gfnorg/diffu...

12.12.2024 20:51 — 👍 1    🔁 1    💬 0    📌 0

If you're at NeurIPS, RLC is hosting an RL event from 8 till late at The Pearl on Dec. 11th. Join us, meet all the RL researchers, and spread the word!

10.12.2024 21:55 — 👍 63    🔁 18    💬 2    📌 4

Even his current claim that o1 is “better than most humans in most tasks” is pretty wild imo. What are “most tasks” here even? Obviously not any physical tasks because there is no embodiment. Can o1 actually completely replace a human in any job? Can it manage a project from start to finish?

07.12.2024 23:07 — 👍 0    🔁 0    💬 0    📌 0

x.com/vahidk/statu...

07.12.2024 22:54 — 👍 0    🔁 0    💬 1    📌 0

It also doesn’t help when OpenAI staff post about how o1 is already AGI (yes this happened today).

Unfortunately the dialogue is directed by those on either end of the spectrum (AI is useless vs AGI is already here) without much room for nuance.

07.12.2024 22:14 — 👍 4    🔁 0    💬 1    📌 0
A year before CEO shooting, lawsuit alleged UHC used AI to deny coverage The lawsuit accuses UnitedHealthcare of using artificial intelligence to deny coverage to elderly patients.

www.newsweek.com/united-healt...

I have anecdotal evidence from a friend who works at a client company for a popular insurance firm. They are using shitty “AI models” which are basically just CatBoost to mass-process claims. They know the models are shit, but that’s also the point. Truly sickening.

06.12.2024 09:01 — 👍 2    🔁 0    💬 0    📌 0

It is reductive to blame it all on a single CEO, but I find it hard to believe how you are “shocked” by this public reaction. UHC has the highest claim denial rate among insurance providers, resulting in untold medical bankruptcies and preventable deaths. I’m shocked this doesn’t happen more often.

06.12.2024 08:45 — 👍 5    🔁 0    💬 1    📌 0

Subtlety and nuance go out the window when strong political feelings are thrown in the mix. I understand why AI researchers can get defensive/angry due to toxic comments, but we should still try to understand the origin of people’s anger. Imo, right wing AI silicon valley billionaires are the root.

01.12.2024 20:40 — 👍 0    🔁 0    💬 0    📌 0

I think the recent conflict between AI researchers and the anti-AI clique hints at the latter. This broad left leaning user base could fracture again as differences in opinions between the farther left and moderate factions get amplified.

01.12.2024 04:28 — 👍 1    🔁 0    💬 0    📌 0

This app is an interesting social experiment. Assuming Bluesky doesn’t just fizzle out, will hostile social relations as in Twitter resurface here too? If hostilities do return, will it be because conservatives come to this app, or will it be new political tensions within left leaning communities?

01.12.2024 04:23 — 👍 0    🔁 0    💬 1    📌 0

Another thing: let’s reflect on whether they actually have a point. When I deeply reflect upon it, I am not even personally convinced that in the grand scheme of things AI is going to be a net good for humanity. So, maybe the distaste is warranted and we’re the ones in the bubble?

30.11.2024 14:05 — 👍 2    🔁 0    💬 1    📌 0

As AI researchers, we shouldn’t demonize people outside our space who have a passionate distaste for AI. You have to understand that most of the pro-AI sentiment people see online comes from absolutely vile “AI-bros”, especially on Twitter. We just need to distinguish ourselves as academics.

30.11.2024 14:03 — 👍 3    🔁 1    💬 1    📌 0

Yeah, it will definitely not be “true OT” at the end, but it works to get surprisingly smooth ODE paths that can be easily numerically integrated. You can train a CIFAR-10 flow model that generates high-quality images with 5-10 Euler steps.

30.11.2024 13:51 — 👍 0    🔁 0    💬 0    📌 0
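The few-step generation mentioned above boils down to Euler-integrating the learned ODE from noise at t=0 to data at t=1. A minimal sketch, where `velocity` is a placeholder for the trained flow model:

```python
import numpy as np

def euler_sample(velocity, x0, n_steps=8):
    """Integrate dx/dt = velocity(x, t) from t=0 to t=1 with n_steps
    Euler steps; `velocity` stands in for the trained flow model."""
    x, dt = x0, 1.0 / n_steps
    for i in range(n_steps):
        x = x + dt * velocity(x, i * dt)
    return x
```

With a constant velocity field the sampler simply translates the input by that velocity, which is the straight-path regime that smooth (near-OT) flows approximate.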
Improving and generalizing flow-based generative models with minibatch optimal transport Continuous normalizing flows (CNFs) are an attractive generative modeling technique, but they have been held back by limitations in their simulation-based maximum likelihood training. We introduce the...

You can do minibatch OT coupling to get actual optimal transport flows with simulation free training.

arxiv.org/abs/2302.00482

30.11.2024 13:18 — 👍 0    🔁 0    💬 1    📌 0
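A rough sketch of the minibatch OT coupling idea: within each batch, pair noise and data samples by solving a small assignment problem before computing flow-matching targets. The function names are mine, and scipy's Hungarian solver stands in for whatever OT solver the paper actually uses:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def ot_coupling(x0, x1):
    """Pair each noise sample x0[i] with a data sample x1[j] so the total
    squared transport cost within the minibatch is minimized."""
    # Pairwise squared Euclidean costs, shape (batch, batch).
    cost = ((x0[:, None, :] - x1[None, :, :]) ** 2).sum(-1)
    row, col = linear_sum_assignment(cost)
    return x0[row], x1[col]

def flow_matching_targets(x0, x1, t):
    """Conditional flow-matching targets for the coupled pairs: points on
    the straight path between them and the constant velocity x1 - x0."""
    xt = (1 - t[:, None]) * x0 + t[:, None] * x1
    return xt, x1 - x0
```

Because the coupling is recomputed per batch, training stays simulation-free while the straight paths between matched pairs cross less, which is what makes the learned ODE easy to integrate in few steps.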

Sure, that argument works from a utilitarian perspective.

From a monkey-brain casual user’s point of view, it looks ugly and outdated. And I think that’s what should be focused on.

29.11.2024 04:03 — 👍 0    🔁 0    💬 1    📌 0

Does anyone have thoughts on which generative models also learn the best representation features for downstream tasks?

My guess is GANs are a dark horse and the latents carry important abstract features. But we haven’t explored this much since they are hard to train.

29.11.2024 04:02 — 👍 1    🔁 0    💬 0    📌 0

You can just have a verification system like the one on pre-Elon Twitter, where blue check marks are verified accounts.

29.11.2024 03:52 — 👍 1    🔁 0    💬 2    📌 0

Ideally it should default to your username, like Twitter. These small inconveniences add up over time and could push people back to Twitter, so they need to be changed. Twitter perfected the design of this kind of social media, and these minor design choices matter.

29.11.2024 01:38 — 👍 2    🔁 0    💬 2    📌 0

IQL and BCQ are still the most consistent, reliable offline RL algorithms. Interestingly, IQL also optimizes for the optimal batch-constrained policy (just without the behavior policy model that BCQ needs).

Many other algorithms seem to work “better” only because they overfit hyperparameters to D4RL.

27.11.2024 14:30 — 👍 4    🔁 0    💬 1    📌 0
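The core trick in IQL is expectile regression: an asymmetric L2 loss on the value function that approximates an in-sample maximum over dataset actions without ever querying out-of-distribution ones. A minimal numpy sketch, with names of my choosing:

```python
import numpy as np

def expectile_loss(diff, tau=0.7):
    """IQL-style asymmetric L2 loss on diff = target_q - v.

    With tau > 0.5, positive errors are up-weighted, so the fitted V
    approaches an upper expectile of Q over actions in the dataset --
    an in-sample proxy for the max that avoids out-of-distribution
    action queries."""
    weight = np.where(diff > 0, tau, 1 - tau)
    return (weight * diff ** 2).mean()
```

At tau = 0.5 this reduces to ordinary mean squared error; pushing tau toward 1 makes the value estimate increasingly optimistic within the batch, which is what gives the batch-constrained flavor without a behavior policy model.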
xLSTM: Extended Long Short-Term Memory In the 1990s, the constant error carousel and gating were introduced as the central ideas of the Long Short-Term Memory (LSTM). Since then, LSTMs have stood the test of time and contributed to numerou...

xLSTM helps with the parallelization issue: arxiv.org/abs/2405.04517

I suspect the memory issues and compute scaling with sequence length will motivate some large-scale model with these soon. Probably for high-dimensional data like videos rather than language.

27.11.2024 14:20 — 👍 3    🔁 0    💬 0    📌 0

Pretty cool, I didn’t know of this work. Recurrent nets are still quite slow to train on long sequences like in LLMs since training isn’t parallelizable (though chunking like in your paper would definitely help). Would be curious to see how well it works at very large scale.

27.11.2024 14:17 — 👍 1    🔁 0    💬 1    📌 0

Would like to be added :)

27.11.2024 14:09 — 👍 0    🔁 0    💬 0    📌 0
