Claas Voelcker

@cvoelcker.bsky.social

For professional, see https://cvoelcker.de If I seem very angry, check if I have been watered in the last 24 hours. Now 🇺🇸 flavoured, previously available in 🇨🇦 and 🇩🇪

2,699 Followers 555 Following 837 Posts Joined Oct 2023
22 minutes ago

Basis of comparison really matters :D
DB (past) >> DB (today) >>>>>>>>>>> VIA

4 hours ago

My most-used model is Gemini because I’m a Luddite. Not saying Gemini is bad, but its competitive advantage remains search replacement with cross-references, which is the application you care about if you don’t actually trust LLMs.

6 hours ago

I will admit: having a husband who can understand your papers, has a master’s in your topic, and can tell you how to read a GPU utilization graph is a real boost 😅

6 hours ago

bsky.app/profile/cvoe...

6 hours ago

It’s not just overwhelm. AI contributes a lot to isolation: people talk less. Many students don’t ask things anymore, or at least they don’t ask me; they ask AI. I think this tool is useful, but it feels fundamentally alienating. I don’t want to read AI summaries, I want to read others’ thoughts.

6 hours ago

Very important nuance: I don’t think it’s not useful! It is incredibly useful, but that doesn’t mean I like the working mode. It’s hard to describe, but it’s like how some people hate driving an automatic car?

14 hours ago

To be honest, renting from a corporation, which fundamentally sees you as a customer important to its fiscal survival, can be much better than renting from a senior who thinks they are doing you a favour by graciously “letting” you live in “their” place where the windows don’t close…

15 hours ago

You did not say that in the post I was criticizing. You were saying “are you even a researcher, you need to catch up”. I’m asking you to reflect on the fact that your first reply was decidedly not about accessibility; quite frankly, it had the opposite vibe.

15 hours ago

Yes exactly. "Productivity" looks good and grabs attention, even if most of the work and effort ends up being for naught in the end. It's hard to compete with the sheer amount of noise even a single AI-amplified person can produce, let alone the noise of 25,000 submissions to ICML...

15 hours ago

I was not talking about your work, I was talking about you replying to a post that calls for "Hopeful visions for the future" with belittling others and saying "they have to catch up". All of these debates have little to do with US politics or access to resources.

16 hours ago

I think we are also in slightly different career stages. I don't feel like I am in a position where I can afford not to use every possible "advantage", but it also just removes so much of why I wanted this career in the first place. Collaborating (with humans) and teaching real students is why I'm here.

16 hours ago

Funnily enough, I am good friends with @marcelhussing.bsky.social who has been working with Aaron and Michael closely and told me a lot about their test of AI-aided mathematical research :) I'm not saying that people can't do incredible things with it, I just don't like this way of working.

16 hours ago

I will also say, many pro-AI people are ... tiring in the way they hype being extremely wired and building non-stop. I don't want to be competing with and sifting through mountains of stuff that is produced without any care. I want you all to take a breather and relax before your hearts give out.

16 hours ago

OK, after a few weeks of trying to use LLMs for my research, I have concluded that I just kinda hate it... I genuinely do not like interacting with AI. I think part of my issue is that my "think about it, read things, and understand it" drive is much higher than my "build something" drive.

16 hours ago

I want to assume you are engaging in good faith, but have you asked yourself whether you are part of the problem here and not the solution? "People need to catch up" is an utterly inhumane thing to say in the context of a post that talks about looking for a better way.

16 hours ago

No actually. But the doc and this reply were caused by the same set of incidents :D

1 day ago

Neural networks are highly non-convex, so approximate error minimizers need not look anything like each other in parameter space. But we show that nevertheless (for many model sizes) approximate error minimizers must closely agree in function/prediction space despite this!
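The quoted result concerns approximate minimizers in general, but the simplest way to see why parameter-space distance says little about function-space distance is the classic permutation symmetry of hidden units. A minimal NumPy sketch (toy one-hidden-layer network, hypothetical shapes, not the paper's construction):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-hidden-layer network: f(x) = W2 @ tanh(W1 @ x)
W1 = rng.normal(size=(4, 3))
W2 = rng.normal(size=(1, 4))

def net(W1, W2, x):
    return W2 @ np.tanh(W1 @ x)

# Permuting the hidden units gives a *different* point in parameter
# space that computes the *same* function.
perm = [2, 0, 3, 1]
W1p, W2p = W1[perm, :], W2[:, perm]

x = rng.normal(size=(3,))
assert not np.allclose(W1, W1p)                        # far apart in parameter space
assert np.allclose(net(W1, W2, x), net(W1p, W2p, x))   # identical predictions
```

This only shows the trivial symmetry; the claim in the post is the stronger statement that *all* approximate minimizers (at many model sizes) must nearly agree in prediction space.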

1 day ago

We have zero societal expectations and guardrails around beneficial, critical use of AI, and the climate created by both hype and hate is making it impossible to stop, think, and talk about how we collectively want to use them.

1 day ago

I think not talking and deliberating about their use breaks me more. Two things are weighing heavily on me: I often can't tell if I am talking to a student or ChatGPT, and everybody around me seems to think that working without any human interaction is somehow fun? This makes my work very isolating.

2 days ago

There is nothing that pisses me off quite as much as "my field is sooooooo different from the rest of ML, nobody can tell me any of their intuitions apply". I have been talking to roboticists and LLM people in short succession...
Every field is challenging; you can all still learn from each other!

2 days ago

The current direction of AI labs is “we’re building something that’s going to replace you and we have no plan to make sure you’re going to land in a better place, but we’ll make billions.”

The logical reaction is, “shut it down.” Labs need to get serious on addressing labor impacts.

3 days ago

It's this time of the year again: your baselines cannot be PPO and SAC.

1 week ago
Academics Need to Wake Up on AI: Ten theses for folks who haven't noticed the ground shifting under their feet

I'm not gonna touch most of this but two things:
1. The apprenticeship model of science was never supposed to be about the PI's workflow. The point isn't to have research assistants who help publish stuff faster. Training the next generation of researchers *is* the point. Because we care. Allegedly.

1 week ago

5/5 FAccT papers accepted, and also 2/2 CHI posters (for some reason much more competitive than you might think). I'm pretty burnt out though; I haven't gotten a single phone call in two years of being on the job market, and the pressure to publish a lot while also piecing together small grants 1/2

1 week ago

No, Walmart, vanilla-flavored yogurt is not a viable substitute for plain yogurt! I don't want vanilla-flavored naan! This is my punishment for going full burrito-taxi on my weekly groceries...

1 week ago

Saw a random hyper-racist post, thought "Oh shit, did I accidentally open twitter again?" and yes, of course I did. I should install some filter to block me from clicking on twitter links.

1 week ago

I think that's my main issue with receiving AI emails, which are an instant ignore for me. I can't trust that you, the sender, care as much about this interaction as the words make it seem.

Please, please, please, don't send me AI emails! I will take typos and bad grammar instead.

2 weeks ago

The real teacher forcing was the saccharine "solarpunk" stories I made you rehearse.

2 weeks ago

Submit your RL papers to RLC!
This is now perhaps the best venue for RL researchers.

2 weeks ago

I still think the best alignment strategy is to write a lot of really hopeful and optimistic fiction about AI so that this saturates the pretraining datasets and future AI will be forced to roleplay the most benevolent versions of themselves we can think of.
