Aidan Clark

@aidanclark.bsky.social

I train models @ OpenAI. Previously Research at DeepMind. These opinions and words are mine alone.

2,559 Followers 148 Following 137 Posts Joined Mar 2023
11 months ago

> and those were objectively superior to horses in several ways

Do you think there is no angle along which ChatGPT is objectively better than the thing replacing it?

0 0 1 0
11 months ago

And to be clear — none of the arguments you’re making are in the essay! The essay is making arguments which are far far weaker, and mostly revolve around trivializing what AI is currently capable of.

2 0 1 0
11 months ago

Do you think the first cars were cheaper or more reliable than a horse? You can call me overly ambitious for making this connection, but you must also see the precedent.

1 0 2 0
11 months ago

I think it’s reasonable to say “these essays aren’t yet that good” (they’re not!) and “this stuff costs a crazy amount of money upfront” (it does!), but that’s not the argument an essay like this makes; the essay says “this is going nowhere”, and that, to me, is so hard to sympathize with.

4 0 1 0
11 months ago

But we’re talking about capabilities, not profits. **we couldn’t do this 5 years ago**, and now we can. It’s crazy to call that fumes!!!!

1 0 1 0
11 months ago

I’m so confused by seeing technology that has progressed from garbled sentences to being able to write many college-level essays over a span of just 6 years… and concluding that the field is all fumes. Just mid. I’m incredulous.

7 0 2 0
11 months ago

Idk I feel like our job needs to be partially educational but I don’t know how to reach folks who aren’t interested in listening.

1 0 1 0
11 months ago

Posted on Bluesky because my Twitter will be an echo chamber of agreement.

7 0 2 0
11 months ago

It’s hard for me to take the opposing camp against AI seriously (those saying AI isn’t very good, not the camp which says it’s unjust) when their proponents’ essays are so filled with rhetorical tricks (ending by aligning AI to DOGE?!?!?) and a lack of desire to seriously grapple with AI’s value.

9 0 2 0
11 months ago
Opinion | The Tech Fantasy That Powers A.I. Is Running on Fumes A.I. is just what we need in the post-fact era: less research and more predicting what we want to hear.

Today’s NYT column could have been written in 1900 decrying the mid-ness of the horseless carriage. It says AI is a fizzling fad while mentioning, without any self-awareness, that AI can “predict my lecture […] anticipate essay prompts, research questions […] and then, finally, write a paper”

10 0 3 0
11 months ago

Someone has gotta start hyping MCBench on here I’m lost

1 0 0 0
1 year ago

Unfortunately everyone serious (vis-à-vis being a real scientist) refuses to engage on the topic in good faith. I’ve legitimately considered writing papers for the conferences where the anti-AI crowd congregates just to force them to the table….

6 0 0 0
1 year ago

The most impressive paper I read is one whose authors were able to convincingly show me that an empirical statement about deep neural networks holds true in a general way.

1 0 1 0
1 year ago

A few years ago the best work was focused small-scale architecture/data innovation, but I feel increasingly unable to trust extrapolating innovation from small scale. That’s why extrapolative-science-as-artifact is so valuable.

3 0 1 0
1 year ago

If you can’t train big models, the best experience you can get is working on projects that develop and show a clear ability to do subtle deep learning empirical science. Hard to overstate how valuable that skill is.

6 0 1 0
1 year ago

It’s been eye opening to me how much AI people (a) enjoy LLM poetry as capability examples but (b) have a remarkably surface-level understanding of what makes for good poetry

2 0 0 0
1 year ago

I realize for every 10 people voting Trump for Gaza, 9 of them were bots, but …. man …. I’d love to hear from that 10% right now.

7 0 0 0
1 year ago

this is also a comment on the problem with modern governments

2 0 0 0
1 year ago

It does seem emblematic of the problem with modern society that this company is lodging a court case instead of asking the government "hey, can you adjust the law so that this requirement is generalized to the extent where we can helpfully comply"

2 0 1 0
1 year ago

Common sense? 🙃

4 0 0 0
1 year ago

I don’t care so much about the charity, more about the blind trust.

3 0 0 0
1 year ago

I wish academic ML was a bit more skeptical of papers and less skeptical of industry. I get that it sucks to not have visibility on details, but it doesn’t invalidate the results. On the flip side, there are too many papers whose messages are parroted despite sketchy experiments.

9 2 3 0
1 year ago

+1 this is pretty clear virtue signaling IMO

3 0 0 0
1 year ago

get real

0 0 1 0
1 year ago

you’re telling me he _went up into a tree_?!?!?

0 0 1 0
1 year ago

Oh shit now here is a tactic I like

1 0 1 0
1 year ago

Ugh it sucks I have to decide between a feed full of people defending Elon’s parenting and a feed full of people telling me I’m a moron for finding value in one of the most helpful tools of all time.

20 1 5 1
1 year ago

If you are someone who thinks LLMs are bad for the world, claiming they are useless will **actively hurt your cause**; because the people you need to convince are people that are using them and scratching their heads as to why you’re claiming what you are.

15 0 1 0
1 year ago

Idk I wanna demilitarize, not keep this stupid arms race going…. I get why people are upset, especially artists + writers. I just hope they realize that most tech people are actually very sympathetic and are just being turned off by their insistence on holding a view of extreme denial.

7 0 3 0
1 year ago

By far the most common thing I see anti-LLM people say is “you’re an idiot for using a tool which can lie to you” …. Which is a hilariously false equivalence.

14 1 1 0