> and those were objectively superior to horses in several ways
Do you think there is no angle along which ChatGPT is objectively better than the thing replacing it?
And to be clear — none of the arguments you’re making are in the essay! The essay is making arguments which are far far weaker, and mostly revolve around trivializing what AI is currently capable of.
Do you think the first cars were cheaper or more reliable than a horse? You can call me overly ambitious for making this connection, but you must also see the precedent.
I think it’s reasonable to say “these essays aren’t yet that good” (they’re not!) and “this stuff costs a crazy amount of money upfront” (it does!), but that’s not the argument an essay like this makes; the essay says “this is going nowhere”, and that’s the part I find so hard to sympathize with.
But we’re talking about capabilities, not profits. **we couldn’t do this 5 years ago**, and now we can. It’s crazy to call that fumes!!!!
I’m so confused by people who watch a technology progress from garbled sentences to writing college-level essays over a span of just 6 years… and then conclude that the field is all fumes. Just mid. I’m incredulous.
Idk I feel like our job needs to be partially educational but I don’t know how to reach folks who aren’t interested in listening.
Posted on Bluesky because my Twitter will be an echo chamber of agreement.
It’s hard for me to take the anti-AI camp seriously (those saying AI isn’t very good, not the camp which says it’s unjust) when its proponents’ essays are so filled with rhetorical tricks (ending by likening AI to DOGE?!?!?) and show so little desire to seriously grapple with AI’s value.
Today’s NYT column could have been written in 1900 decrying the mid-ness of the horseless carriage. It says AI is a fizzling fad while mentioning —without any self-awareness— that AI can “predict my lecture […] anticipate essay prompts, research questions […] and then, finally, write a paper”
Someone has gotta start hyping MCBench on here. I’m lost.
Unfortunately everyone serious (vis-à-vis being a real scientist) refuses to engage on the topic in good faith. I’ve legitimately considered writing papers for the conferences where the anti-AI crowd congregates just to force them to the table…
The most impressive kind of paper is one where the authors convincingly show me that an empirical statement about deep neural networks holds true in a general way.
A few years ago the best work was focused, small-scale architecture/data innovation, but I feel increasingly unable to confidently extrapolate innovation from small scale. That’s why extrapolative-science-as-artifact is so valuable.
If you can’t train big models, the best experience you can get is working on projects that develop and show a clear ability to do subtle deep learning empirical science. Hard to overstate how valuable that skill is.
It’s been eye opening to me how much AI people (a) enjoy LLM poetry as capability examples but (b) have a remarkably surface-level understanding of what makes for good poetry
I realize that for every 10 people voting Trump over Gaza, 9 of them were bots, but… man… I’d love to hear from that 10% right now.
this is also a comment on the problem with modern governments
It does seem emblematic of the problem with modern society that this company is lodging a court case instead of asking the government "hey, can you adjust the law so that this requirement is generalized to the extent where we can helpfully comply"
Common sense? 🙃
I don’t care so much about the charity more about the blind trust.
I wish academic ML was a bit more skeptical of papers and less skeptical of industry. I get that it sucks to not have visibility into details, but that doesn’t invalidate the results. On the flip side, there are too many papers whose messages are parroted despite sketchy experiments.
+1 this is pretty clear virtue signaling IMO
get real
you’re telling me he _went up into a tree_?!?!?
Oh shit now here is a tactic I like
Ugh it sucks I have to decide between a feed full of people defending Elon’s parenting and a feed full of people telling me I’m a moron for finding value in one of the most helpful tools of all time.
If you are someone who thinks LLMs are bad for the world, claiming they are useless will **actively hurt your cause**, because the people you need to convince are the people using them and scratching their heads as to why you’re claiming what you are.
Idk I wanna demilitarize, not keep this stupid arms race going… I get why people are upset, especially artists + writers. I just hope they realize that most tech people are actually very sympathetic and are just being turned off by their insistence on holding a view of extreme denial.
By far the most common thing I see anti-LLM people say is “you’re an idiot for using a tool which can lie to you”… which is a hilariously false equivalence.