
Daniel Eth (yes, Eth is my actual last name)

@daniel-eth.bsky.social

AI alignment & memes | "known for his humorous and insightful tweets" - Bing/GPT-4 | prev: @FHIOxford

4,323 Followers  |  47 Following  |  227 Posts  |  Joined: 11.04.2023

Latest posts by daniel-eth.bsky.social on Bluesky

Marc Andreessen losing all his money by investing in all the wrong AI companies - call that AI safety by default

15.01.2026 05:27 — 👍 19    🔁 0    💬 1    📌 0

If you can substitute "hungry ghost trapped in a jar" for "AI" in a sentence it's probably a valid use case for LLMs. Take "I have a bunch of hungry ghosts in jars, they mainly write SQL queries for me". Sure. Reasonable use case.

"My girlfriend is a hungry ghost I trapped in a jar"? No. Deranged.

13.08.2025 00:56 — 👍 2590    🔁 626    💬 37    📌 57

This *might* be an indication that Anthropic has gotten better at getting models to do longer tasks, specifically. If so, this could be the first signs that they’ve solved/are solving a complex bottleneck to more complex tasks. Or not. Unclear. But if so, that’s a big deal!

11.01.2026 00:08 — 👍 7    🔁 0    💬 0    📌 0
Post image

Third, the curve for Claude Opus 4.5 is “flatter” than previous models (it does relatively better at longer tasks compared to shorter ones). And the longest tasks it does are ones where it’s getting ~50%, b/c METR doesn’t have enough tasks that are long enough in their dataset…

11.01.2026 00:07 — 👍 4    🔁 0    💬 1    📌 0

You could argue we’re on a 4-month doubling time now instead of 7-month doubling time (I remain uncertain of what to expect over the next year), but regardless this is a continuation of previous progress, not a discontinuity

11.01.2026 00:07 — 👍 4    🔁 0    💬 1    📌 0
Post image
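As an aside on the doubling-time comparison above: the gap between a 7-month and a 4-month doubling time compounds quickly. A minimal illustrative sketch (the starting horizon and time spans are made-up numbers, not METR's actual data):

```python
# Illustrative only: how a task-length "time horizon" grows under
# exponential trends with different doubling times. Numbers are
# hypothetical, chosen just to show how the gap compounds.

def horizon_after(months: float, start_minutes: float, doubling_months: float) -> float:
    """Task-length horizon after `months` of exponential growth."""
    return start_minutes * 2 ** (months / doubling_months)

# Starting from a hypothetical 60-minute horizon, after two years:
slow = horizon_after(24, 60, 7)  # 7-month doubling time
fast = horizon_after(24, 60, 4)  # 4-month doubling time
print(f"7-month doubling: {slow:.0f} min; 4-month doubling: {fast:.0f} min")
```

Same trend shape either way; only the rate parameter differs, which is why a single strong release can't by itself distinguish the two.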

Second, on a log plot, note this is hardly above trend. Sure, it *could* represent a new trend, but it seems like every time there’s a model release that overperforms people think timelines get super short, & every time a model underperforms they think timelines get super long…

11.01.2026 00:06 — 👍 5    🔁 0    💬 1    📌 0
Post image

A few thoughts on Claude Opus 4.5:

First off, in absolute terms, this is a pretty big step up. Anthropic is showing they have juice, and things are going faster than previously expected. At the very least, this should dispel all recent talk about how AI was entering a slowdown

11.01.2026 00:03 — 👍 13    🔁 1    💬 2    📌 0
Post image

A reminder that we're hiring for several really important roles at Coefficient Giving! Learn more here: coefficientgiving.org/about-us/ca...

09.01.2026 22:42 — 👍 8    🔁 1    💬 0    📌 0
Post image

lol

07.01.2026 01:42 — 👍 21    🔁 5    💬 1    📌 0

AI accelerationists are in a bit of a bind, in that their views are deeply unpopular; by aggressively fighting for them they also raise the salience of AI politically, which hurts their cause

06.01.2026 18:19 — 👍 4    🔁 0    💬 1    📌 0
Post image

Notably, the public has also shifted away from Republicans on the issue (up on the graph), coinciding with many Republicans pushing an anti-regulatory attitude towards AI. Voters now trust Dems & Rs about equally on the issue, indicating voters are up for grabs by either party

05.01.2026 08:30 — 👍 4    🔁 0    💬 1    📌 0
Post image

There’s still a long ways to go before AI is a top voter concern like health care or cost of living, but I’d expect this trend to continue as AI becomes more powerful. Politicians who side with wealthy tech donors over voter preferences may wind up regretting that decision

05.01.2026 08:29 — 👍 4    🔁 0    💬 1    📌 0
Post image

Graph showing AI becoming higher salience to voters (more to the right on the graph). According to this data, AI is now higher salience than climate change, and approaching the salience of gas prices

05.01.2026 08:28 — 👍 7    🔁 3    💬 1    📌 0

I wonder if this is related to Trump’s recent shift from not caring about AI preemption to heavily pushing it. The OpenAI-Andreessen super PAC can spook rank-and-file members of Congress, but donating to Trump’s super PAC would build a stronger relationship w/ Trump, specifically

04.01.2026 03:20 — 👍 3    🔁 0    💬 0    📌 0

Oh wow - OpenAI’s Greg Brockman was the single largest donor to Trump’s super PAC over the past 6 months. I knew OpenAI/Brockman were trying to flex their muscles politically to block all meaningful AI regulations, didn’t realize they had literally become Trump’s largest donor

04.01.2026 03:20 — 👍 8    🔁 1    💬 2    📌 0
Post image Post image

For those who aren’t following the details, here are the relevant connections:

24.11.2025 06:15 — 👍 3    🔁 0    💬 0    📌 0
Post image

I think more OpenAI employees should be aware of the very bad-faith political activities that OpenAI is supporting through Greg Brockman’s funding of the Andreessen-OpenAI super PAC cluster

(Twitter’s location verification has a known bug, but Leamer doesn’t care about the truth.)

24.11.2025 06:14 — 👍 7    🔁 0    💬 1    📌 0

I think further advancements may overcome these challenges, the way that reasoning models overcame previous challenges associated with reasoning. I don’t think the clearest shot toward AGI is literally just scaling up LLMs, but rather a combination of scale and modifications to current methods

21.11.2025 18:52 — 👍 1    🔁 0    💬 1    📌 0

Now, I do think the automated AI R&D feedback loop will *eventually* speed things up a ton, but I don’t think this has really kicked off yet

21.11.2025 09:22 — 👍 5    🔁 0    💬 1    📌 0

Meanwhile, various people predicted the trend was about to (or already did) become faster, e.g., due to paradigm shifts with reasoning models. I think those people's predictions were also off.

21.11.2025 09:21 — 👍 5    🔁 0    💬 1    📌 0
Post image

Viewing the graph on a linear scale demonstrates that claims of AI "hitting a wall" are clearly off. People *keep making* these claims, but while not every model release lives up to hype, no, AI has not hit a wall yet, and there's no indication it's about to, either

21.11.2025 09:19 — 👍 5    🔁 1    💬 1    📌 0
Post image

Things are looking smoothly exponential for AI over the past several years, and I continue to think this is the best default assumption (until the AI R&D automation feedback loop eventually speeds everything up)

21.11.2025 09:19 — 👍 16    🔁 2    💬 3    📌 1

Republicans already tried to ban states’ ability to regulate AI in their Big, Ugly Bill.

That ban was voted down 99-1.

Their new political maneuver would be a free pass to Big Tech, and it must be stopped again.

19.11.2025 21:30 — 👍 22    🔁 4    💬 11    📌 2

TBC I have no problem with a federal standard that both actually provides strong guardrails and preempts the states. In fact, I’d be for that. But Andreessen isn’t actually for federal rules; he just wants minimal rules

19.11.2025 05:28 — 👍 2    🔁 0    💬 1    📌 0
Post image

The guy who mocks the pope and funds gambling and porn apps thinks a ban on state AI laws is great! Seems like basically every normal person hates this idea - hopefully politicians recognize it for what it is and don’t side with Andreessen

19.11.2025 03:30 — 👍 8    🔁 0    💬 1    📌 1

Hot take but if AI accelerationists are actually worried about a patchwork of state regulations, they should work w/ those who want guardrails to craft serious federal rules. Don’t just try to sneak a standalone blanket moratorium into a must-pass bill behind closed doors

19.11.2025 01:35 — 👍 9    🔁 0    💬 0    📌 0
https://asteriskmag.substack.com/p/common-ground-between-ai-2027-and

Good collaborative piece between the authors of AI 2027 and those of AI as Normal Technology on areas of shared agreement

t.co/h82vvrYRPM

15.11.2025 23:12 — 👍 19    🔁 3    💬 0    📌 3
Post image

If you jog in these sneakers it’s called a training run

15.11.2025 19:15 — 👍 20    🔁 0    💬 0    📌 0

This whole Andreessen thing is a good reminder that you shouldn’t confuse vice with competence. Just because the guy is rude & subversive does not mean that he has intelligent things to say

12.11.2025 19:17 — 👍 10    🔁 0    💬 0    📌 0

Feels like the dam has broken on people in the tech community airing grievances with Andreessen. Honestly makes me feel better about the direction of the tech community writ large

12.11.2025 04:09 — 👍 7    🔁 0    💬 1    📌 0
