@climateer.bsky.social

74 Followers  |  82 Following  |  15 Posts  |  Joined: 08.11.2024

Latest posts by climateer.bsky.social on Bluesky

Are We on the Brink of AGI? A Tale of Two Timelines

The post: amistrongeryet.substack.com/p/are-we-on-...

06.01.2025 04:51 — 👍 0    🔁 0    💬 0    📌 0

Fast scenario: capabilities still jagged, but AIs handle so much R&D grunt work that the number of experiments skyrockets & we stumble into major breakthroughs, addressing continuous learning, tasks requiring external context, & long-horizon planning. 2025 still not Year of the Agent, but 2026 is.

06.01.2025 04:50 — 👍 0    🔁 0    💬 1    📌 0

Slow scenario: scaling continues, but slowly. Inference-time compute bears fruit, but models still struggle with some tasks. AIs accelerate *some* aspects of AI R&D, but Amdahl's Law kicks in to limit the overall impact. 2025 is predicted to be the Year of the Agent, but this proves very premature.

06.01.2025 04:48 — 👍 0    🔁 0    💬 1    📌 0

As we learn more about AI progress (o3 scoring 25.2% on the *very* difficult FrontierMath benchmark), views on AGI timelines seem to be diverging, not converging. In a post that was fun to write, I sketch fast and slow timelines and flag some signals to watch for. (Link in 🧵)

06.01.2025 04:47 — 👍 1    🔁 0    💬 1    📌 0
The Black Spatula Project: Day Five Off to a Roaring Start

A Christmas Eve update on The Black Spatula Project: the community is off to a roaring start. Dozens of participants contributing ideas, refining prompts, finding errors in papers, and more. Join us in a fun and easy opportunity to improve science! amistrongeryet.substack.com/p/black-spat...

24.12.2024 16:49 — 👍 0    🔁 0    💬 0    📌 0
The Future Is Already Here, It’s Just Not Evenly Distributed: You Can Awe Some of the People Some of the Time

My post: amistrongeryet.substack.com/p/uneven-imp...

16.12.2024 00:52 — 👍 0    🔁 0    💬 0    📌 0

This is not temporary! AI is going to keep advancing, it'll always have uneven capabilities (so do we!), and some people will continue to be more comfortable trying new things. Spotting ways of connecting AI strengths to important problems will continue to be a source of value.

16.12.2024 00:52 — 👍 0    🔁 0    💬 1    📌 0

@emollick.bsky.social recently noted the weirdly uneven nature of AI announcements – "Here is a tool to accelerate science. It also talks like Santa". This reflects an underlying reality: AI capabilities and adoption are both wildly uneven. I discuss in my latest post (link below).

16.12.2024 00:51 — 👍 0    🔁 0    💬 1    📌 0

tbc, I'm referring to there being many reasonable-seeming people who are fairly confident AGI is near, and also many who are fairly confident it is not. I can't recall the last time we've had such legitimate and profound uncertainty over such a titanically important question of fact.

26.11.2024 19:56 — 👍 0    🔁 0    💬 0    📌 0

I was wrestling with how to approach a post about timelines, and then one of those opening sentences that defines the entire essay popped into my head:

How the fuck did we wind up in a situation where it's impossible to tell whether AGI is just 3 years away?

26.11.2024 19:55 — 👍 0    🔁 0    💬 1    📌 0

TFW deciding whether to spend ten minutes reading an article about how we all have too many things competing for our attention

21.11.2024 02:42 — 👍 0    🔁 0    💬 0    📌 0
https://amistrongeryet.substack.com/p/the-real-questions-in-ai

In my latest post, I try to identify these key questions.

t.co/Ah0RfC5ljW

20.11.2024 22:57 — 👍 1    🔁 0    💬 0    📌 0

When you see someone argue for a position, ask: what must the writer believe about these questions, in order for their position to make sense? When you see disagreement: are the two sides discussing differing versions of reality? A functional debate must address the underlying questions of fact.

20.11.2024 22:57 — 👍 1    🔁 0    💬 1    📌 0

The fundamental questions often boil down to: how quickly will AI impact the world? Could disaster arise without warning? Is international cooperation feasible? Will we be safer in unipolar or multipolar worlds? Should AIs have rights?

20.11.2024 22:57 — 👍 1    🔁 0    💬 1    📌 0

@sebk.bsky.social identified a key source of disagreement on AI policy: wildly divergent views of AI's impact. Arguments seem to stem from conflicting values – profits and progress vs. safety and status quo. But often, the real disagreement is unstated factual assumptions regarding the future.

20.11.2024 22:54 — 👍 3    🔁 1    💬 1    📌 0