Schaun Wheeler

@schaunwheeler.bsky.social

Anthropologist + Data Scientist. Cofounder at aampe.com

40 Followers  |  605 Following  |  68 Posts  |  Joined: 03.10.2023

Latest posts by schaunwheeler.bsky.social on Bluesky

I've never really enjoyed Wodehouse. I tried several of his books and thought they were all kinda meh. But I ate up everything from Deeping, Rinehart, Chambers, Morris, Oppenheim, etc. I don't get Wodehouse's appeal, given his contemporaries...which makes me feel I'm missing something important.

03.09.2025 23:22 · 👍 0  🔁 0  💬 0  📌 0

Question (honestly curious, not trying to be snarky): what do you find so perfectly executed about that story? I mean, it's delightful...but seems to be so in the same way as others of the same milieu, and with pacing a bit more stodgy than fits the character/setting.

03.09.2025 23:22 · 👍 0  🔁 0  💬 1  📌 0

An agent that can't *choose* its next move isn't an agent. It's just a novel interface for the same information retrieval, content management, and marketing automation systems we've had for years.

25.08.2025 15:30 · 👍 1  🔁 0  💬 0  📌 0

Fully agentic systems need a hybrid architecture: a semantic-associative learner that builds and updates long-term user profiles, and a procedural actor that generates fluent, on-brand content (or retrieves it from inventory).

25.08.2025 15:30 · 👍 0  🔁 0  💬 1  📌 0
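
A minimal sketch of the split described in the post above, under my own assumptions about the interfaces (the class and method names are illustrative, not Aampe's actual design): the learner owns long-term user state and the outcome-driven choice, while the actor only renders content for whatever the learner chooses.

```python
from dataclasses import dataclass, field


@dataclass
class UserProfile:
    """Long-term semantic/associative state for one user (hypothetical schema)."""
    user_id: str
    # message category -> running score of how well that category has worked
    category_scores: dict[str, float] = field(default_factory=dict)


class SemanticAssociativeLearner:
    """Builds and updates long-term profiles from observed outcomes."""

    def __init__(self) -> None:
        self.profiles: dict[str, UserProfile] = {}

    def update(self, user_id: str, category: str, reward: float) -> None:
        profile = self.profiles.setdefault(user_id, UserProfile(user_id))
        old = profile.category_scores.get(category, 0.0)
        profile.category_scores[category] = 0.9 * old + 0.1 * reward  # simple decaying average

    def choose_category(self, user_id: str, categories: list[str]) -> str:
        profile = self.profiles.setdefault(user_id, UserProfile(user_id))
        return max(categories, key=lambda c: profile.category_scores.get(c, 0.0))


class ProceduralActor:
    """Generates (or retrieves) fluent, on-brand copy for a chosen category.
    In a real system an LLM or a content inventory would sit here."""

    def render(self, user_id: str, category: str) -> str:
        return f"[{category}] message for {user_id}"  # placeholder copy


learner = SemanticAssociativeLearner()
actor = ProceduralActor()
learner.update("u1", "discount", reward=0.0)        # ignored push
learner.update("u1", "returns_policy", reward=1.0)  # engaged
best = learner.choose_category("u1", ["discount", "returns_policy"])
print(actor.render("u1", best))  # -> [returns_policy] message for u1
```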

Such hacks don't build conceptual understanding or learn from outcomes. If a user ignores a "20% off" push yesterday, an LLM can draft today's new message about your return policy - but it won't autonomously pick that message. No adaptation, no evolving preferences.

25.08.2025 15:30 · 👍 0  🔁 0  💬 1  📌 0

Recent LLM hacks like retrieval-augmented generation (external info stuffed into the prompt) or session-summary "memory" (re-feeding past interactions) preserve surface continuity and sometimes reduce hallucinations. But they aren't true semantic/associative memory.

25.08.2025 15:30 · 👍 0  🔁 0  💬 1  📌 0
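
To make "stuffed into the prompt" concrete, here is a toy sketch of both hacks with hypothetical helpers: a naive keyword retriever and a re-fed session summary. The assembled prompt preserves continuity, but nothing here updates long-term, outcome-linked state.

```python
def retrieve_documents(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Toy retriever: rank documents by naive keyword overlap with the query."""
    query_words = set(query.lower().split())
    return sorted(corpus, key=lambda d: -len(query_words & set(d.lower().split())))[:k]


def build_prompt(query: str, corpus: list[str], session_summary: str) -> str:
    """Stuff retrieved context and a re-fed session summary into one prompt."""
    context = "\n".join(retrieve_documents(query, corpus))
    return (
        f"Previous session summary:\n{session_summary}\n\n"
        f"Retrieved context:\n{context}\n\n"
        f"User question: {query}"
    )


corpus = ["Our return policy allows refunds within 30 days.", "We ship worldwide."]
prompt = build_prompt(
    query="What is the return policy?",
    corpus=corpus,
    session_summary="User ignored yesterday's 20% off push notification.",
)
print(prompt)  # the model sees continuity, but no preferences were learned
```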

Real autonomy requires semantic + associative learning. An agent must consolidate experiences into transferable categories and tie those to outcomes. That's how it forms opinions on which strategies to pursue or avoid over time.

25.08.2025 15:30 · 👍 0  🔁 0  💬 1  📌 0
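
A sketch of what "consolidate experiences into transferable categories and tie those to outcomes" could look like in its simplest form; the category mapping and counters here are my own illustration.

```python
from collections import defaultdict

# Hypothetical mapping from concrete messages to transferable categories.
CATEGORY_OF = {
    "20_percent_off_push": "price_incentive",
    "free_shipping_banner": "price_incentive",
    "returns_policy_reminder": "risk_reduction",
}

# category -> [successes, attempts], accumulated across experiences
outcomes: dict[str, list[int]] = defaultdict(lambda: [0, 0])


def record(message_id: str, engaged: bool) -> None:
    """Consolidate a concrete experience into its abstract category."""
    category = CATEGORY_OF[message_id]
    outcomes[category][0] += int(engaged)
    outcomes[category][1] += 1


record("20_percent_off_push", engaged=False)
record("free_shipping_banner", engaged=False)
record("returns_policy_reminder", engaged=True)

# The learned "opinion": success rate per category, transferable to unseen messages.
for category, (wins, tries) in outcomes.items():
    print(category, wins / tries)
```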

Agency ≠ next-token prediction. A truly agentic system decides *when* and *how* to act without waiting for instructions. LLMs only predict the next token given a prompt. They don't decide to prompt themselves.

25.08.2025 15:30 · 👍 0  🔁 0  💬 1  📌 0

Imagine someone who follows every cooking step flawlessly but has no sense of taste, no clue if others liked it, no idea how to improve. Without semantic understanding or feedback associations, true adaptation - and true agency - can't happen.

25.08.2025 15:30 · 👍 0  🔁 0  💬 1  📌 0

Those are crucial to human cognition, but we also rely on:
3. Semantic memory = abstract concepts (knowing that "sustainability" is a thing).
4. Associative learning = linking concepts to outcomes (learning that stressing sustainability drives engagement).

25.08.2025 15:30 · 👍 0  🔁 0  💬 1  📌 0

LLMs excel at two kinds of "thinking":
1. Procedural memory = automating skills (like writing a sentence or riding a bike).
2. Working memory = juggling info in the moment (like keeping a phone number in mind).

25.08.2025 15:30 · 👍 0  🔁 0  💬 1  📌 0
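
To keep the four systems from the two posts above distinct, here is a toy schema of the state each one would correspond to in an engagement agent; the field names and contents are invented for illustration. LLMs give you the first two essentially for free, while the last two have to be built and updated outside the model.

```python
from dataclasses import dataclass, field


@dataclass
class AgentMemory:
    # 1. Procedural memory: automated skills (only named here; in practice, the LLM itself).
    procedural_skills: set[str] = field(default_factory=lambda: {"write_sentence", "draft_push_copy"})
    # 2. Working memory: what's held in the moment (the current context window).
    working_context: list[str] = field(default_factory=list)
    # 3. Semantic memory: abstract concepts the agent knows exist.
    semantic_concepts: set[str] = field(default_factory=lambda: {"sustainability"})
    # 4. Associative learning: concepts linked to observed outcomes (e.g. engagement lift).
    associations: dict[str, float] = field(default_factory=dict)


memory = AgentMemory()
memory.working_context.append("user opened the app twice today")
memory.associations["sustainability"] = 0.12  # stressing sustainability drove engagement
print(memory)
```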
Link preview: "Procedural Memory Is Not All You Need: Bridging Cognitive Gaps in LLM-Based Agents" - Large Language Models (LLMs) represent a landmark achievement in Artificial Intelligence (AI), demonstrating unprecedented proficiency in procedural tasks such as text generation, code completion, and...

An LLM, by itself, cannot be truly agentic. Same for swarms, teams, workflows, or "multi-agent" systems. If the LLM drives everything, it's not agentic. LLMs can be useful appendages, but not a sound foundation.

arxiv.org/abs/2505.03434

25.08.2025 15:30 · 👍 2  🔁 0  💬 1  📌 0

It's hard to move beyond campaigns because they're simple. They tame messy behavior into tidy segments. But simplicity for you isn't value for users. An agentic mindset means letting agents manage orchestration's complexity while analysis stays clear and human-scale.

22.08.2025 16:24 · 👍 1  🔁 0  💬 0  📌 0

Agentic systems separate orchestration from analysis. Orchestration is about maximizing who could benefit. Analysis is about retrospective learning: what worked, for whom, under which conditions. That separation expands impact without giving up interpretability.

22.08.2025 16:24 · 👍 0  🔁 0  💬 1  📌 0
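
A toy sketch of that separation, with invented function names: orchestration assigns messages per user from whatever the current profile favors, and analysis is a purely retrospective aggregation over the send log, so each can change without breaking the other.

```python
from collections import defaultdict


def orchestrate(users: dict[str, dict[str, float]], messages: list[str]) -> dict[str, str]:
    """Orchestration: give each user the message their current profile favors."""
    return {
        user_id: max(messages, key=lambda m: scores.get(m, 0.0))
        for user_id, scores in users.items()
    }


def analyze(log: list[tuple[str, str, bool]]) -> dict[str, float]:
    """Analysis: retrospective engagement rate per message from the send log."""
    tallies: dict[str, list[int]] = defaultdict(lambda: [0, 0])
    for _user, message, engaged in log:
        tallies[message][0] += int(engaged)
        tallies[message][1] += 1
    return {m: wins / tries for m, (wins, tries) in tallies.items()}


users = {"u1": {"returns_policy": 0.4}, "u2": {"discount": 0.7}}
assignments = orchestrate(users, ["discount", "returns_policy"])
log = [("u1", "returns_policy", True), ("u2", "discount", False)]
print(assignments, analyze(log))
```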

That dual role makes campaign logic feel natural even when it's arbitrary. Consider: "nudge users who haven't engaged in 30 days." Why 30? Why not 29, or 1? The threshold isn't about user needs. It's a simplification shaped by campaign design, not by actual behavior.

22.08.2025 16:24 · 👍 0  🔁 0  💬 1  📌 0

Campaigns usually serve two roles at once. First, orchestration: deciding which users get which messages under what conditions, breaking logistics into parts. Second, analysis: measuring outcomes tied to audience, timing, and content. Doing both jobs obscures insight.

22.08.2025 16:24 · 👍 0  🔁 0  💬 1  📌 0
A comparison of campaign-based engagement, which is easier to manage, with agent-based engagement, which is more effective because it decouples orchestration from analytics.

Technology is enabling, but also constraining. Choosing a tool means trading flexibility in one dimension for scale in another. Customer engagement is no different. Campaigns, in particular, aren't neutral abstractions; they're design choices with real consequences.

22.08.2025 16:24 · 👍 1  🔁 0  💬 1  📌 0

We don't need to solve the philosophy of agency. What we need is a performance definition of acting agentically under complexity. Current benchmarks rarely test this, or when they do, they show LLMs fall short. That gap matters more than whether "agency" is solved.

21.08.2025 12:06 · 👍 1  🔁 0  💬 0  📌 0

Over minutes, mimicry looks convincing. But over hours, days, or weeks, acting agentically means deciding what to do next, why, and how to carry those lessons forward. For that, semantic-associative learning is required. Procedural memory alone isn't enough.

21.08.2025 12:06 · 👍 1  🔁 0  💬 1  📌 0

When signals are delayed, goals conflict, or feedback is ambiguous, procedural mimicry fails. Without semantic memory to form abstractions and associative learning to link them to outcomes, systems can't adapt with consistent success across shifting contexts.

21.08.2025 12:06 · 👍 0  🔁 0  💬 1  📌 0

In predictable, stable environments with clear feedback, even LLMs can appear agentic. With only procedural and working memory, they give the impression of knowing what they're doing. But the appearance fades when environments become less structured.

21.08.2025 12:06 · 👍 0  🔁 0  💬 1  📌 0

In practice, the challenge is building systems whose behavior is hard to distinguish from beings who think for themselves. Acting agentically is a generalized Turing Test: not proving thought, but performing well enough that it looks like intention is present.

21.08.2025 12:06 · 👍 0  🔁 0  💬 1  📌 0

Because the concept of agency is so unsettled, I think it's better to sidestep. A system doesn't need to "have agency" in order to "act agentically." That distinction matters more than trying to solve the philosophical problem of what agency really is.

21.08.2025 12:06 · 👍 0  🔁 0  💬 1  📌 0

We throw around terms like "agency" in AI, but the word itself lacks definition. That problem predates AI. Philosophers have debated it for centuries without consensus: Do instincts count? Are reasons different from causes? Is planning the same as action? Does coercion erase it?

21.08.2025 12:06 · 👍 0  🔁 0  💬 1  📌 0

95% failing sounds too positive to me. Traditional ML projects have long been pegged at 70-80% failure, after over a decade of playbooks, best practices, and data science professionalization. I'd have expected generative AI's early-stage failure rate to be much higher than what that report shows.

21.08.2025 00:59 · 👍 0  🔁 0  💬 0  📌 0

If a system doesn't do these six things, I have a hard time seeing how it could operate agentically. Large inventory, abstraction, broad rewards, individual learning, distributions, and priors aren't really optional. They're prerequisites for agentic behavior.

20.08.2025 11:32 · 👍 0  🔁 0  💬 0  📌 0

6. Can the system fill gaps in individual histories with inferences from others? Sparse data is unavoidable. Agents should be able to draw on similar users and impute where needed. Without this, they'll fail early in a user's journey, before there's enough evidence.

20.08.2025 11:32 · 👍 0  🔁 0  💬 1  📌 0
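
One simple way to do that kind of gap-filling is empirical-Bayes-style shrinkage toward similar users; this is my illustration of the idea, not necessarily how any particular system implements it.

```python
def blended_rate(user_successes: int, user_trials: int,
                 pool_successes: int, pool_trials: int,
                 prior_strength: float = 10.0) -> float:
    """Shrink a sparse per-user estimate toward the rate observed in similar users."""
    pool_rate = pool_successes / max(pool_trials, 1)
    return (user_successes + prior_strength * pool_rate) / (user_trials + prior_strength)


# New user: 0 of 1 messages engaged; similar users: 120 of 400.
print(blended_rate(0, 1, 120, 400))    # ~0.27, dominated by the pool
# Established user: 40 of 50 engaged; the pool matters much less.
print(blended_rate(40, 50, 120, 400))  # ~0.72
```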

5. Are you modeling performance as a distribution, not a point estimate? Confidence matters. One good outcome isn't the same as ten. And outcomes should be weighted by probability, not just existence. Without this, I don't see how agents can reason about risk.

20.08.2025 11:32 · 👍 0  🔁 0  💬 1  📌 0
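
A common way to model that is a posterior distribution over the engagement rate rather than a single number; here is a sketch using a Beta posterior (one standard choice, not the only one), showing why one good outcome is not the same as ten.

```python
import random


def sample_rate(successes: int, failures: int, n: int = 5) -> list[float]:
    """Draw plausible engagement rates from a Beta(successes+1, failures+1) posterior."""
    return [random.betavariate(successes + 1, failures + 1) for _ in range(n)]


# Same 100% observed rate, very different confidence:
print(sample_rate(1, 0))    # wide spread: one good outcome proves little
print(sample_rate(10, 0))   # tight near 1.0: ten good outcomes justify more confidence
```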

4. Does the system track individual performance first? To be agentic, it has to learn how different message attributes work for each user, not just the average user. Aggregates are useful, but the agent's view should be long, not wide. That feels essential to me.

20.08.2025 11:32 · 👍 0  🔁 0  💬 1  📌 0
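
A small sketch of the "long, not wide" point, with invented column names: one row per user, message attribute, and outcome, so per-user learning comes first and aggregates are only a derived view.

```python
# Long format: one record per user x message-attribute x observed outcome.
long_rows = [
    {"user_id": "u1", "attribute": "tone=urgent", "engaged": 1},
    {"user_id": "u1", "attribute": "tone=playful", "engaged": 0},
    {"user_id": "u2", "attribute": "tone=urgent", "engaged": 0},
]

# The agent's primary view: one user's history across attributes ...
u1_rows = [r for r in long_rows if r["user_id"] == "u1"]
# ... with population aggregates only as a secondary, derived view.
urgent = [r["engaged"] for r in long_rows if r["attribute"] == "tone=urgent"]
print(len(u1_rows), sum(urgent) / len(urgent))
```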

3. Does the reward function let the agent use every behavioral signal? Not just clicks or conversions, but any sign the user is closer to a meaningful goal. If it can't see that, the agent won't have enough signal to calibrate its choices. That seems like a fatal gap to me.

20.08.2025 11:32 · 👍 0  🔁 0  💬 1  📌 0
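
A sketch of such a reward function; the signal names and weights are invented for illustration. Any observed behavior contributes partial evidence of progress toward the goal, rather than only the final click or conversion counting.

```python
# Illustrative weights: how strongly each signal suggests progress toward the goal.
SIGNAL_WEIGHTS = {
    "opened_app": 0.1,
    "viewed_product": 0.3,
    "added_to_cart": 0.6,
    "clicked_message": 0.4,
    "purchased": 1.0,
}


def reward(events: list[str]) -> float:
    """Aggregate all observed signals; cap at 1.0 so nothing exceeds a conversion."""
    return min(sum(SIGNAL_WEIGHTS.get(e, 0.0) for e in events), 1.0)


print(reward(["opened_app", "viewed_product"]))  # 0.4: progress without a conversion
print(reward(["clicked_message", "purchased"]))  # 1.0
```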
