I've never really enjoyed Wodehouse. I tried several of his books and thought they were all kinda meh. But I ate up everything from Deeping, Rinehart, Chambers, Morris, Oppenheim, etc. I don't get Wodehouse's appeal, given his contemporaries...which makes me feel I'm missing something important.
03.09.2025 23:22
Question (honestly curious, not trying to be snarky): what do you find so perfectly executed about that story? I mean, it's delightful...but seems to be so in the same way as others of the same milieu, and with pacing a bit more stodgy than fits the character/setting.
03.09.2025 23:22
An agent that can't *choose* its next move isn't an agent. It's just a novel interface for the same information retrieval, content management, and marketing automation systems we've had for years.
25.08.2025 15:30
Fully agentic systems need a hybrid architecture: a semantic-associative learner that builds and updates long-term user profiles, and a procedural actor that generates fluent, on-brand content (or retrieves it from inventory).
25.08.2025 15:30
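A minimal sketch of that split, in Python. The class names, inventory, and update rule are all illustrative assumptions, not a reference implementation:

```python
from collections import defaultdict

class SemanticAssociativeLearner:
    """Builds and updates long-term per-user profiles: which message
    concepts have historically led to good outcomes for each user."""
    def __init__(self):
        # profile[user_id][concept] -> running outcome score
        self.profile = defaultdict(lambda: defaultdict(float))

    def update(self, user_id, concept, outcome, lr=0.1):
        # Associative step: nudge the concept's score toward the outcome.
        old = self.profile[user_id][concept]
        self.profile[user_id][concept] = old + lr * (outcome - old)

    def choose_concept(self, user_id, concepts):
        # The learned profile drives the decision, not the text generator.
        return max(concepts, key=lambda c: self.profile[user_id][c])

class ProceduralActor:
    """Generates (or retrieves) fluent content for a chosen concept;
    in practice this would wrap an LLM call or a content inventory."""
    def __init__(self, inventory):
        self.inventory = inventory  # concept -> ready-made message

    def render(self, concept):
        return self.inventory[concept]

learner = SemanticAssociativeLearner()
actor = ProceduralActor({"discount": "20% off today!", "returns": "Easy returns, always."})
learner.update("u1", "discount", outcome=0.0)  # push was ignored
learner.update("u1", "returns", outcome=1.0)   # message landed
print(actor.render(learner.choose_concept("u1", ["discount", "returns"])))
```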
Such hacks don't build conceptual understanding or learn from outcomes. If a user ignores a "20% off" push yesterday, an LLM can draft today's new message about your return policy, but it won't autonomously pick that message. No adaptation, no evolving preferences.
25.08.2025 15:30
Recent LLM hacks like retrieval-augmented generation (external info stuffed into the prompt) or session-summary "memory" (re-feeding past interactions) preserve surface continuity and sometimes reduce hallucinations. But they aren't true semantic/associative memory.
25.08.2025 15:30
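A toy sketch of what both tricks amount to mechanically; `llm` and `retrieve` here are placeholders I'm assuming, not any particular library:

```python
def llm(prompt: str) -> str:
    # Stand-in for any text-completion call; not a specific vendor API.
    return f"<completion conditioned on {len(prompt)} chars of prompt>"

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Toy relevance: keyword overlap. Real systems use embeddings.
    overlap = lambda d: len(set(query.lower().split()) & set(d.lower().split()))
    return sorted(docs, key=overlap, reverse=True)[:k]

def answer(query: str, docs: list[str], session_summary: str = "") -> str:
    # Both "hacks" are just prompt construction: the model itself
    # learns nothing and stores nothing between calls.
    context = "\n".join(retrieve(query, docs))
    prompt = f"Summary so far:\n{session_summary}\n\nContext:\n{context}\n\nUser: {query}"
    return llm(prompt)

print(answer("what is your returns policy", ["Returns are free for 30 days.", "We ship worldwide."]))
```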
Real autonomy requires semantic + associative learning. An agent must consolidate experiences into transferable categories and tie those to outcomes. That's how it forms opinions on which strategies to pursue or avoid over time.
25.08.2025 15:30
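One way to read that concretely, continuing the "20% off" example mentioned above; the taxonomy and scoring are invented for illustration:

```python
# Semantic step: raw events get abstracted into transferable categories.
TAXONOMY = {"20% off": "discount", "free shipping": "discount",
            "easy returns": "service", "eco packaging": "sustainability"}

stats = {}  # category -> (successes, trials)

def record(message_feature: str, engaged: bool):
    category = TAXONOMY.get(message_feature, "other")  # abstraction
    s, n = stats.get(category, (0, 0))
    stats[category] = (s + int(engaged), n + 1)        # associative: tie to outcome

def preference(category: str) -> float:
    s, n = stats.get(category, (0, 0))
    return s / n if n else 0.5  # no evidence yet -> indifferent

record("20% off", engaged=False)
record("eco packaging", engaged=True)
# The learned opinion transfers to any future "discount"-category message:
print(preference("discount"), preference("sustainability"))  # 0.0 1.0
```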
Agency ≠ next-token prediction. A truly agentic system decides *when* and *how* to act without waiting for instructions. LLMs only predict the next token given a prompt. They don't decide to prompt themselves.
25.08.2025 15:30
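Schematically, the difference might look like this; `should_act` and the signal counter are made-up stand-ins for whatever trigger logic a real agent would learn:

```python
# A bare LLM is a function: prompt in, text out. An agent is a loop
# that decides for itself when to invoke anything at all.
def should_act(state) -> bool:
    # Self-chosen trigger (illustrative): enough new evidence piled up.
    return state["new_signals"] >= 3

def act(state):
    print(f"acting after {state['new_signals']} signals")

state = {"new_signals": 0}
for _ in range(7):            # simulated stream of world events
    state["new_signals"] += 1
    if should_act(state):     # the decision to act originates here,
        act(state)            # not in a user prompt
        state["new_signals"] = 0
```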
Imagine someone who follows every cooking step flawlessly but has no sense of taste, no clue if others liked it, no idea how to improve. Without semantic understanding or feedback associations, true adaptation - and true agency - can't happen.
25.08.2025 15:30
Those are crucial to human cognition, but we also rely on:
3. Semantic memory = abstract concepts (knowing that "sustainability" is a thing).
4. Associative learning = linking concepts to outcomes (learning that stressing sustainability drives engagement).
25.08.2025 15:30
LLMs excel at two kinds of "thinking":
1. Procedural memory = automating skills (like writing a sentence or riding a bike).
2. Working memory = juggling info in the moment (like keeping a phone number in mind).
25.08.2025 15:30
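For what it's worth, here's one way to map the four memory types from the two posts above onto concrete structures; purely illustrative, not a claim about how any system implements them:

```python
from dataclasses import dataclass, field

@dataclass
class AgentMemory:
    # 1. Procedural: frozen skills, here just callable routines.
    skills: dict = field(default_factory=lambda: {"write": lambda x: f"Draft: {x}"})
    # 2. Working: the current context window, gone after the session.
    context: list = field(default_factory=list)
    # 3. Semantic: durable abstract concepts the agent knows about.
    concepts: set = field(default_factory=lambda: {"sustainability"})
    # 4. Associative: concept -> outcome links learned from experience.
    links: dict = field(default_factory=lambda: {"sustainability": +0.8})

# An LLM alone gives you 1 and 2; the thread argues agents also need 3 and 4.
```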
Procedural Memory Is Not All You Need: Bridging Cognitive Gaps in LLM-Based Agents
Large Language Models (LLMs) represent a landmark achievement in Artificial Intelligence (AI), demonstrating unprecedented proficiency in procedural tasks such as text generation, code completion, and...
An LLM, by itself, cannot be truly agentic. Same for swarms, teams, workflows, or "multi-agent" systems. If the LLM drives everything, it's not agentic. LLMs can be useful appendages, but not a sound foundation.
arxiv.org/abs/2505.03434
25.08.2025 15:30
It's hard to move beyond campaigns because they're simple. They tame messy behavior into tidy segments. But simplicity for you isn't value for users. An agentic mindset means letting agents manage orchestration's complexity while analysis stays clear and human-scale.
22.08.2025 16:24
Agentic systems separate orchestration from analysis. Orchestration is about maximizing who could benefit. Analysis is about retrospective learning: what worked, for whom, under which conditions. That separation expands impact without giving up interpretability.
22.08.2025 16:24
That dual role makes campaign logic feel natural even when it's arbitrary. Consider: "nudge users who haven't engaged in 30 days." Why 30? Why not 29, or 1? The threshold isn't about user needs. It's a simplification shaped by campaign design, not by actual behavior.
22.08.2025 16:24
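The contrast in code form; the agent-side heuristic (1.5x a user's typical gap) is my own illustrative stand-in for what would really be a learned, per-user policy:

```python
def campaign_should_nudge(days_inactive: int) -> bool:
    return days_inactive >= 30  # why 30? the threshold is a design artifact

def agent_should_nudge(days_inactive: float, typical_gap_days: float) -> bool:
    # Threshold derived from this user's own cadence, not a global rule.
    return days_inactive >= 1.5 * typical_gap_days

# A daily user gone quiet for two weeks: the campaign waits, the agent acts.
print(campaign_should_nudge(14), agent_should_nudge(14, typical_gap_days=1))  # False True
```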
Campaigns usually serve two roles at once. First, orchestration: deciding which users get which messages under what conditions, breaking logistics into parts. Second, analysis: measuring outcomes tied to audience, timing, and content. Doing both jobs obscures insight.
22.08.2025 16:24
A comparison of campaign-based engagement, which is easier to manage, with agent-based engagement, which is more effective because it decouples orchestration from analytics.
Technology is enabling, but also constraining. Choosing a tool means trading flexibility in one dimension for scale in another. Customer engagement is no different. Campaigns, in particular, aren't neutral abstractions; they're design choices with real consequences.
22.08.2025 16:24
We don't need to solve the philosophy of agency. What we need is a performance definition of acting agentically under complexity. Current benchmarks rarely test this, or when they do, they show LLMs fall short. That gap matters more than whether "agency" is solved.
21.08.2025 12:06
Over minutes, mimicry looks convincing. But over hours, days, or weeks, acting agentically means deciding what to do next, why, and how to carry those lessons forward. For that, semantic-associative learning is required. Procedural memory alone isn't enough.
21.08.2025 12:06
When signals are delayed, goals conflict, or feedback is ambiguous, procedural mimicry fails. Without semantic memory to form abstractions and associative learning to link them to outcomes, systems can't adapt with consistent success across shifting contexts.
21.08.2025 12:06
In predictable, stable environments with clear feedback, even LLMs can appear agentic. With only procedural and working memory, they give the impression of knowing what they're doing. But the appearance fades when environments become less structured.
21.08.2025 12:06
In practice, the challenge is building systems whose behavior is hard to distinguish from beings who think for themselves. Acting agentically is a generalized Turing Test: not proving thought, but performing well enough that it looks like intention is present.
21.08.2025 12:06
Because the concept of agency is so unsettled, I think it's better to sidestep. A system doesn't need to "have agency" in order to "act agentically." That distinction matters more than trying to solve the philosophical problem of what agency really is.
21.08.2025 12:06
We throw around terms like "agency" in AI, but the word itself lacks definition. That problem predates AI. Philosophers have debated it for centuries without consensus: Do instincts count? Are reasons different from causes? Is planning the same as action? Does coercion erase it?
21.08.2025 12:06
95% failing sounds too positive to me. Traditional ML projects have long been pegged at 70-80% failure, after over a decade of playbooks, best practices, and data science professionalization. I'd have expected generative AI's early-stage failure rate to be much higher than what that report shows.
21.08.2025 00:59
If a system doesn't do these six things, I have a hard time seeing how it could operate agentically. Large inventory, abstraction, broad rewards, individual learning, distributions, and priors aren't really optional. They're prerequisites for agentic behavior.
20.08.2025 11:32
6. Can the system fill gaps in individual histories with inferences from others? Sparse data is unavoidable. Agents should be able to draw on similar users and impute where needed. Without this, they'll fail early in a user's journey, before there's enough evidence.
20.08.2025 11:32
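A sketch of that imputation, assuming some neighbor list already exists (from embeddings, segments, or whatever similarity signal is available); the shrinkage blend is illustrative:

```python
def estimated_rate(user, attribute, history, neighbors, min_trials=5):
    s, n = history.get((user, attribute), (0, 0))
    if n >= min_trials:
        return s / n                      # enough individual evidence
    # Sparse history: borrow evidence from similar users.
    bs = bn = 0
    for other in neighbors.get(user, []):
        s2, n2 = history.get((other, attribute), (0, 0))
        bs, bn = bs + s2, bn + n2
    prior = bs / bn if bn else 0.5        # neighbor-based prior
    # Blend: the individual's own data dominates as n grows.
    return (s + prior * min_trials) / (n + min_trials)

history = {("u1", "discount"): (1, 2), ("u2", "discount"): (8, 10)}
neighbors = {"u1": ["u2"]}
print(estimated_rate("u1", "discount", history, neighbors))  # ~0.71, leaning on u2
```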
5. Are you modeling performance as a distribution, not a point estimate? Confidence matters. One good outcome isn't the same as ten. And outcomes should be weighted by probability, not just existence. Without this, I don't see how agents can reason about risk.
20.08.2025 11:32
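"One good outcome isn't the same as ten" maps naturally onto a Beta posterior: same mean, very different uncertainty. A minimal sketch, with the uniform prior as an assumption; sampling from the posterior (Thompson-style) is one common way to let the agent weigh risk:

```python
import random

def sample_rate(successes: int, trials: int) -> float:
    # Beta(1 + successes, 1 + failures) posterior under a uniform prior.
    return random.betavariate(1 + successes, 1 + trials - successes)

print(sample_rate(1, 1))    # highly variable draws: little evidence
print(sample_rate(10, 10))  # draws concentrate near 1.0: real confidence
```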
4. Does the system track individual performance first? To be agentic, it has to learn how different message attributes work for each user, not just the average user. Aggregates are useful, but the agent's view should be long, not wide. That feels essential to me.
20.08.2025 11:32
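A "long, not wide" ledger might look like this; the fallback order is the point, the names are made up:

```python
from collections import defaultdict

per_user = defaultdict(lambda: [0, 0])   # (user, attr) -> [successes, trials]
aggregate = defaultdict(lambda: [0, 0])  # attr -> [successes, trials]

def record(user, attr, engaged):
    for row in (per_user[(user, attr)], aggregate[attr]):
        row[0] += int(engaged)
        row[1] += 1

def rate(user, attr):
    s, n = per_user[(user, attr)]
    if n:                      # individual evidence first...
        return s / n
    s, n = aggregate[attr]     # ...the average user only as a fallback
    return s / n if n else 0.5

record("u1", "emoji_subject", engaged=True)
print(rate("u1", "emoji_subject"), rate("u2", "emoji_subject"))  # 1.0 1.0 (u2 via fallback)
```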
3. Does the reward function let the agent use every behavioral signal? Not just clicks or conversions, but any sign the user is closer to a meaningful goal. If it can't see that, the agent won't have enough signal to calibrate its choices. That seems like a fatal gap to me.
20.08.2025 11:32
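Sketch of such a reward. Every signal name and weight below is invented; the point is only that partial progress earns partial credit:

```python
# Credit partial progress toward the goal, not just the final conversion.
REWARD_WEIGHTS = {
    "push_opened": 0.05,
    "session_started": 0.10,
    "item_viewed": 0.15,
    "added_to_cart": 0.40,
    "purchase": 1.00,
}

def reward(events: list[str]) -> float:
    return sum(REWARD_WEIGHTS.get(e, 0.0) for e in events)

# A user who opens the push and browses still teaches the agent something:
print(reward(["push_opened", "session_started", "item_viewed"]))  # 0.3
```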
Serial marketer, AI Marketers Guild founder
Marketing AI Institute is a media, event, and online education company making AI approachable and accessible to marketing and business leaders around the world.
Making brands sparkle | Beauty & Fashion Strategist
Fueled by coffee & couture. Turning big dreams into reality. Get in touch ✨
www.luxenoirstrategies.com
Professor of Marketing at NYU Stern School of Business, serial entrepreneur, and host of the Prof G and Pivot Podcasts.
All the news, insights and inspiration you need to know in advertising, marketing and media.
Amanda Katz said this was the cool kids table.
The Lord of RTB.
God of Programmatic Prophecy. The Savior of AdTech Sinners.
Co-founder and CMO at Marketecture Media, Inc
AdTechGod® is a Registered Trademark
Everything you want to know about me: https://linktr.ee/adtechgod
Ethics, technology, AI, marketing, responsible business in healthcare. Assoc Professor in digital and AI marketing at UCL GBSH. Used to love Twitter. Views my own.
I am a digital business scholar. My research focuses on service digitalisation and how it interacts with vulnerability.
www.anacanhoto.com
Marketing academic. Journal editor. Teacher. Hobbies: buying books, reading books, writing papers, writing books, looking after cats, ad infinitum.
For scholarly work, check out: https://york.academia.edu/marktadajewski
Associate Professor at Hanken in Finland. I write about the consumer culture and advertising markets that allow disinformation to thrive on social media.
Author of the book "Market-Oriented Disinformation Research."
https://www.carlosdiazruiz.com
Associate Professor of Communication focused on social media, PR, crisis communication, and AI.
Management prof. Arts & fitness. Saving planet & children
I teach people how to shizzle on teh internetz. Author. MMU Lecturer. Coach. Digital Marketing. E-Commerce. AI. Digital Literacy. AI Literacy. DigiBiz https://amzn.to/3WZe1SG
Professor of leadership and strategic communication and head of business and law at Buckinghamshire New University. Principal Fellow of the HEA.
🇸🇪 Lund - Halmstad
PhD, Place branding and marketing academic
Wine marketing in Sweden
Modern art, museums and food
Halmstad University, Sweden
Professor of Digital Communication and Director of the QUT Digital Media Research Centre. Cyclist, beer brewer, nerd, community advocate.
Communication Professor studying opinion leaders, motivated reasoning, watching others online, meta-analysis, etc.
Co-author of โCritical Questions in Persuasion Researchโ as well as โThe Science of Gaining Complianceโ both from Cognella
he/him
asst.-prof. in digital & multimodal communication / humane AI @rug.nl | genAI at the intersections of critical data studies & discourse studies ✨ | SECR of the #ICA Visual Comm Division | https://nataliialaba.com