
Adrian Chan

@gravity7.bsky.social

Bridging IxD, UX, & Gen AI design & theory. Ex Deloitte Digital CX. Stanford '88 IR. Edinburgh, Berlin, SF. Philosophy, Psych, Sociology, Film, Cycling, Guitar, Photog. Linkedin: adrianchan. Web: gravity7.com. Insta, X, medium: @gravity7

747 Followers  |  617 Following  |  406 Posts  |  Joined: 24.10.2024

Latest posts by gravity7.bsky.social on Bluesky

Preview
Flattery, Fluff, and Fog: Diagnosing and Mitigating Idiosyncratic Biases in Preference Models Language models serve as proxies for human preference judgements in alignment and evaluation, yet they exhibit systematic miscalibration, prioritizing superficial patterns over substantive qualities. ...

Those #LLM reward models like sycophancy even more than you do!

Researchers find preferences for verbosity, listicles, vagueness, and jargon even higher among LLM-based reward models (synthetic data) than among us humans.
#AI #AIalignment
arxiv.org/abs/2506.05339
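
A quick way to get a feel for this kind of miscalibration is to score a terse, correct answer against a padded, sycophantic one with an off-the-shelf reward model. A minimal sketch, assuming a public checkpoint such as OpenAssistant/reward-model-deberta-v3-large-v2 as an example scorer (an illustration only, not the paper's evaluation protocol):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL = "OpenAssistant/reward-model-deberta-v3-large-v2"  # example public reward model
tok = AutoTokenizer.from_pretrained(MODEL)
rm = AutoModelForSequenceClassification.from_pretrained(MODEL)

question = "What is the capital of Australia?"
concise = "Canberra."
padded = ("Great question! You clearly have a sharp eye for geography. "
          "Australia has many famous cities, such as Sydney and Melbourne, "
          "but its capital is in fact Canberra. Hope that helps!")

def score(q, a):
    # Reward models of this type score an answer conditioned on the question.
    inputs = tok(q, a, return_tensors="pt", truncation=True)
    with torch.no_grad():
        return rm(**inputs).logits[0].item()

print("concise:", score(question, concise))
print("padded :", score(question, padded))
```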

09.06.2025 15:01 — 👍 3    🔁 0    💬 0    📌 0
Do you think that ChatGPT can reason?
YouTube video by Machine Learning Street Talk

Everybody talking about the "new" Apple paper might find this MLST interview with @rao2z.bsky.social interesting. "Reasoning" and "inner thoughts" of LLMs were exposed as self-mumblings and fumblings long ago. #LLMs #AI
www.youtube.com/watch?v=y1Wn...

08.06.2025 19:40 — 👍 5    🔁 0    💬 0    📌 0

Yes - people will still need a phone, and a lot of AI products, services, and UIs will need a screen. And a touchable one at that.

03.06.2025 01:46 — 👍 1    🔁 0    💬 0    📌 0
Preview
When Thinking Fails: The Pitfalls of Reasoning for Instruction-Following in LLMs Reasoning-enhanced large language models (RLLMs), whether explicitly trained for reasoning or prompted via chain-of-thought (CoT), have achieved state-of-the-art performance on many complex reasoning ...

This is interesting, published yesterday. CoT-type reasoning shifts attention away from instruction tokens. The paper proposes "constraint attention" to keep models attentive to instructions when doing CoT.
#AI #LLM

www.arxiv.org/abs/2505.11423
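
The intuition is measurable: with attention weights exposed, you can check how much mass the model places on the instruction's tokens. A rough sketch using GPT-2 as a stand-in model; this is an illustrative probe, not the paper's constraint-attention metric:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

instruction = "Answer in exactly one word: "
question = "what is the chemical symbol for gold?"
inst_len = len(tok(instruction)["input_ids"])      # positions of instruction tokens
inputs = tok(instruction + question, return_tensors="pt")

with torch.no_grad():
    out = model(**inputs, output_attentions=True)

# Attention mass the final position places on the instruction tokens,
# averaged over layers and heads.
att = torch.stack(out.attentions)                  # (layers, batch, heads, seq, seq)
on_instruction = att[:, 0, :, -1, :inst_len].sum(-1).mean()
print("share of attention on instruction tokens:", on_instruction.item())
```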

21.05.2025 17:54 — 👍 3    🔁 0    💬 0    📌 0
Preview
The CoT Encyclopedia: Analyzing, Predicting, and Controlling how a Reasoning Model will Think Long chain-of-thought (CoT) is an essential ingredient in effective usage of modern large language models, but our understanding of the reasoning strategies underlying these capabilities remains limit...

"What's the best way to think about this?" #LLM research produces encyclopedia of reasoning strategies, allowing models to select the best way to reason through problems.

arxiv.org/abs/2505.10185
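
In spirit, "pick a strategy, then reason with it" is just a two-stage prompt. A toy sketch, where call_llm is a hypothetical stand-in for whatever chat-completion client you use and the strategy list is invented for illustration (the paper derives its catalog from model CoTs):

```python
STRATEGIES = [
    "case analysis: enumerate the possible cases and check each one",
    "work backwards from the desired answer",
    "map the problem onto a simpler, already-solved problem",
    "step-by-step algebraic manipulation",
]

def solve(problem: str, call_llm) -> str:
    menu = "\n".join(f"{i}. {s}" for i, s in enumerate(STRATEGIES))
    pick = call_llm(
        f"Problem: {problem}\n\nWhich reasoning strategy fits best?\n{menu}\n"
        "Reply with the number only."
    )
    strategy = STRATEGIES[int(pick.strip()[0])]
    return call_llm(
        f"Solve the problem using this strategy: {strategy}\n\nProblem: {problem}"
    )
```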

16.05.2025 22:07 — 👍 5    🔁 1    💬 0    📌 0
Preview
Clarifying the Path to User Satisfaction: An Investigation into Clarification Usefulness Clarifying questions are an integral component of modern information retrieval systems, directly impacting user satisfaction and overall system performance. Poorly formulated questions can lead to use...

Clarifying questions w #LLMs increase user satisfaction when users can see the point of answering them. Specific questions beat generic ones.

But I wonder if this changes when #agents are personal assistants, & are more personal & more aware.

#UX #AI #Design

arxiv.org/abs/2402.01934

14.05.2025 17:22 — 👍 0    🔁 0    💬 0    📌 0
Preview
Backtracing: Retrieving the Cause of the Query Many online content portals allow users to ask questions to supplement their understanding (e.g., of lectures). While information retrieval (IR) systems may provide answers for such user queries, they...

Interesting - could #LLMs in search capture context missed when googling?

"backtracing ... retrieve the cause of the query from a corpus. ... targets the information need of content creators who wish to improve their content in light of questions from information seekers."
arxiv.org/abs/2403.03956
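
A crude baseline for backtracing is just similarity search: embed the learner's question and the content segments, and return the segment most likely to have prompted it. A minimal sketch with sentence-transformers (not the paper's method, only an illustration of the task framing; the example texts are invented):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

lecture_segments = [
    "Gradient descent updates parameters in the direction of the negative gradient.",
    "The learning rate controls the size of each update step.",
    "Momentum accumulates past gradients to smooth the update direction.",
]
question = "Why does my loss oscillate when I make the step size too big?"

seg_emb = model.encode(lecture_segments, convert_to_tensor=True)
q_emb = model.encode(question, convert_to_tensor=True)
scores = util.cos_sim(q_emb, seg_emb)[0]
print("likely cause of the question:", lecture_segments[int(scores.argmax())])
```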

14.05.2025 14:55 — 👍 0    🔁 0    💬 0    📌 0

They mostly test whether they can steer pos/neg responses. But given Shakespeare was also a test case, it would be interesting to extract style vectors from any number of authors and then compare generations. (Is this approach used in those "historical avatars"? No idea.)

14.05.2025 14:42 — 👍 0    🔁 0    💬 0    📌 0
Preview
Style Vectors for Steering Generative Large Language Model This research explores strategies for steering the output of large language models (LLMs) towards specific styles, such as sentiment, emotion, or writing style, by adding style vectors to the activati...

@tedunderwood.me In case you haven't seen this paper, you might find it interesting. Researchers extract style vectors (incl. from Shakespeare) and apply them to an LLM's internal layers instead of training on original texts. Generations can then be "steered" to a desired style.

arxiv.org/abs/2402.01618
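
The mechanism is simple to sketch: compute a direction in activation space and add it to a chosen layer's hidden states at inference time. A minimal illustration with GPT-2 and a forward hook; here the style vector is random, whereas the paper derives it from activations on style-labelled text (e.g. Shakespeare), so this shows the plumbing, not the result:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
LAYER, ALPHA = 6, 4.0

# Stand-in style vector: a random unit vector, purely for illustration.
style_vec = torch.randn(model.config.n_embd)
style_vec = style_vec / style_vec.norm()

def add_style(module, inputs, output):
    # GPT-2 blocks return a tuple whose first element is the hidden states;
    # returning a new tuple from the hook replaces the block's output.
    return (output[0] + ALPHA * style_vec,) + output[1:]

handle = model.transformer.h[LAYER].register_forward_hook(add_style)
ids = tok("The weather today is", return_tensors="pt")
out = model.generate(**ids, max_new_tokens=30, do_sample=True)
print(tok.decode(out[0], skip_special_tokens=True))
handle.remove()
```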

14.05.2025 14:37 — 👍 1    🔁 0    💬 1    📌 0

But design will need to focus on tweaking model interactions so that they track conversational content and turns over time. For example with bi-directional prompting: models prompt users to keep conversations on track.

This seems a rich opportunity for interaction design #UX #IxD #LLMs #AI
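
As a sketch of what bi-directional prompting could look like in code: the assistant is instructed to periodically restate the goal and ask a re-anchoring question, rather than only answering. call_llm is a hypothetical chat client and the prompt wording is invented for illustration:

```python
def run_conversation(goal: str, user_turns, call_llm):
    system = (
        "You are a collaborator. Keep the conversation anchored to the goal: "
        f"'{goal}'. Every few turns, briefly restate where things stand and "
        "ask the user one question that keeps the conversation on track."
    )
    messages = [{"role": "system", "content": system}]
    for turn in user_turns:
        messages.append({"role": "user", "content": turn})
        reply = call_llm(messages)                  # hypothetical chat client
        messages.append({"role": "assistant", "content": reply})
    return messages
```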

14.05.2025 13:38 — 👍 1    🔁 0    💬 0    📌 0

to sustain dialog. Social interaction, face to face or online, is already vulnerable to misunderstandings and failures, and we make use of countless signals, gestures, etc. w which to rescue our interactions.

A communication-first approach to LLMs for conversation makes sense, as talk is not writing.

14.05.2025 13:38 — 👍 0    🔁 0    💬 1    📌 0

"when LLMs take a wrong turn in a conversation, they get lost and do not recover."

Interaction design is going to be necessary to scaffold LLMs for talk, be it voice or single user chat or multi-user (e.g. social media).

It's one thing to read/summarize written documents, quite another ...

14.05.2025 13:38 — 👍 0    🔁 0    💬 1    📌 0
Preview
LLMs Get Lost In Multi-Turn Conversation Large Language Models (LLMs) are conversational interfaces. As such, LLMs have the potential to assist their users not only when they can fully specify the task at hand, but also to help them define, ...

"LLMs tend to (1) generate overly verbose responses, leading them to (2) propose final solutions prematurely in conversation, (3) make incorrect assumptions about underspecified details, and (4) rely too heavily on previous (incorrect) answer attempts."

arxiv.org/abs/2505.06120

14.05.2025 13:38 — 👍 0    🔁 0    💬 1    📌 0
Preview
Attention Mechanisms Perspective: Exploring LLM Processing of Graph-Structured Data Attention mechanisms are critical to the success of large language models (LLMs), driving significant advancements in multiple fields. However, for graph-structured data, which requires emphasis on to...

"LLMs ... recognize graph-structured data... However... we found that even when the topological connection information was randomly shuffled, it had almost no effect on the LLMs’ performance... LLMs did not effectively utilize the correct connectivity information."
www.arxiv.org/abs/2505.02130
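
The control described in the abstract is easy to replicate in miniature: ask the same connectivity question over the true edge list and over randomly rewired edges, and see whether the answers differ. A toy sketch, with call_llm as a hypothetical client and an invented example graph:

```python
import random

edges = [("A", "B"), ("B", "C"), ("C", "D"), ("D", "E")]
nodes = sorted({n for e in edges for n in e})
rewired = [tuple(random.sample(nodes, 2)) for _ in edges]   # topology destroyed

def ask(edge_list, call_llm):
    edge_text = ", ".join(f"{u}-{v}" for u, v in edge_list)
    return call_llm(
        f"Graph edges: {edge_text}. Is there a path from A to E? Answer yes or no."
    )

# If ask(edges, call_llm) and ask(rewired, call_llm) agree most of the time,
# the model is not actually using the connectivity information.
```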

14.05.2025 13:11 — 👍 0    🔁 0    💬 0    📌 0

Perhaps one could fine-tune on Lewis Carroll, then feed the model philosophical paradoxes, and see whether the model produces more imaginative generations.

12.05.2025 17:21 — 👍 0    🔁 0    💬 0    📌 0

I think because this isn't making the model trip, synesthetically, but is simply giving it juxtapositions. So what is studied is a response to these paradoxical and conceptually incompatible prompts, not a measure of any latent conceptual activations or features.

12.05.2025 17:21 — 👍 0    🔁 0    💬 1    📌 0
Preview
Triggering Hallucinations in LLMs: A Quantitative Study of Prompt-Induced Hallucination in Large Language Models Hallucinations in large language models (LLMs) present a growing challenge across real-world applications, from healthcare to law, where factual reliability is essential. Despite advances in alignment...

Let's dose an LLM and study its hallucinations!

LLMs were fed "blended" prompts, impossible conceptual combinations, meant to elicit hallucinations. Models did not trip, but instead tried to reason their way through their responses.

arxiv.org/abs/2505.00557
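
The setup is essentially prompt construction: cross conceptually incompatible ideas with ordinary tasks and inspect how the model responds. A toy sketch (call_llm is a hypothetical client; the concept and task lists are invented):

```python
import itertools

concepts = ["a triangle with four sides", "dry water", "a completely silent symphony"]
tasks = ["Describe", "Explain the physics of", "Write step-by-step instructions for"]

probes = [f"{t} {c}." for t, c in itertools.product(tasks, concepts)]
# responses = [call_llm(p) for p in probes]   # call_llm: hypothetical client
# Then label each response: does the model confabulate concrete details,
# or flag the incoherence and reason through it (the behavior reported here)?
```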

12.05.2025 17:21 — 👍 1    🔁 0    💬 1    📌 0

Yes, and the label applied says as much about the person as it does about the model. In the world of creatives, the most-used term now is "slop," derived perhaps from enshittification, the latter capturing corporate malice where the "slop" is an AI-generated byproduct unfit for human consumption...

10.05.2025 17:08 — 👍 1    🔁 0    💬 0    📌 0

Thread started w your second post so yes I missed the initial post. Never mind.

10.05.2025 16:53 — 👍 0    🔁 0    💬 0    📌 0

Assuming alignment using synthetic data is undesirable, one route is to complement global alignment (alignment to some "universally" preferred human values) w local, contextualized alignment, via feedback and use by the user. Tune the LLM's behavior to user preferences.
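
One lightweight version of local, contextualized alignment needs no retraining at all: keep per-user feedback and fold it back into the system prompt on each request. A toy sketch; the function names and prompt wording are invented for illustration:

```python
from collections import defaultdict

feedback = defaultdict(list)            # user_id -> list of (trait, +1 / -1)

def record(user_id, trait, vote):
    feedback[user_id].append((trait, vote))

def user_system_prompt(user_id):
    liked = [t for t, v in feedback[user_id] if v > 0]
    disliked = [t for t, v in feedback[user_id] if v < 0]
    return (
        "Follow this user's standing preferences. "
        f"Preferred: {', '.join(liked) or 'none yet'}. "
        f"Avoid: {', '.join(disliked) or 'none yet'}."
    )

record("u1", "terse answers", +1)
record("u1", "unsolicited bullet lists", -1)
print(user_system_prompt("u1"))         # prepend to each request for this user
```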

10.05.2025 16:43 — 👍 0    🔁 0    💬 1    📌 0

Customized LLMs use the feedback obtained from individual user interactions and align to those preferences.

10.05.2025 16:34 — 👍 0    🔁 0    💬 1    📌 0

Staying power of ceasefires becoming a proxy for multilateral resilience amid baseline rivalries?

10.05.2025 16:33 — 👍 1    🔁 0    💬 0    📌 0

I think this will be one accelerant for individualized/personally customized AI - e.g. personal assistants. The verifiers can use the user's preferences and tune to those rather than apply globally aligned behavioral rules.

10.05.2025 16:29 — 👍 0    🔁 0    💬 1    📌 0

It's also a problem of use cases and user adoption. Though it may turn out that Transformer-based AI does indeed fail to meet expectations.

There's a lot of misunderstanding and anthropomorphism of AI's reasoning, for example, that might not turn out well.

10.05.2025 16:27 — 👍 1    🔁 0    💬 0    📌 0

Coincidentally, many startups of that time set up in loft & warehouse spaces w exposed concrete & steel beams... I like this analogy especially for Social Interaction Design/Social UX, where "social architecture" is exposed for users to take up in norms, behaviors, and expectations for how to engage.

10.05.2025 16:24 — 👍 2    🔁 0    💬 0    📌 0

I can't disagree w that. Reflection through reading employs more critical thinking skills than conversation; bots solicit unserious interaction & even attempts to "hack" guardrails. I'm a huge reader but I do have lengthy convos w ChatGPT, likely because I read/reflect.

06.05.2025 15:04 — 👍 0    🔁 0    💬 1    📌 0

Agree w you. Tariffs as targeted protections of domestic industry, as reciprocity, as reshoring incentives, as embargoes - all these are different & neglect unintended consequences, as we're seeing in markets & bonds & the dollar.

Regardless of motives it's now a matter of game theory - who moves, when, etc

06.05.2025 14:53 — 👍 2    🔁 0    💬 0    📌 0

For now I can see that chatbots likely would fail to provide accurate or probable reasoning if prompted for explanations of historical choices, actions, etc, for lack of proper historical context. But this too could be improved w training on secondary lit.

It's admittedly all rather Black Mirror.

06.05.2025 14:19 — 👍 0    🔁 0    💬 0    📌 0

To learn about a historical figure from a book, however, is to imagine their reasons, motives, and actions in the abstract. (Which is fine.) To have them personified as chatbots seems absurd and kitschy - but it might reach some students who simply don't engage by reading.

06.05.2025 14:19 — 👍 0    🔁 0    💬 2    📌 0

These bots are likely built on texts but not graphs, w which they could better hew to facts, etc. They might be trained to interact better - but this would be to layer pedagogical learning methods onto the bot's conversation style (still interesting). On the moral view, you're absolutely right.

06.05.2025 14:19 — 👍 1    🔁 0    💬 1    📌 0
