
Brian Grellmann

@briangrellmann.bsky.social

πŸ’Ό UX Research & Accessibility Lead in Finance 🏫 Industry Advisory Board HCI at 2 Universities ✍️ Posting summaries & reflections of my reading list 🐢 Rescue dog dad

14 Followers  |  32 Following  |  32 Posts  |  Joined: 09.04.2025

Latest posts by briangrellmann.bsky.social on Bluesky

Try Comet with Pro included: for a limited time, get access to Comet with a month of free Perplexity Pro.

If you're looking for an invite to Comet with Pro included (the AI-powered browser that acts as a personal assistant), then you're in luck: pplx.ai/briangrell35...

21.10.2025 16:06 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

This new case study shows:

⭐ strategies for building & exploring personal knowledge bases

⭐ how retrieval shapes the way people create & maintain notes

⭐ where AI could support knowledge work in the future

25.09.2025 07:29 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0
Screenshot of paper title: How people manage knowledge in their "second brains"

No way! Some researchers at IBM in Brazil have looked into exactly what I’ve been trying to figure out myself… how researchers use Obsidian as a β€œsecond brain” to manage knowledge πŸ§ πŸ“

arxiv.org/pdf/2509.20187

25.09.2025 07:29 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

Someone please run this study!

22.09.2025 12:54 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

Wore a suit in central London for the first time and was offered so much cocaine.

Does πŸ‘”+πŸŒ† = πŸ’Š?

My hypothesis: People in suits are more likely to be approached with illicit drugs than those in casual wear, as suits may signal disposable income, social capital, or lower perceived risk to dealers.
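For whoever does run it: a tongue-in-cheek sketch of the analysis in Python. Every number is invented, and the sample sizes, offer rates, and one-sided framing are my assumptions, not data.

```python
# Hypothetical test of the suit-vs-casual hypothesis. All counts are made up.
from scipy.stats import fisher_exact

# Rows: suit, casual. Columns: approached with an offer, not approached.
table = [[9, 41],   # 9 of 50 suited walks drew an offer (invented)
         [2, 48]]   # 2 of 50 casual walks drew an offer (invented)

# One-sided test: are suited walkers approached more often?
odds_ratio, p_value = fisher_exact(table, alternative="greater")
print(f"odds ratio = {odds_ratio:.2f}, one-sided p = {p_value:.3f}")
```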

22.09.2025 12:54 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

πŸ’­ A relevant paper for our discussions on HCI curriculum development: how do we encourage critical thinking, understanding, and enquiry around AI for workforce skills requirements, while upholding academic integrity and enforcing against misuse?

arxiv.org/pdf/2506.22231

09.07.2025 18:38 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

The paper's four recommendations, summarised:

1. Redesign assessments to emphasise process and originality

2. Enhance AI literacy for staff and students

3. Implement multi-layered enforcement and detection

4. Develop clear and detailed AI usage guidelines

09.07.2025 18:38 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

🍎 There are pedagogical concerns, like the erosion of academic integrity and the risk of misinformation. If AI is used as a shortcut rather than a learning aid, there is the potential that unfettered use reduces understanding and the ability to think critically.

β€”

So what can universities do?

09.07.2025 18:38 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

βœ… AI can provide great benefits across the academic spectrum: writing research grants, increasing research productivity, and transforming teaching and learning.

⛔️ It also presents risks: misuse is prevalent in student work, and forensic AI detection has its limits.

09.07.2025 18:38 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
A screengrab of the paper title

πŸ“ƒ In Adapting University Policies for Generative AI: Opportunities, Challenges, and Policy Solutions in Higher Education, Beale asks: how should universities respond to the advent of LLMs?

There are clear benefits and clear risks of misuse. Which policies strike the right balance for the use of AI?

09.07.2025 18:38 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

In short, the paper combines worker sentiment and expert views to show that AI agents are most valuable when humans and machines collaborate, not when AI operates alone.

Responsible AI should:
βœ… Center human agency
βœ… Align AI design with worker preferences
βœ… Recognise where human strengths truly shine

06.07.2025 07:27 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

The authors suggest key human skills are shifting with AI adoption:

The demand for information-processing skills is shrinking.

Meanwhile, interpersonal and organisational skills are concentrated in tasks that demand high human agency.

Could this have implications for training, hiring, and designing with AI in mind?

06.07.2025 07:27 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

The authors highlight four core insights; here are two of them:

2️⃣ There are mismatches between what AI can do and what workers want it to do

4️⃣ There’s a broader skills shift underway: from information-processing to interpersonal competence

06.07.2025 07:27 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
A framework for HAS H1 to H5 describing the AI-human relationship across team dynamics, the degree of human involvement needed, the AI's role, and some example tasks.

They introduce the Human Agency Scale: a shared language for human-AI task relationships

H1: AI handles the task entirely on its own

H2: AI needs minimal human input

H3: Equal human-agent partnership

H4: AI needs substantial human input

H5: AI can’t function without continuous human involvement
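If you wanted to tag tasks with HAS levels in your own audit, a minimal sketch of one way to encode the scale. The enum encoding and the example tasks are my own illustration, not the paper's.

```python
from enum import Enum

class HumanAgencyScale(Enum):
    """The five HAS levels from Shao et al.; this encoding is illustrative."""
    H1 = "AI handles the task entirely on its own"
    H2 = "AI needs minimal human input"
    H3 = "Equal human-agent partnership"
    H4 = "AI needs substantial human input"
    H5 = "AI can't function without continuous human involvement"

# Hypothetical example: rating tasks for an automation/augmentation audit.
task_ratings = {
    "transcribe a meeting recording": HumanAgencyScale.H2,
    "plan a usability study": HumanAgencyScale.H4,
}
for task, level in task_ratings.items():
    print(f"{task}: {level.name} ({level.value})")
```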

06.07.2025 07:27 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
Future of Work with AI Agents: Auditing Automation and Augmentation Potential across the U.S. Workforce. Shao et al.

πŸ“˜ What do workers actually want AI agents to do?

A new paper from Stanford titled The Future of Work with AI Agents proposes a principled, survey-based framework to evaluate this, shifting the focus from technical capability to human desire and agency.

🧡
Paper: arxiv.org/pdf/2506.06576

06.07.2025 07:27 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

Paper here: arxiv.org/pdf/2503.002...

Side note: I especially appreciated the researcher’s reflection on doing a solo-authored paperβ€”and how it deepened her appreciation for working collaboratively with co-authors and her team.

04.05.2025 20:33 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0
Eat this magical konjac jelly and you'll instantly know how to speak and understand every single language – type says honyaku konyaku

In short: Speculative tech in pop culture is a rich resource for rethinking how we design for real human needs in HCI.

Do I wish I could eat a konjac jelly and instantly understand every language instead of using an app? 100% yes.

04.05.2025 20:33 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

The takeaway: Human needs haven’t changed much over the decadesβ€”but the technologies used to meet them have. While AI, AR, and VR echo some of Doraemon’s inventions, his tools are more seamlessly embedded in everyday life, moving beyond screen-based, modern UI paradigms.

04.05.2025 20:33 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

For the unfamiliar: Doraemon is a robot cat from the 22nd century who travels back in time to help the hapless Nobita, armed with a seemingly endless supply of intuitive, problem-solving gadgets.

04.05.2025 20:33 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
Doraemon, Nobita, and friends flying using Doraemon's Take-Copter

πŸ“˜ In Doraemon’s Gadget Lab, Tram Tran explores the speculative tech of the beloved Japanese manga Doraemon through an HCI lensβ€”categorising 379 gadgets by user needs, comparing them to today’s technologies, and asking how they might inspire future interaction design paradigms.

04.05.2025 20:33 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 1    πŸ“Œ 2
People + AI Guidebook: a toolkit for teams building human-centered AI products.

An important chapter for anyone designing AI-enabled systems, drawing links between established AI design principles and how users form mental models.

Worksheets: pair.withgoogle.com/worksheet/me...

pair.withgoogle.com/guidebook/ch...

20.04.2025 06:15 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

βžƒ Account for user expectations of human-like interaction.

Communicate the nature and limits of the AI to set realistic user expectations and avoid unintended deception.

Try to balance cueing the right interaction with limiting mismatched expectations and failures.

20.04.2025 06:15 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

βž‚ Plan for co-learning.

Implicit and explicit feedback improve AI and change the UX over time.

When the AI fails the first time, users will be disappointed, so provide a UX that fails gracefully and doesn't rely solely on AI.

Remind and reinforce mental models, especially when user needs or journeys change.
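One way to read "fails gracefully and doesn't rely on AI" in code: try the AI path, and fall back to a deterministic non-AI path when it fails. A minimal sketch; every name here is hypothetical, not from the guidebook.

```python
# Graceful degradation: the feature keeps working when the model doesn't.
def ai_suggest_reply(message: str) -> str:
    # Stand-in for a real model call that can fail or time out.
    raise TimeoutError("model unavailable")

def template_reply(message: str) -> str:
    # Deterministic, non-AI fallback.
    return "Thanks for your message - we'll get back to you shortly."

def suggest_reply(message: str) -> str:
    try:
        return ai_suggest_reply(message)
    except Exception:
        return template_reply(message)

print(suggest_reply("Can I move my appointment?"))
```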

20.04.2025 06:15 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

➁ Onboard in stages.

Onboarding starts before users' first interaction and continues indefinitely.

- again, set the right expectation
- explain the benefit, not the technology
- use relevant and actionable 'inboarding' messages
- allow for tinkering and experimentation

20.04.2025 06:15 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

βž€ Set expectations for adaptation.

One of the biggest opportunities for creating effective mental models of AI products is to identify and build on existing models, while teaching users the dynamic relationship between their input and product output.

20.04.2025 06:15 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

The 🧠 Mental Models chapter of the 🌐 People + AI Guidebook explains how AI-enabled systems change over time, yet users' mental models may not match what a product can actually do.

Mismatched mental models lead to unmet expectations, frustration, and product abandonment.

4 key considerations πŸ‘‡

20.04.2025 06:15 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

As AI capabilities continue to evolve at speed, it's our responsibility to continually test whether the guidelines still resonate, still guide, and still serve the humans these systems are meant to support.

Great paper, found via the @stanfordhai.bsky.social course, where it's required reading.

15.04.2025 12:44 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

Their evaluation of deployed AI products reinforces what many of us observe in the field – users want to understand:
- what the system can do,
- how to intervene when something feels off, and
- how it should behave over time.

The guidelines are conditions for trust, confidence, and adoption.

15.04.2025 12:44 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

This breaks with how we evaluate usability and calls for design practices that are grounded in user expectations, mental models, and context.

15.04.2025 12:44 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

Why do we need more design guidelines?

AI challenges core usability heuristics: predictability, consistency, and clarity are not guaranteed by intelligent systems, which behave probabilistically and respond to dynamic inputs.

15.04.2025 12:44 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
