If you're looking for an invite to Comet with Pro included (the AI-powered browser that acts as a personal assistant), then you're in luck: pplx.ai/briangrell35...
21.10.2025 16:06
@briangrellmann.bsky.social
💼 UX Research & Accessibility Lead in Finance 🏫 Industry Advisory Board HCI at 2 Universities ✍️ Posting summaries & reflections of my reading list 🐶 Rescue dog dad
This new case study shows:
✅ strategies for building & exploring personal knowledge bases
✅ how retrieval shapes the way people create & maintain notes
✅ where AI could support knowledge work in the future
Screenshot of paper title: How people manage knowledge in their "second brains"
No way! Some researchers at IBM in Brazil have looked into exactly what I've been trying to figure out myself… how researchers use Obsidian as a "second brain" to manage knowledge 🧠 📝
arxiv.org/pdf/2509.20187
Someone please run this study!
22.09.2025 12:54
Wore a suit in central London for the first time and was offered so much cocaine.
Does 👔 + 🏙 = 💊?
My hypothesis: People in suits are more likely to be approached with illicit drugs than those in casual wear, as suits may signal disposable income, social capital, or lower perceived risk to dealers.
📖 A relevant paper to our discussions in HCI curriculum development: how do we encourage critical thinking, understanding, and enquiry around AI to meet workforce skills requirements, while protecting academic integrity and enforcing against misuse?
arxiv.org/pdf/2506.22231
The paper suggests the following recommendations, which summarise the proposed activities:
1. Redesign assessments to emphasise process and originality
2. Enhance AI literacy for staff and students
3. Implement multi-layered enforcement and detection
4. Develop clear and detailed AI usage guidelines
📖 There are pedagogical concerns, like the erosion of academic integrity and the risk of misinformation. If used as a shortcut rather than a learning aid, there is the potential that unfettered use reduces understanding or the ability to think critically.
So what can universities do?
AI can provide great benefits across the academic spectrum: writing research grants, increasing research productivity, and transforming teaching and learning.
⚠️ It also presents risks: misuse is prevalent in student work, and forensic AI detection has its limitations.
A screengrab of the paper title
📖 In Adapting University Policies for Generative AI: Opportunities, Challenges, and Policy Solutions in Higher Education, Beale asks: how do universities respond to the advent of LLMs?
There are clear benefits, and risks of misuse. Which policies strike the right balance for the use of AI?
In short, the paper combines worker sentiment and expert views to show that AI agents are most valuable when humans and machines collaborate, not when AI operates alone.
Responsible AI should:
✅ Center human agency
✅ Align AI design with worker preferences
✅ Recognise where human strengths truly shine
The authors suggest key human skills are shifting with AI adoption:
The demand for information-processing skills is shrinking, while interpersonal and organisational skills are found in tasks that demand high human agency.
Could this have implications for training, hiring, and designing with AI in mind?
The authors highlight 4 core insights; here are 2 of them:
2️⃣ There are mismatches between what AI can do and what workers want it to do
4️⃣ There's a broader skills shift underway: from information-processing to interpersonal competence
A framework for the HAS, H1 to H5, describing the AI-human relationship across team dynamics, the degree of human involvement needed, the AI's role, and some example tasks.
They introduce the Human Agency Scale: a shared language for human-AI task relationships
H1: AI handles the task entirely on its own
H2: AI needs minimal human input
H3: Equal human-agent partnership
H4: AI needs substantial human input
H5: AI can't function without continuous human involvement
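A minimal sketch of how the scale could be encoded when auditing tasks; the enum, the agency_gap helper, and the example values are my own illustration, not code or data from Shao et al.:

```python
from enum import IntEnum

class HumanAgencyScale(IntEnum):
    """Hypothetical encoding of the Human Agency Scale (H1-H5).
    Higher values mean a task demands more human involvement."""
    H1 = 1  # AI handles the task entirely on its own
    H2 = 2  # AI needs minimal human input
    H3 = 3  # equal human-agent partnership
    H4 = 4  # AI needs substantial human input
    H5 = 5  # AI can't function without continuous human involvement

def agency_gap(worker_preference: HumanAgencyScale,
               expert_assessment: HumanAgencyScale) -> int:
    """Hypothetical mismatch measure: positive when workers want
    more human involvement than experts judge the task needs."""
    return int(worker_preference) - int(expert_assessment)

# Example: workers prefer an equal partnership (H3) on a task
# experts judge AI could do with minimal human input (H2).
print(agency_gap(HumanAgencyScale.H3, HumanAgencyScale.H2))  # -> 1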
Future of Work with AI Agents: Auditing Automation and Augmentation Potential across the U.S. Workforce, Shao et al.
📖 What do workers actually want AI agents to do?
A new paper from Stanford titled The Future of Work with AI Agents proposes a principled, survey-based framework to evaluate this, shifting the focus from technical capability to human desire and agency.
🧵
Paper: arxiv.org/pdf/2506.06576
Paper here: arxiv.org/pdf/2503.002...
Side note: I especially appreciated the researcher's reflection on doing a solo-authored paper, and how it deepened her appreciation for working collaboratively with co-authors and her team.
Eat this magical konjac jelly and you'll instantly know how to speak and understand every single language; the type says honyaku konyaku.
In short: Speculative tech in pop culture is a rich resource for rethinking how we design for real human needs in HCI.
Do I wish I could eat a konjac jelly and instantly understand every language instead of using an app? 100% yes.
The takeaway: Human needs haven't changed much over the decades, but the technologies used to meet them have. While AI, AR, and VR echo some of Doraemon's inventions, his tools are more seamlessly embedded in everyday life, moving beyond screen-based, modern UI paradigms.
04.05.2025 20:33
For the unfamiliar: Doraemon is a robot cat from the 22nd century who travels back in time to help the hapless Nobita, armed with a seemingly endless supply of intuitive, problem-solving gadgets.
04.05.2025 20:33
Doraemon, Nobita, and friends flying using Doraemon's Take-Copter
📖 In Doraemon's Gadget Lab, Tram Tran explores the speculative tech of the beloved Japanese manga Doraemon through an HCI lens: categorising 379 gadgets by user needs, comparing them to today's technologies, and asking how they might inspire future interaction design paradigms.
04.05.2025 20:33
An important chapter to read for anyone designing AI-enabled systems, drawing links between established AI design principles and how users form mental models.
Worksheets: pair.withgoogle.com/worksheet/me...
pair.withgoogle.com/guidebook/ch...
✅ Account for user expectations of human-like interaction.
Communicate the nature and limits of the AI to set realistic user expectations and avoid unintended deception.
Try to balance cueing the right interaction with limiting mismatched expectations and failures.
✅ Plan for co-learning.
Implicit and explicit feedback improve AI and change the UX over time.
When the AI fails the first time, users will be disappointed, so provide a UX that fails gracefully and doesn't rely solely on AI (a sketch of this pattern follows below).
Remind and reinforce mental models, especially when user needs or journeys change.
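Here is the graceful-failure idea as a quick sketch; the wrapper, names, and fallback behaviour below are hypothetical illustrations under my own assumptions, not code from the guidebook:

```python
from typing import Callable, TypeVar

T = TypeVar("T")

def with_graceful_fallback(ai_call: Callable[[], T],
                           fallback: Callable[[], T],
                           log_failure: Callable[[Exception], None]) -> T:
    """Serve the AI-powered experience when it works, but never
    leave users stranded: on failure, record it (implicit feedback
    for co-learning) and return a deterministic, non-AI default."""
    try:
        return ai_call()
    except Exception as exc:
        log_failure(exc)   # the failure becomes feedback to improve the AI
        return fallback()  # the UX still works without the AI

def flaky_ranker() -> list[str]:
    # Stand-in for a model call that can time out or fail.
    raise TimeoutError("model timed out")

suggestions = with_graceful_fallback(
    ai_call=flaky_ranker,
    fallback=lambda: sorted(["export", "import", "settings"]),
    log_failure=lambda exc: print(f"logged AI failure: {exc}"),
)
print(suggestions)  # -> ['export', 'import', 'settings']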
✅ Onboard in stages.
Onboarding starts before users' first interaction and continues indefinitely.
- again, set the right expectation
- explain the benefit, not the technology
- use relevant and actionable 'inboarding' messages
- allow for tinkering and experimentation
✅ Set expectations for adaptation.
One of the biggest opportunities for creating effective mental models of AI products is to identify and build on existing models, while teaching users the dynamic relationship between their input and product output.
The 🧠 Mental Models chapter of the 📖 People + AI Guidebook explains how AI-enabled systems change over time, yet users' mental models may not match what a product can actually do.
Mismatched mental models lead to unmet expectations, frustration, and product abandonment.
4 key considerations 👇
As AI capabilities continue to evolve at speed, it's our responsibility to continually test whether they still resonate, still guide, and still serve the humans these systems are meant to support.
Great paper, found via a @stanfordhai.bsky.social course; required reading.
Their evaluation of deployed AI products reinforces what many of us observe in the field. Users want to understand:
- what the system can do,
- how to intervene when something feels off, and
- how it should behave over time.
The guidelines are conditions for trust, confidence, and adoption.
This breaks with how we evaluate usability and calls for design practices that are grounded in user expectations, mental models, and context.
15.04.2025 12:44
Why do we need more design guidelines?
AI challenges core usability heuristics: predictability, consistency, and clarity are not guaranteed by intelligent systems, which behave probabilistically and respond to dynamic inputs.
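To make "probabilistic" concrete, here's a toy simulation (my own, not from any cited paper): the same prompt can yield different replies under temperature sampling, which is precisely what undermines the consistency heuristic.

```python
import random

REPLIES = ["Rename the file.", "Rename this file?", "File renamed!"]
BASE_WEIGHTS = (0.6, 0.3, 0.1)

def sample_reply(prompt: str, temperature: float, rng: random.Random) -> str:
    """Toy stand-in for an LLM: draws a reply from a distribution.
    Higher temperature flattens the weights, so identical prompts
    diverge more often between runs."""
    weights = [w ** (1.0 / max(temperature, 1e-6)) for w in BASE_WEIGHTS]
    return rng.choices(REPLIES, weights=weights, k=1)[0]

# Same input, three runs: the output is a draw, not a lookup.
for seed in (1, 2, 3):
    print(sample_reply("rename my file", temperature=1.5, rng=random.Random(seed)))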