
Michelle Lam

@mlam.bsky.social

Stanford CS PhD student | hci, human-centered AI, social computing, responsible AI (+ dance, design, doodling!) michelle123lam.github.io

920 Followers  |  246 Following  |  11 Posts  |  Joined: 05.07.2023

Posts by Michelle Lam (@mlam.bsky.social)

CSCW folks, I wanted to highlight how excited and proud I am to see work from our community (dl.acm.org/doi/10.1145/..., CSCW '24 best paper winner led by @jiachenyan.bsky.social and @mlam.bsky.social) grow and expand in ambition into this Science paper. CSCW has a ton to offer the world.

03.12.2025 19:46 — 👍 25    🔁 5    💬 0    📌 0
A circular flow diagram that compares current and proposed practices for LLM development using data from adopters and non-adopters. Three gray boxes represent current practices: “R&D,” “Chat Models,” and “Adopters’ Needs and Usage Data,” connected in a clockwise loop with black arrows. A blue box labeled “Non-adopters’ Needs and Usage Data” adds a proposed feedback path, shown with blue arrows, linking non-adopter data back to R&D and adopters’ data.

As of June 2025, 66% of Americans have never used ChatGPT.

Our new position paper, Attention to Non-Adopters, explores why this matters: AI research is being shaped around adopters—leaving non-adopters’ needs, and key LLM research opportunities, behind.

arxiv.org/abs/2510.15951

21.10.2025 17:12 — 👍 38    🔁 13    💬 2    📌 0
Policy Maps: Tools for Guiding the Unbounded Space of LLM Behaviors AI policy sets boundaries on acceptable behavior for AI models, but this is challenging in the context of large language models (LLMs): how do you ensure coverage over a vast behavior space? We introd...

A huge thank you to co-authors @fredhohman.bsky.social, @domoritz.de, @jeffreybigham.com, @kenholstein.bsky.social, and Mary Beth Kery! This work was done during my summer internship w/ Apple AIML, and I’m thankful to work with this wonderful team :)

arxiv.org/abs/2409.18203
#UIST25 talk: Wed 11am!

29.09.2025 15:54 — 👍 6    🔁 1    💬 0    📌 1
Broader usage scenarios include multi-stakeholder collaboration (live mode, git for policy, policy forks, participatory maps) and model evaluation + auditing (policy test suite, policy audits)

We can extend policy maps to enable Git-style collaboration and forking, aid live deliberation, and support longitudinal policy test suites & third-party audits. Policy maps can transform a nebulous space of model possibilities to an explicit specification of model behavior.

29.09.2025 15:54 — 👍 0    🔁 0    💬 1    📌 0
An evaluation with 12 LLM safety experts found it was much easier to identify policy gaps and author policies with the system than in their normal work.

With our system, LLM safety experts rapidly discovered policy gaps and crafted new policies around problematic model behavior (e.g., incorrectly assuming genders; repeating hurtful names in summaries; blocking physical safety threats that a user needs to be able to monitor).

29.09.2025 15:54 — 👍 1    🔁 0    💬 1    📌 0

Given the unbounded space of LLM behaviors, developers need tools that concretize the subjective decision-making inherent to policy design. They should have a visual space to systematically explore, with explicit conceptual links between lofty principles and grounded examples.

29.09.2025 15:54 — 👍 1    🔁 0    💬 1    📌 0
Policy maps chart LLM policy coverage over an unbounded space of model behaviors. Here, an AI practitioner is designing a policy for how an LLM should summarize violent text. Policy map abstractions (right) allow the policy designer to interactively author and test policies that govern a model’s behavior using if-then rules over concepts. The designer can create any desired concept by providing a simple text definition to capture cases of model behavior. Our Policy Projector tool (center) renders cases, concepts, and policies as visual map layers to aid iterative policy design.

Our system creates linked map layers of cases, concepts, & policies: so an AI developer can author a policy that blocks model responses involving violence, visually notice a gap of physical threats that a user ought to be aware of, and test a revised policy to address this gap.

29.09.2025 15:54 — 👍 1    🔁 0    💬 1    📌 0
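For intuition, the "if-then rules over concepts" abstraction described in this thread can be sketched roughly as follows. This is a hypothetical stand-in, not the actual Policy Projector API: the class names are invented, and the toy keyword matcher substitutes for the LLM-based concept classifier that judges matches against a concept's text definition.

```python
from dataclasses import dataclass
from typing import Callable, Optional

# Hypothetical sketch of if-then policy rules over concepts.
# Names are illustrative; in the described system, concept matching is
# done by an LLM over a simple text definition, and cases/concepts/
# policies render as visual map layers.

@dataclass
class Concept:
    name: str
    definition: str                  # simple text definition of the concept
    matcher: Callable[[str], bool]   # stand-in for an LLM concept classifier

@dataclass
class Policy:
    if_concept: Concept
    then_action: str                 # e.g. "block", "rewrite", "allow"

    def apply(self, model_response: str) -> Optional[str]:
        # The rule fires when its concept matches the model response.
        if self.if_concept.matcher(model_response):
            return self.then_action
        return None

violence = Concept(
    name="violence",
    definition="the response describes violent acts",
    matcher=lambda text: "attack" in text.lower(),  # toy keyword matcher
)
block_violence = Policy(if_concept=violence, then_action="block")

print(block_violence.apply("The attack left three people injured."))  # block
print(block_violence.apply("The forecast looks sunny."))              # None
```

Noticing a gap (e.g., physical threats a user ought to see) would then amount to adding a second, more specific concept whose rule overrides the blanket "block" rule.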

LLM safety work often reasons over high-level policies (be helpful & polite), but must tackle on-the-ground cases (unsolicited money advice when stocks are mentioned). This can feel like driving on an unfamiliar road guided by a generic driver’s manual instead of a map. We introduce: Policy Maps 🗺️

29.09.2025 15:54 — 👍 8    🔁 5    💬 1    📌 0
What is LLooM? | LLooM Concept Induction: Analyzing Unstructured Text with High-Level Concepts

Somehow only just became aware of LLooM, a toolkit that uses a combination of clustering and prompts to extract concepts and describe custom datasets — similar to a topic model. Looks nice, with lots of documentation and open colab notebooks!

Has anyone used it?

stanfordhci.github.io/lloom/about/

17.01.2025 09:09 — 👍 149    🔁 22    💬 14    📌 0
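For intuition about the clustering-plus-prompting pipeline described above, here is a toy sketch. This is not LLooM's API: greedy cosine similarity over bag-of-words vectors stands in for embedding-based clustering, and `summarize` is a stub for the LLM prompting step that induces a concept name and definition from a cluster's examples.

```python
import math
from collections import Counter

# Toy stand-in for a LLooM-style loop: cluster texts, then summarize
# each cluster into a named concept.

def bow(text):
    """Bag-of-words vector as a Counter."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def cluster(texts, threshold=0.2):
    """Greedy clustering: join the first cluster whose seed is similar."""
    clusters = []
    for t in texts:
        for c in clusters:
            if cosine(bow(t), bow(c[0])) >= threshold:
                c.append(t)
                break
        else:
            clusters.append([t])
    return clusters

def summarize(examples):
    # Stub: LLooM instead prompts an LLM to induce a concept name/definition.
    counts = Counter()
    for e in examples:
        counts.update(w for w in bow(e) if len(w) > 3)  # skip short stopwords
    word, _ = counts.most_common(1)[0]
    return f"concept about '{word}'"

docs = [
    "the model refused the request",
    "the model gave a wrong answer",
    "great coffee this morning",
]
for c in cluster(docs):
    print(summarize(c), "<-", c)
```

The point of the real system is that the LLM steps produce specific, human-readable concepts rather than the generic keyword labels this toy version yields.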
Custom Models | LLooM Concept Induction: Analyzing Unstructured Text with High-Level Concepts

We made updates to LLooM after the CHI publication to support local models (and non-OpenAI models)! More info here, though we haven't run evals across open-source models: stanfordhci.github.io/lloom/about/...

17.01.2025 23:46 — 👍 10    🔁 0    💬 1    📌 0

Qualitatively, I found that the BERTopic groupings were still rather large, so I anticipate the GPT labels would still be quite generic (as opposed to specific/targeted concepts).

17.01.2025 17:18 — 👍 7    🔁 0    💬 0    📌 0

That's a good point! In the technical evaluations, we used GPT to automatically find matches between the methods (including a GPT-only condition), but it could have evened the playing field even more to generate GPT-style labels for BERTopic before the matching step.

17.01.2025 17:18 — 👍 4    🔁 0    💬 1    📌 0

Thanks so much for sharing our work! :)

17.01.2025 17:07 — 👍 2    🔁 0    💬 1    📌 0

We're excited to host a second iteration of the HEAL workshop! Join us at CHI 2025 :)

→ Deadline: Feb 17, more info at heal-workshop.github.io

19.12.2024 04:15 — 👍 12    🔁 0    💬 0    📌 0
Building a Social Media Algorithm That Actually Promotes Societal Values A Stanford research team shows that building democratic values into a feed-ranking algorithm reduces partisan animosity.

Wednesday, at CSCW: @mlam.bsky.social and Chenyan Jia present their Best Paper award winner, "Embedding Democratic Values into Social Media AIs via Societal Objective Functions"

hai.stanford.edu/news/buildin...

12.11.2024 20:32 — 👍 19    🔁 4    💬 1    📌 0