Huge thanks to my wonderful collaborators Kathryn Yurechko, Pranati Dani, @cqz.bsky.social and @axz.bsky.social.
Full details here ➡️ arxiv.org/pdf/2409.03247
All three strategies struggled with iterative refinement.
Interestingly, participants adopted hybrid approaches when iterating on their prompt filters, such as supplying labeled examples in-context or writing rule-like prompts.
Despite 🤖 LLM prompting's better performance, participants preferred mixed strategies to create their filters.
For example, when their preferences were ill-defined but intuitive, 🏷️ labeling examples was considered the easiest approach. (🧵4/N)
To answer this question, our study had 37 non-programmers create personal content filters using these three strategies. (🧵3/N)
25.03.2025 01:07
Existing content-filtering tools often expect laypeople to build their filters in a single, long setup session. Yet users engage with social media in short, everyday sessions.
How can we support social media users to more easily create and iterate on their filters? (🧵2/N)
Can LLM prompting help social media users create and iterate on their content filters more easily?
In our #CHI2025 paper, we ran an experiment comparing three authoring strategies:
🤖 Prompting an LLM
🏷️ Labeling examples for ML classifiers
📝 Authoring keyword rules
(🧵1/N)