I'm writing an HCI paper about an AI-powered system. What should I report?
Eight Guidelines to Improve Research Quality and Enhance Chance of Acceptance
Writing an HCI paper about an AI-powered system for a venue like UIST 2026 or CHI 2027? Wondering what reviewers expect you to report, and how to approach paper framing and writing? Check out our reporting guidelines: medium.com/p/7c3ae86341...
03.03.2026 16:29
Likes: 1 · Reposts: 1 · Replies: 0 · Quotes: 0
We argue that evaluation of AI for education should be disaggregated in a manner that pinpoints whether models can discern when a student may need pedagogical support, and whether models equitably serve students across different levels of proficiency.
03.03.2026 03:10
Likes: 0 · Reposts: 0 · Replies: 1 · Quotes: 0
Question: How many dots did the student include in their array?
Erroneous student response: Model answer: 12. True answer: The student didn't include an array.
Non-erroneous student response: True answer: The student included 12 dots in their array.
Question: How many squares did the student draw to show the number of cups of red paint?
Erroneous student response: Model answer: The student drew 9 squares to show the number of cups of red paint. True answer: The student drew 12 squares to represent the cups of red paint.
Non-erroneous student response: True answer: The student drew 9 squares to show the number of cups of red paint.
Models' mistakes tend to assume the student's math is correct. Typically, models are trained on "high quality" math so that they can hill-climb on GSM8K, MATH, etc. However, dev pipelines that favor correct math are in tension with education, where math errors require extra attention.
03.03.2026 03:09
Likes: 0 · Reposts: 1 · Replies: 1 · Quotes: 0
A bar chart disaggregating results for four VLMs across different question types. Content description QA consistently drives the gap in VLM performance between student responses that contain errors versus those that do not. In addition, questions related to students' correctness and errors are still the most difficult.
We find that this gap is primarily driven by QA related to content description. In addition, VLMs struggle to identify cases when help is needed; the most challenging QA are those related to assessing students' correctness and errors.
03.03.2026 03:09
Likes: 0 · Reposts: 0 · Replies: 1 · Quotes: 0
Title, author list, and two figures from the paper.
Title: The Aftermath of DrawEduMath: Vision Language Models
Underperform with Struggling Students and Misdiagnose Errors
Authors: Li Lucy, Albert Zhang, Nathan Anderson, Ryan Knight, Kyle Lo
Figure 1: On the left is a math problem, where students are asked to draw x < 5/2 on a number line. The right side shows two example student responses that differ in correctness. DrawEduMath pairs each math problem with one student response, and prompts VLMs to answer questions about the student response.
Figure 2: VLMs consistently perform worse on answering DrawEduMath benchmark questions pertaining to erroneous student responses. Performance on non-erroneous student responses is labeled with specific VLMs' names; that same model's performance on erroneous student responses is directly below.
Models are now expert math solvers, and so AI for math education is receiving increasing attention.
Our new preprint evaluates 11 VLMs on our QA benchmark, DrawEduMath. We highlight a startling gap: models perform worse on inputs from K-12 students who need more help. 🧵
03.03.2026 03:08
Likes: 31 · Reposts: 9 · Replies: 5 · Quotes: 2
1/7 🧵 The GPT-4 technical report featured detailed calibration curves.
Since then, not a single major model release has reported calibration. The field quietly stopped measuring whether models know what they don't know.
Our new position paper argues this is a mistake. Here's why.
02.03.2026 19:09
Likes: 8 · Reposts: 2 · Replies: 1 · Quotes: 0
Abstract submissions close on March 3rd!
We are also extending a ✨ call for mentored reviewers ✨: if you advise excellent graduate or postdoctoral researchers, you are welcome to recommend them to review for IC2S2 2026. Email IC2S2@uvm.edu to nominate mentored reviewers (or faculty colleagues).
23.02.2026 19:39
Likes: 12 · Reposts: 12 · Replies: 1 · Quotes: 2
CORRECTION: Claude Code launched in February 2025, suggesting a roughly 13% increase above expectations.
26.02.2026 00:47
Likes: 5 · Reposts: 1 · Replies: 1 · Quotes: 2
I remember the muttering from time to time!! 😮 Curious: Chinese-speaking culture in mainland China, or the US, or elsewhere??
25.02.2026 23:05
Likes: 1 · Reposts: 0 · Replies: 1 · Quotes: 0
Agents of Chaos -- what are autonomous OpenClaw agents up to? How do they interact with each other? Read our investigation of OpenClaw at
researchgate.net/publication/...
And an interactive website agentsofchaos.baulab.info
@davidbau.bsky.social @natalieshapira.bsky.social @openclaw-x.bsky.social
24.02.2026 15:04
Likes: 18 · Reposts: 6 · Replies: 1 · Quotes: 1
I'm hiring a postdoc at @cmu.edu (w/ far.ai & @dgrand.bsky.social + @gordpennycook.bsky.social)!
How do LLMs shape human beliefs, and what do we do about it? AI safety meets behavioral science.
Open to technical and social science backgrounds.
23.02.2026 18:46
Likes: 42 · Reposts: 27 · Replies: 1 · Quotes: 3
Anthropic Education Report: The AI Fluency Index
Anthropic's AI Fluency Index measures 11 observable behaviors across thousands of Claude.ai conversations to understand how people develop AI collaboration skills.
New research: The AI Fluency Index.
We tracked 11 behaviors across thousands of Claude.ai conversations (for example, how often people iterate and refine their work with Claude) to measure how well people collaborate with AI.
Read more: https://www.anthropic.com/research/AI-fluency-index
23.02.2026 15:06
Likes: 15 · Reposts: 1 · Replies: 0 · Quotes: 3
We've alllllmost gotten all the Jan26 ARR reviews in, but I'm still trying to track down new emergency reviewers for papers on the following topics:
1) agents
2) jailbreaking
3) coding
4) RL
5) reasoning
6) LLM for finance
7) AMR
8) alignment
If you can review any (in the next 24-48h) please DM me 🙏🙏🙏
20.02.2026 04:39
Likes: 3 · Reposts: 9 · Replies: 0 · Quotes: 0
I was taught that to have a great job talk narrative, you really only need ~3 high quality papers
20.02.2026 01:54
Likes: 5 · Reposts: 0 · Replies: 2 · Quotes: 0
How horrible to be a CS grad student under pressure to submit multiple first-author papers to every conference deadline, whether they feel ready or not. This serves no one's best interests in the long run (science included). But lots of students appear to be getting advice that it's necessary to compete.
20.02.2026 01:03
Likes: 71 · Reposts: 8 · Replies: 1 · Quotes: 2
Matching sounds to shapes: Evidence of the bouba-kiki effect in naïve baby chicks
Humans across multiple languages spontaneously associate the nonwords โkikiโ and โboubaโ with spiky and round shapes, respectively, a phenomenon named the bouba-kiki effect. To explore the origin of t...
"Humans across multiple languages spontaneously associate the nonwords kiki & bouba with spiky & round shapes, respectively... We tested the bouba-kiki effect in baby chickens. Similar to humans, they spontaneously chose a spiky shape when hearing a kiki sound & a round shape when hearing a bouba." 😲🧪
19.02.2026 19:20
Likes: 331 · Reposts: 122 · Replies: 13 · Quotes: 40
I have a small project that is taking me outside of academia to dip into industry, just ever so briefly.
I engage a lot with AI. I was not at all prepared for how industry is using it. Not. at. all.
This brief little window is definitely helping me better frame my teaching in this new world.
17.02.2026 21:28
Likes: 49 · Reposts: 6 · Replies: 8 · Quotes: 1
My contribution to the discourse, which I've said before and will say again: DH isn't over. DH has won. 1/
17.02.2026 15:46
Likes: 72 · Reposts: 23 · Replies: 5 · Quotes: 11
I asked Gemini to "defend itself," and say what the big benefits of LLMs have been since 2020:
"Since 2020, the volume of digital noise has increased, and LLMs have provided the first reliable shield against it."
15.02.2026 15:18
Likes: 18 · Reposts: 1 · Replies: 3 · Quotes: 1
The evolution of OpenAI's mission statement
As a USA 501(c)(3), the OpenAI non-profit has to file a tax return each year with the IRS. One of the required fields on that tax return is to "Briefly …
I had some fun pulling OpenAI's mission statement out of their IRS tax filings from 2016 to 2024, loading them into a git repo with fake commit dates, and then taking a look at the diffs: simonwillison.net/2026/Feb/13/...
13.02.2026 23:40
Likes: 240 · Reposts: 45 · Replies: 7 · Quotes: 2
I doubt it. I would read the author's piece very literally. He just put this preprint on arxiv: arxiv.org/pdf/2601.19062 I think some (and my read, this includes the author) are realizing that much more than AI is disempowering us. Many of us have known this for a very long time, of course.
12.02.2026 05:32
Likes: 3 · Reposts: 3 · Replies: 1 · Quotes: 0
I wrote a short article on AI Model Evaluation for the Open Encyclopedia of Cognitive Science!
Hope this is helpful for anyone who wants a super broad, beginner-friendly intro to the topic!
Thanks @mcxfrank.bsky.social and @asifamajid.bsky.social for this amazing initiative!
12.02.2026 22:22
Likes: 50 · Reposts: 22 · Replies: 0 · Quotes: 1
Well done @zdenekkasner.bsky.social et al!
LLMs as Span Annotators: A Comparative Study of LLMs and Humans is accepted to multilingual-multicultural-evaluation.github.io 🎉
See paper arxiv.org/abs/2504.08697
29.01.2026 15:35
Likes: 8 · Reposts: 2 · Replies: 2 · Quotes: 1
If you think labeling text spans with LLMs is easy, you probably have not tried it yourself (we have!).
Any method you can think of (be it tagging, matching, or indexing) has flaws.
In our new preprint, we tested them all 💪 We also proposed how to improve one of them.
arxiv.org/abs/2601.16946
29.01.2026 14:20
Likes: 40 · Reposts: 6 · Replies: 2 · Quotes: 3
I am looking for 2 emergency reviewers for the ARR Ethics, Bias & Fairness track. Please DM me if you are available 🙏
10.02.2026 09:27
Likes: 6 · Reposts: 6 · Replies: 0 · Quotes: 0
Screen shot of title page of a preprint.
Title: Should generative AI be used in reflexive qualitative research?
Authors: Elida Izani Ibrahim, Laura K. Nelson, and Andrea Voyer
Recent publications arguing against the use of genAI in reflexive qual research inspired us (Elida Ibrahim and @andreavoyer.bsky.social) to write our own perspective. Not to convince anyone to use genAI but for those who might be interested and are looking for guidance.
osf.io/preprints/so...
09.02.2026 18:49
Likes: 52 · Reposts: 21 · Replies: 2 · Quotes: 0
Bad Bunny's historical advisor is an assistant professor at UW-Madison.
Hell of a flex for your tenure file.
09.02.2026 13:47
Likes: 1607 · Reposts: 263 · Replies: 18 · Quotes: 10
Excited to be co-organizing the #CHI2026 workshop on augmented reading interfaces 📖✨ Submissions are open for one more week! We want to know what you're working on!
06.02.2026 20:21
Likes: 10 · Reposts: 2 · Replies: 1 · Quotes: 0