Indeed, how'd we even do that? Especially since we don't have decades of research, and it's not like we literally developed the tech.
05.03.2026 14:14
@upolehsan.bsky.social
Making AI less evil = human-centered + explainable + responsible AI. Harvard Berkman Klein Fellow | CS Prof. @Northeastern | Data & Society. Prev: Georgia Tech; {Google, IBM, MSFT} Research. AI, HCI, Philosophy. F1, memes. upolehsan.com
Lol... CS conferences that banned hybrid options are now scrambling to go hybrid given geopolitical events. Would have been nice to keep this accessible in the first place, right?
05.03.2026 12:13
Deadline (Feb 19) is just around the corner! Get those hot takes, studies, and provocations in and join the most fun workshop at #CHI2026!
w/ Amal Alabdulkarim, Justin Weisz, Andreas Riener, Min Kyung Lee, @kenholstein.bsky.social
#ExplainableAI #HCXAI #XAI #HCI #AI
We are actually working behind the scenes to explore this avenue! If you have tips and tricks to share (or, better yet, want to join the effort), I'd love to chat.
13.01.2026 04:03
Past participants say HCXAI has become "central to their research practice," not just for the content, but for the authentic community that "preserves connection even as attendance soars past 100."
w/ Justin Weisz, Andreas Riener, @kenholstein.bsky.social, Min Kyung Lee, Amal Alabdulkarim
8/8
n=8
We're calling for papers, prototypes, and provocations that:
• Challenge assumptions about what constitutes explainability
• Expose limits, failures, and unintended consequences
• Bridge disciplines: HCI, AI, social science, law, design, domain expertise
7/n
4️⃣ Sociotechnical Evaluation & Futures: How do we move beyond technical metrics to measure real understanding and decision quality? What participatory approaches center affected communities? What should agentic XAI look like in 2030?
6/n
3️⃣ Trust, Accountability & Failure Modes: How do we support calibrated reliance (appropriate trust vs. dangerous over-reliance)? What happens when explanations fail through dark patterns, manipulation, or cognitive overload? What's the difference between excusable AI and explainable AI?
5/n
1️⃣ Stakeholder Needs: What do users vs. developers actually need to know before, during, and after agent execution?
2️⃣ Explaining Agentic Behavior: Are chain-of-thought traces useful as explanations? How do we explain multi-step plans, tool invocations, and cascading effects?
4/n
Since 2021, our HCXAI workshops have built a community of 450+ researchers, practitioners, and policymakers from 21 countries. This year, we're reimagining explainability for agentic systems across the following areas:
3/n
The challenge: LLM-based agents are straining existing XAI techniques. When AI systems plan multi-step strategies and invoke tools, what does explainability even mean?
The urgency: Without explainability, there can be no accountability. And unaccountable AI leads to automated injustice.
2/n
[Pls repost!] Agentic AI is stress-testing Explainable AI. We need to fix it. That's why I'm thrilled to announce the 6th Human-Centered Explainable AI (HCXAI) workshop at CHI 2026 in Barcelona!
2-5 page single-column papers (excluding refs)
Deadline: Feb 19, 2026
hcxai.jimdosite.com
1/n
The person who knows the most about all of this is @upolehsan.bsky.social, who has spent half a decade thinking about the human-factors side of explanations.
And if you want to know about mechanistic faithfulness of rationales and chain of thought then @sarah-nlp.bsky.social is your go-to.
OpenAI's big idea is to teach the LLM to post-hoc explain how it solved a problem. This is an extension of chain of thought.
It looks very similar in nature to "rationale generation," an explanation technique that has been around since 2018.
www.technologyreview.com/2025/12/03/1...
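For readers who haven't seen the pattern: a minimal sketch of what post-hoc rationale generation looks like. `call_model` is a hypothetical stand-in for whatever LLM API is in play (this is not OpenAI's actual implementation); the key point is that the explanation is generated after the answer already exists, from a prompt containing both.

```python
# Hedged sketch of post-hoc rationale generation (not OpenAI's actual method).
# `call_model` is a hypothetical stand-in for a real LLM API; it is stubbed
# here so the sketch runs standalone.

def call_model(prompt: str) -> str:
    # Stand-in for a real LLM call; a real system would send `prompt` to a model.
    return "Stubbed rationale for: " + prompt.splitlines()[0]

def answer_then_explain(question: str, answer: str) -> str:
    """Generate an explanation *after* the answer exists (post hoc),
    rather than reasoning toward the answer as in chain of thought."""
    prompt = (
        f"Question: {question}\n"
        f"Answer: {answer}\n"
        "Explain, step by step, how this answer was reached."
    )
    return call_model(prompt)

rationale = answer_then_explain("Which discount saves more, 20% or $15 off $60?", "$15 off")
```

The contrast with chain of thought: there the model reasons before committing to an answer; here the rationale is constructed afterward, which is exactly why its mechanistic faithfulness is in question.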
Trump is MAGA for Mamdani.
I've never seen another politician as adept as Mamdani at coming back to the point.
How is he this good?
My hot take: it's his training as a rapper.
Somewhere there's a thesis to be written on nerdy rappers who end up as professionals.
Send it to me. I'll happily respond and b/cc you.
18.11.2025 04:23
(Pls repost, self-plugs welcome!):
Looking for papers on the precarity of *knowledge/white-collar workers* under AI.
Lots on gig work + AI, but less on how non-unionized, higher-income knowledge workers face rising risks from AI systems they help create.
Got recs?
#academicSky #HCI #AI
Test of Time awards eat Best Paper awards for lunch. Given how fast-moving Computer Science is as a field, work that stands the test of time is the real deal.
Congrats @markriedl.bsky.social! So wholesome to see fellow EI-lab alum @matthewguz.bsky.social up there as well!
Because the best advisors don't create followers. They create leaders. Then they beam with pride when the world notices.
Grateful for mentors like Mark who do exactly that.
And hoping I can pay that forward to my own students, present and future.
#academicsky #Mentorship #PhD #Gratitude
And honestly? My quiet prayer is that someday I'll be at a conference, and someone will say the same thing to me.
And I'll find the nearest pen. Just like Mark did.
If that ever happens, I'll know I did at least one part of this job right.
Because the best advisors...
1/3
From the moment I started working with him, he pushed me to establish my identity.
When I wrote bios, he'd say: "Lead with your work. Don't start with 'I'm advised by Mark Riedl.'"
So seeing this badge ("Mark (Upol's Advisor)") felt like a full-circle moment.
1/2
My mentor @markriedl.bsky.social posted this from the AIES conference. Made my day!
Turns out people were opening conversations by saying: "Hey… you're Upol's advisor, right?"
On the surface, hilarious.
Underneath, it's the mentoring philosophy he's practiced for years.
Here's why:
1/n
Bro, I love the energy. (South) Asians go hard. Love that people are finally seeing our work ethic at the highest place in NYC. I love everything you're doing.
But for the sake of all humanity, I hope you go to sleep and get some rest! We can't afford a burned-out Zohran!
There are many ways to motivate people.
We've seen what fear and division do.
Now we have a chance to see what hope and unity can do.
Didn't realize Bsky logged me out. Came back to this banger. Made my day!
04.11.2025 20:58
Peak academic mood: ghosting the Nobel Prize for solitude.
The real prize was peace.
Academics after a paper deadline.
#academicSky
It is eye-opening to see the same CS conference venues that loudly talk about "accessibility" and "allyship" forcing people to travel (regardless of visa/health issues) and banning hybrid efforts.
23.09.2025 21:15
We need to stop pretending brilliant science comes from brilliant minds working brilliantly at all times. Most of academia runs on unglamorous persistence from people who keep showing up.
7/7
n=7
#academicSky #mentalHealth #grit
If you're in the fog: Try the smallest possible "show up" today.
If you know someone struggling: Check on your people.
Because sometimes the gap between "almost didn't exist" and "submitted to CHI" is just... showing up.
6/n