
Upol Ehsan | hiring PhD students Fall'26

@upolehsan.bsky.social

🎯 Making AI less evil = human-centered + explainable + responsible AI 💼 Harvard Berkman Klein Fellow | CS Prof. @Northeastern | Data & Society 🏢 Prev: Georgia Tech, {Google, IBM, MSFT} Research 🔬 AI, HCI, Philosophy ☕ F1, memes 🌐 upolehsan.com

1,395 Followers  |  374 Following  |  403 Posts  |  Joined: 15.11.2024

Posts by Upol Ehsan | hiring PhD students Fall'26 (@upolehsan.bsky.social)

Indeed, how would we even do that? Especially since we don't have decades of research, and it's not as if we literally developed the tech.

05.03.2026 14:14 — 👍 0    🔁 0    💬 0    📌 0

Lol... CS conferences that banned hybrid options are now scrambling to go hybrid given geopolitical events. Would have been nice to keep this accessible in the first place, right?

05.03.2026 12:13 — 👍 7    🔁 0    💬 2    📌 0

📢 Deadline (Feb 19) is just around the corner! Get those hot takes, studies, and provocations in and join the most fun workshop at #CHI2026!

w/ Amal Alabdulkarim Justin Weisz Andreas Riener Min Kyung Lee @kenholstein.bsky.social

#ExplainableAI #HCXAI #XAI #HCI #AI

18.02.2026 19:05 — 👍 0    🔁 0    💬 0    📌 0

We are actually working behind the scenes to explore this avenue! If you have tips and tricks to share (or, better yet, want to join the effort), I'd love to chat.

13.01.2026 04:03 — 👍 1    🔁 0    💬 1    📌 0

🌟 Past participants say HCXAI has become "central to their research practice", not just for the content, but for the authentic community that "preserves connection even as attendance soars past 100."

w/ Justin Weisz Andreas Riener @kenholstein.bsky.social Min Kyung Lee Amal Alabdulkarim

8/8
n=8

12.01.2026 20:58 — 👍 0    🔁 0    💬 0    📌 0
Preview
Home | HCXAI ACM CHI 2026 Workshop on Human-Centered Explainable AI (HCXAI). April 13-16, 2025 (Barcelona Spain). This is the flagship workshop on HCXAI and one of the most well-attended and longest running worksh...

We're calling for papers, prototypes, and provocations that:
🔥 Challenge assumptions about what constitutes explainability
🚨 Expose limits, failures, and unintended consequences
🌍 Bridge disciplines: HCI, AI, social science, law, design, domain expertise

7/n

12.01.2026 20:58 — 👍 0    🔁 0    💬 1    📌 1

4️⃣ Sociotechnical Evaluation & Futures: How do we move beyond technical metrics to measure real understanding and decision quality? What participatory approaches center affected communities? What should agentic XAI look like in 2030?

6/n

12.01.2026 20:58 — 👍 1    🔁 0    💬 1    📌 0

3️⃣ Trust, Accountability & Failure Modes: How do we support calibrated reliance: appropriate trust vs. dangerous over-reliance? What happens when explanations fail through dark patterns, manipulation, or cognitive overload? What's the difference between excusable AI and explainable AI?

5/n

12.01.2026 20:58 — 👍 0    🔁 0    💬 1    📌 0

1️⃣ Stakeholder Needs: What do users vs. developers actually need to know before, during, and after agent execution?

2️⃣ Explaining Agentic Behavior: Are chain-of-thought traces useful as explanations? How do we explain multi-step plans, tool invocations, and cascading effects?

4/n

12.01.2026 20:58 — 👍 0    🔁 0    💬 1    📌 0

Since 2021, our HCXAI workshops have built a community of 450+ researchers, practitioners, and policymakers from 21 countries. This year, we're reimagining explainability for agentic systems across the following areas:

3/n

12.01.2026 20:58 — 👍 0    🔁 0    💬 1    📌 0

🎯 The challenge: LLM-based agents are stress-testing XAI techniques. When AI systems plan multi-step strategies and invoke tools, what does explainability even mean?

⚡️ The urgency: Without explainability, there can be no accountability. And unaccountable AI leads to automated injustice.

2/n

12.01.2026 20:58 — 👍 0    🔁 0    💬 1    📌 0
Post image

🚨 [Pls repost!] Agentic AI is stress-testing Explainable AI. We need to fix it. That's why I'm thrilled to announce the 6th Human-Centered Explainable AI (HCXAI) workshop at CHI 2026 in Barcelona! 🚀

πŸ“ 2-5 page single column papers (excluding refs)
πŸ—“οΈ Deadline: Feb 19, 2026
πŸ”— hcxai.jimdosite.com

1/n

12.01.2026 20:58 — 👍 5    🔁 5    💬 2    📌 2

The person who knows the most about all of this is @upolehsan.bsky.social, who has spent half a decade thinking about the human factors side of explanations.

And if you want to know about mechanistic faithfulness of rationales and chain of thought then @sarah-nlp.bsky.social is your go-to.

04.12.2025 14:17 — 👍 7    🔁 1    💬 0    📌 0
Preview
OpenAI has trained its LLM to confess to bad behavior Large language models often lie and cheat. We can't stop that—but we can make them own up.

OpenAI's big idea is to teach the LLM to post-hoc explain how it solved a problem. This is an extension of chain of thought.

It looks very similar in nature to "rationale generation", an explanation technique that has been around since 2018.

www.technologyreview.com/2025/12/03/1...

04.12.2025 14:12 — 👍 12    🔁 2    💬 1    📌 0

Trump is MAGA for Mamdani.
I've never seen another politician as adept as Mamdani at coming back to the point.

How is he this good?
My hot take: it's his training as a rapper.

Somewhere there's a thesis to be written on nerdy rappers who end up as professionals.

22.11.2025 23:23 — 👍 3    🔁 0    💬 0    📌 0

Send it to me. I'll happily respond and b/cc you.

18.11.2025 04:23 — 👍 1    🔁 0    💬 0    📌 0

📣 (pls repost, self plugs welcomed!):

Looking for papers on the precarity of *knowledge/white-collar workers* under AI.

Lots on gig work + AI, but less on how non-unionized, higher-income knowledge workers face rising risks from AI systems they help create.

Got recs?
#academicSky #HCI #AI

15.11.2025 17:20 — 👍 3    🔁 3    💬 0    📌 0

Test of Time awards eat Best Paper awards for lunch 🔥🔥🔥 Given how fast-moving Computer Science is as a field, work that stands the test of time is the real deal 💯

Congrats @markriedl.bsky.social! So wholesome to see fellow EI-lab alum @matthewguz.bsky.social up there as well!

12.11.2025 23:02 — 👍 6    🔁 0    💬 1    📌 0

💡 Because the best advisors don't create followers. They create leaders. Then they beam with pride when the world notices.

🙏 Grateful for mentors like Mark who do exactly that.

🌱 And hoping I can pay that forward to my own students, present and future.

#academicsky #Mentorship #PhD #Gratitude

06.11.2025 18:37 — 👍 2    🔁 0    💬 0    📌 0

And honestly? My quiet prayer is that someday I'll be at a conference, and someone will say the same thing to me.
And I'll find the nearest pen. Just like Mark did. ✏️

If that ever happens, I'll know I did at least one part of this job right.

Because the best advisors...

1/3

06.11.2025 18:37 — 👍 1    🔁 0    💬 1    📌 0

πŸ“ From the moment I started working with him, he pushed me to establish my identity.

When I wrote bios, he'd say: "Lead with your work. Don't start with 'I'm advised by Mark Riedl.'"

So seeing this badge ("Mark (Upol's Advisor)") felt like a full-circle moment. 💫

1/2

06.11.2025 18:37 — 👍 1    🔁 0    💬 1    📌 0

My mentor @markriedl.bsky.social posted this from the AIES conference. Made my day!

Turns out people were opening conversations by saying: "Hey… you're Upol's advisor, right?"

On the surface, hilarious.

Underneath, it's the mentoring philosophy he's practiced for years.

Here's why:

1/n

06.11.2025 18:37 — 👍 4    🔁 0    💬 1    📌 0

Bro, I love the energy. (South) Asians go hard. Love that people are finally seeing our work ethic at the highest place in NYC. I love everything you're doing.

But for the sake of all humanity, I hope you go to sleep and get some rest! We can't afford a burned out Zohran!

06.11.2025 15:58 — 👍 2    🔁 0    💬 0    📌 0

There are many ways to motivate people.
We've seen what fear and division do.
Now we have a chance to see what hope and unity can do.

05.11.2025 20:49 — 👍 3    🔁 0    💬 0    📌 0

Didn't realize Bsky logged me out. Came back to this banger. Made my day!

04.11.2025 20:58 — 👍 6    🔁 0    💬 0    📌 0
Post image

Peak academic mood: ghosting Nobel Prize for solitude.

✨ The real prize was peace.

07.10.2025 19:07 — 👍 3    🔁 0    💬 0    📌 0
Post image

Academics after a paper deadline.

#academicSky

25.09.2025 21:13 — 👍 14    🔁 1    💬 0    📌 0

It is eye-opening to see the same CS conference venues that loudly talk about "accessibility" and "allyship" forcing people to travel (regardless of visa/health issues) and banning hybrid efforts.

23.09.2025 21:15 — 👍 4    🔁 0    💬 0    📌 0

We need to stop pretending brilliant science comes from brilliant minds working brilliantly at all times. Most of the stuff in academia is this unglamorous persistence from people who keep showing up.

7/7
n=7
#academicSky #mentalHealth #grit

22.09.2025 16:51 — 👍 9    🔁 0    💬 0    📌 0

If you're in the fog: Try the smallest possible "show up" today. 🌱
If you know someone struggling: Check on your people.

💪 Because sometimes the gap between "almost didn't exist" and "submitted to CHI" is just... showing up.

6/n

22.09.2025 16:51 — 👍 3    🔁 0    💬 1    📌 0