Aditya and Joy wearing regalia and posing in front of a bright red background!
Big day today with Joy Ming graduating!! A new doctor in town! Can't wait to see all the incredible things Joy will take on next. So proud!
23.05.2025 21:28 · 5 likes · 0 reposts · 0 replies · 0 quotes
Thank you to all our participants, co-organizers, student volunteers, funders, and partners who made this possible. And to Joy Ming for the beautiful visual summaries.
23.05.2025 16:00 · 0 likes · 0 reposts · 0 replies · 0 quotes
Our conversations spanned:
🔷 Meaningful use cases of AI in high-stakes global settings
🔷 Interdisciplinary methods across computing and humanities
🔷 Partnerships between academia, industry, and civil society
🔷 The value of local knowledge, lived experiences, and participatory design
23.05.2025 16:00 · 0 likes · 0 reposts · 1 reply · 0 quotes
Over three days, we explored what it means to design and govern pluralistic and humanistic AI technologies: ones that serve diverse communities, respect cultural contexts, and center social well-being. The summit was part of the Global AI Initiative at Cornell.
23.05.2025 16:00 · 0 likes · 0 reposts · 1 reply · 0 quotes
Yesterday we wrapped up the Thought Summit on LLMs and Society at Cornell: an energizing and deeply reflective gathering of researchers, practitioners, and policymakers from across institutions and geographies.
23.05.2025 16:00 · 2 likes · 0 reposts · 1 reply · 0 quotes
Thank you Dhanaraj for attending the Thought Summit and sharing your thoughts on how we can design AI for All!
23.05.2025 15:56 · 1 like · 0 reposts · 0 replies · 0 quotes
This was a week of reflection, new ideas, and a renewed sense of urgency to design AI systems that serve marginalized communities globally. Can't wait for what's next.
02.05.2025 01:05 · 1 like · 0 reposts · 0 replies · 0 quotes
ASHABot: An LLM-Powered Chatbot to Support the Informational Needs of Community Health Workers | Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems
Pragnya Ramjee presented work (with Mohit Jain at MSR India) on deploying LLM tools for community health workers in India. In collaboration with Khushi Baby, we show how thoughtful AI design can (and cannot) bridge critical informational gaps in low-resource settings.
dl.acm.org/doi/10.1145/...
02.05.2025 01:05 · 1 like · 0 reposts · 2 replies · 0 quotes
"Who is running it?" Towards Equitable AI Deployment in Home Care Work | Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems
Ian René Solano-Kamaiko presented our study on how algorithmic tools are already shaping home care work, often invisibly. These systems threaten workers' autonomy and safety, underscoring the need for stronger protections and democratic AI governance.
dl.acm.org/doi/10.1145/...
02.05.2025 01:05 · 1 like · 0 reposts · 1 reply · 0 quotes
Exploring Data-Driven Advocacy in Home Health Care Work | Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems
Joy Ming presented our award-winning paper on designing advocacy tools for home care workers. In this work, we unpack tensions between individual and collective goals and highlight how to use data responsibly in frontline labor organizing.
dl.acm.org/doi/10.1145/...
02.05.2025 01:05 · 1 like · 0 reposts · 1 reply · 0 quotes
AI Suggestions Homogenize Writing Toward Western Styles and Diminish Cultural Nuances | Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems
Dhruv presented our cross-cultural study on AI writing tools and their Western-centric biases. We found that AI suggestions disproportionately benefit American users and subtly nudge Indian users toward Western writing norms, raising concerns about cultural homogenization.
dl.acm.org/doi/10.1145/...
02.05.2025 01:05 · 0 likes · 0 reposts · 1 reply · 0 quotes
"Ignorance is not Bliss": Designing Personalized Moderation to Address Ableist Hate on Social Media | Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems
Sharon Heung presented our work on personalizing moderation tools to help disabled users manage ableist content online. We showed how users want control over filtering and framing, while also expressing deep skepticism toward AI-based moderation.
dl.acm.org/doi/10.1145/...
02.05.2025 01:05 · 0 likes · 0 reposts · 1 reply · 0 quotes
Dhruv, Sharon, Aditya, Jiamin, Ian, and Joy in front of buildings and trees surrounded by a lush green landscape.
Just wrapped up an incredible week at #CHI2025 in Yokohama with Joy Ming, @sharonheung.bsky.social, Dhruv Agarwal, and Ian René Solano-Kamaiko! We presented several papers that push the boundaries of what Globally Equitable AI could look like in high-stakes contexts.
02.05.2025 01:05 · 11 likes · 0 reposts · 1 reply · 0 quotes
The Great Language Flattening
Chatbots learned from human writing. Now it's their turn to influence us.
As these tools become more common, it's critical to ask: Whose voice is being amplified, and whose is being erased? www.theatlantic.com/technology/a...
02.05.2025 00:41 · 1 like · 0 reposts · 1 reply · 0 quotes
Excited to see our research in The Atlantic and Fast Company!
Our work, presented at #CHI2025 this week, shows how AI writing suggestions often nudge people toward Western styles, unintentionally flattening cultural expression and nuance.
02.05.2025 00:41 · 5 likes · 0 reposts · 1 reply · 0 quotes
Excited to be at #CHI2025 with Joy Ming, Sharon Heung, Dhruv Agarwal, and Ian René Solano-Kamaiko!
Our lab will be presenting several papers on Globally Equitable AI, centering equity, culture, and inclusivity in high-stakes contexts.
If you'll be there, would love to connect!
26.04.2025 09:32 · 3 likes · 0 reposts · 0 replies · 0 quotes
Huge congratulations to Mahika Phutane for leading this work, and Ananya Seelam for her contributions!
We're thrilled to share this at ACM FAccT 2025.
Read the full paper: lnkd.in/eCsAupvK
12.04.2025 20:57 · 0 likes · 0 reposts · 0 replies · 0 quotes
Our findings make a clear case: AI moderation systems must center disabled people's expertise, especially when defining harm and safety.
This isn't just a technical problem; it's about power, voice, and representation.
12.04.2025 20:57 · 1 like · 0 reposts · 1 reply · 0 quotes
Disabled participants frequently described these AI explanations as "condescending" or "dehumanizing."
The models reflect a clinical, outsider gaze rather than lived experience or structural understanding.
12.04.2025 20:57 · 0 likes · 0 reposts · 1 reply · 0 quotes
AI systems often underestimate ableism, even in clear-cut cases of discrimination or microaggressions.
And when they do explain their decisions? The explanations are vague, euphemistic, or moralizing.
12.04.2025 20:57 · 0 likes · 0 reposts · 1 reply · 0 quotes
Methodology of our paper, starting with creating a dataset containing ableist and non-ableist posts, followed by collecting and analyzing ratings and explanations from AI models and from disabled and non-disabled participants.
We studied how AI systems detect and explain ableist content, and how that compares to judgments from 130 disabled participants.
We also analyzed explanations from 7 major LLMs and toxicity classifiers. The gaps are stark.
12.04.2025 20:57 · 0 likes · 0 reposts · 1 reply · 0 quotes
Image of our arXiv preprint with paper title and author list: Mahika Phutane, Ananya Seelam, and Aditya Vashistha
Our paper, "Cold, Calculated, and Condescending": How AI Identifies and Explains Ableism Compared to Disabled People, has been accepted at ACM FAccT 2025!
A quick thread on what we found:
12.04.2025 20:57 · 8 likes · 0 reposts · 1 reply · 0 quotes
Excited to be at @umich.edu this week to speak at the Democracy's Information Dilemma event and at the Social Media and Society conference! Hard to believe I was last here in 2016. Can't wait for engaging conversations, new ideas, and reconnecting with colleagues old and new!
02.04.2025 21:47 · 2 likes · 0 reposts · 0 replies · 0 quotes
The FATE group at @msftresearch.bsky.social NYC is accepting applications for 2025 interns. 🥳
For full consideration, apply by 12/18.
jobs.careers.microsoft.com/global/en/jo...
Interested in AI evaluation? Apply for the STAC internship too!
jobs.careers.microsoft.com/global/en/jo...
25.11.2024 13:31 · 73 likes · 35 reposts · 4 replies · 1 quote
Thank you Shannon!
23.11.2024 16:32 · 7 likes · 0 reposts · 0 replies · 0 quotes
Hi Shannon, I'd love to be added!
23.11.2024 16:10 · 1 like · 0 reposts · 1 reply · 0 quotes
An exciting opportunity for PhD students to spend a summer at our campus in NYC and be part of our new Security, Trust, and Safety (SETS) Initiative! Deadline: January 21, 2025.
22.11.2024 15:32 · 9 likes · 3 reposts · 0 replies · 0 quotes
I'd love to be added too! Thank you!
22.11.2024 15:24 · 1 like · 0 reposts · 1 reply · 0 quotes
I do research on some tech stuff
asst prof @ cornell info sci | fairness in tech, public health & services | alum of MSR, Stanford ICME, NERA Econ, MIT Math | she/her | koenecke.infosci.cornell.edu
🎓 PhD student @ischool.uw.edu (he/him)
🤖 Interested in pluralistic alignment & social media algorithms
Academic Website: https://sohamde.in/ | 📍 Seattle, WA
Prof. Georgetown University, Tech and Welfare, ICTD, Critical AI, author: Patching Development.
ex-Microsoft developer, UC Berkeley (Ph.D.)
interested in Real Utopias
Tech and social change #sociotechnical, #labortech, #welfare, #AI
www.rajeshveera.org
Sr. Principal Research Manager at Microsoft Research, NYC // Machine Learning, Responsible AI, Transparency, Intelligibility, Human-AI Interaction // WiML Co-founder // Former NeurIPS & current FAccT Program Co-chair // Brooklyn, NY // http://jennwv.com
Assistant professor of CS at UC Berkeley, core faculty in Computational Precision Health. Developing ML methods to study health and inequality. "On the whole, though, I take the side of amazement."
https://people.eecs.berkeley.edu/~emmapierson/
Professor at Cornell Tech. Co-founder of Clinic to End Tech Abuse. Computer security, tech abuse, cryptography.
https://rist.tech.cornell.edu/
I work on human-centered {security|privacy|computing}. Associate Professor (w/o tenure) at @hcii.cmu.edu. Director of the SPUD (Security, Privacy, Usability, and Design) Lab. Non-Resident Fellow @cendemtech.bsky.social
Tech, facts, and carbonara.
mantzarlis.com
He teaches information science at Cornell. http://mimno.infosci.cornell.edu
ethics, politics, and policy in tech
faculty, information science @ cornell
www.danielsusser.info
Computational Social Science & Social Computing Researcher | Assistant Prof @illinoisCDS @UofIllinois | Prev @MSFTResearch | Alum @ICatGT @GeorgiaTech @IITKgp
Assistant Professor at George Mason University | Ph.D. from Indiana University Bloomington | specializing in security, privacy, HCI, AR/VR, and accessibility | Prev Institutions: University of Denver, ParityTech, XRSI, and American Express.
PhD candidate at Cornell University. HCI researcher examining trust & safety issues in the Majority World.
Learning. Discovery. Engagement. Share your #Cornell moments with our Big Red community 🐾
Law/tech, surveillance, work, truckers. Faculty Cornell Information Science, Cornell Law / Fellow @NewAmerica / Data Driven: http://tinyurl.com/57v559mv / www.karen-levy.net
Professor of Computer Science, Stanford - HCI & Design, Co-Founder & Co-Director Stanford HAI @stanfordhai.bsky.social
HCI Prof at CMU HCII. Research on augmented intelligence, participatory AI, & complementarity in human-human and human-AI workflows.
thecoalalab.com