Jobs! First, we hope to be hiring in Computer Science for the @cornelltech.bsky.social campus:
academicjobsonline.org/ajo/jobs/30804
Focus on security, SysML, and NLP.
Please share!
@imadityav.bsky.social
Assistant Professor at Cornell. Research in HCI4D, Social Computing, Responsible AI, and Accessibility. https://www.adityavashistha.com/
Aditya and Joy wearing regalia and posing in front of a bright red background!
Big day today with Joy Ming graduating!! A new doctor in town! Can't wait to see all the incredible things Joy will take on next. So proud!
23.05.2025 21:28
Thank you to all our participants, co-organizers, student volunteers, funders, and partners who made this possible. And to Joy Ming for the beautiful visual summaries.
23.05.2025 16:00
Our conversations spanned:
- Meaningful use cases of AI in high-stakes global settings
- Interdisciplinary methods across computing and humanities
- Partnerships between academia, industry, and civil society
- The value of local knowledge, lived experiences, and participatory design
Over three days, we explored what it means to design and govern pluralistic and humanistic AI technologies: ones that serve diverse communities, respect cultural contexts, and center social well-being. The summit was part of the Global AI Initiative at Cornell.
23.05.2025 16:00
Yesterday we wrapped up the Thought Summit on LLMs and Society at Cornell, an energizing and deeply reflective gathering of researchers, practitioners, and policymakers from across institutions and geographies.
23.05.2025 16:00
Thank you, Dhanaraj, for attending the Thought Summit and sharing your thoughts on how we can design AI for All!
23.05.2025 15:56
This was a week of reflection, new ideas, and a renewed sense of urgency to design AI systems that serve marginalized communities globally. Can't wait for what's next.
02.05.2025 01:05
Pragnya Ramjee presented work (with Mohit Jain at MSR India) on deploying LLM tools for community health workers in India. In collaboration with Khushi Baby, we show how thoughtful AI design can (and cannot) bridge critical informational gaps in low-resource settings.
dl.acm.org/doi/10.1145/...
Ian René Solano-Kamaiko presented our study on how algorithmic tools are already shaping home care work, often invisibly. These systems threaten workers' autonomy and safety, underscoring the need for stronger protections and democratic AI governance.
dl.acm.org/doi/10.1145/...
Joy Ming presented our award-winning paper on designing advocacy tools for home care workers. In this work, we unpack tensions between individual and collective goals and highlight how to use data responsibly in frontline labor organizing.
dl.acm.org/doi/10.1145/...
Dhruv presented our cross-cultural study on AI writing tools and their Western-centric biases. We found that AI suggestions disproportionately benefit American users and subtly nudge Indian users toward Western writing norms, raising concerns about cultural homogenization.
dl.acm.org/doi/10.1145/...
Sharon Heung presented our work on personalizing moderation tools to help disabled users manage ableist content online. We showed how users want control over filtering and framing, while also expressing deep skepticism toward AI-based moderation.
dl.acm.org/doi/10.1145/...
Dhruv, Sharon, Aditya, Jiamin, Ian, and Joy in front of buildings and trees surrounded by a lush green landscape.
Just wrapped up an incredible week at #CHI2025 in Yokohama with Joy Ming, @sharonheung.bsky.social, Dhruv Agarwal, and Ian René Solano-Kamaiko! We presented several papers that push the boundaries of what Globally Equitable AI could look like in high-stakes contexts.
02.05.2025 01:05
www.fastcompany.com/91324551/cha...
Kudos to Dhruv Agarwal for leading this work and such fun collaboration with @informor.bsky.social!
As these tools become more common, it's critical to ask: whose voice is being amplified, and whose is being erased? www.theatlantic.com/technology/a...
02.05.2025 00:41
Excited to see our research in The Atlantic and Fast Company!
Our work, presented at #CHI2025 this week, shows how AI writing suggestions often nudge people toward Western styles, unintentionally flattening cultural expression and nuance.
Excited to be at #CHI2025 with Joy Ming, Sharon Heung, Dhruv Agarwal, and Ian Rene Solano-Kamaiko!
Our lab will be presenting several papers on Globally Equitable AI, centering equity, culture, and inclusivity in high-stakes contexts.
If you'll be there, would love to connect!
Huge congratulations to Mahika Phutane for leading this work, and Ananya Seelam for her contributions!
We're thrilled to share this at ACM FAccT 2025.
Read the full paper: lnkd.in/eCsAupvK
Our findings make a clear case: AI moderation systems must center disabled people's expertise, especially when defining harm and safety.
This isn't just a technical problem; it's about power, voice, and representation.
Disabled participants frequently described these AI explanations as "condescending" or "dehumanizing."
The models reflect a clinical, outsider gaze, rather than lived experience or structural understanding.
AI systems often underestimate ableism, even in clear-cut cases of discrimination or microaggressions.
And when they do explain their decisions? The explanations are vague, euphemistic, or moralizing.
Methodology of our paper, starting with creating a dataset containing ableist and non-ableist posts, followed by collecting and analyzing ratings and explanations from AI models and from disabled and non-disabled participants.
We studied how AI systems detect and explain ableist content, and how that compares to judgments from 130 disabled participants.
We also analyzed explanations from 7 major LLMs and toxicity classifiers. The gaps are stark.
The image of our arXiv preprint with paper title and author list: Mahika Phutane, Ananya Seelam, and Aditya Vashistha
Our paper, "'Cold, Calculated, and Condescending': How AI Identifies and Explains Ableism Compared to Disabled People," has been accepted at ACM FAccT 2025!
A quick thread on what we found:
Excited to be at @umich.edu this week to speak at the Democracy's Information Dilemma event, and at the Social Media and Society conference! Hard to believe it's been since 2016 that I was last here. Can't wait for engaging conversations, new ideas, and reconnecting with colleagues old and new!
02.04.2025 21:47
NEW YEAR, NEW PLATFORM?
Bowers is joining the Bluesky community! Follow to stay updated on technology innovation, collaborative research, and faculty expertise.
The FATE group at @msftresearch.bsky.social NYC is accepting applications for 2025 interns.
For full consideration, apply by 12/18.
jobs.careers.microsoft.com/global/en/jo...
Interested in AI evaluation? Apply for the STAC internship too!
jobs.careers.microsoft.com/global/en/jo...
Thank you Shannon!
23.11.2024 16:32
Hi Shannon, I'd love to be added!
23.11.2024 16:10
An exciting opportunity for PhD students to spend a summer at our campus in NYC and be part of our new Security, Trust, and Safety (SETS) Initiative! Deadline: January 21, 2025.
22.11.2024 15:32