Shamya Karumbaiah

@shamya-karumbaiah.bsky.social

EdPsych Prof at UW-Madison · equitable, human-centered AI for teaching and learning · Himalayas, meditation, cat mom (she/her)

68 Followers  |  174 Following  |  1 Post  |  Joined: 16.02.2025

Latest posts by shamya-karumbaiah.bsky.social on Bluesky

After a hiatus, the AI Now Landscape Report is back: Artificial Power examines the fallout from the recent AI hype cycle and maps out another path available to us - one that puts the public, not profits, at the center.

Read more here: ainowinstitute.org/publications...

03.06.2025 14:44 · 👍 40    🔁 14    💬 0    📌 0

For more, don’t miss this Friday’s conversation on the consequences of AI hype with @karenhao.bsky.social, @emilymbender.bsky.social, & @alexhanna.bsky.social! RSVP and join us June 6 @ 1 pm ET. datasociety.net/events/chall...

02.06.2025 14:24 · 👍 18    🔁 5    💬 0    📌 0
The AI Con by Emily M. Bender and Alex Hanna in Conversation with Emily Mills

Madison! Tonight, @emilymbender.bsky.social and I will be at @roomofonesownbooks.bsky.social, in conversation with the great @millbot.bsky.social about our book! Come through!

roomofonesown.com/event/2025-0...

02.06.2025 13:46 · 👍 19    🔁 8    💬 2    📌 1

Shoutout to my wonderful collaborators - Mariana Castro, Diego Roman, Cynthia Baeza, and WI teachers - whose work on translanguaging theory and bilingual pedagogy lays out the foundation necessary to start envisioning ethical use of multilingual and multicultural AI in classrooms.

02.04.2025 20:33 · 👍 6    🔁 0    💬 1    📌 0
Research project/Challenge 
The AIAL (AI Accountability Lab) is seeking to appoint a Postdoctoral Researcher to develop a justice-oriented audit framework synthesising computational methods, theories of justice, and existing regulations to proactively focus audits on meaningful accountability. The goal of the framework is to provide audit practitioners with practical tools, such as guiding questions and rubrics, that shape perspectives towards rigorous, justice-oriented audits.

The position corresponds to work in one or more of the following areas:
Accountability
Mapping accountability mechanisms and governance structures and their alignments with fundamental rights and freedoms and legal frameworks.
Challenging existing accountability mechanisms that do not consider or sufficiently address social inequalities, power and resource asymmetries. 
Developing new methods for ensuring accountability beyond technical and organisational considerations that provide empirical evidence for holding stakeholders accountable for AI development, provision, and deployments.
Auditing
Developing audit methodologies for specific stages in the AI lifecycle focused on ensuring justice, accountability, and transparency beyond merely satisfying legal requirements.
Developing verifiable, replicable, and reproducible design methodologies and frameworks, and using these in the execution of audits.
Developing audit tools and frameworks to evaluate AI development and deployments with a specific focus on risk and harm mitigations beyond technical and organisational issues.

We are seeking to appoint a Postdoctoral Researcher to work on developing a justice-oriented audit framework synthesising computational methods, theories of justice, and existing regulations to premeditatively focus audits towards meaningful accountability. www.adaptcentre.ie/careers/post...

02.04.2025 15:26 · 👍 14    🔁 10    💬 2    📌 1
Karumbaiah Awarded Grant from Spencer Foundation to Study how AI Tools Can Better Support Multilingual Students IDS affiliate Shamya Karumbaiah was recently awarded a Spencer Foundation Grant to study how generative AI educational tools can better support multilingual students. Karumbaiah, an assistant professo...

Congrats to IDS affiliate @shamya-karumbaiah.bsky.social on being awarded a Spencer Foundation Grant to study how generative AI tools can better support multilingual learners in school!

ids.wisc.edu/2025/03/31/k...

02.04.2025 13:41 · 👍 2    🔁 1    💬 1    📌 1
The mechanisms of AI hype and its planetary and social costs - AI and Ethics Our global landscape of emerging technologies is increasingly affected by artificial intelligence (AI) hype, a phenomenon with significant large-scale consequences for the global AI narratives being c...

"Hence, when it comes to the AI hype, those without complete credibility are able to present themselves as AI experts given the demand for AI skills via the technologically deterministic narrative that is presented."

link.springer.com/article/10.1...

21.03.2025 17:22 · 👍 1    🔁 1    💬 0    📌 0
Scientific Consensus on AI Bias

We might be moving into a new era of AI "DOGE-ness," but the science of AI hasn't changed. It can still perpetuate bias and amplify discrimination, and we can't executive-order that away, no matter how hard we try. Join the statement by hundreds of scientists here.

www.aibiasconsensus.org

21.03.2025 13:30 · 👍 24    🔁 10    💬 1    📌 1
Bias or Insufficient Sample Size? Improving Reliable Estimation of Algorithmic Bias for Minority Groups | Proceedings of the 15th International Learning Analytics and Knowledge Conference

New from IDS affiliate @shamya-karumbaiah.bsky.social, Jaeyoon Choi, and Jeffrey Matayoshi: Bias or Insufficient Sample Size? Improving Reliable Estimation of Algorithmic Bias for Minority Groups

dl.acm.org/doi/10.1145/...

26.02.2025 15:11 · 👍 1    🔁 1    💬 0    📌 0

Just shared this with the data visualization class I’m teaching this semester β€” it’s a really great demonstration of how little things change the entire visualization and the narratives we can tell with it!

14.02.2025 14:05 · 👍 73    🔁 20    💬 4    📌 0
“A woman’s place is in a safe city: Designing feminist cities through Nirbhaya Funds” by Radhika Radhakrishnan, in association with the MIT Data+Feminism Lab. In the backdrop, a train runs between Mumbai and Kolkata, with protesters holding up placards demanding justice and safety for cis-women, trans, and queer persons in both cities.

We are excited to launch "A Woman's Place is in a Safe City," a data story on the use of #NirbhayaFunds for digital surveillance in India, in collaboration with @mitdusp.bsky.social Data+Feminism Lab, POV Mumbai @thesafecityapp.bsky.social & 3 anonymised Kolkata-based NGOs. bit.ly/3EvqV3R 🧡Read on:

14.02.2025 13:57 · 👍 33    🔁 15    💬 1    📌 1
