Personalized AI is rerunning the worst part of social media's playbook
The incentives, risks, and complications of AI that knows you
AI companies are starting to promise personalized assistants that "know you." We've seen this playbook before – it didn't end well.
In a guest post for @hlntnr.bsky.social's Rising Tide, I explore how leading AI labs are rushing toward personalization without learning from social media's mistakes.
21.07.2025 18:32 · 14 likes · 5 reposts · 0 replies · 3 quotes
OpenAI slashes AI model safety testing time
Testers have raised concerns that its technology is being rushed out without sufficient safeguards
From CDT's @mbogen.bsky.social: "As #AI companies are racing to put out increasingly advanced systems, they also seem to be cutting more and more corners on safety, which doesn't add up." www.ft.com/content/8...
11.04.2025 18:29 · 22 likes · 12 reposts · 1 reply · 0 quotes
Graphic for CDT AI Gov Lab's report, "Assessing AI: Surveying the Spectrum of Approaches to Understanding and Auditing AI Systems." Illustration of a collection of AI "tools" and a "toolbox" – a hammer and red toolbox – and a stack of checklists with a pencil.
NEW REPORT: CDT AI Governance Lab's Assessing AI report looks at the rise of complex automated systems, which demand a robust ecosystem for managing risks and ensuring accountability. cdt.org/insights/ass... cc: @mbogen.bsky.social
16.01.2025 17:37 · 9 likes · 3 reposts · 1 reply · 0 quotes
howdy!
the Georgetown Law Journal has published "Less Discriminatory Algorithms." it's been very fun to work on this w/ Emily Black, Pauline Kim, Solon Barocas, and Ming Hsu.
i hope you give it a read β the article is just the beginning of this line of work.
www.law.georgetown.edu/georgetown-l...
18.11.2024 16:40 · 50 likes · 15 reposts · 4 replies · 4 quotes
A non-profit bringing together academic, civil society, industry, & media organizations to address the most important and difficult questions concerning AI.
Educator, Author of The Daycare Myth: What We Get Wrong About Early Care and Education (and What We Should Do About It)
Making data & AI work for people & society.
Sign up for our fortnightly newsletter: https://nuffieldfoundation.tfaforms.net/149
Striving for a safer world since 1945
science policy | nuclear weapons | clean energy | STEM education | artificial intelligence | data privacy | much much more
AI Policy Specialist at @scientistsorg.bsky.social. I think about AI policy in the US and Brazil – all views are my own. Lifelong Cruzeirense.
EU law and tech policy, mostly musings on content moderation, platform governance, privacy, AI. Now: Co-Founder & Director @awo.agency. Fellow at Vrije Universiteit Brussel. Then: EU Parliament. UN. Mozilla. PhD in EU law.
Researcher at GDM and at the GenLaw Center.
I just want things to work (:
https://katelee168.github.io/
ML researcher, MSR + Stanford postdoc, future Yale professor
https://afedercooper.info
Corporate disclosures and standards for AI products and models, led by Tim OβReilly & Ilan Strauss @ the Social Science Research Council
https://asimovaddendum.substack.com
https://t.co/1rlsMOTEp7
Executive Director http://witness.org : video + tech + human rights; 'Prepare, Don't Panic' initiative on media manipulation/generative AI. TED speaker on deepfakes https://gen-ai.witness.org, @samgregory on Twitter/X. PhD Westminster.
Director of AI Research at Apple. Board Chair for Partnership on AI. Photographer. Musician.
Sociologist of emergent tech. Dog content enthusiast. https://www.jennyldavis.com/
ITH. UK AISI.
TSR. Dad. Views my own?
AI, national security, China. Part of the founding team at @csetgeorgetown.bsky.social (opinions my own). Author of Rising Tide on Substack: helentoner.substack.com
We advance science and technology to benefit humanity.
http://microsoft.com/research
VP and Distinguished Scientist at Microsoft Research NYC. AI evaluation and measurement, responsible AI, computational social science, machine learning. She/her.
One photo a day since January 2018: https://www.instagram.com/logisticaggression/
AI, sociotechnical systems, social purpose. Research director at Google DeepMind. Cofounder and Chair at Deep Learning Indaba. FAccT2025 co-program chair. shakirm.com
Writing about tech policy. Senior Policy Advisor at NTIA, research at Center for Democracy & Technology.