Advancing science- and evidence-based AI policy
Policy must be informed by, but also facilitate the generation of, scientific evidence
What kind of AI governance do we need? Our new piece in @science.org answers this: we need policy grounded in evidence and built to generate more of it. Evidence-based policymaking is not a slogan—it's a design challenge for democratic governance in the age of AI www.science.org/doi/10.1126/... 🧵
31.07.2025 23:27 — 👍 100 🔁 46 💬 6 📌 2
Note that the data collection ended right before ChatGPT was released, so my guess is that the percentages are no longer small.
18.07.2025 01:00 — 👍 3 🔁 0 💬 0 📌 0
Could AI slow science?
Confronting the production-progress paradox
Fabulous post by @randomwalker.bsky.social & Sayash raising the same concern many of us have about whether we're on the right track with how we're using AI for science. Everyone should read it, take a deep breath & think through the implications.
www.aisnakeoil.com/p/could-ai-s...
17.07.2025 16:05 — 👍 156 🔁 70 💬 6 📌 11
Understanding Social Media Recommendation Algorithms
I'm reading a very well written 2023 paper on social media recommender systems from @randomwalker.bsky.social. I had completely forgotten that in the '00s "neither Facebook nor Twitter had the ability to reshare or retweet posts in your feed." What a huge shift!
knightcolumbia.org/content/unde...
10.07.2025 15:51 — 👍 2 🔁 4 💬 0 📌 0
We're hiring at Princeton on AI and society, working with Arvind Narayanan or me depending on fit.
I think current AI developments are all a huge deal, but I am very unexcited by the current state of the AGI and/or AI safety discourse.
Please share as you see fit.
puwebp.princeton.edu/AcadHire/app...
20.06.2025 12:36 — 👍 84 🔁 52 💬 2 📌 2
After consideration, I will post occasionally, but heavily censor what I share compared to other sites.
I tried making the transition, but talking about AI here is just really fraught in ways that are tough to mitigate, and that makes it hard to have good discussions (the point of social!). Maybe that will change.
26.05.2025 04:25 — 👍 428 🔁 25 💬 76 📌 35
Two Paths for A.I.
The technology is complicated, but our choices are simple: we can remain passive, or assert control.
For @newyorker.com, Joshua Rothman spoke with @randomwalker.bsky.social and @sayash.bsky.social, authors of AI Snake Oil and a recently published paper "AI as Normal Technology", which argues that practical obstacles will slow AI's uses and potential: www.newyorker.com/culture/open...
28.05.2025 18:06 — 👍 16 🔁 3 💬 1 📌 2
"A hypothesis on the accelerating decline of reading:
* Broadly speaking, people read for pleasure/entertainment and for learning/obtaining information.
* Reading for pleasure has been declining for a while and is being replaced by videos (very sharply among young people). This trend will surely continue.
* Reading for obtaining information is getting intermediated by chatbots. We are in the very early stages of this shift, so I think people underappreciate the magnitude of what's coming. It's not just that AI is replacing traditional web search. Even when it comes to reading news articles, business documents, or scientific papers, the vision that tech companies are pushing on us is AI summarization + synthesis + Q&A.
* We don't have to accept this, but I predict that most people will. It's a tradeoff between speed/convenience and accuracy/depth of understanding — the same tradeoff that was once offered to us when it became possible to search the web to look up a quick fact as opposed to reading about the topic in depth in an encyclopedia.
* Just as most people in most cases prefer a shallow web search over deeper reading, most people in most cases will prefer AI-intermediated access to knowledge. Traditional reading won't disappear, but people will do it vastly less often, except in hobbyist reading communities and professions where traditional reading is needed.
* The decline of reading-for-pleasure (due to video) and reading-for-information (due to AI) will accelerate each other, as reading text without an intermediary will come to be seen as a chore.
* Personally, I find this sad. But while it's tempting to moralize all this, I think that's unproductive. Yelling at individuals to resist new media has been done for centuries and has never worked.
* Even if people individually rationally choose these tradeoffs, I think we collectively lose something; critical reading skills are arguably essential for a democracy. We need to figure out what to do about that.
clear, depressing set of observations from @randomwalker.bsky.social - "The decline of reading-for-pleasure (due to video) and reading-for-information (due to AI) will accelerate each other, as reading text without an intermediary will come to be seen as a chore."
22.05.2025 14:31 — 👍 334 🔁 113 💬 14 📌 18
Moving towards informative and actionable social media research
Social media is nearly ubiquitous in modern life, and concerns have been raised about its putative societal impacts, ranging from undermining mental health and exacerbating polarization to fomenting v...
New preprint with @jbakcoleman.bsky.social @lewan.bsky.social @randomwalker.bsky.social @orbenamy.bsky.social @lfoswaldo.bsky.social where we argue for a complex-system perspective to understand the causal effects of social media on society and for a triangulation of methods
arxiv.org/abs/2505.09254
15.05.2025 06:31 — 👍 77 🔁 28 💬 2 📌 3
I'm excited that I can finally share what I've been working on for the past 9 months:
The United Nations 2025 Human Development Report: "A matter of choice: People and possibilities in the age of AI" 🧵
hdr.undp.org/content/huma...
06.05.2025 09:03 — 👍 110 🔁 28 💬 6 📌 5
AGI is not a milestone
There is no capability threshold that will lead to sudden impacts
"AGI is not a milestone because it is not actionable. A company declaring it has achieved, or is about to achieve, AGI has no implications for how businesses should plan, what safety interventions we need, or how policymakers should react."
@randomwalker.bsky.social
open.substack.com/pub/aisnakeo...
01.05.2025 11:59 — 👍 6 🔁 1 💬 1 📌 0
Okay just started @randomwalker.bsky.social and @sayash.bsky.social's new essay and this is 🔥🔥🔥.
"Resilience as the overarching approach to catastrophic risk" -- yes thank you exactly this.
kfai-documents.s3.amazonaws.com/documents/c3...
24.04.2025 20:41 — 👍 14 🔁 2 💬 1 📌 0
text says "ML Reproducibility Challenge Princeton University, New Jersey, USA, August 21 2025"
We are hosting @reproml.org 2025 on Aug. 21. There will be invited talks, oral presentations, and poster sessions. Keynote speakers include @randomwalker.bsky.social, @soumithchintala.bsky.social, @jfrankle.com, @jessedodge.bsky.social, @stellaathena.bsky.social
Register now: bit.ly/4cP8vIq
24.04.2025 18:57 — 👍 1 🔁 1 💬 0 📌 0
In this clip from our event last week, @randomwalker.bsky.social describes how we can map out the landscape of AI along two dimensions: how well the AI tool works, and how harmful (or benign) it is.
Watch a full recording of the event: youtu.be/C3TqcUEFR58
24.04.2025 15:21 — 👍 12 🔁 4 💬 1 📌 1
AI as Normal Technology
A new paper that we will expand into our next book
IMO, the most important piece on AI of the last 6 months and I recommend it to everyone. A genuinely careful consideration of the technology and its intersections with culture and labor from @randomwalker.bsky.social and @sayash.bsky.social Authors of AI Snake Oil substack.com/home/post/p-...
19.04.2025 12:37 — 👍 102 🔁 27 💬 4 📌 3
AI as Normal Technology
Truly thoughtful and essential analysis of the AI field from @randomwalker.bsky.social @sayash.bsky.social. States what many felt, but haven't articulated. Pairs well with Shazeda Ahmed's "epistemic culture of AI safety" and others' work on risk and anti-trust.
knightcolumbia.org/content/ai-a...
17.04.2025 16:25 — 👍 5 🔁 1 💬 1 📌 2
AI as Normal Technology
In a new essay from our "Artificial Intelligence and Democratic Freedoms" series, @randomwalker.bsky.social & @sayash.bsky.social make the case for thinking of #AI as normal technology, instead of superintelligence. Read here: knightcolumbia.org/content/ai-a...
15.04.2025 14:34 — 👍 38 🔁 17 💬 1 📌 5
On Thursday, April 17 at 5:30 PM EDT, we welcome @randomwalker.bsky.social to discuss his latest book, "AI Snake Oil," co-authored with Sayash Kapoor. The presentation will be followed by a discussion with @dacemoglumit.bsky.social.
Register here: bit.ly/4cGF3Eo
14.04.2025 15:01 — 👍 8 🔁 3 💬 1 📌 3
Why an overreliance on AI-driven modelling is bad for science
Without clear protocols to catch errors, artificial intelligenceβs growing role in science could do more harm than good.
New commentary in @nature.com from Professor Arvind Narayanan (@randomwalker.bsky.social) & PhD candidate Sayash Kapoor (@sayash.bsky.social) about the risks of rapid adoption of AI in science - read: "Why an overreliance on AI-driven modelling is bad for science" 👇
#CITP #AI #science #AcademiaSky
09.04.2025 18:19 — 👍 18 🔁 10 💬 0 📌 0
On 4/10 and 4/11, we're hosting our symposium "AI and Democratic Freedoms." Excited to have panelists @atoosakz.bsky.social, @randomwalker.bsky.social, @alondra.bsky.social, & Deirdre K. Mulligan join moderator
@shaynelongpre.bsky.social to kick it off. RSVP: www.eventbrite.com/e/artificial...
02.04.2025 19:58 — 👍 13 🔁 5 💬 1 📌 0
"Computer scientists Arvind Narayanan and Sayash Kapoor will present "Understanding AI: What It Can and Cannot Do" at 12 p.m. on April 3. The program will be streamed on Baltimore County Public Library's YouTube channel & bit.ly/AISnakeOil. No registration needed"
msla.maryland.gov/Pages/press-...
30.03.2025 11:19 — 👍 44 🔁 23 💬 3 📌 3
Book Review: AI Snake Oil: What Artificial Intelligence Can Do, What It Can't, and How to Tell the Difference, by Arvind Narayanan and Sayash Kapoor - Alexya Martinez, 2025
NEW #JMCQReview 🚨 In this book, @randomwalker.bsky.social and @sayash.bsky.social dissect the promises and the pitfalls of predictive AI. Read the @jmcquarterly.bsky.social review by Alexya Martinez #commsky
journals.sagepub.com/doi/10.1177/...
24.03.2025 21:00 — 👍 5 🔁 1 💬 1 📌 1
Thank you for making this. Is the source available? I would like to learn how to build something like this.
20.03.2025 13:04 — 👍 9 🔁 0 💬 1 📌 0
The Wuhan lab is STILL doing unsafe research that could trigger a pandemic AND getting prestigious pubs as incentive. 🚫
To fix this for the future, we have to admit we were deliberately misled on the possibility of a lab leak in the past. Horrid but true.
Gift link
www.nytimes.com/2025/03/16/o...
16.03.2025 12:22 — 👍 226 🔁 77 💬 24 📌 30
We focus so much of our attention on algorithms, but not enough on the design of social networks.
Loved this insight from @randomwalker.bsky.social on what makes TikTok feel so different here: knightcolumbia.org/blog/tiktoks...
14.03.2025 00:04 — 👍 16 🔁 4 💬 1 📌 1
What are 3 concrete steps that can improve AI safety in 2025? 🤔⚠️
Our new paper, "In House Evaluation is Not Enough," has 3 calls to action to empower evaluators:
1️⃣ Standardized AI flaw reports
2️⃣ AI flaw disclosure programs + safe harbors.
3️⃣ A coordination center for transferable AI flaws.
1/🧵
13.03.2025 15:59 — 👍 11 🔁 8 💬 1 📌 1
Director, Princeton Language and Intelligence. Professor of CS.
I study algorithms/learning/data applied to democracy/markets/society. Asst. professor at Cornell Tech. https://gargnikhil.com/. Helping build a personalized Bluesky research feed: https://bsky.app/profile/paper-feed.bsky.social/feed/preprintdigest
Lawyer, coder, baker.
Govt (Deputy US CTO for Obama and Biden), non-profit (Trust & Safety Professional Assn & Foundation, Data & Society, Public.resource), startup person (Google, Twitter).
Curious tinkerer.
@amac on the other thing.
Now in London.
Author: Recoding America, Founder: Code for America, Co-founder: USDS and USDR.
The Center for Information Technology Policy (CITP) is a nexus of expertise in technology, engineering, public policy, & the social sciences. Our researchers work to better understand and improve the relationship between technology & society. Princeton U.
Safe and robust AI/ML, computational sustainability. Former President AAAI and IMLS. Distinguished Professor Emeritus, Oregon State University. https://web.engr.oregonstate.edu/~tgd/
Professor of Psychology & Human Values at Princeton | Cognitive scientist curious about technology, narratives, & epistemic (in)justice | They/She 🏳️‍🌈
www.crockettlab.org
It is said that there may be seeming disorder and yet no real disorder at all
I lead Cohere For AI. Formerly Research, Google Brain. ML Efficiency, LLMs, @trustworthy_ml.
Professor and Head of Machine Learning Department at Carnegie Mellon. Board member OpenAI. Chief Technical Advisor Gray Swan AI. Chief Expert Bosch Research.
Research Scientist @DeepMind | Previously @OSFellows & @hrdag. RT != endorsements. Opinions Mine. Pronouns: he/him
An LLN - large language Nathan - (RL, RLHF, society, robotics), athlete, yogi, chef
Writes http://interconnects.ai
At Ai2 via HuggingFace, Berkeley, and normal places
Professor at Wharton, studying AI and its implications for education, entrepreneurship, and work. Author of Co-Intelligence.
Book: https://a.co/d/bC2kSj1
Substack: https://www.oneusefulthing.org/
Web: https://mgmt.wharton.upenn.edu/profile/emollick
Political philosopher of AI. Assistant Prof @ UW-Madison. Previous: Harvard Tech & Human Rights Fellow, Princeton postdoc, Oxford DPhil 🤖 In Berlin May-August