Sayash Kapoor

@sayash.bsky.social

CS PhD candidate at Princeton. I study the societal impact of AI. Website: cs.princeton.edu/~sayashk Book/Substack: aisnakeoil.com

7,974 Followers  |  950 Following  |  36 Posts  |  Joined: 23.06.2023

Latest posts by sayash.bsky.social on Bluesky

(1/4) Ever wondered what tech policy might look like if it were informed by research on collective intelligence and complex systems? πŸ§ πŸ§‘β€πŸ’»

Join @jbakcoleman.bsky.social, @lukethorburn.com, and myself in San Diego on Aug 4th for the Collective Intelligence x Tech Policy workshop at @acmci.bsky.social!

19.05.2025 11:01 β€” πŸ‘ 18    πŸ” 12    πŸ’¬ 1    πŸ“Œ 3
Why an overreliance on AI-driven modelling is bad for science: Without clear protocols to catch errors, artificial intelligence's growing role in science could do more harm than good.

New commentary in @nature.com from Professor Arvind Narayanan (@randomwalker.bsky.social) & PhD candidate Sayash Kapoor (@sayash.bsky.social) about the risks of rapid adoption of AI in science - read: "Why an overreliance on AI-driven modelling is bad for science" πŸ”—

#CITP #AI #science #AcademiaSky

09.04.2025 18:19 β€” πŸ‘ 18    πŸ” 10    πŸ’¬ 0    πŸ“Œ 0
AI as Normal Technology

In a new essay from our "Artificial Intelligence and Democratic Freedoms" series, @randomwalker.bsky.social & @sayash.bsky.social make the case for thinking of #AI as normal technology, instead of superintelligence. Read here: knightcolumbia.org/content/ai-a...

15.04.2025 14:34 β€” πŸ‘ 38    πŸ” 17    πŸ’¬ 1    πŸ“Œ 5

β€œThe rush to adopt AI has consequences. As its use proliferates…some degree of caution and introspection is warranted.”

In a comment for @nature.com, @randomwalker.bsky.social and @sayash.bsky.social warn against an overreliance on AI-driven modeling in science: bit.ly/4icM0hp

16.04.2025 15:42 β€” πŸ‘ 6    πŸ” 4    πŸ’¬ 0    πŸ“Œ 0

Science is not a collection of findings. Progress happens through theories. As we move from findings to theories, things are less amenable to automation. The proliferation of AI-based scientific findings hasn't accelerated, and might even have inhibited, higher levels of progress. www.nature.com/articles/d41...

09.04.2025 15:45 β€” πŸ‘ 124    πŸ” 49    πŸ’¬ 3    πŸ“Œ 3

This is the specific use case I have in mind (Operator shouldn't be the *only* thing developers use; rather, it can be a helpful addition to a suite of tools): x.com/random_walke...

03.02.2025 18:12 β€” πŸ‘ 2    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0
AI companies are pivoting from creating gods to building products. Good. Turning models into products runs into five challenges

It is also better for end users. As @randomwalker.bsky.social and I have argued, focusing on products (rather than just models) means companies must understand user demand and build tools people want. It leads to more applications that people can productively use: www.aisnakeoil.com/p/ai-compani...

03.02.2025 18:10 β€” πŸ‘ 3    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

Finally, the new product launches from OpenAI (Operator, Search, Computer use, Deep research) show that it doesn't just want to be in the business of creating more powerful AI β€” it also wants a piece of the product pie. This is a smart move as models become commoditized.

03.02.2025 18:10 β€” πŸ‘ 2    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

This also highlights the need for agent interoperability: who would want to teach a new agent 100s of tasks from scratch? If web agents become widespread, preventing agent lock-in will be crucial.

(I'm working on fleshing out this argument with @sethlazar.org + Noam Kolt)

03.02.2025 18:10 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

Seen this way, Operator is a *tool* to easily create new web automations using natural language.

It could expand on the web automation that businesses already use, making it easier to create new automations.

So it is quite surprising that Operator isn't available on ChatGPT Teams yet.

03.02.2025 18:09 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
OpenAI allows you to delegate daily tasks to Operator

Instead of thinking of Operator as a "universal assistant" that completes all tasks, it is better to think of it as a task template tool that automates specific tasks (for now).

Once a human has overseen a task a few times, we can estimate Operator's ability to automate it.

03.02.2025 18:09 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
Screenshot of the save task template for Operator

OpenAI also allows you to "Save" tasks you completed using Operator. Once you've completed a task and given feedback to get it right, you don't need to repeat that guidance the next time.

I can imagine this becoming powerful (though it's not very detailed right now).

03.02.2025 18:09 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
x.com

3) In many cases, the challenge isn't Operator's ability to complete a task; it's eliciting human preferences. Chatbots aren't a great form factor for that.

But there are many tasks where reliability isn't important. This is where today's agents shine. For example: x.com/random_walke...

03.02.2025 18:08 β€” πŸ‘ 3    πŸ” 1    πŸ’¬ 1    πŸ“Œ 0

Could more training data lead to automation without human oversight? Not quite:

1) Prompt injection remains a pitfall for web agents. Anyone who sends you an email can control your agent.
2) Low reliability means agents fail on edge cases

03.02.2025 18:08 β€” πŸ‘ 2    πŸ” 1    πŸ’¬ 1    πŸ“Œ 0

But being able to see agent actions and give feedback with a human in the loop converts Operator from an unreliable agent, like the Humane Pin or Rabbit R1, to a workable but imperfect product.

Operator is as much a UX advance as it is a tech advance.

03.02.2025 18:08 β€” πŸ‘ 2    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

In the end, Operator struggled to file my expense reports even after an hour of trying and prompting. Then I took over, and my reports were filed 5 minutes later.

This is the bind for web agents today: not reliable enough to be automatable, not quick enough to save time.

03.02.2025 18:08 β€” πŸ‘ 3    πŸ” 1    πŸ’¬ 1    πŸ“Œ 1

OpenAI also trained Operator to ask the user for feedback before taking consequential actions, though I am not sure how robust this is β€” a simple instruction to avoid asking the user changed its behavior, and I can easily imagine this being exploited by prompt injection attacks.

03.02.2025 18:07 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
Operator tries to delete receipts.

But things went south quickly. It couldn't match the receipts to the amounts. Even after prompts directing it to the missing receipts, it couldn't download them. It almost deleted previous receipts from other expenses!

03.02.2025 18:07 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
Screenshot of Concur with the categories for the expense filled in

It navigated to the correct URLs and asked me to log into my OpenAI and Concur accounts. Once in my accounts, it downloaded receipts from the correct URL and even started uploading the receipts under the right headings!

03.02.2025 18:07 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
Screenshot of a conversation with Operator

I asked Operator to file reports for my OpenAI and Anthropic API expenses for the last month. This is a task I do manually each month, so I knew exactly what it would need to do. To my surprise, Operator got the first few steps exactly right:

03.02.2025 18:06 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
Screenshot of Operator writing "Hello World" in an online notepad.

OpenAI's Operator is a web agent that can solve arbitrary tasks on the internet *with human supervision*. It runs on a virtual machine (*not* your computer). Users can see what the agent is doing in the browser in real time. It is available to ChatGPT Pro subscribers.

03.02.2025 18:05 β€” πŸ‘ 6    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
Graph of web tasks along difficulty and severity (cost of errors)

I spent a few hours with OpenAI's Operator automating expense reports. Most corporate jobs require filing expenses, so Operator could save *millions* of person-hours every year if it gets this right.

Some insights on what worked, what broke, and why this matters for the future of agents 🧡

03.02.2025 18:04 β€” πŸ‘ 34    πŸ” 10    πŸ’¬ 6    πŸ“Œ 3
Is AI progress slowing down? Making sense of recent technology trends and claims

Excellent post discussing whether "AI progress is slowing down".

www.aisnakeoil.com/p/is-ai-prog...

And if you're not subscribed to @randomwalker.bsky.social and @sayash.bsky.social 's great newsletter, what are you waiting for?

19.12.2024 23:57 β€” πŸ‘ 56    πŸ” 15    πŸ’¬ 0    πŸ“Œ 1
Book cover

Excited to share that AI Snake Oil is one of Nature's 10 best books of 2024! www.nature.com/articles/d41...
The whole first chapter is available online:
press.princeton.edu/books/hardco...
We hope you find it useful.

18.12.2024 12:12 β€” πŸ‘ 130    πŸ” 30    πŸ’¬ 4    πŸ“Œ 6
We Looked at 78 Election Deepfakes. Political Misinformation is not an AI Problem. Technology Isn’t the Problemβ€”or the Solution.

Grateful to @katygb.bsky.social for feedback on the draft. Read the full essay (w/@randomwalker.bsky.social): www.aisnakeoil.com/p/we-looked-...

16.12.2024 15:11 β€” πŸ‘ 4    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0
Screenshot from the blog post

Improving the information environment is inextricably linked to the larger project of shoring up democracy and its institutions. No quick fix can β€œsolve” our information problems. But we should reject the simplistic temptation to blame AI.

16.12.2024 15:10 β€” πŸ‘ 8    πŸ” 2    πŸ’¬ 1    πŸ“Œ 0

But blaming technology is not a fix. Political polarization has led to greater mistrust of the media. People prefer sources that confirm their worldview and are less skeptical of content that fits it. Journalism revenues have fallen drastically.

16.12.2024 15:09 β€” πŸ‘ 4    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

So why do we keep hearing warnings about an AI-fueled misinformation apocalypse? Blaming technology is appealing since it makes solutions seem simple. If only we could roll back harmful tech, we could drastically improve the information environment!

16.12.2024 15:09 β€” πŸ‘ 10    πŸ” 2    πŸ’¬ 1    πŸ“Œ 0
Screenshot of a 1912 news article with the headline "would make use of fake photos crime", from https://newsletter.pessimistsarchive.org/p/the-1912-war-on-fake-photos

We've heard warnings about new tech leading to waves of misinfo before. GPT-2 in 2019, LLaMA in 2023, Pixel 9 this year, and even photo editing and re-touching back in 1912. None of the predicted waves of misinfo materialized.

16.12.2024 15:09 β€” πŸ‘ 9    πŸ” 5    πŸ’¬ 1    πŸ“Œ 0
Screenshot from Rest Of World article about the impact of AI in the 2024 Indian elections: https://restofworld.org/2024/exporter-india-deepfake-trolls/

Screenshot from CIGI article about the impact of AI in the 2024 Indonesia elections: https://www.cigionline.org/articles/its-time-to-reframe-disinformation-indonesias-elections-show-why/

Similar trends were seen worldwide. In India, AI was used for trolling rather than misinformation. In Indonesia, AI was used to create cartoon avatars that softened a candidate's image. Of course, the cost of creating avatars without AI is minuscule for presidential campaigns.

16.12.2024 15:08 β€” πŸ‘ 5    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
