
Ethan Mollick

@emollick.bsky.social

Professor at Wharton, studying AI and its implications for education, entrepreneurship, and work. Author of Co-Intelligence. Book: https://a.co/d/bC2kSj1 Substack: https://www.oneusefulthing.org/ Web: https://mgmt.wharton.upenn.edu/profile/emollick

31,135 Followers  |  145 Following  |  1,716 Posts  |  Joined: 07.09.2024

Latest posts by emollick.bsky.social on Bluesky

[Link preview]
Slowed canonical progress in large fields of science | PNAS In many academic fields, the number of papers published each year has increased significantly over time. Policy measures aim to increase the quanti...

Paper in the first tweet, which pre-dates ChatGPT: www.pnas.org/doi/full/10....

06.10.2025 00:47 — 👍 3  🔁 0  💬 0  📌 0

A lot of people are worried about a flood of trivial but true findings, but we should be just as concerned about how to handle a flood of interesting and potentially true findings. The selection & canonization process in science has been collapsing already, with no good solution

06.10.2025 00:46 — 👍 9  🔁 0  💬 2  📌 0

Science isn't just a thing that happens. We can have novel discoveries flowing from AI-human collaboration every day (and soon, AI-led science), and we really have not built the system to absorb those results and translate them into streams of inquiry and practice

06.10.2025 00:46 — 👍 3  🔁 0  💬 1  📌 0
[Post image]

Very soon, the blocker to using AI to accelerate science is not going to be the ability of AI (expect to see this soon), but rather the systems of science, as creaky as they are.

The scientific process is already breaking under a flood of human-created knowledge. How do we incorporate AI usefully?

06.10.2025 00:46 — 👍 24  🔁 1  💬 4  📌 1
[Post image] [Post image]

Both these are true.

05.10.2025 20:43 — 👍 31  🔁 2  💬 0  📌 1

The state of LLMs is messy: Some AI features (like vision) lag others (like tool use) while others have blind spots (imagegen and clocks). And the expensive "heavy thinking" models are now very far ahead of all the other AIs that most people use, capable of real work

None of this is well-documented

05.10.2025 20:40 — 👍 78  🔁 7  💬 5  📌 2
[Post image]

Deleted this, not because it is wrong but because I probably should wait for a pre-publication or other confirmation of the proof before disseminating widely.

05.10.2025 17:27 — 👍 49  🔁 1  💬 3  📌 0
[Link preview] Lean (proof assistant) - Wikipedia

Without using Lean or specialized math tools. en.wikipedia.org/wiki/Lean_(p...

05.10.2025 17:14 — 👍 3  🔁 0  💬 0  📌 0

The obsession with AI for transformational use cases obscures the fact that there are a ton of small, but very positive and very meaningful, use cases across many fields.

In this case, AI note-taking significantly reduces burnout among doctors & increases their ability to focus on their patients.

04.10.2025 15:58 — 👍 85  🔁 9  💬 5  📌 2

It seems very likely there is an LLM involved in the pipeline between prompt and output.

03.10.2025 20:16 — 👍 15  🔁 1  💬 4  📌 0

I assume

03.10.2025 20:09 — 👍 3  🔁 0  💬 1  📌 0
[Video thumbnail]

Or a corgi

03.10.2025 20:09 — 👍 9  🔁 2  💬 1  📌 0
[Video thumbnail]

Huh, Sora 2 knows a lot of things:

“Ethan Mollick parachuting into a volcano, explains the three forms of legitimation from DiMaggio, Paul; Powell, Walter. (April 1983). 'The iron cage revisited: institutional isomorphism and collective rationality in organizational fields'”

(Only a 15-second limit)

03.10.2025 20:05 — 👍 76  🔁 7  💬 5  📌 0

Has any company made real progress on new formal organizational/process approaches to software development with AI at the team or firm level? Agile broke, what is the sequel?

03.10.2025 14:36 — 👍 26  🔁 1  💬 6  📌 0

Please say more!

03.10.2025 14:34 — 👍 6  🔁 0  💬 1  📌 0

Paper: arxiv.org/pdf/2509.20328

03.10.2025 13:05 — 👍 15  🔁 2  💬 1  📌 0
[Post image] [Post image] [Post image]

This seems like a pretty big finding on AI generalization: If you train an AI model on enough video, it seems to gain the ability to reason about images in ways it was never trained to do, including solving mazes & puzzles.

The bigger the model, the better it does at these out-of-distribution tasks

03.10.2025 12:59 — 👍 98  🔁 17  💬 2  📌 5

I think 1922 Eliot might find this interesting, but 1941 Eliot would hate it

I am not a representative of AI; it does both good and bad things. I post about both, but mostly I share so that people know what these systems can do. If everything I post fills you with rage, you should probably block me?

03.10.2025 12:46 — 👍 1  🔁 0  💬 1  📌 0
[Video thumbnail]

Or a German Expressionist video of a butternut squash.

03.10.2025 03:32 — 👍 9  🔁 1  💬 1  📌 0
[Video thumbnail]

Or a seapunk aesthetic reading of "Death by Water" from The Waste Land.

03.10.2025 03:31 — 👍 8  🔁 0  💬 3  📌 0
[Link preview]
What Authors Need to Know About the $1.5 Billion Anthropic Settlement Updated October 2, 2025 IMPORTANT: The Works List Is Now Live on the Settlement Website The searchable Works List and Claim Forms are now available at www.anthropiccopyrightsettlement.com. Find more i...

Also big fan of your work. Here is more information on the Anthropic settlement: authorsguild.org/advocacy/art...

03.10.2025 03:03 — 👍 3  🔁 0  💬 0  📌 0

I feel like copyright & training data is a complicated topic, both legally & ethically. I am not personally bothered by the labs using my books in the training data, but I get why many people are

Thus it comes down to the courts & the Anthropic suit shows that rights holders can win against the labs

03.10.2025 03:00 — 👍 3  🔁 0  💬 1  📌 0
[Video thumbnail]

You can also produce the opposite of viral content using Sora 2. Here is Robert Frost rapping “Nothing Gold Can Stay”

03.10.2025 02:43 — 👍 12  🔁 0  💬 1  📌 0

The labs learned from the Studio Ghibli thing that images & video could produce viral moments that turn into user gain.

The Sora 2 launch is the ultimate implementation of this: gated invites, an app that selects for virality, reasons to share with friends, provocative content…

03.10.2025 02:38 — 👍 12  🔁 0  💬 2  📌 0
[Video thumbnail]

After a lot of use, Sora 2 is incredibly impressive as a video generator but pushed into a narrow niche:
1) Optimized for viral short form video, both in UX & output
2) Built to be one-and-done, when most video gen is selecting among variants
3) Makes fun stuff the first time, at the cost of control

03.10.2025 02:35 — 👍 45  🔁 5  💬 1  📌 0

Deleted the post since I think I framed it badly - sorry

03.10.2025 01:50 — 👍 8  🔁 0  💬 0  📌 0

I agree. I did a bad job framing.

03.10.2025 00:42 — 👍 2  🔁 0  💬 0  📌 0
[Post image]

My co-author Lennart Meincke had GPT-5 Pro look over a paper before we submitted it to a journal. It caught a tiny error in the citations that we missed (apparently it estimated the volume)

A big difference from constant hallucinations, especially with GPT-5 Pro, though it is not error-free.

03.10.2025 00:40 — 👍 43  🔁 2  💬 0  📌 0

I think I did a bad job here making it sound like AI never makes up citations, which wasnโ€™t really my intent. Will delete and repost differently.

03.10.2025 00:24 — 👍 1  🔁 0  💬 0  📌 0

5 Pro?

02.10.2025 22:59 — 👍 1  🔁 0  💬 1  📌 0
