@emollick.bsky.social
Professor at Wharton, studying AI and its implications for education, entrepreneurship, and work. Author of Co-Intelligence. Book: https://a.co/d/bC2kSj1 Substack: https://www.oneusefulthing.org/ Web: https://mgmt.wharton.upenn.edu/profile/emollick
Paper in the first tweet, which pre-dates ChatGPT: www.pnas.org/doi/full/10....
06.10.2025 00:47

A lot of people are worried about a flood of trivial but true findings, but we should be just as concerned about how to handle a flood of interesting and potentially true findings. The selection & canonization process in science has been collapsing already, with no good solution
06.10.2025 00:46

Science isn't just a thing that happens. We can have novel discoveries flowing from AI-human collaboration every day (and soon, AI-led science), and we really have not built the system to absorb those results and translate them into streams of inquiry and practice
06.10.2025 00:46

Very soon, the blocker to using AI to accelerate science is not going to be the ability of AI (expect to see this soon), but rather the systems of science, as creaky as they are.
The scientific process is already breaking under a flood of human-created knowledge. How do we incorporate AI usefully?
Both these are true.
05.10.2025 20:43

The state of LLMs is messy: some AI features (like vision) lag others (like tool use), while others have blind spots (imagegen and clocks). And the expensive "heavy thinking" models are now very far ahead of all the other AIs that most people use, capable of real work
None of this is well-documented
Deleted this, not because it is wrong but because I probably should wait for a preprint or other confirmation of the proof before disseminating it widely.
05.10.2025 17:27

Without using Lean or specialized math tools. en.wikipedia.org/wiki/Lean_(p...
05.10.2025 17:14

The obsession with AI for transformational use cases obscures the fact that there are a ton of small, but very positive and very meaningful, use cases across many fields.
In this case, AI note-taking significantly reduces burnout among doctors & increases their ability to focus on their patients.
It seems very likely there is an LLM involved in the pipeline between prompt and output.
03.10.2025 20:16

I assume
03.10.2025 20:09

Or a corgi
03.10.2025 20:09

Huh, Sora 2 knows a lot of things:
“Ethan Mollick parachuting into a volcano, explains the three forms of legitimation from DiMaggio, Paul; Powell, Walter. (April 1983). 'The iron cage revisited: institutional isomorphism and collective rationality in organizational fields'”
(Only 15 second limit)
Has any company made real progress on new formal organizational/process approaches to software development with AI at the team or firm level? Agile broke, what is the sequel?
03.10.2025 14:36

Please say more!
03.10.2025 14:34

Paper: arxiv.org/pdf/2509.20328
03.10.2025 13:05

This seems like a pretty big finding on AI generalization: if you train an AI model on enough video, it seems to gain the ability to reason about images in ways it was never trained to do, including solving mazes & puzzles.
The bigger the model, the better it does at these out-of-distribution tasks
I think 1922 Eliot might find this interesting, but 1941 Eliot would hate it
I am not a representative of AI; it does both good and bad things. I post about both, but mostly I share so that people know what these systems can do. If everything I post fills you with rage, you should probably block me?
Or a German Expressionist video of a butternut squash.
03.10.2025 03:32

Or a seapunk aesthetic reading of Death by Water from The Waste Land.
03.10.2025 03:31

Also big fan of your work. Here is more information on the Anthropic settlement: authorsguild.org/advocacy/art...
03.10.2025 03:03

I feel like copyright & training data is a complicated topic, both legally & ethically. I am not personally bothered by the labs using my books in the training data, but I get why many people are
Thus it comes down to the courts, & the Anthropic suit shows that rights holders can win against the labs
You can also produce the opposite of viral content using Sora 2. Here is Robert Frost rapping “Nothing Gold Can Stay”
03.10.2025 02:43

The labs learned from the Studio Ghibli thing that images & video could produce viral moments that turn into user growth.
The Sora 2 launch is the ultimate implementation of this: gated invites, an app that selects for virality, reasons to share with friends, provocative content…
After a lot of use, Sora 2 is incredibly impressive as a video generator but pushed into a narrow niche:
1) Optimized for viral short-form video, in both UX & output
2) Built to be one-and-done, when most video generation involves selecting among variants
3) Makes fun stuff on the first try, at the cost of control
Deleted the post since I think I framed it badly - sorry
03.10.2025 01:50

I agree. I did a bad job framing.
03.10.2025 00:42

My co-author Lennart Meincke had GPT-5 Pro look over a paper before we submitted it to a journal. It caught a tiny error in the citations that we missed (apparently it estimated the volume)
A big difference from constant hallucinations, especially with GPT-5 Pro, though it is not error-free.
I think I did a bad job here making it sound like AI never makes up citations, which wasn't really my intent. Will delete and repost differently.
03.10.2025 00:24

5 Pro?
02.10.2025 22:59