Computer-mediated carcinisation
20.05.2025 01:19
@mattbeane.bsky.social
Studying work involving intelligent machines, especially robots. @MITSloan PhD, @Ucsb Asst Prof, @Stanford and @MIT Digital Fellow, @Tedtalks @Thinkers50
20.05.2025 01:19
This includes many of my papers, too. The point I am making is that the findings in careful academic research likely represent a lower bound of AI capabilities at this point.
15.05.2025 22:16
I can't
i just ...
i can't
www.404media.co/anthropic-cl...
I bet if someone *has* succeeded, it's via spinning up an elicitation-GPT that just drilled you for critical intel, wouldn't let you weasel out via under/overspecified output, then dumped it all back to you in standardized format so you could think faster - basically exporting your extraction algo.
30.01.2025 20:34
Exactly. If we overheard Dario, Sam, and Demis chatting about certain well known AI critics, I'd be willing to bet they'd be expressing gratitude. Proving a grouch wrong is a real motivator.
29.01.2025 19:05
Hi Everyone!
We're hosting our Wharton AI and the Future of Work Conference on 5/21-22. Last year was a great event with some of the top papers on AI and work.
Paper submission deadline is 3/3. Come join us! Submit papers here: forms.gle/ozJ5xEaktXDE...
Exciting new hobby project in the offing related to AI and skill. Involves a childhood passion, a wild leap into the unknown, made real via an order from Amazon just now. Will be 100% cool, I will be documenting things, sharing eventually. Feels like April 2023 again!
15.01.2025 05:07
The Silo is so good. Just superb. This generation's answer to the BSG remake.
13.01.2025 01:44
My hobby horse. You can simulate a rocket all you want, and use more energy on computation than the actual rocket would, but you won't get to orbit until you ignite rocket fuel. What if all the energy we are spending on simulating learning is not the juice we really need to make intelligence?
09.01.2025 08:49
Here's my end-of-year review of things we learned about LLMs in 2024 - we learned a LOT of things simonwillison.net/2024/Dec/31/...
Table of contents:
The GPT-4 barrier was comprehensively broken
Some of those GPT-4 models run on my laptop
LLM prices crashed, thanks to competition and increased efficiency
Multimodal vision is common, audio and video are starting to emerge
Voice and live camera mode are science fiction come to life
Prompt driven app generation is a commodity already
Universal access to the best models lasted for just a few short months
"Agents" still haven't really happened yet
Evals really matter
Apple Intelligence is bad, Apple's MLX library is excellent
The rise of inference-scaling "reasoning" models
Was the best currently available LLM trained in China for less than $6m?
The environmental impact got better
The environmental impact got much, much worse
The year of slop
Synthetic training data works great
LLMs somehow got even harder to use
Knowledge is incredibly unevenly distributed
LLMs need better criticism
Everything tagged "llms" on my blog in 2024
In 2024 we learned a lot about how AI is impacting work. People report that they're saving 30 minutes a day using AI (aka.ms/nfw2024), and randomized controlled trials reveal they're creating 10% more documents, reading 11% fewer e-mails, and spending 4% less time on e-mail (aka.ms/productivity...).
31.12.2024 19:39
Independent evaluations of OpenAI's o3 suggest that it passed math & reasoning benchmarks that were previously considered far out of reach for AI, including achieving a score on ARC-AGI that was associated with actually achieving AGI (though the creators of the benchmark don't think o3 is AGI)
20.12.2024 18:26
Just *one* of the reasons that Blindsight was ahead of its time. Way ahead.
20.12.2024 16:36
Massive congrats!! So excited to check it out.
14.12.2024 14:42
Wow!
10.12.2024 20:54
Join me by the fireside this Friday with Matt Beane as we dive into one of today's biggest workforce challenges: upskilling at scale.
Link below to hear the full discussion on Friday, December 13 at 11 am EST!
linktr.ee/RitaMcGrath
@mattbeane.bsky.social
I propose a workshop.
Most engineers/CS working on AI presume away well established, profound brakes on AI diffusion.
Most social scientists presume away how AI use could reshape those brakes.
Let's gather these groups, examine these brakes 1-by-1, make grounded predictions.
Models like o1 suggest that people won't generally notice AGI-ish systems that are better than humans at most intellectual tasks, but which are not autonomous or self-directed
Most folks don't regularly have a lot of tasks that bump up against the limits of human intelligence, so won't see it
Grateful for the opportunity to visit and learn from the professionals at the L&DI conference. And very glad to hear you found my talk so valuable, Garth! Means a lot.
04.12.2024 14:02
I made an HRI Starter Pack!
If you are a Human-Robot Interaction or Social Robotics researcher and I missed you while scrolling through bsky's suggestions, just ping me and I'll add ya.
go.bsky.app/CsnNn3s
Wrote a little something on this in 2012, though I didn't anticipate the main reason for hiring such workers - training data.
www.technologyreview.com/2012/07/18/1...
Ohmydeargod.
03.12.2024 10:55
David Meyer (v.) /ˈdeɪvɪd ˈmaɪ.ər/
To attribute complex, intentional design or deeper meaning to simple emergent behaviors of large language models, especially when such behaviors are more likely explained by straightforward technical constraints or training artifacts.
They did NOT. Wow. Sign of the times.
And I can verify on your rule! I was so flabbergasted and honored. Your feedback was rich and so helpful. Remain grateful.
I remember *treasuring* the previews. I'd fight to get there on time. Was part of the thrill.
But ads? F*ck that noise. Seriously, straight up evil.
Never occurred to me there'd be an algo under the hood that could reliably learn to provide content I'd value more than a straight read of my hand-curated list of people. My solution has been following people if they post high signal stuff all the time.
30.11.2024 18:12
I have never used the feed page. What a horror, can't quite understand why folks would try.
Only/ever the "following" page. Even there things got pretty intolerable towards/around the election, now settled down.
My Thanksgiving post. A Kurt Vonnegut poem. In it, he talks with Joe Heller (of Catch-22 fame) about a billionaire. Key part:
Joe said, "I've got something he can never have"
And I said, "What on earth could that be, Joe?"
And Joe said, "The knowledge that I've got enough"
www.linkedin.com/pulse/kurt-v...
Oh my dear god this is an incredible study.
27.11.2024 19:04
I think there's likely an effect there!
25.11.2024 22:13