Every author who writes like this should be required to rewrite the abstract in plain English and read it aloud to an audience of their peers before publishing.
Summary: conjectural, with nice diagrams, but no quantitative measures, and it ignores the prior literature.
arxiv.org/pdf/2510.26745
03.11.2025 16:14
Tom just wants Democrats to be the "responsible adults" while the other party throws childish tantrums. Every time the right drifts further off the edge, he calls for moving the center to meet them, so moderates like him can still feel centered in an unhinged party.
02.11.2025 21:03
Unfortunately, at this point, any "AxI" naming is tainted. Whether we use "General", "Super", "Hyper", or [Insert], it's an academic distinction without real-world difference. That it also attempts to name a class of models that don't actually exist only reinforces the conjuring of hype over substance.
02.11.2025 19:12
"LLMs have no utility" is not something I can subscribe to.
"LLMs have low utility relative to investment" is my stance.
Utility is tied to cost. Cut LLM spend by 1000x with SSLMs and the value equation shifts. Smaller, cheaper, task-tuned models FTW.
Yes, you still have to pay the humans. Sorry?
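To make the value equation concrete, here is a rough back-of-the-envelope sketch in Python; the per-token prices, token counts, and workload are illustrative assumptions, not figures from this thread.

```python
# Illustrative cost comparison: hosted frontier LLM vs. a small task-tuned model.
# Every number below is an assumption made for the arithmetic, not a real quote.

LARGE_MODEL_USD_PER_1K_TOKENS = 0.03      # assumed hosted frontier-model price
SMALL_MODEL_USD_PER_1K_TOKENS = 0.00003   # assumed self-hosted small model, ~1000x cheaper

TOKENS_PER_TASK = 2_000        # assumed average prompt + completion size
TASKS_PER_MONTH = 5_000_000    # assumed workload

def monthly_cost(usd_per_1k_tokens: float) -> float:
    """Monthly spend for the assumed workload at a given per-1k-token price."""
    return usd_per_1k_tokens * (TOKENS_PER_TASK / 1_000) * TASKS_PER_MONTH

large = monthly_cost(LARGE_MODEL_USD_PER_1K_TOKENS)
small = monthly_cost(SMALL_MODEL_USD_PER_1K_TOKENS)

print(f"Hosted frontier model: ${large:,.0f}/month")
print(f"Small task-tuned model: ${small:,.0f}/month")
print(f"Cost ratio: {large / small:,.0f}x")
```

Same utility per task, very different denominator; that is the sense in which the value equation shifts.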
02.11.2025 15:46
Non-monotonic events can still produce a monotonic trend; that's the outcome we are seeing.
Caring about tens of thousands of American lives lost in a war isn't a radical-left stance, it's compassion and empathy.
If you think it's a far-left move, then the Overton window is already badly skewed.
01.11.2025 17:54
Ahh, you mean the other party moved far right first, forcing the Democrats to move further left to counter their BS narratives?
Great example of Newton's Third Law applied to societal dynamics.
01.11.2025 16:45
Or, spitballing here: it just feels like the Democrats moved left because the other party moved far right?
01.11.2025 16:22
Karpathy's tweet is a live demo of the learning loop he promotes. Consciously or not, he is channeling:
- Kolb: Experiential learning theory
- Feynman: Explain it in your own words
- Dweck: Growth mindset scale
The medium is the message.
01.11.2025 16:16
https://www.librarycat.org/lib/gbooch
In the research for Computing, my multi-part documentary that examines the intersection of computing and what it means to be human, I've collected almost 6,000 books to help inform my storytelling. You can browse my entire collection here
t.co/fw6RXUYR2l
31.10.2025 23:23
Don't have an exact number, but 150+ trick-or-treaters tonight. One of them:
K (Kid): Trick or treat?
M (Me): Trick.
K: Huh?
M: What's the trick?
K: You give me candy.
She was the youngest one of the evening. Cutie pie at her best!
01.11.2025 04:58
Finally …
www.wsj.com/tech/ai/larg...
31.10.2025 21:11
Yeah, we are saying the same thing - see my previous comment to an earlier post.
31.10.2025 15:31
I get where you are coming from. Yes, we will eventually get to a point where a 10x improvement in model efficacy enables a proportional decrease in dataset size, which is almost a prerequisite for any semblance of human-like intelligence. But that won't happen until the commercial disillusionment with LLMs sets in.
31.10.2025 15:24
Sorry if I am misunderstanding your comment, but in the field all useful models are intelligent models, because commerce says so. Yes, in the lab that's an important distinction to maintain. In the field, I have never heard of NNs referred to as anything other than ML, outside of VC offices.
31.10.2025 15:10
I hear what you are saying and why. However, much of the business utility from ML in the 2010s came from scale: average models trained on massive data often outperformed well-designed models trained on limited data. Yes, future learning techniques might shift that balance, but likely not decouple it.
31.10.2025 14:04
Yes. I say that because interpreting exactly why the model scored it the way it did is much harder given the highly distributed representations. In fact, it's a near-impossible task for humans, assuming real-world internet-scale datasets.
31.10.2025 00:54
Yes on reliability, but back off on interpretability: even for good old CNN models it's hard to interpret the how, yet easy to verify repeatable reliability.
30.10.2025 22:15
Emergent Introspective Awareness in Large Language Models
In their own words:
"Several caveats should be noted: The abilities we observe are highly unreliable; failures of introspection remain the norm."
transformer-circuits.pub/2025/introsp...
30.10.2025 20:11
She's making the classic layman's mistake of thinking DeepMind is synonymous with AI. If she had actually read even the first paragraph of their paper, she might have clued in that it's a great example of purely statistical machine learning, but that's probably asking too much.
arxiv.org/pdf/2506.10772
30.10.2025 16:02
You know how they are going to react to this? Lay off another 100k human engineers to build another super data center, because they are convinced they are on the brink of a breakthrough that just needs a bit more juice.
Newsflash, boys: AI broke through back in 2023; now you are just chasing the ghost of AI.
30.10.2025 00:20
Allow me to summarize:
"If you get elected on a policy, stick to it without apologizing. That's what elections are for."
bsky.app/profile/avik...
28.10.2025 20:28
When did making kids go hungry become a Christian value?
27.10.2025 23:17
There is substantial overlap between crypto, then NFT, now AI investors, and it all ties back to their infrastructure investment. The sunk-cost fallacy motivates them to keep the bubble afloat.
27.10.2025 16:27
And on the broader point: it's not just an absence of alternatives, but also the choice not to build multi-cloud routing into infrastructure design.
27.10.2025 16:15
True, a deeper issue is the lack of alternatives to hyperscalers. But there's also a narrower design question: why didn't Signal hedge with multi-cloud redundancy? Even if it temporarily compromises NRT guarantees, a degraded fallback seems preferable to a total outage for customers.
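A minimal sketch, in Python, of the kind of degraded fallback being suggested; the provider list and send_via stub are hypothetical placeholders, not Signal's actual architecture or any real API.

```python
import time

# Hypothetical multi-cloud fallback: try the primary provider, then fail over,
# accepting slower (non-NRT) delivery over a total outage. All names are placeholders.

PROVIDERS = ["primary-cloud", "secondary-cloud", "tertiary-cloud"]

def send_via(provider: str, message: bytes) -> bool:
    """Placeholder for a provider-specific delivery call; returns True on success."""
    raise NotImplementedError("wire up the real provider client here")

def deliver(message: bytes, retries_per_provider: int = 2) -> str:
    """Walk the provider list in order, backing off between attempts."""
    for provider in PROVIDERS:
        for attempt in range(retries_per_provider):
            try:
                if send_via(provider, message):
                    return provider          # delivered, possibly degraded
            except Exception:
                time.sleep(2 ** attempt)     # back off, then retry or fail over
    raise RuntimeError("all providers unavailable")
```

The exact shape matters less than the fact that the fallback path has to be designed and provisioned before the outage, not improvised during it.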
27.10.2025 16:15
Here's the succinct version:
bsky.app/profile/avik...
27.10.2025 14:28
The cost of skipping abstraction now will be far worse post-integration: quality lags, delays, cost overruns, vendor lock-in, and the big one, data seepage. Yes, that's happening.
Build portability now if you are serious about LLM investment. Abstraction isn't optional; it's foundational.
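"Build portability" can be as small as one interface; here is a minimal Python sketch of that seam. The class and method names are hypothetical, not a specific vendor SDK or library.

```python
from typing import Protocol

# Hypothetical portability seam: application code depends on this interface,
# never on a vendor SDK directly. The adapters below are illustrative stubs.

class TextModel(Protocol):
    def complete(self, prompt: str, max_tokens: int = 256) -> str: ...

class HostedVendorModel:
    """Adapter around a commercial vendor API (actual call omitted in this sketch)."""
    def complete(self, prompt: str, max_tokens: int = 256) -> str:
        raise NotImplementedError("wrap the vendor SDK call here")

class OpenWeightsModel:
    """Adapter around a self-hosted open-weights model (actual call omitted)."""
    def complete(self, prompt: str, max_tokens: int = 256) -> str:
        raise NotImplementedError("wrap the local inference call here")

def summarize(model: TextModel, document: str) -> str:
    """Business logic sees only the interface; swapping vendors becomes a config change."""
    return model.complete("Summarize:\n" + document, max_tokens=200)
```

Routing, retries, and data-seepage controls can then live behind that one seam instead of being scattered through the product.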
27.10.2025 14:16
Open-source xLMs are rapidly closing the gap with foundation models, especially for custom tasks. Stop wiring directly to vendor APIs.
Yes, it might delay your product launch by a few months, and that may be a deal breaker for startups. But if you are an enterprise, you have no excuse.
27.10.2025 14:16
On Bsky, when posting about scientific papers or articles, I have two modes:
1. Has substance? Put on my scholar hat, assess it in its own register, and respond with rigor.
2. Is basic? Hat stays off and I keep it casual, because it's not worth the time.
On "AI" these days, most fall into the 2nd.
27.10.2025 04:24
Wait till he gets to the touch problem.
26.10.2025 22:55