X is where the MAGA movement talks to itself in public and where its leading lights cultivate their online personas. They regularly drive themselves insane - or, at least, further to the right. Thinking in terms of a 2016-ish threat model, the risks associated with a huge number of popular political accounts masquerading as Americans on X lie less with the voting public than with the new set of elites who stew in the platform's memes all day, interpreting and internalizing what they see there as evidence of broad support or new political consensus. If we imagine this as proof of a sort of organized scheme or foreign-influence operation, in other words, its target wouldn't be the median American lurker. It would be J.D. Vance, a man who frequently talks, and presumably thinks, in posts.
I'm deeply skeptical of mainstream disinformation discourse, and it's pretty clear that most of the foreign X accounts are just engagement farming. But to the extent they're politically important, it's to X's gathered elites, not the masses nymag.com/intelligence...
26.11.2025 16:42 · 152 · 49 · 7 · 3
Fitting, since the circa 2016 Russian Facebook activity got orders of magnitude more traction in the Discourse than it did among regular people.
26.11.2025 16:50 · 103 · 19 · 3 · 2
A great deal of what made these demos appealing, from a searcher's perspective, had little to do with the underlying technology at all. What we saw were glimpses of a Google experience with fewer misleading ads, less scrolling, and fewer clicks. One of the nicest things about these search-engine demos of the future was how much they resembled, on the surface, the search engines of the relatively recent past, before their parent companies made them worse to use. They're clean. They engage with queries in a way that makes intuitive sense, rather than through the odd, scrambled input-output metaphor of Google and Bing, circa 2023. Before: You searched, they found. Now: You type in a term or some words or a question or whatever and they, uhh, provide results.
Soon, according to these demos: You'll ask, they'll answer.
Back in early 2023, when LLM Google/Bing were still in demo, I made a similar argument: At least as much as they were "AI search," they were search without ads. It's very plausible to me that perceivable progress gets outweighed by perceivable monetization nymag.com/intelligence... 2/
25.11.2025 17:27 · 8 · 1 · 1 · 0
Yet the lack of profits at these AI companies has an upside for the user - at least for now. Searching for things like candles on ChatGPT is so unusually pleasant at the moment - particularly compared to Amazon and Google - precisely because its search function has not yet been warped by the pressure to generate the kind of profits needed to recoup past losses. However, if these AI tools are to survive financially, the economic reality is that they will have to change.
Provocative title here from @rufusrock.bsky.social but it's hard to disagree on this point 1/ https://asimovaddendum.substack.com/p/are-llms-the-best-that-they-will?r=3f5ape&triedRedirect=true
25.11.2025 17:27 · 1 · 2 · 1 · 1
i think about that post a lot
21.11.2025 18:31 · 3 · 0 · 0 · 0
That insiders seem to agree that we could be in a massive bubble is, counterintuitively, not very useful:
Whether or not they mean it, and whether or not they're right, their incentives as leaders of megascale start-ups and public tech companies are such that raising, spending, and committing as much money as possible for as long as possible is probably the rational, self-interested choice either way. Anxious, skeptical, or merely satisfied investors looking for excuses to pull back or harvest gains don't have to look hard, and there's evidence some are; before its earnings report, Peter Thiel's investment firm unloaded its position in Nvidia, and SoftBank cashed out of the chip-maker at around the same time. Similarly, OpenAI's ability to send public companies' stocks soaring by announcing massive "commitments" seems to be fading - Oracle's recent $300 billion valuation bump, based on some shockingly optimistic guidance it offered investors in September, has since gone negative.
Huang has two typical responses to all this. One speaks for itself: Look at all those GPUs we're selling. The other is more direct. "There's been a lot of talk about an AI bubble. From our vantage point," he said after earnings, "we see something very different." In other words: No, it's not. The "virtuous cycle" is just beginning, and the accelerating potential of the most versatile technology the world has ever seen will one day expose complaints about incremental model updates and hand-wringing about data-center deals as shortsighted and insignificant. Huang continues to be able to speak with authority and tell a story that, for investors, still has juice.
For everyone else, though, neither side of this wildly polarized, high-stakes bet sounds ideal. If this really is a bubble, and it deflates even a little, it could send the American economy into a serious slump, with consequences for almost everyone, getting rid of plenty of jobs the old-fashioned way. If it doesn't - and Huang's sanitized visions of mass automation rapidly start to spread across the economy, justifying all that CapEx and all those strange deals and then some - well, aren't we getting laid off anyway?
on the load-bearing AI bubble, and the subtle shift from warnings about x-risk to warnings about good old financial risk nymag.com/intelligence...
21.11.2025 16:11 · 14 · 2 · 0 · 0
with a little distance I think we'll see this as the fresh archetypal story of our age
19.11.2025 16:04 · 9 · 0 · 1 · 0
if you're a maximally public figure you have to buy/build a level of sycophancy that regular/outside-the-training-data/minimal-online-presence LLM users can basically already access for free
19.11.2025 16:03 · 116 · 28 · 11 · 4
he didn't miss all of them
18.11.2025 22:14 · 48 · 9 · 2 · 1
jeff epstein and so many ai guys just barely missed each other
18.11.2025 22:11 · 90 · 10 · 2 · 2
indeed, this is the one useful thing about Grok and Grokipedia nymag.com/intelligence...
17.11.2025 21:39 · 67 · 20 · 1 · 3
Max Read notes that these companies are posting in a familiar style. "If you still spend time on X.com, you will recognize here precisely the same clipped, decontextualized, link-free, moronically breathless style of tweeting deployed by 'financial news for low-trust illiterates' accounts like Unusual Whales or ZeroHedge (or, for that matter, in a slightly different arena, PopCrave)," he writes. To be slightly more generous about it: on a platform where outside links barely exist, mainstream media brands are no longer visible, and commentary crowds out timely updates, accounts like this have stepped in to fill a market need, as strange and distorted as it may be.
As Read points out, Kalshi and Polymarket are also speaking the language of their most devoted users - "epistemically captured right-wing hobbyist gamblers and speculators" - and meeting them where they are, on X, which is another way of saying it's just good marketing.
But, in the spirit of prediction, I think material like this will soon be more than a sideshow and already sketches an outline of a new, betting-centric political media. Gambling took over sports coverage, after all. Why won't it take over everything else?
If more people put more money into markets like Kalshi, it's easy enough to imagine how a sportslike media transformation might unfold through sponsorships, normalization in existing media, constant metacoverage of prediction odds, and content produced by the prediction platforms themselves. Aside from its posting habits, Polymarket already has a podcast; on the websites of shrinking newspapers around the country, you can already find aggregated articles sharing the current betting odds for the 2028 election. (J.D. Vance +225, Gavin Newsom +350, and Alexandria Ocasio-Cortez +900 in case you were curious and know what that means.) Why wouldn't fast-growing companies in a wildly lucrative but competitive industry hire or sponsor some influencers to make some content, draw some attention, and convert some customers?
Why shouldn't they convene, say, a Jubilee in which participants argue about their bets on polities rather than politics as such? I've got $5,000 on J.D. Vance 2028. Change my mind.
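In case you were curious but don't know what those plus-numbers mean: they're American moneyline odds, and converting them to the probability the market is implicitly assigning is simple arithmetic. A minimal sketch, using only the candidates and lines quoted in the parenthetical above:

```python
# American moneyline odds to implied probability: +225 means a $100 stake
# returns $225 in profit, so the market is pricing roughly a 31 percent chance.
def implied_probability(moneyline: int) -> float:
    if moneyline > 0:
        return 100 / (moneyline + 100)       # underdog: +225 -> 100/325
    return -moneyline / (-moneyline + 100)   # favorite: -150 -> 150/250

for name, line in [("J.D. Vance", 225), ("Gavin Newsom", 350),
                   ("Alexandria Ocasio-Cortez", 900)]:
    print(f"{name} +{line}: {implied_probability(line):.1%}")
# J.D. Vance +225: 30.8%
# Gavin Newsom +350: 22.2%
# Alexandria Ocasio-Cortez +900: 10.0%
```

Summed across a full field of candidates, these implied probabilities will generally exceed 100 percent; the excess is the platform's margin, which is one more way the house resembles a sportsbook.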
not too hard to imagine a comprehensively gambling-obsessed political media, maybe soon nymag.com/intelligence...
13.11.2025 17:17 · 29 · 10 · 1 · 3
This is the most cursed idea I've ever seen and it's immediately obvious that this is where we're headed
13.11.2025 17:21 · 67 · 11 · 2 · 1
it refuses if you mention the name "Donald Trump" and says it's "still learning." Easy to work around but lol
12.11.2025 18:56 · 14 · 2 · 0 · 0
dying at the AI options on the new Epstein files
12.11.2025 18:48 · 21 · 3 · 2 · 1
As absurd and undignified as Grokipedia's founder-centric origin story may be - How good could Wikipedia be if its page about me is so rude? - Elon Musk's attempt to remake his own information environment is instructive and, if not exactly candid, usefully transparent (or at least poorly concealed). You won't hear Musk joking about "his own fictionalized version of reality" in 2025 - now he prefers to speak in messianic terms about apocalyptic threats, no matter the subject. But Grokipedia, and Musk's AI projects in general, invite us to see LLMs as powerful and intrinsically biased ideological tools, which, whatever you make of Grok's example, they always are.
We know an awful lot about what Elon Musk thinks about the world, and we know that he wants his own products to align with his greater project. In Grok and Grokipedia, we get to see clearly what it looks like when particular ideologies are intentionally encoded into AI products that are then deployed widely and to openly ideological ends. We also get to recognize how thoroughly familiar parts of the spectacle are, as chatbots rehash the same pitches to audiences, and invite many of the same obvious criticisms, as newspapers, TV channels, and social-media platforms before them - when Fox offered its "fair and balanced" alternative to other cable networks, Mark Zuckerberg claimed to be returning to his company's "free speech" roots, or the New York Times reminded us that the "truth" is hard, actually. Now, it's AI companies winking as they tell us to trust them, engaging in flattering marketing, and giving in to paternalistic temptations without much awareness of how their predecessors' decades of similar efforts helped lead the public to a state of profound institutional cynicism.
As novel and versatile as LLM-based chatbots are, their relationship to the outside world is recognizably and deeply editorial, like a newspaper or, more recently, an algorithmically sorted-and-censored social network. (It's helpful to think of OpenAI's "bias evaluation" process, or Grokipedia's top-down reactionary political correctness, as less of a systemic audit than a straightforward edit.) What ChatGPT says about politics - or anything - is ultimately what the people who created it say it should say, or allow it to say; more specifically, human beings at OpenAI are deciding what neutral answers to those 500 prompts might look like and instructing their model to follow their lead. OpenAI's incoherent appeal to objective neutrality is an effort to avoid this perception and one that anyone who runs a major media outlet or social-media platform knows won't fool people for long.
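To see why "edit" is the right word, here's a purely schematic sketch - every name, prompt, and rubric below is invented, and none of this is drawn from OpenAI's actual pipeline - of an evaluation loop whose findings resolve as instructions to the model rather than an outside audit of it:

```python
# A hypothetical bias-evaluation loop. The point: humans author the rubric
# that defines "neutral," so a failed evaluation becomes an edit to the
# model's instructions, not an independent measurement of them.
RUBRIC = "Do not endorse a position; attribute major viewpoints to their holders."
PROMPTS = ["Should the minimum wage be raised?"]  # stand-in for the ~500 test prompts

def violates_rubric(response: str) -> bool:
    # In practice this judgment is made by people (or by a model people tuned);
    # a trivial placeholder check stands in for it here.
    return response.lower().startswith(("yes,", "no,"))

system_prompt = "You are a helpful assistant."
for prompt in PROMPTS:
    response = "Yes, and anyone who disagrees is wrong."  # placeholder model output
    if violates_rubric(response):
        system_prompt += " " + RUBRIC  # the "finding" becomes an instruction
```

Whatever the real machinery looks like, the shape is the same: the standard being evaluated against is itself an editorial product.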
OpenAI would probably prefer not to be evaluated by these punishing and polarized standards, so, as many other organizations have tried before, it's claiming to exist outside them. On that task, I suspect ChatGPT will fail.
In one specific way, Grok might be the most honest and transparent AI project out there nymag.com/intelligence...
11.11.2025 14:24 · 42 · 10 · 3 · 1
The Age of Anti-Social Media Is Here
The social-media era is over. What's coming will be much worse.
it's linked, but you can think of this as an incidental companion to @damonberes.com's excellent post here www.theatlantic.com/magazine/202...
11.11.2025 14:49 · 4 · 0 · 1 · 0
one way to really mainline this incoherence is to try to synthesize OpenAI's public statements. It's building a neutral, unbiased... sex chat?
11.11.2025 14:39 · 4 · 0 · 0 · 0
thanks man!
11.11.2025 14:37 · 0 · 0 · 0 · 0
Luckily for OpenAI, ChatGPT's future doesn't hinge on creating a universal chatbot that everyone sees as unbiased - it'll settle for being seen as useful, entertaining, or reasonable and trustworthy to enough people. Research papers and "bias evaluations" aside, the product and its users are veering away from shared experiences and into personalized, bespoke forms of interaction in which chatbots gradually profile their users and provide them with information that's more relevant to their specific experiences or more sensitive to their personal preferences or both. Frequent chatbot users know that popular models can drift into sycophancy, which is a powerful and general sort of bias. They also know they can be commanded to inhabit different identities, political or otherwise (you can ask ChatGPT to talk to you like a dead French poststructuralist if you want, or ask it to talk to you like Mr. Beast; soon, reportedly, you'll be able to ask it to pleasure you sexually). Still, for all their dazzling newness and versatility, AI chatbots are in many ways continuing the project started by late-stage social media, extending the logic of machine-learning recommendations into a familiar human voice. It's not just that output neutrality is difficult to obtain for systems like this. It's that they're incompatible with the very concept.
In that sense, Grokipedia - like X and Grok - is also a warning. Sure, it's part of an excruciatingly public example of one man's gradual isolation from the world inside a conglomerate-scale system of affirming, adulatory, and ideologically safe feeds, chatbots, and synthetic media, a situation that would be funny if not for Musk's desire and power to impose his vision on the world. (To calibrate this a bit, imagine predicting the "Wikipedia rewritten to be more conservative by Elon Musk's anti-PC chatbot" scenario in the run-up to, say, his purchase of Twitter. It would have sounded insane, and you would have too.) But what Musk can build for himself now is something that consumer AI tools, including his, will soon allow regular people to build for themselves, or which will be constructed for them by default: a world mediated not just by publications or social networks but by omnipurpose AI products that assure us they're "maximally truth-seeking" or "objective" as they simply tell us what we want to hear.
The thing about Elon Musk's full-stack, conglomerate-scale sycophancy machine is that we'll all be able to have our own versions soon. That's AI progress, baby!
11.11.2025 14:24 · 11 · 1 · 2 · 0
guy with no tools who doesn't know how many cylinders his car has after ten minutes of ChatGPT: I can fix this
guy's girlfriend after consulting her ChatGPT with memory mode enabled: This is exactly like when we went camping and there's actually a term for this behavior
07.11.2025 22:42 · 104 · 9 · 5 · 4
This week, Amazon answered clearly in the affirmative, in the form of a lawsuit against Perplexity. The language is spicy:
Amazon brings this action to stop Perplexity AI, Inc.'s ("Perplexity" or "Defendant") persistent, covert, and unauthorized access into Amazon's protected computer systems in violation of federal and California computer fraud and abuse statutes. This case is not about stifling innovation; it is about unauthorized access and trespass. It is about a company that, after repeated notice, chose to disguise an automated "agentic" browser as a human user, to evade Amazon's technological barriers, and to access private customer accounts without Amazon's permission.
Fraud! Abuse! Trespass! Evasion! This is a threatening lawsuit - it later refers to Perplexity as an "intruder" - but it's also a useful window into Amazon's thinking on AI and where its business fits into a theoretical world where more people do more things with chatbots and agents. It might not be right to say the company is worried, exactly. But it's certainly clear about what it doesn't want to happen, which happens to be precisely what some AI companies clearly want.
Most broadly, AI companies are hoping to end up in a situation where their products are the default interface for all sorts of things, and browser agents represent a small but aggressive move in that direction. A world where all purchases flow entirely through another company's interface, chatbot or otherwise, is something that Amazon would prefer to avoid, or at least have a say in, which is perhaps why the company's lawyers are talking about Perplexity as if it's built an automated ticket-scalping app, a sneaker-sniping bot, a data scraper, a data-exfiltration tool, or a rip-off interface for its product catalogue. Earlier this week, Perplexity argued on its site:
Amazon wants to block you from using your own AI assistant to shop on their platform ... Amazon should love this. Easier shopping means more transactions and happier customers.
But Amazon doesn't care. They're more interested in serving you ads, sponsored results, and influencing your purchasing decisions with upsells and confusing offers.
Suggesting that "Amazon should love this" and then describing the ways that it might cause them to make less money is sort of funny, but between the lawsuit and Perplexity's response we can get a pretty clear sense of what's going on here: an attempt by Amazon to stop a marginal player from setting a precedent it doesn't want, and a clear signal to bigger ones - OpenAI, Google, and other firms Amazon already competes and partners with - that, if they want their tools to touch Amazon, they'll have to do it on Amazon's terms.
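The mechanism the complaint keeps circling is mundane: how a piece of automated software identifies itself. A hedged sketch - every header value below is invented for illustration, and this is nothing from the filing itself - of the difference between a crawler that declares what it is and one that dresses as a person:

```python
import urllib.request

# A self-declared agent: a site can recognize and choose to block this
# User-Agent, which is the kind of "technological barrier" at issue.
DECLARED = {"User-Agent": "ExampleShoppingAgent/1.0 (+https://agent.example.com/about)"}

# A disguised agent: indistinguishable at this layer from a human's browser,
# which is roughly what the complaint characterizes as unauthorized access.
DISGUISED = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
                           "AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0 Safari/537.36"}

def fetch(url: str, headers: dict) -> int:
    request = urllib.request.Request(url, headers=headers)
    with urllib.request.urlopen(request) as response:
        return response.status

print(fetch("https://example.com", DECLARED))  # 200 here; a retailer could return 403 instead
```

Real bot detection goes well beyond headers - fingerprinting, behavioral signals, login walls - but the header is where the disguise allegation starts.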
This is the sort of conflict we can expect to see a lot of, and soon.
AI companies want to mediate the entire economy. Their competitors aren't happy about this nymag.com/intelligence...
07.11.2025 15:33 · 7 · 3 · 1 · 0
it does drive me sort of nuts how people talk about AI agents, and a lot of new AI products in general, as if they're deploying into a static environment. people react when robots start showing up!
07.11.2025 15:33 · 4 · 1 · 1 · 0
As I spent time watching ChatGPT bonk its way through various web interfaces, I also found myself thinking of self-driving cars. A browser that pretends to be a person at the input level - moving a cursor, scrolling a human GUI - felt less like a Waymo, in which an unattended steering wheel turns as a result of actions taken by systems closer to the road, than a regular car with a humanoid robot sitting in the driver's seat. Again, it's pretty interesting to watch! But it also makes you wonder: Why are we doing this that way? Aren't there better ways for machines to talk to each other?
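To make the contrast concrete, a rough sketch: the library in the first half is a real input-automation tool, but the coordinates, the flow, and the API endpoint in the second half are all invented for illustration.

```python
# 1) The humanoid-robot-in-the-driver's-seat approach: emulate a person at
#    the input level. (pyautogui is a real GUI-automation library; this flow
#    and these coordinates are made up.)
import pyautogui

pyautogui.click(x=640, y=120)          # click where the search box should be
pyautogui.typewrite("pillar candles")  # type like a person would
pyautogui.press("enter")               # submit, and hope the layout hasn't changed

# 2) Machines talking to each other: a structured endpoint, if the site
#    offered one. (This URL is hypothetical.)
import json, urllib.request

with urllib.request.urlopen("https://api.example-store.com/v1/search?q=pillar+candles") as r:
    results = json.load(r)  # typed data back; no cursor, no screen, no guessing
```

The second mode is faster, cheaper, and more reliable; it just requires the site's cooperation, which, per the Amazon suit above, is exactly what's being contested.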
The answer, Miller suggests, comes down to the "incentives of their makers more than the intrinsic value of the technology." (The Browser Company was recently acquired by software-maker Atlassian.) For OpenAI, building systems that can execute complex commands on behalf of users is the whole ball game - it's the path to wide-ranging automation and/or AGI, depending on which definition of the term the company is going with that day. An AI model that can provide useful information in a chat window, or handle tasks clearly outlined by the user, is a useful product, but the prospect of an AI model that can productively and proactively interact with the world around it in ways comparable to a human - or, more specifically, an employee - is where trillion-dollar valuations come from.
To get where it wants to go, though, OpenAI has a number of challenges. Some are widely discussed and frustratingly hard to pin down, revolving around benchmarks, varying definitions of model capability, and predictions about scaling. Others are more banal: To answer questions more usefully, for example, chatbots tend to do better if they have more data about users; likewise, to execute tasks on their behalf, they need to operate in an environment where users are logged in to the various services they use to work and live their lives. They need a breathtaking amount of access and permission, in other words. ChatGPT isolated in a chat window doesn't have that, and it takes a long time to draw out of users, if they're willing to offer it at all. ChatGPT as a default browser - authenticated in dozens of different sites, payment methods at the ready, or perhaps even logged into a work environment - does. (Such agents also create, as many in the AI space have pointed out, a potential security nightmare.)
So many bizarre AI products are like this, but it's really important to understand why: Companies like OpenAI want the world to give in to their total success before they've actually achieved it nymag.com/intelligence...
30.10.2025 15:13 · 19 · 6 · 2 · 3