Oh, _that's_ why. I figured it was because the ad depicted neighborhood houses participating in an AI-driven dragnet.
20.02.2026 05:30
@qethanm.bsky.social
NEW BOOK: Twin Wolves: Balancing risk and reward to make the most of AI https://twinwolvesai.com/ research: (ML+gen)AI, risk, complexity, finance history work: https://WorkWithQ.com newsletter: https://complex-machinery.com (some posts in 🇫🇷 🇩🇪 🇷🇺)
My top reads from this past week:
- when AI translates your marriage
- an "AI-powered" school shows some cracks
- CEOs get quiet
- repatriating stolen artifacts
โฆ and more:
inothernews.complex-machinery.com/archive/015-...
Yes, there are a ton of other questions to explore. Like "who might be harmed by this?" Or "would anyone pay for it?" Or "is there a better/faster/cheaper way to implement this?" And so on.
But if you expect people to use what you build, you have to start with "would they even want it?"
Free idea for product teams!
Before you build that AI-powered feature, ask:
"Who would want this? Would anyone outside of this room think it's a good idea?"
It doesn't matter if you've uncovered some AI wonder and it hits all of the model eval metrics. If people hate using it, you've failed.
Video games, music, and D&D have all been falsely accused of causing violence. So I am careful about pointing fingers at technology.
But in cases of AI psychosis ... there are chat logs. Reams of chat logs. Hard to deny those receipts.
More reporting from @mharrisondupre.bsky.social :
This is similar to what I warn people about when they super-charge their business with AI:
If you only speed up one department, you'll just overwhelm everyone downstream...
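A minimal sketch of that bottleneck effect (my own toy illustration, not from the original post): double the rate at which work arrives at a downstream team without raising that team's capacity, and the backlog grows without bound. All names and numbers here are invented for the example.

```python
# Toy pipeline model: one upstream stage hands work to one downstream stage.
# If arrivals exceed downstream capacity, the queue grows every hour.

def backlog_after(hours, arrivals_per_hour, downstream_capacity_per_hour):
    """Size of the downstream queue after `hours` of steady operation."""
    queue = 0
    for _ in range(hours):
        queue += arrivals_per_hour                          # work handed down
        queue -= min(queue, downstream_capacity_per_hour)   # work completed
    return queue

# Balanced pipeline over a 40-hour week: downstream keeps up.
print(backlog_after(40, arrivals_per_hour=10, downstream_capacity_per_hour=10))  # 0

# "AI-accelerated" upstream, same downstream team: 400 items stuck in queue.
print(backlog_after(40, arrivals_per_hour=20, downstream_capacity_per_hour=10))  # 400
```

The point isn't the arithmetic; it's that the overload is structural. Speeding up one stage only moves the constraint, it doesn't remove it.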
In the latest Complex Machinery:
genAI companies are grasping for use cases, pointing the technology at non-problems in the hope that we get excited. It's not a good look.
Also: an explainer on Google's rare 100-year bond.
newsletter.complex-machinery.com/archive/055-...
#dataBS
As a longtime ML/AI practitioner + sometimes-linguist, I usually raise an eyebrow when I see articles on machine translation.
@kashhill.bsky.social 's article does not disappoint. She gets to the harsh realities of the tech: good in some places, but human language is still a lot for a machine to handle.
That's why I am mindful of how I communicate beyond my native language.
Body language and an attempt to be extremely polite can sometimes compensate for simple/missing vocabulary (which can easily come off as rude...)
Yes! In many cases, mediocre is just fine. We can get by with basic phrases for a short exchange.
A deep personal relationship really stress-tests that idea, though. The couple in your article can work with limited _language_ skills because they exhibit superior _communication_ skills.
I wager people on #dataBS have their own stories of "I saw through someone's ML/AI/recsys nonsense" ...
Time to share
Favorite example: card issuer triggering a fraud alert because I "had never purchased from this merchant before."
Way to tell me that your fraud systems rely on only six months' data ...
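A hypothetical sketch of that failure mode (merchant names and rule logic invented by me, not taken from any real card issuer): a "new merchant" fraud rule that only looks back six months will flag a merchant the cardholder has used for years.

```python
# Toy "new merchant" rule with a six-month lookback window.
from datetime import date, timedelta

LOOKBACK = timedelta(days=180)  # the assumed six-month history window

def is_new_merchant(purchases, merchant, today):
    """True if `merchant` doesn't appear in the last six months of history."""
    recent = [p for p in purchases if today - p["date"] <= LOOKBACK]
    return all(p["merchant"] != merchant for p in recent)

today = date(2026, 2, 20)
history = [
    {"merchant": "corner-grocer", "date": date(2025, 3, 1)},   # ~1 year ago
    {"merchant": "coffee-shop",   "date": date(2026, 1, 15)},  # recent
]

# The longtime merchant falls outside the window, so the rule cries fraud.
print(is_new_merchant(history, "corner-grocer", today))  # True (false alarm)
print(is_new_merchant(history, "coffee-shop", today))    # False
```

The bug isn't the rule itself; it's that the lookback window silently defines what "never purchased before" means.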
Indeed.
Repeating "[X] is the future!!" at peak volume makes it true('ish) for a while -- a little longer if you pump $$ into marketing -- but reality eventually sets in.
Buyers want you to solve their actual problems. Not the half-problems you've dreamed up for them.
Excerpt from linked article: > BuzzFeed News reported that Grubhub's app was averaging 6,000 orders per minute during the promo, and there were widespread reports of overwhelmed restaurants simply turning off their delivery channels. Many operators said they didn't even know the promo was happening and that they were completely under-staffed and unprepared for the lunch deluge.
Yes! Going TikTok-famous is similar (though not exactly the same) to GrubHub's free lunch deal from a few years ago. (They didn't warn the affected restaurants, apparently.)
Spike in volume, regulars turned away, venues overwhelmed. Success?
www.restaurantbusinessonline.com/marketing/gr...
Anyway, that @downtownjoshbrown.bsky.social post is timely. My next newsletter has a segment on this same idea, focused on genAI use cases.
There's still time to subscribe: newsletter.complex-machinery.com
The aforementioned playbook investigation:
www.reuters.com/investigatio...
When I hear "Meta," I think "systemic rot." I sense a culture where growth is the only objective function.
(Remember a few months back, when journalists surfaced Meta's playbook for fending off regulators?)
Excerpt from "Rebranding Data" (2021):
>> Right now most of AI's 30,000-foot altitude is hype. When the hype fades -- when changing the name fails to keep the field aloft -- that hype dissipates. At that point you'll have to sell based on what AI can really do, instead of a rosy, blurry picture of what might be possible.
>>
>> This is when you might remind me of the old saying: "Make hay while the sun shines." I would agree, to a point. So long as you're able to cash out on the AI hype, even if that means renaming the field a few more times, go ahead. But that's a short-term plan.
>>
>> Long-term survival in this game means knowing when that sun will set and planning accordingly. How many more name-changes do we get? How long before regulation and consumer privacy frustrations start to chip away at the façade? How much longer will companies be able to paper over their AI-based systems' mishaps?
Excerpt from "Rebranding Data" (2021):
>> Where to next?
>>
>> If you're building AI that's all hype, then these questions may trouble you. Post-bubble AI (or whatever we call it then) will be judged on meaningful characteristics and harsh realities: "Does this actually work?" and "Do the practitioners of this field create products and analyses that are genuinely useful?" (For the investors in the crowd, this is akin to judging a company's stock price on market fundamentals.) Surviving long-term in this field will require that you find and build on realistic, worthwhile applications of AI.
>>
>> Does our field need some time to sort that out? I figure we have at least one more name change before we lose altitude. We'll need to use that time wisely, to become smarter about how we use and build around data. We have to be ready to produce real value after the hype fades.
Closing paragraphs of a Radar piece I wrote in 2021. I've been thinking about it quite a bit as of late.
If renaming the data field resets the hype curve, thereby staving off a correction ...
... how many renames does it have left?
www.oreilly.com/radar/rebran...
FFS I briefly mistook this for the _other_ article @kevincollier.bsky.social wrote about an AI-backed toy leaking data.
As an outsider (but longtime software+AI practitioner) this rash of leaks smells like companies are focused on the AI parts of the product ... at the expense of app dev basics.
[looks at reading pile]
[looks at writing pile]
[nods]
It must be a trip for management teams. You put out in-line earnings, look at the share price and you're down 15% because a CA-based start-up called AI-loco just told the internet your entire business model can be recreated in 5 minutes with no people
12.02.2026 22:09
A rare-for-me post on my other big interest: linguistics.
The latest Culture Study podcast is well worth a listen if you have even the slightest interest in the topic.
(It's especially fun if you understand a decent level of French, German/Dutch, or ... law? Yes, law.)
The missing piece of the "AI is coming for everyone's job" headlines:
There's never a discussion of _why_ AI is taking the job.
Is it because it actually does the job well?
Or because some hopeful exec is making today's decision based on what AI _might_ do next year?
There's a big difference.
We can generally speak of "concentrations of belief" --
We see this in everything from financial markets, to politics, to social fads, and so on.
What I find fascinating is what drives a given belief: Market fundamentals? Pure ego/emotion? Willful or accidental ignorance? And so on.