Yeah. Baltimore, lol
22.11.2025 15:09 — 👍 0 🔁 1 💬 1 📌 0Poland’s Karol Nawrocki comments on Trump’s peace plan, saying any deal must start with a simple truth — Russia breaks its own agreements. He adds the plan needs Kyiv’s approval and warns that peace cannot come at the price of Moscow’s goals.
22.11.2025 15:05 — 👍 280 🔁 35 💬 5 📌 2Oof I’m saving that last one. This is too good.
21.11.2025 22:05 — 👍 0 🔁 0 💬 0 📌 0In an era filled with tech dipshits who never developed emotionally past the age of 13 & use their wealth to become odious monsters ...
... listen to Steve Wozniak.
Rewarding a dictator-aggressor always leads to another war. One that is usually even worse.
21.11.2025 18:15 — 👍 23114 🔁 6737 💬 730 📌 298Not quite MAGA, but I wonder where that totally real bloke @blackintheempir is based. Strong anti-Ukraine tweets, boosted by Jimmy Dore.
21.11.2025 21:50 — 👍 9 🔁 0 💬 1 📌 0Moscow is a very expensive city to feed and keep warm, and this „peace plan” bullshit is to ensure Putin can still keep their support. It will be a cold-ass winter in Moscow, thanks to the flamingoes and other flying objects.
21.11.2025 21:44 — 👍 1 🔁 0 💬 0 📌 0If Russia and the US were genuinely willing for NATO states, including the US, to provide an "Art. 5-style" security guarantee they could just suggest that Ukraine joins NATO. They are doing the opposite, which tells us all we need to know.
21.11.2025 10:13 — 👍 411 🔁 76 💬 10 📌 22007 subprime mortgage bros, we are so back
20.11.2025 21:18 — 👍 4 🔁 2 💬 0 📌 0Businesses are willing to accept broken code if it means they don't have to pay humans anymore
19.11.2025 23:23 — 👍 502 🔁 63 💬 31 📌 2Weinstein too? Just when you think it can’t get worse…
19.11.2025 20:53 — 👍 1 🔁 0 💬 0 📌 0President Zelenskyy came to the White House, looking for help from his longstanding ally.
He was ambushed, humiliated and berated for not being grateful enough.
Prince Bone Saw bin 9/11 came to the White House, for the first time since he ordered the murder of Jamal Khashoggi.
He was revered.
Yeah, the same way they want to restore the empire and subjugate the world, but the fact that they keep bringing up “security concerns” while being a shameless aggressor speaks to their endless cynicism, and to the rest of the world’s pathetic weakness in this regard.
19.11.2025 14:33 — 👍 5 🔁 0 💬 0 📌 0Oh ffs. I have a peace plan. For starters, GTFO of Ukraine, return the kids, put the aggressors on trial and pay reparations. See how russians have conveniently added pushing NATO out of Central Europe as another demand? They’re still laughing at the world.
19.11.2025 08:48 — 👍 78 🔁 4 💬 1 📌 0Ruin them! From @mollycrabapple.bsky.social:
“Hey authors! Check to see if Anthropic stole your book to train their slop generator on. You’re entitled to $1500 per stolen Work.
Look up your work, and if you’re in the database, file a claim”
secure.anthropiccopyrightsettlement.com/lookup/
Just disgusting
18.11.2025 11:03 — 👍 0 🔁 0 💬 0 📌 0They show you surveillance tech and tell you how to be a more effective creep.
17.11.2025 21:25 — 👍 36 🔁 3 💬 2 📌 0Why Are We In A Bubble?

You’re tired, I’m tired, we’re all god damn tired of the AI bubble. It’s been three years and very little has actually happened other than hundreds of billions of dollars being incinerated to build data centers or sustain the reckless and directionless ambitions of Sam Altman and OpenAI, or any number of near-identical AI startups that all claim to be worth somewhere between $5 billion and $500 billion.

Yet what truly upsets me about AI is the lies. Despite the fact that Large Language Models cannot really be trusted to do any particular job without endless prompt engineering and tinkering, so many people talk about them as if they’re the second coming of Christ, capable of doing just about anything if you put your mind to it. In reality, LLMs are capable of some stuff but not a lot of stuff, and most people you talk to use them as either a coding assistant (that they have to either give very simple tasks or constantly check up on) or as a replacement for Google search that may or may not be better. Yet for years people have said things about LLMs that they don’t actually do, and when pressed to explain further, they’ve said that “these models are getting exponentially better.”

We have seen a dereliction of duty by those who are meant to know the truth — the media and analysts — who are either ignorant of reality or refuse to live within it, saying instead that every bit of proof that we’re in a bubble is just “doomerism” or “skepticism” or proof of some sort of corruption or animosity toward the future, rather than an attempt to speak of what’s actually happening and inform the public.

LLMs are limited! What they do today is similar to what they did a year ago, and to what they’ll do a year in the future! The time of LLM innovation is over, this era is over, and every day we extend it only beckons further retail investors to throw themselves into an active volcano after being convinced that it’ll transform them into a golden god. It’s insu…
It is insane to me that we still, to this day, have people claiming that “agents are coming,” or still peddling out the dog-brained “Uber and Amazon Web Services burned a lot of money” talking points, or somehow justifying the hundreds of billions of dollars burned by saying that “ChatGPT is really popular,” as if that makes up for the theft of millions of people’s art or the destruction of our planet.

We’ve known that OpenAI was burning billions of dollars since the middle of 2024, as we’ve known the same of Anthropic, as we’ve known that every AI company is unprofitable, as we’ve known that models will hallucinate forever, as we’ve known that the “scaling laws” paradigm was a bucket of bullshit. These facts have sat baking in the sun for anywhere from a year and a half to two years, yet the media narrative has only fairly recently shifted to consider whether there’s any problem with an entire industry of unprofitable companies building wrappers on top of models made by unprofitable frontier AI labs.

AI has proven that we live in an era of high information and low processing, where the majority of people — the media, investors, analysts, and even government officials — simply take the top-line analysis from whatever the most “trustworthy” source is that they can find. As a result, TV networks like CNBC publish outright nonsense to back up things like OpenAI promising to pay $1.4 trillion in compute costs over five years, with “trustworthy” analysts like Futurum’s Dan Newman, who had this to say about how OpenAI might afford it:

“This is a company that its biggest backers and investors believe will become the largest hyperscaler in the future. So it isn’t just going to compete for these software and for these ChatGPT and video making, uh, models, they're going to be focusing on taking over this AI layer, going down the path of Anthropic, playing the enterprise, developing these biggest models, and then ultimately developing a model to make money.”

This is a fucking analy…
This is why people are going to lose so much money! This is why people are being failed! The mainstream media is deliberately, aggressively pushing nonsense narratives with enough buzzwords to keep the bubble inflated.

In reality, our society is built on a lot of beliefs that are extremely thinly-held, and LLMs took root because society has been primed by science fiction to believe that it’s inevitable we’ll have an autonomous computer in our lifetimes. We are trained from birth to follow smart-sounding people, and what constitutes “smart” is judged by a combination of right-sounding words and braggadocious confidence. While that’s sort of worked before, Large Language Models feel as if they’re purpose-built to exploit these semiotic tropes.

ChatGPT can convince an imbecile he knows quantum physics because it gives him the topline definition in simple enough terms that he can repeat it. ChatGPT can give a convincing-enough impression of a person that it can convince somebody it wants him to leave his wife. ChatGPT can give a convincing-enough impression of writing code that it can convince basically every investor and member of the media that it will replace software engineers in mere months.

And because ChatGPT did a convincing-enough impression of a person writing something, Satya Nadella decided to buy as many GPUs as possible, and because Satya Nadella was doing a convincing-enough impression of a CEO who knew what the fuck he was doing, Sundar Pichai, Mark Zuckerberg and Andy Jassy immediately copied him, so that they too could do convincing-enough impressions of executives who knew what they were doing. And because we live with markets that are poisoned by growth-at-all-cost thinkers, those markets fell in line behind the hyperscalers, claiming that the “era of AI” was here without any proof that was the case, all because the already-existing businesses underpinning the magnificent 7 kept growing. That, and the media has entirely failed to hold the…
There is only one way to avoid this happening again: holding every single booster’s feet to the fire for months and months after the bubble bursts, and making sure that any other hype cycles they push are met with venom and receipts the likes of which God has never seen. If I sound like I have an axe to grind, it’s because I do. Regular people are already getting hurt — their communities destroyed by data centers, their art stolen, their loved ones driven mad by AI psychosis, and, most likely, their 401Ks ravaged when this ends. I will have no mercy for those who failed to do their jobs. Will you?
The AI Bubble was inflated by a dereliction of duty from those meant to seek out and publish the truth. LLMs have been sold on myth and outright lies about what they do or will do, and when the bubble bursts, retail investors will be left with the consequences.
www.wheresyoured.at/premium-the-...
No thanks.
18.11.2025 07:59 — 👍 0 🔁 0 💬 0 📌 0In a short piece for @techpolicypress.bsky.social, @abeba.bsky.social and I write #AIHype Is Steering EU Policy Off Course.
Stop peddling in unscientific discourse about “AGI” and “superintelligence.” Serve citizens. Don't cater to the whims of tech CEOs.
www.techpolicy.press/ai-hype-is-s...
Copilot icon found on Word document. Above it there is a list of functions: Summarize this document; Find key insights and questions; Prep me to discuss this document.
Just updated my laptop, opened up Word, & to my horror discovered the multi-coloured Copilot wart in bottom right corner of doc. NO, I don't want a summary of this document that I am writing; I supplied the insights & questions and just fuck right off about "prepping" me to talk about my own work.
15.11.2025 22:10 — 👍 86 🔁 14 💬 5 📌 3I can’t believe that we’re willingly handing over crucial parts of our workforce, data management and even education to a criminal version of Clippy
15.11.2025 21:56 — 👍 1261 🔁 225 💬 22 📌 10Oh, that paragraph reads like an OpenAI press release - no edits. Great job at “disrupting,” UNESCO.
15.11.2025 18:19 — 👍 0 🔁 0 💬 0 📌 0Every Single Vibe Coding Company Is A Scam

Let’s be really blunt: you cannot, as a person who does not know how to code, pick up any vibe coding service and build software that functions in a secure way. Every single company selling vibe coding services is lying. In a just regulatory environment, Replit, Lovable and any other company that sells services suggesting you can use LLMs to write software with no knowledge of coding would be brutally and relentlessly sued by the US government and shut down for good.

These companies are built on the back of Large Language Models, probabilistic machine learning that guesses what you want it to do, with hallucinations — which in this case means “doing the wrong thing, or lying and saying it did something when it didn’t” — increasing the more compute they use. Practically speaking, this means that nobody who can’t read or write code can actually use these tools to make “real” software, meaning that anybody using them is being conned. Instead, we’re left with stinking piles of manure that con customers on the regular.
Replit - $12.5m Revenue In September 2025 ($150m ARR), Agent 3 Launch Disastrous, Ripping Off Users With Impossible-To-Gauge Costs, Repugnant Company

Replit has gone on a bit of a journey — from cloud-based IDE beloved by high school computer science educators, as Charlie Meyer told me last week on the Better Offline podcast, to money-incinerating stalwart of the vibe coding era. In September, it closed a $250m round, giving it a $3bn valuation. Fortunately, Replit had the common sense to cash the checks before its Agent 3 launch, which was widely panned by the faux-coders who actually use it. Described by Replit as “a breakthrough in autonomous software development” and something that “feels like a real developer,” it has largely failed to win the hearts and minds of its users, who complained of it breaking their apps, taking inordinate amounts of time to do simple tasks, and being extremely expensive. The Register had a great article on this debacle, which ended with Replit adding the ability to control how much Agent 3 could run rampant with your codebase.

I also want to be clear that I believe Replit is one of the ugliest startups I’ve seen in history. Its subreddit is full of people furious with the speed and cost of Agent 3, and of ongoing debates about whether it’s good or bad that Replit will randomly burn several dollars fucking up your app. One post, titled “HOW DOES IT GET MORE EXPENSIVE EVERY SINGLE DAY,” led to several people recommending the user not use Replit’s Agent to build their app.

By the way, the reason all of this is happening is that coding models are not consistent at writing code, nor can they be relied upon to build functional, reliable and secure software. Worse still, because these are probabilistic models, each time you ask them to do anything is yet another roll of the dice to see whether they will spit out what you want, with each roll costing you several god damn dollars.
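To make the “roll of the dice” point concrete, here is a minimal, self-contained Python sketch of temperature-style token sampling. It is not Replit’s (or any vendor’s) actual code; the toy vocabulary and probabilities are invented purely for illustration.

```python
import random

# Toy next-token distribution for the prompt "def add(a, b): return" —
# probabilities are invented for illustration, not taken from any real model.
NEXT_TOKEN_PROBS = {
    "a + b":       0.55,  # what the user wanted
    "a - b":       0.20,  # plausible-looking bug
    "sum([a, b])": 0.15,  # works, but needlessly indirect
    "a + a":       0.10,  # silent wrong answer
}

def sample_completion(rng: random.Random) -> str:
    """One 'roll of the dice': sample a completion from the distribution."""
    tokens, weights = zip(*NEXT_TOKEN_PROBS.items())
    return rng.choices(tokens, weights=weights, k=1)[0]

if __name__ == "__main__":
    rng = random.Random()  # unseeded: every run can differ, like repeated prompts
    for attempt in range(5):
        print(f"attempt {attempt + 1}: def add(a, b): return {sample_completion(rng)}")
```

Roughly 45% of the draws in this toy distribution produce something other than `a + b`, and a non-coder has no way to tell which draw they got. That is the core of the “being conned” argument above.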
Lovable - “Predicted” It Would Hit $1 Billion ARR ($83m a Month) By August 2026, Actually At $100 Million ARR ($8.3m a Month) As Of July 2025, Unclear How Much Actual Money It Makes

Lovable is yet another vibe-coding company that claims to “build software products, using only a chat interface.” While this is technically something that it does, I believe this statement is horrifyingly misleading. Lovable’s Reddit is full of stories of users wasting hundreds of dollars, fighting to get it to do anything consistently, or simply running out of credits weeks before they refresh. As I’ve said: every vibe coding company is a scam.

Anyway, earlier this week, Lovable CEO Anton Osika said the company he founded was nearing eight million users, with “100,000 new products built on Lovable every single day” — a line that stretches the definition of the word “product” in ways I didn’t previously think possible. It raised $200m in July of this year at a $1.8bn valuation — just eight months after it launched — and the company is rumoured to be planning another raise at a $5bn valuation. Impressive.

Less impressive, however, is its revenue, with the company having only hit the $100m ARR mark in June — which, I remind you, works out to $8.3m a month.
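As a sanity check on the ARR arithmetic quoted above (an annualized run rate is just twelve times a monthly figure), a trivial sketch:

```python
def monthly_from_arr(arr_millions: float) -> float:
    """ARR annualizes one month's revenue, so divide by 12 to go back."""
    return arr_millions / 12

# Figures quoted in the post above, in USD millions.
print(monthly_from_arr(1000))  # $1bn ARR  -> ~83.3, the "predicted" ~$83m/month
print(monthly_from_arr(100))   # $100m ARR -> ~8.3, the actual ~$8.3m/month
```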
Every single vibe coding company is a scam, and both Replit and Lovable are evil. It's a blatant lie that you can use an LLM to build secure, functional software without coding knowledge. In a functioning regulatory environment Lovable and Replit would be illegal.
www.wheresyoured.at/premium-the-...
How does any of this happen? Nobody seems to know! Per The Journal:

“Anthropic then becomes a much more efficient business. In 2026, it forecasts dropping its cash burn to roughly one-third of revenue, compared with 57% for OpenAI. Anthropic’s burn rate falls further to 9% in 2027, while it stays the same for OpenAI.”

…hhhhooowwwww????? I’m serious! How? The Information tries to answer:

“Anthropic leaders also claim their company’s use of three different types of AI server chips—made by Nvidia, Google and Amazon, respectively—has helped their models operate more efficiently, according to an employee and another person with knowledge of the company’s plans. Anthropic assigns tasks to different chips depending on what each does best, according to one of the people.”

Is…that the case? Are there any kind of numbers to back this up? Because Business Insider just ran a piece covering documents in which startups claimed that Amazon’s chips had “performance challenges,” were “plagued by frequent service disruptions,” and “underperformed” NVIDIA H100 GPUs on latency, making them “less competitive” in terms of speed and cost. One startup “found Nvidia's older A100 GPUs to be as much as three times more cost-efficient than AWS's Inferentia 2 chips for certain workloads,” and a research group called AI Singapore “determined that AWS’s G6 servers, equipped with NVIDIA GPUs, offered better cost performance than Inferentia 2 across multiple use cases.”

I’m not trying to dunk on The Wall Street Journal or The Information, as both are reporting what is in front of them. I just kind of wish somebody there would say “huh, is this true?” or “will they actually do that?” a little more loudly, perhaps using previously-written reporting.
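For what it’s worth, the claim The Information relays is a standard heterogeneous-scheduling idea. Here is a minimal sketch of what “assigning tasks to different chips depending on what each does best” could look like; every workload name and cost number below is hypothetical and invented for illustration, since the reporting gives no such figures, and this is in no way Anthropic’s actual system.

```python
from dataclasses import dataclass

# Hypothetical relative cost (lower is better) of each workload type on each
# accelerator family. These numbers are invented; the article publishes none.
COST_TABLE = {
    "prefill":  {"nvidia_gpu": 1.0, "google_tpu": 0.8, "aws_trainium": 1.2},
    "decode":   {"nvidia_gpu": 0.9, "google_tpu": 1.0, "aws_trainium": 1.1},
    "training": {"nvidia_gpu": 1.0, "google_tpu": 0.9, "aws_trainium": 1.3},
}

@dataclass
class Task:
    name: str
    workload: str  # one of the COST_TABLE keys

def route(task: Task) -> str:
    """Pick the cheapest accelerator family for this workload type."""
    costs = COST_TABLE[task.workload]
    return min(costs, key=costs.get)

if __name__ == "__main__":
    for t in [Task("serve chat traffic", "decode"), Task("fine-tune a model", "training")]:
        print(f"{t.name}: routed to {route(t)}")
```

Whether routing like this yields the claimed efficiency depends entirely on the real cost numbers, which is exactly the complaint here: nobody has published any.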
For example, in January 2024 The Information reported that Anthropic’s gross margin in December 2023 was between 50% and 55%; in September 2024 CNBC stated that Anthropic’s “aggregate” gross margin was 38%; and then it turned out that Anthropic’s 2024 gross margins were actually negative 109% (or negative 94% if you just focus on paying customers), according to The Information’s November 2025 reporting.

In fact, Anthropic’s gross margin appears to be a moving target. In July 2025, The Information was told by sources that “Anthropic recently told investors its gross profit margin from selling its AI models and Claude chatbot directly to customers was roughly 60% and is moving toward 70%,” only to publish a few months later (in their November piece) that Anthropic’s 2025 gross margin would be…47%, and would hit 63% in 2026. Huh?

I’m not bagging on these outlets. Everybody reports from the documents they get or what their sources tell them, and any piece you write comes with the risk that things could change, as they regularly do in the running of any kind of business. That being said, the gulf between “38%” and “negative 109%” gross margins is pretty fucking large, and suggests that whatever Anthropic is sharing with investors (I assume) is either changing so rapidly that giving a number is foolish, or made up on the spot as a means of pretending you have a functional business.

I’ll put it a little more simply: it appears that much of the AI bubble is inflated on vibes, and I’m a little worried that the media is being too helpful. These companies are yet to prove themselves in any tangible way, and it’s time for somebody to give a frank evaluation of where we stand. If I’m honest, a lot of this piece will be venting, because I am frustrated. When all of this collapses there will, I guarantee, be multiple startups that have outright lied to the media, and done so, in some cases, in ways that are equal parts obvious and brazen. My own work has receiv…
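To see how stark the gap between those figures is, recall the definition: gross margin m = (revenue − cost of revenue) / revenue, so cost per dollar of revenue is 1 − m. A short sketch plugging in only the percentages quoted above (no dollar amounts are assumed):

```python
def implied_cost_ratio(gross_margin: float) -> float:
    """Gross margin m = (R - C) / R, so C / R = 1 - m."""
    return 1.0 - gross_margin

# Margins reported for Anthropic at various times, per the post above.
for label, margin in [
    ("CNBC, Sept 2024 ('aggregate')",           0.38),
    ("The Information, Nov 2025 (2024 actual)", -1.09),
    ("The Information, Nov 2025 (2025 est.)",   0.47),
]:
    print(f"{label}: ${implied_cost_ratio(margin):.2f} of cost per $1.00 of revenue")
```

A negative 109% margin implies roughly $2.09 of cost of revenue for every $1.00 earned, before R&D, salaries, or anything else, which is why the gulf between that and “38%” matters so much.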
We live in an era of lies. LLMs are sold on the lie that they will one day be autonomous, and companies like OpenAI and Anthropic spread conflicting revenue and gross margin figures to muddy the picture of questionable businesses that burn billions of dollars.
www.wheresyoured.at/premium-the-...
Them flamingoes - they work.
14.11.2025 12:07 — 👍 3 🔁 0 💬 0 📌 0Due to mass hype and operational ambiguity, executives can use "AI" as a justification for job cuts, increased worker surveillance, speed-up, reducing hiring—with little accountability or repercussion. The *logic* of AI has often been as damaging to labor as the tech itself (see: DOGE.)
13.11.2025 17:58 — 👍 102 🔁 25 💬 2 📌 2Nope, not a scam at all.
14.11.2025 08:09 — 👍 0 🔁 0 💬 0 📌 0Scientists are fighting back: bsky.app/profile/kris...
13.11.2025 12:35 — 👍 6 🔁 2 💬 0 📌 0Leaders of EU institutions have a responsibility to citizens. By promoting AI hype in the midst of a dangerous #bubble, the European Commission risks proposing harmful policies, wasting public funds and being complicit in AI harms.
10.11.2025 09:48 — 👍 25 🔁 8 💬 1 📌 0