I need another 2000 views on my videos in the next week to remain monetised after I took time off to grieve for my dad.
Here's a link to my most popular video, so you might enjoy it if you give it a watch;
youtu.be/n-mCCaaxQH8
@millerofruination.bsky.social
She/Her They/Them 🏳️‍⚧️ Mid 20s. Gaming & anime enthusiast. An anarchist of sorts. Learning the ropes of making art. Minors, for both of our sakes, do not interact with me! I post NSFW BULLSHIT. VTuber Debut: Never
"Things are changing. Fast."
Why does every single AI evangelist write like this? Someone needs to start some sort of preliminary research into how using LLMs absolutely cooks your brain and causes you to write like you're selling iPods. They all fucking write like this!
That explains why bluesky just didn't fucking work most of yesterday 🙄
Hey bsky if you ever want to hire an actual programmer to work for you I'm available
Love Alyssa Liu, but it feels weird to see anything from zombie Teen Vogue
04.03.2026 16:04 — 👍 630 🔁 21 💬 8 📌 0
Sing songs about pretty girls, write songs about weird boys, draw a picture of your childhood pets, if you enjoy being creative nobody is allowed to tell you it sucks unless you're being offensive
Make art
1. The Supreme Court just handed down its 5th anti-trans decision in less than a year.
It could lead to forced outing of trans youth across the country, with 40 cases pending that it may impact.
It also forces CA teachers to misgender some trans students.
Subscribe to support our journalism.
reminding myself of this
03.03.2026 11:40 — 👍 5424 🔁 1733 💬 64 📌 111

Let’s be explicit: Anthropic’s Claude (and its various models) is fully approved for use in the military and, to quote Anthropic’s own blog post, “has supported American warfighters since June 2024 and has every intention of continuing to do so.”

To be explicit about what “support” means, I’ll quote the Wall Street Journal:

Within hours of declaring that the federal government will end its use of artificial-intelligence tools made by tech company Anthropic, President Trump launched a major air attack in Iran with the help of those very same tools. Commands around the world, including U.S. Central Command in the Middle East, use Anthropic’s Claude AI tool, people familiar with the matter confirmed. Centcom declined to comment about specific systems being used in its ongoing operation against Iran. The command uses the tool for intelligence assessments, target identification and simulating battle scenarios even as tension between the company and Pentagon ratcheted up, the people said, highlighting how embedded the AI tools are in military operations.

In reality, Claude is likely being used to go through a bunch of images and to answer questions about particular scenarios. There is very little specialized military training data, and I imagine many of the demands for “full access to powerful AI” have come as a result of Amodei and Altman’s bloviating about the “incredible power of AI.” More than likely, Centcom and the rest of the military pepper it with questions that allow it to justify acts that blow up schools, kill US servicemembers and threaten to continue the forever war that has killed millions of people and thrown the Middle East into near-permanent disarray.

Nevertheless, Dario Amodei gets fawning press about being a patriot who deeply cares about safety less than a week after Anthropic dropped its safety pledge not to train an AI system unless it could guarantee in advance that its safety measures were accurate.
The reality of the negotiations was a little simpler, per the Atlantic. The Department of Defense had agreed to terms around not using Claude for mass domestic surveillance or fully autonomous killing machines (the former of which it’s not particularly good at and the latter of which it flat-out cannot do), but, well, actually very much intended to use Claude for domestic surveillance anyway:

On Friday afternoon, Anthropic learned that the Pentagon still wanted to use the company’s AI to analyze bulk data collected from Americans. That could include information such as the questions you ask your favorite chatbot, your Google search history, your GPS-tracked movements, and your credit-card transactions, all of which could be cross-referenced with other details about your life. Anthropic’s leadership told Hegseth’s team that was a bridge too far, and the deal fell apart.

Now, I’m about to give you another quote about autonomous weapons, and I really want you to pay attention to where I emphasize certain things for a subtle clue about Anthropic’s ethics:

Anthropic had not argued that such weapons should not exist. To the contrary, the company had offered to work directly with the Pentagon to improve their reliability. Just as self-driving cars are now in some cases safer than those driven by humans, killer drones may some day be more accurate than a human operator, and less likely to kill bystanders during an attack. But for now, Anthropic’s leaders believe that their AI hasn’t yet reached that threshold. They worry that the models could lead the machines to fire indiscriminately or inaccurately, or otherwise endanger civilians or even American troops themselves.

So, let’s be clear: Anthropic wants to help the military make more accurate kill drones, and in fact loves them. One might take this to be somewhat altruistic — Dario Amodei doesn’t want the US military to hit civilians — but remember: Anthropic is totally fine with the US military using Claude for anything …
The AI industry wants us to believe it’s more successful than it is, that LLMs are more powerful than they are, and that AI labs are "safety focused" when both Anthropic and OpenAI enthusiastically support using their tech to kill people.
www.wheresyoured.at/the-ai-bubble-is-an-information-war/
Lots of people have come up with very complex ways of arguing we’re in a “supercycle” or “AI boom” or some such bullshit, so I’m condensing some of these talking points and the ways to counteract them:

“OpenAI had $13.1bn in revenue in 2025! It only lost $8bn!” Did it? Based on my own reporting, which has been ignored (I guess it’s easier to do that than think about it?) by much of the press, OpenAI made $4.33bn through the end of September, and spent $8.67bn on inference in that period. Notice how I said “inference.” Training costs, data costs, and, simply, the costs of doing business are in addition to that.

“OpenAI has 900m weekly active users!” Yeah, everybody is talking about AI 24/7 and ChatGPT is the one everybody talks about. “Google Gemini has 750m!” Google changed Google Assistant to Gemini on literally everything, including Google Home, and force-fed it to users of Google Docs and Google Search.

“Claude Code is changing the world! It’s writing SaaS now! It’s replacing all coders!” As I discussed both at the beginning of the Hater’s Guide To Private Equity and in my free newsletter last week, software is not as simple as spitting out code, nor is it able to automatically clone the SaaS experience. Midwits and the illiterate claim that this somehow defeats my previous theses, where I allegedly said the word “useless.” While I certainly goofed claiming generative AI had three quarters left in March 2024, my argument was that I thought that “generative AI [wouldn’t become] a society-altering technology, but another form of efficiency-driving cloud computing software that benefits a relatively small niche of people,” and I have said that people really do use them for coding. Even Claude Code, the second coming of Christ in the minds of some of Silicon Valley’s most concussed boosters, only made $203m in revenue ($2.5bn ARR) for a product that at times involves Anthropic spending anywhere from $8 to $13.50 for every dollar it makes.
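Those Claude Code numbers can be sanity-checked with quick arithmetic. A minimal sketch, using only the figures quoted above ($203m in revenue, $8 to $13.50 of spend per dollar earned) and nothing from Anthropic's actual books:

```python
# Figures quoted in the post above (assumptions carried over verbatim).
revenue = 203_000_000           # Claude Code revenue, USD
spend_per_dollar = (8.0, 13.5)  # spend per dollar earned: low / high end

# Implied spend range: revenue multiplied by the per-dollar cost ratio.
low, high = (revenue * r for r in spend_per_dollar)
print(f"Implied spend: ${low / 1e9:.2f}bn to ${high / 1e9:.2f}bn")
print(f"Implied loss:  ${(low - revenue) / 1e9:.2f}bn to ${(high - revenue) / 1e9:.2f}bn")
```

In other words, if the quoted cost ratio holds, $203m of revenue implies somewhere between roughly $1.6bn and $2.7bn of spend to generate it.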
“People doubted Amazon, but it made lots of money in the end!” No, they didn’t. Benedict Evans defended Amazon’s business model. Jay Yarow of Business Insider defended it too. Practical Ecommerce called Amazon Web Services “Amazon’s cash cow” in October 2013. In April 2013, WIRED’s Marcus Wohlsen managed to name one skeptic — Paulo Santos, based in Portugal, who appears to have dropped off the map after 2024, but remained a hater long after AWS hit profitability in 2009. I cannot find any other skeptics of Amazon, and I cannot for the life of me find a single skeptic of AWS itself.

“AWS cost a lot of money, so we should spend so much money on AI!” I’m sick and fucking tired of this point, so I went and did the work, which you can view here, to find every single year of capex that Amazon spent. When you add together all of Amazon’s capital expenditures between 2002 and 2017 (which encompasses AWS’s internal launch, its 2006 public launch, and its becoming profitable in 2015), you get $37.8bn in total capex (or $52.1bn adjusted for inflation). For some context, OpenAI raised around $42bn in 2025 alone. The fact that we have multiple supposedly well-informed journalists making the “Amazon spent lots of money!” point to this day is a sign that we’re fundamentally living in hell.

Anyway, let’s talk about how much OpenAI has raised, and how none of that makes sense either.
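As a back-of-the-envelope check on that capex comparison, here is a sketch using only the totals stated in the post (not the underlying per-year figures, which live in the linked spreadsheet):

```python
# Totals stated in the post above, in $bn (not per-year data).
amazon_capex_2002_2017 = 37.8   # nominal, 16 years
amazon_capex_adjusted = 52.1    # same span, inflation-adjusted
openai_raised_2025 = 42.0       # a single year of fundraising

# Compare OpenAI's one-year raise against Amazon's 16-year build-out.
ratio_nominal = openai_raised_2025 / amazon_capex_2002_2017
ratio_adjusted = openai_raised_2025 / amazon_capex_adjusted
print(f"OpenAI's 2025 raise is {ratio_nominal:.2f}x Amazon's 2002-2017 nominal capex")
print(f"...and {ratio_adjusted:.0%} of the inflation-adjusted total")
```

So OpenAI's single-year raise exceeds the entire nominal capex of Amazon's sixteen-year AWS build-out, and comes in at roughly four fifths of the inflation-adjusted figure.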
AI boosters always use the same arguments, so I've pulled together answers - for example, they claim that AWS "cost a lot of money" to justify AI.
Amazon's inflation-adjusted capex from 2002-2017 was $52.1bn.
OpenAI raised $42bn in 2025.
www.wheresyoured.at/the-ai-bubble-is-an-information-war/
Free newsletter: The AI bubble's info war demands that we believe things that aren't true: that the economic fundamentals of the AI industry are perfectly sound, or that Sam Altman and Dario Amodei are anything other than warmongers.
www.wheresyoured.at/the-ai-bubble-is-an-information-war/
No. It’s a Question; And The Answer Is Yes
03.03.2026 22:57 — 👍 1 🔁 0 💬 0 📌 0
Damn
03.03.2026 19:50 — 👍 2 🔁 0 💬 0 📌 0
These are so stunning! How do you take such amazing texture photos?
03.03.2026 17:35 — 👍 1 🔁 0 💬 1 📌 0
Green moss or lichens on black stone
Cracked gray sidewalk
Orange-brown rust on light gray metal
Concrete with a cloudy texture caused by evaporating moisture
today's Textures
03.03.2026 14:37 — 👍 142 🔁 22 💬 3 📌 0
Don't forget the extra profit they earn by selling this technology to schools
03.03.2026 16:12 — 👍 36 🔁 9 💬 1 📌 0
because I keep forgetting to mention it: LITERALLY ANYONE can nominate books for this prize! yes! that means you! you have until march 31st!
03.03.2026 16:54 — 👍 220 🔁 129 💬 0 📌 2
lolwat
Fuck that dipshit
The moral panic over ebikes while motor vehicle fatalities have shot up over the last 15 years…is annoying.
In the US,
32,479 traffic fatalities in 2011.
42,795 motor vehicle fatalities in 2022.
en.wikipedia.org/wiki/Motor_v...
Like this isn’t an X The Everything App QRT.
03.03.2026 17:26 — 👍 0 🔁 0 💬 0 📌 0
Know your local fascists
03.03.2026 13:56 — 👍 28 🔁 9 💬 1 📌 1
HELP NEEDED HERE, IMPORTANT, FOR ANYONE WHO HAS USED CATARSE
Is it possible for me to publish a video promoting a project of mine using a famous song? You know those project-presentation videos; could I use a famous international song in the background?
Oh shit damn that rules!
03.03.2026 14:48 — 👍 0 🔁 0 💬 0 📌 0
Aeons Torn bless your heart with blood unalloyed by the consanguineous curse. (sorry if you got incest dna)
03.03.2026 14:06 — 👍 0 🔁 0 💬 1 📌 0
bella has been a victim of revenge porn over the last few days which is extremely fucked up and I hope you’ll join me in chipping in some $ to help pay for therapy and other support 🖖🏽
03.03.2026 13:13 — 👍 112 🔁 71 💬 1 📌 5
Wait, how does a kink swap so drastically like that???
03.03.2026 13:48 — 👍 3 🔁 0 💬 1 📌 0
They are literally running with “we have always been at war with Eurasia.” Unreal.
03.03.2026 13:35 — 👍 4862 🔁 1037 💬 154 📌 25
All Americans are targets now.
You have a big red target on your back, wherever you are.
Why do you have to use the Microsoft Windows logo on your shirt 😭
03.03.2026 03:22 — 👍 0 🔁 0 💬 0 📌 0