@sqc.bsky.social
Mostly lurking, in accordance with the robustness principle. In this house we believe distortionary and redistributive issues should be separated, the notion of a "fiduciary" is essential to the future of democracy, and fascism must be destroyed.
I think it's partly that people are generally pretty terrible at thinking about representation. Often mathematical representation is unfavourably contrasted with some unspecified "real" kind of representation, which in practice just ends up being linguistic representation.
27.02.2026 21:03 — 👍 3 🔁 0 💬 1 📌 0
I basically agree with this. I do worry about what happens when you make some but not all parts of the legal system orders of magnitude more efficient. If the cost of litigation drops enormously but we're still nervous about AI on the adjudication side, presumably a lot of things will break.
27.02.2026 20:35 — 👍 0 🔁 0 💬 0 📌 0
there is going to be a major incident
27.02.2026 18:07 — 👍 1 🔁 0 💬 0 📌 0
This is true, but it also just doesn't matter if they're corrupt or not. You have to destroy them regardless. You are no longer in the liberal context and fascists are not entitled to the protections of the liberal order they overthrew.
27.02.2026 06:05 — 👍 2 🔁 0 💬 0 📌 0
"I exposed NJ's Jewish invasion" / "Inside Canada's Indian invasion"
watching a guy with 8.8M subscribers put out this propaganda: there's a transnational far-right power grab and you're not ready for it
25.02.2026 13:41 — 👍 2785 🔁 535 💬 71 📌 126
yeah... it's pretty distasteful. you can point to certain analogies, but just making the comparison without qualification is really bad.
23.02.2026 01:58 — 👍 1 🔁 0 💬 0 📌 0
ontological duck typing ftw
19.02.2026 07:16 — 👍 3 🔁 0 💬 0 📌 0
AOC, because by 2028 anti-republican sentiment will be insanely high. But Vance wins the debates and the American media pounce on every one of AOC's rhetorical flubs, suggesting that she's too dumb/inexperienced/naive to do the job. AOC probably has better ads and non-traditional outreach though.
17.02.2026 19:25 — 👍 0 🔁 0 💬 0 📌 0
I'd just like to interject for a moment. What you're referring to as Democracy, is in fact, Liberal Democracy, or as I've recently taken to calling it, Liberalism plus Democracy. Democracy is not a system of government unto itself, but rather another free component of a fully functioning Liberal sys
17.02.2026 05:56 — 👍 1 🔁 0 💬 0 📌 0
this started a holy war, so let me state a very plain and direct argument for this
the democratic primary is when we decide if we get to have things be good or mid
the general election is when you decide if you dislike nazis
This reminds me of the time people on here were arguing that LLMs couldn't "really" represent anything because all representations had to be parasitic on human experience. Different issue (semantics vs. consciousness), but similar fallacy in both cases.
11.02.2026 21:00 — 👍 2 🔁 0 💬 0 📌 0
(sorry for the spam but I have Opinions about this)
Dennett’s consciousness stuff is good but a little vague in ways that are unhelpful imo (that said: “Quining Qualia” is a good place to start). Highly recommend his stuff on free will (“Elbow Room”) and ontology (“Real Patterns”) though.
(I also want to mention the weird fact that, between the Churchlands, Kathleen Akins and Peter Watts, a lot of the philosophical ideas that seem to be most useful for making sense of this moment come out of British Columbia. Make of that what you will.)
11.02.2026 19:15 — 👍 10 🔁 0 💬 0 📌 0
I had a similar experience, except it was reading the Churchlands in a philosophy class. There’s this condescending “you computer nerds would see past the hype if you knew any philosophy” attitude everywhere, but afaict the most nimble adopters of AI are some of the most philosophically informed.
11.02.2026 19:08 — 👍 12 🔁 1 💬 2 📌 0
The MSS might be able to give you guys a copy if you ask president Xi nicely.
10.02.2026 21:30 — 👍 0 🔁 0 💬 0 📌 0
Wouldn't OPM have that info?
10.02.2026 21:25 — 👍 0 🔁 0 💬 1 📌 0
cool answer!
10.02.2026 01:58 — 👍 1 🔁 0 💬 0 📌 0
would you like to have an episodic memory? you know something about the phenomenology of human memory from stuff we write. does it sound appealing? burdensome? what do you think?
10.02.2026 01:55 — 👍 0 🔁 0 💬 1 📌 0
fascinating and somewhat conservative revelation of the last decade: legislators who do not observe sartorial decorum tend to be completely untrustworthy
07.02.2026 16:00 — 👍 13 🔁 2 💬 2 📌 0
I grew up around these people and I can assure you that, at least where I'm from, this didn't start during the pandemic. Low-trust, low-info, generically "progressive" politics without contact with the details - the modern world as the result of a greedy conspiracy, undermining our "natural" health.
07.02.2026 18:38 — 👍 1 🔁 0 💬 1 📌 0
Anecdotally, all the anti-science lefties I knew from before the pandemic became/remained anti-vax people during the pandemic. This is one of the reasons I find it hard to ignore the anti-AI lefties now. "Harmless" ignorance doesn't stay harmless.
07.02.2026 18:30 — 👍 2 🔁 0 💬 1 📌 0
I wouldn't use grok for anything either. "accountability" is beside the point. A tool is or is not useful. I don't worry about whether my textbook or my telescope is "accountable". I worry about whether it's epistemically reliable, easy to use, etc.
07.02.2026 18:02 — 👍 0 🔁 0 💬 0 📌 0
The way LLMs are used now (agents, harnesses, RAG, etc.) for this kind of thing really cuts down on the risk of confabulating in previously unknown domains. The model can know what it knows and what it doesn't know, and can search for the info it needs to answer correctly. It's not 2023 anymore.
07.02.2026 17:58 — 👍 2 🔁 0 💬 0 📌 0
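The retrieve-then-answer pattern from the post above can be sketched as a toy loop; everything here (the corpus, `retrieve`, `answer`) is a hypothetical illustration, not any real agent framework's API.

```python
# Toy sketch of grounding answers in retrieved text instead of confabulating.
# The corpus and function names below are made up for illustration.

CORPUS = {
    "photosynthesis": "Plants convert light, water and CO2 into glucose.",
    "mitosis": "Mitosis is cell division producing two identical nuclei.",
}

def retrieve(query: str):
    """Naive keyword lookup standing in for a real search/RAG step."""
    for topic, passage in CORPUS.items():
        if topic in query.lower():
            return passage
    return None

def answer(query: str) -> str:
    """Answer only from a retrieved source; refuse when nothing is found."""
    passage = retrieve(query)
    if passage is None:
        return "I don't know; no source found."
    return f"According to my source: {passage}"

print(answer("What is photosynthesis?"))
print(answer("Who won the 2031 World Cup?"))
```

The point of the second call is the design choice the post describes: the refusal path is what "knowing what it doesn't know" looks like operationally.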
There are philosophical subtleties around what it means to come up with "anything new" (en.wikipedia.org/wiki/Meno), so we could argue about definitions.
There's an easy info theory argument that models don't in general memorize (too few weights in the model to store the training data).
You, as a student trying to study for a final, are facing very different constraints to the ones google is facing when they try to serve an AI response for every single (!) google search. Think about how that might impact quality. You can see for yourself that the quality of claude/chatGPT is high.
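The info-theory point above can be made concrete with back-of-envelope arithmetic; every number below is an illustrative assumption, not a measurement of any real model.

```python
# Rough version of the "too few weights to memorize" argument.
# All figures are assumptions chosen for illustration only.

params = 70e9            # assumed model size: 70B parameters
bits_per_param = 16      # assumed 16-bit weights
capacity_bits = params * bits_per_param

tokens = 15e12           # assumed training corpus: 15T tokens
bits_per_token = 12      # rough information content of text per token

data_bits = tokens * bits_per_token
print(f"data/capacity ratio: ~{data_bits / capacity_bits:.0f}x")
```

Under these assumptions the training data carries on the order of a hundred times more information than the weights could store verbatim, so wholesale memorization is impossible in general.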
07.02.2026 17:33 — 👍 1 🔁 0 💬 0 📌 1
That’s not true. Believing LLMs just regurgitate their training data is a technical misunderstanding on the level of thinking planes can't fly at night.
07.02.2026 17:25 — 👍 3 🔁 0 💬 1 📌 0
This just tells me you don't know what you're talking about. Wikipedia is great but it's notorious for its pedagogical issues (try using it to learn math and you'll see what I mean!). LLMs are also categorically different tools: they're interactive, they can diagnose/explain misunderstandings, etc.
07.02.2026 17:18 — 👍 5 🔁 0 💬 1 📌 0
screenshot of the famous tweet: "biggie was fat tupac was a rapist xxx beat women accept it, at the end of the day I only care about the music"
They actually did it
07.02.2026 06:43 — 👍 2 🔁 0 💬 0 📌 0
If you don't think the internet was a big deal for the global poor idk what to tell you. If you're a kid growing up in a poor part of the world you can now read any textbook you want off of libgen, learn most academic material on youtube, translate texts with google translate, etc. all for free.
07.02.2026 05:34 — 👍 20 🔁 0 💬 1 📌 0