Personally, I believe that procreation is not, in and of itself, a moral concern, though, of course, as with any action, there are secondary concerns to consider. Many pro- and anti-natalists focus their arguments on these secondary concerns, which is fine. Replacement anti-natalists do not.
09.10.2025 15:00 · 3 · 0 · 0 · 0
*Priest card
08.10.2025 14:02 · 0 · 0 · 0 · 0
Exactly! The Turing Test assumes humans are good at differentiating text with meaning from text without meaning. Our hubris told us we were, but we should have known better. The fact that humans are often fooled by cold readings and other cons should have been warning enough that we were wrong.
03.10.2025 13:36 · 0 · 0 · 0 · 0
That's why I wait for tech to be proven. It either:
1) Gives cheap and fast productivity boosts: I can integrate and catch up quickly
2) Gives expensive or slow productivity boosts: Nobody got significantly ahead while I waited
3) Gives no productivity boost: I didn't waste time/money on it
24.09.2025 15:40 · 1 · 1 · 0 · 0
Algorithmic social media streams do not work in favor of the user. Many people try to solve that with block lists, but that puts the power over your attention into someone else's hands. I prefer to just avoid algorithmic social media altogether, and use follow-only streams where possible.
24.09.2025 13:58 · 4 · 0 · 0 · 0
I haven't played the game in weeks now, and this balance change is not bringing me back. The game has been too stale for too long. It doesn't matter whether it's balanced if it's just the same stuff we were playing half a year ago.
17.09.2025 16:39 · 0 · 0 · 0 · 0
I have a feeling that in 10 years, I may need to launch the Center for the Alignment of Alignment Centers for AI Alignment Centers.
11.09.2025 15:41 · 14 · 1 · 1 · 0
The most popular LLMs all produce sycophantic output (which is a big part of why people are becoming addicted to them), so a simple prompt like "Write a glowing review for this book" would likely be enough to pattern-match to blurbs like these.
09.09.2025 14:46 · 0 · 0 · 0 · 0
I find it hard to believe they didn't know this mini set would be very low-impact, so the question is, why were they fine with that?
03.09.2025 19:48 · 0 · 0 · 0 · 0
It's easy to block certain words, but any content moderator knows it is hard to automate blocking the discussion of specific topics. The words you block can have legitimate uses ("I'm hosting a murder mystery party...") and the topics you don't want discussed can use words you didn't think to block.
26.08.2025 18:26 · 0 · 0 · 1 · 0
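The asymmetry in that post can be sketched in a few lines. This is a toy illustration, not anyone's production moderation code; the blocklist and both example sentences are hypothetical:

```python
import re

# Hypothetical blocklist; real moderation systems are far more involved.
BLOCKED_WORDS = {"murder", "kill"}

def is_blocked(text: str) -> bool:
    """Flag text if any blocked word appears as a whole word, ignoring case."""
    words = re.findall(r"[a-z']+", text.lower())
    return any(w in BLOCKED_WORDS for w in words)

# False positive: a legitimate use of a blocked word.
print(is_blocked("I'm hosting a murder mystery party this weekend"))  # True

# False negative: the unwanted topic, phrased without any blocked word.
print(is_blocked("Let's make them disappear permanently"))  # False
```

Word matching is the easy half; the hard half, detecting the *topic* in the second sentence, is exactly what a blocklist cannot do.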
A Teen Was Suicidal. ChatGPT Was the Friend He Confided In.
Adam Raine, 16, died from suicide in April after months on ChatGPT discussing plans to end his life. His parents have filed the first known case against OpenAI for wrongful death.
Overwhelming at times to work on this story, but here it is. My latest on AI chatbots: www.nytimes.com/2025/08/26/t...
26.08.2025 13:01 · 4646 · 1745 · 114 · 578
This isn't just about benevolence. They couldn't make it reliably safe if they tried, because of how LLMs work. The benevolent thing to do would be to not market them as chatbots at all, and not to train them to respond with anthropomorphic language.
26.08.2025 15:48 · 16 · 0 · 2 · 0
I would argue that they need to be carefully boxed in even if they are only attached to humans. Even simple chatbots are causing a lot of real-world problems due to the human mind being predisposed to believe their words have meaning.
22.07.2025 19:43 · 3 · 0 · 0 · 0
Buff-wise, I'm not so sure. Yes, the quests aren't doing well, but the meta usually sucks when quests are doing well. Some could have minor adjustments to pull them up to a playable level (many are hot garbage as it stands), but they really don't need to be tier 1.
15.07.2025 18:43 · 0 · 0 · 0 · 0
More than anything, I think Menagerie Jug needs a change. After that, Careless Crafter and/or Resuscitate should probably change. Otherwise, OTK Priest will dominate.
Then the obvious, but not as important, Murloc Paladin and Loh. Both need a slight tone down, but probably nothing too serious.
15.07.2025 18:39 · 1 · 0 · 1 · 0
I really enjoyed it. Mr. Terrific in particular was a pleasant surprise. Hopefully, there are future projects where he gets more time to shine.
15.07.2025 12:25 · 0 · 0 · 0 · 0
I remember learning about this competition when I was at university in 2001. It was a well-established event even then!
30.06.2025 21:24 · 0 · 0 · 0 · 0
Even in the case of pure memorization, LLMs do not learn the same way. If you show me a block of random words, I may exactly memorize a small number of them very fast, but I would not be able to reproduce the word frequencies in a large piece of text. The reverse is true for an LLM.
24.06.2025 21:53 · 0 · 0 · 3 · 0
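The contrast in that post is easy to make concrete: word-frequency statistics are trivial for a statistical model to capture, while exact recall of a few specific items is the human-easy task. A toy illustration (the sample sentence is made up, and this is obviously not an LLM):

```python
from collections import Counter

text = "the cat sat on the mat and the cat slept"
tokens = text.split()

# Frequency statistics: captured exactly by a simple count,
# but not something a human retains from one read of a long text.
freqs = Counter(tokens)
print(freqs["the"])  # 3
print(freqs["cat"])  # 2

# Exact recall of a specific item: easy for a human who memorized it,
# but a pure frequency model keeps no notion of "the 4th word I saw".
print(tokens[3])  # on
```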
I want to make that comparison because the court case, which this thread is about, was about that.
The court case isn't about what LLMs may be able to do. It is specifically about how LLMs learn now.
24.06.2025 21:22 · 0 · 0 · 1 · 0
How are any of these questions relevant to the original point, which is specifically about how humans and current LLMs respectively learn from text? I've never stated that the underlying mechanisms are different, just that the process is different (and that difference is important).
24.06.2025 21:16 · 0 · 0 · 0 · 0
How many times are you going to change the topic of discussion? This is not about LLMs' potential; this is about how they learn right now.
24.06.2025 21:02 · 1 · 0 · 1 · 0
Once again, this is not what we are talking about. The claim was that current LLMs learn the same way humans do. Whether they could be updated to do so in the future is irrelevant to the fact that they currently do not.
24.06.2025 21:00 · 0 · 0 · 0 · 0
If they could do it, they would have. The reason they haven't is that these are hard problems to solve. But none of those things relate to the topic here, which is how current LLMs do not learn the same way humans do.
24.06.2025 20:58 · 0 · 0 · 0 · 0
None of this is relevant to the original topic: Current LLMs learn by us directly updating their weights to make the output closer to what is expected; humans learn by contemplating meaning and then updating their understanding (which may be neural weights). That is what the case is about.
24.06.2025 20:55 · 0 · 0 · 1 · 0
Sure, that is arguably true (although we don't know the full extent of human learning mechanisms), but my point is that humans reflect on the meaning of a text before updating. This is why we can permanently learn a whole new concept from a single sentence, and LLMs cannot.
24.06.2025 20:44 · 1 · 0 · 1 · 0
1) I didn't realize a full theory of the brain had been published. Please point me to it.
2) I never said there was, and my statement still holds if what you said was true.
3) I am conscious of various high-level concepts. Those concepts must be stored somewhere (even if only temporarily).
24.06.2025 20:11 · 0 · 0 · 1 · 0
Even if you want to argue that an LLM has some conceptual model of the world, its training mechanism does not. All it does is adjust weights of the model until the output is what we want it to be. At no point in the training process can the model reflect on the meaning of the text.
24.06.2025 19:35 · 1 · 0 · 1 · 0
When an LLM is trained on a text, it just updates the weights of likely next words based on the word distribution in the text. When a human reads, they transform those words into concepts, evaluate those concepts, and then perhaps update their worldview. Very different modes of internal updating.
24.06.2025 19:29 · 1 · 1 · 4 · 0
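The update that post describes can be sketched in a few lines: gradient descent on a cross-entropy loss nudges next-word logits toward the training text's word distribution, and at no point is there a step where the model evaluates meaning. This is a toy bigram model with a made-up four-word vocabulary, a deliberately crude stand-in for real LLM training:

```python
import math
import random

vocab = ["the", "cat", "sat", "mat"]
V = len(vocab)
random.seed(0)
# W[i][j]: logit that word j follows word i
W = [[random.gauss(0, 0.1) for _ in range(V)] for _ in range(V)]

def softmax(logits):
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def train_step(prev_id, next_id, lr=0.5):
    """One cross-entropy gradient step: push P(next | prev) upward."""
    probs = softmax(W[prev_id])
    for j in range(V):
        grad = probs[j] - (1.0 if j == next_id else 0.0)
        W[prev_id][j] -= lr * grad  # pure arithmetic on weights, nothing else

# "Training text" contains the pairs (the -> cat) and (cat -> sat).
before = softmax(W[0])[1]            # P("cat" | "the") before training
for _ in range(50):
    train_step(0, 1)
    train_step(1, 2)
after = softmax(W[0])[1]
print(after > before)  # True: the probability rose, no meaning involved
```

Every line of `train_step` is mechanical weight adjustment toward the expected output, which is the whole point of the contrast with a human reader.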
There's a third thing at play as well, I think. Pure arrogance. Some of them genuinely believe they are the most intelligent humans alive, and that they've created something that is already more intelligent than everyone else (but everyone else is just too dumb to see it).
20.06.2025 14:11 · 1 · 0 · 0 · 0
The number of times I burnt the DH Questline reward...
20.06.2025 14:05 · 0 · 0 · 0 · 0