It does seem that way! :)
If you want an example of how to teach in a fun way, I've long been a fan of Lockhart's "A Mathematician's Lament." (link below)
Yes, radical change and making it fun will require a major investment in our education system.
So what? It's worth it.
worrydream.com/refs/Lockhar...
n/n
We may also have to let kids have more freedom in which topics they pursue so that they develop strong skills in some things even if it costs skills in others.
We might also have to consider removing standard grading, which is a sure way to turn something fun into something stressful. Or at least we need to substantially re-think it so that it doesn't encourage gaming it.
With LLMs making it easier than ever to avoid doing the work, it seems the only way to save the next generation is to make school enjoyable.
We have to stop insisting on boring context-free work because that's "just how it is, you'll understand when you're older." Give them an immediate motive to do it.
The thing is, kids do often do that and can become quite skilled at various tasks even at young ages. They just don't realize it. (Consider the dinosaur phase.)
They don't realize it because they enjoy the "suffering." They'd do it independent of long-term skill outcome.
Kids regularly ask the question "why do I need to do this work if an AI can do the same task?" (Many versions of this question in prior gens.)
What isn't understood until much later (if at all) is to develop advanced skill you must first "suffer" with the basics for a long time.
A morning hot take:
I'm starting to think the only way to save the education of the next generation in the era of LLMs is to massively invest in its funding and radically change its approach to make it fun instead of stressful "work."
/n
Since the demonstration of the Turing completeness of AR transformers I was fully expecting something like this to be taken more seriously.
I was not expecting the log time inference though!
This is great, but like a proper nerd I'm going to ask questions about the joke :P
What does "no seed" mean since you can't actually have no seed? Library-default initialization? /dev/urandom?
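To illustrate the two interpretations, here's a minimal Python sketch (using CPython's `random` module purely as an example; "no seed" there means seeding from OS entropy via `os.urandom`, falling back to system time):

```python
import random

# "No seed": CPython's random.seed(None) pulls entropy from the OS
# (os.urandom) when available, falling back to system time.
a = random.Random()    # library-default initialization, differs per run
b = random.Random(42)  # explicit seed: reproducible
c = random.Random(42)  # same seed -> identical stream

print(a.random())                 # not reproducible across runs
print(b.random() == c.random())   # same seed, same draws
```

So "no seed" usually just means the library picked one for you from `/dev/urandom`-style entropy, not that no seed exists.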
Claude is generating the text it predicts an assistant would say.
It has no self, so you could say it cannot "play act," or that it's all "play acting," depending on your definition. If we didn't stop the model, it would also generate the text for the user's next turn, and so on. It's all the same to it.
Incidentally, these properties make it a great language for people too.
Sorry, that strikes me as a terribly useless definition of consciousness.
You can declare anything if you pick useless definitions for things.
I don't think it's meant as a distraction from that so much as I think they do it because it hypes their product. And I think that holds whether they're deliberately doing it for that reason, or because it makes them feel good about their work to believe it's true.
No, it does not need to be binary.
But LLMs lack enough properties tightly linked with conscious experience that it's not worth entertaining that they have any at all.
There are definitely people like this, but we should distinguish people with this position from those with the more sane position that Claude is not conscious.
For example, I've long held that computers can be conscious, but I definitely do not think Claude is.
UPDATE: Sunny, a U.S. citizen, is on her way home after being shipped across state lines overnight.
How the hell did this happen?
Press conference tomorrow at Broadview, where we will hear Sunny’s side of the story. More details soon.
Hanne Daguman said she "genuinely feared for [her] health" after being denied insulin, causing her to lose vision and collapse.
Hegseth: "The only ones who need to be worried right now are Iranians who think they're going to live"
This is on point.
It's mostly reflective of tech industry executives or others trying to sell something, rather than researchers, but I must sadly admit that some researchers act this way too.
AI research:
Researcher: Claude, please eat ten hamburgers.
Claude: Done! I have eaten ten hamburgers. The first two were delicious, but after that I began to experience bloating and the meat sweats.
Headline: Anthropic Says Claude has "A Fully Developed Digestive System"
I agree, and I think you can make this stronger by pointing out that there are _many_ different AI technologies.
For example, many of the ethical concerns about AI tech involve training on people's data.
But there are also AI technologies that learn from self-generated experiences.
I need to stop reading news in the morning.
Nice write up!
Maybe you can answer this: I believe associative scans prefer the sequence dim first, but attention usually puts it later.
DeltaNets are a little different, but I think they still use a chunked associative scan? So do hybrid models suffer from conflicting dimension-ordering preferences?
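A toy NumPy sketch of the layout mismatch I mean (shapes and the prefix-sum stand-in for a real scan are my assumptions, not any particular library's convention):

```python
import numpy as np

# Hypothetical shapes: scan kernels often want the scanned (sequence)
# axis leading, while attention code usually carries (batch, seq, dim)
# with the sequence axis in the middle.
seq, batch, dim = 8, 2, 4
x_scan = np.arange(seq * batch * dim, dtype=float).reshape(seq, batch, dim)

# Toy "associative scan" (a prefix sum) over the leading sequence axis:
y = np.cumsum(x_scan, axis=0)

# Attention-style layout, sequence axis second:
x_attn = np.moveaxis(x_scan, 0, 1)   # (batch, seq, dim)

# A hybrid model alternating both layer types pays for transposes
# like this at every layout boundary:
x_back = np.moveaxis(x_attn, 1, 0)
assert np.array_equal(x_back, x_scan)
```

The cost is just the transposes (or strided access) at each boundary, which is what I'd expect hybrid models to have to eat.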
Wow. This is mind-numbingly stupid.
An ICE detainee in Arizona has died of a TOOTH INFECTION after it went untreated for weeks, a local official says. He was a Haitian asylum seeker imprisoned in Florence, Arizona. @emilybregel.bsky.social reports.
tucson.com/news/local/b...
[TW: graphic fracture, sound of breaking bone]
Sen. Tim Sheehy (R-Montana) badly breaking the arm of a Marine veteran protesting the war with Iran.
Absolutely grotesque.
Jayapal to Noem: "I want to introduce you to just four of the US citizens unlawfully detained by ICE ... they're in this room with us."
Man, "rewrite it in Rust" is being made pretty easy by LLMs.
(Ideally there'd be more significant design changes for a difference in language, but still...)