
Gary Lupyan

@glupyan.bsky.social

1,717 Followers  |  137 Following  |  339 Posts  |  Joined: 07.09.2023

Latest posts by glupyan.bsky.social on Bluesky

What *should* it look like though if perception *were* being penetrated by your knowledge that Kermit's just a puppet? ;)

15.02.2026 03:03 | 👍 0    🔁 0    💬 0    📌 0

happy to share thoughts/make suggestions if you want more feedback!

14.02.2026 19:20 | 👍 1    🔁 0    💬 0    📌 0

Yes it’s not just neuroscience! Scientific coding is gonna get a whole lot better and more usable (because let’s face it, the status quo is pretty pitiful)

14.02.2026 17:16 | 👍 3    🔁 0    💬 1    📌 0

I only get this news third-hand. We’re gonna have software on the moon or something?

14.02.2026 17:14 | 👍 1    🔁 0    💬 1    📌 0

would love to hear your thoughts!

14.02.2026 14:28 | 👍 1    🔁 0    💬 1    📌 0

Oh, it's published! pubmed.ncbi.nlm.nih.gov/41499467/#/ lemme know if you need a PDF

14.02.2026 05:26 | 👍 1    🔁 0    💬 1    📌 0

So this is the rub, right? What are we talking about with this AGI stuff? A theorem prover w/o common sense? A competent domain-general conversation partner that gets tripped up by some math? One of them is a lot closer to the intelligence we think is general enough (i.e., us).

14.02.2026 04:51 | 👍 0    🔁 0    💬 2    📌 0

I don’t know what I’m doing differently, but ChatGPT, Gemini, and Claude challenge me left and right. I don’t get emoji, I don’t get “that’s a brilliant insight”… just straight up reasonable responses and gentle pushback (with the occasional going off the rails).

14.02.2026 04:04 | 👍 2    🔁 0    💬 0    📌 0
We leverage these traces to present the first large-scale study (129,134 projects) of the adoption of coding agents on GitHub, finding an estimated adoption rate of 15.85%–22.60%, which is very high for a technology only a few months old–and increasing.

At what point do the stochastic parroters admit that maybe (just maybe) they were wrong? arxiv.org/abs/2601.183...

14.02.2026 04:00 | 👍 1    🔁 0    💬 0    📌 0

oh boy :)

14.02.2026 00:18 | 👍 0    🔁 0    💬 0    📌 0

(title was a bit of an editorial clickbait thing)

13.02.2026 23:16 | 👍 0    🔁 0    💬 0    📌 0
How might telepathy actually work outside the realm of sci-fi? | Aeon Essays. Clear and direct telepathic communication is unlikely to be developed. But brain-to-brain links still hold great promise.

And in part inspired by Mark’s essay, Andy Clark and I did another take on this at Aeon a few years later: aeon.co/essays/how-m...

13.02.2026 20:52 | 👍 3    🔁 0    💬 1    📌 0

If one considers what people do as reasoning, then concluding that LLMs do not "really" reason because they're "just" pattern matching makes the incorrect assumption that human reasoning is something else entirely.

13.02.2026 20:00 | 👍 0    🔁 0    💬 0    📌 0

People are talking past one another on this topic. *Human* reasoning is not sound or reliable. It's stochastic; it's context- and content-dependent. We make it sound by relying on external symbol systems (+machines that can do what our brains can't). ...

13.02.2026 19:55 | 👍 0    🔁 0    💬 2    📌 0

On this definition, humans do not have “natural” general intelligence.

13.02.2026 18:56 | 👍 1    🔁 0    💬 1    📌 0

human reasoning does not meet this bar of reasoning

13.02.2026 18:56 | 👍 0    🔁 0    💬 1    📌 0

My lab at UW Madison is hiring a new lab manager! I’m looking for a motivated and detail-oriented person to help run the lab’s day-to-day, including our ongoing neuroimaging and behavioral studies. This is especially well suited for graduating undergrads thinking about grad school. Link below

12.02.2026 18:20 | 👍 27    🔁 34    💬 1    📌 0

:) Well... I do think there's a really interesting convergence afoot in cog sci!

09.02.2026 20:37 | 👍 1    🔁 0    💬 0    📌 0

Ours wanted to help set up for dinner but didn’t want to carry the pasta because that was just one thing and he wanted to do 2. (You can guess how many he ended up doing).

08.02.2026 13:34 | 👍 1    🔁 0    💬 0    📌 0

For sure, and it's a fascinating development, but if you notice, what LLMs do is *partial* everything (very much like people imo): partial compositionality (e.g., Pavlick's work), ubiquitous context effects...

08.02.2026 04:13 | 👍 1    🔁 0    💬 0    📌 0
Preview
The unreasonable effectiveness of pattern matching: We report on an astonishing ability of large language models (LLMs) to make sense of "Jabberwocky" language in which most or all content words have been randomly replaced by nonsense strings, e.g., tr...

See sect 4: arxiv.org/abs/2601.114... Briefly: Leibniz & Boole were after laws of the mind. But ironically they ended up creating formalisms that allowed our pattern-matching minds to *do* math & logic (much) better. Their formalisms are useful precisely because our minds are naturally crap at it

08.02.2026 01:57 | 👍 1    🔁 0    💬 1    📌 0
Preview
What Is Intelligence? It has come as a shock to some AI researchers that a large neural net that predicts next words seems to produce a system with general intelligence. Yet this ...

💯 have a look at the recent book by @blaiseaguera.bsky.social for a good take: mitpress.mit.edu/978026204995...

07.02.2026 22:01 | 👍 1    🔁 0    💬 0    📌 0

…as I’m putting together some slides citing Leibniz and Boole to make the point that AI/ML ppl are stuck in the 17th/19th centuries in their thinking about what underlies human thinking

06.02.2026 22:29 | 👍 0    🔁 0    💬 2    📌 0

😆

06.02.2026 22:26 | 👍 0    🔁 0    💬 1    📌 0

<sigh> 👨‍🦳

05.02.2026 19:16 | 👍 1    🔁 0    💬 0    📌 0

Super interesting! With the Jabberwocky task, it can't anchor on specific tokens because they're completely obfuscated, but it can anchor to high-level patterns, e.g., inferred clusters of verb phrases w/ certain punctuation. Numbers are also useful. But as best I can tell, there isn't a must-have cue.

05.02.2026 19:15 | 👍 1    🔁 0    💬 0    📌 0
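A minimal sketch of the kind of Jabberwocky manipulation discussed above (content words swapped for nonsense strings while function words and punctuation are kept). The function-word list, nonsense-syllable generator, and example sentence here are illustrative assumptions, not the actual pipeline from arxiv.org/abs/2601.114...:

# Sketch: build Jabberwocky-style text by replacing content words with
# pronounceable nonsense strings, leaving function words and punctuation intact.
import random
import re

# Illustrative (incomplete) function-word list; a real study would use a curated one.
FUNCTION_WORDS = {
    "the", "a", "an", "of", "in", "on", "to", "and", "or", "but", "that",
    "is", "are", "was", "were", "it", "he", "she", "they", "we", "you", "i",
    "his", "her", "their", "with", "for", "as", "at", "by", "from", "not",
}

ONSETS = ["tr", "gl", "br", "sn", "fl", "pr", "kr", "dw"]
VOWELS = ["a", "e", "i", "o", "u", "oo", "ai"]
CODAS = ["m", "p", "st", "nd", "rk", "sh", "l", "g"]

def nonsense_word(rng: random.Random) -> str:
    """Assemble a pronounceable nonsense string from onset+vowel+coda syllables."""
    return "".join(
        rng.choice(ONSETS) + rng.choice(VOWELS) + rng.choice(CODAS)
        for _ in range(rng.choice([1, 2]))
    )

def jabberwockify(text: str, seed: int = 0) -> str:
    """Replace content words with nonsense strings; keep everything else as-is."""
    rng = random.Random(seed)
    mapping: dict[str, str] = {}  # repeated content words keep the same replacement

    def replace(match: re.Match) -> str:
        word = match.group(0)
        if word.lower() in FUNCTION_WORDS:
            return word
        if word.lower() not in mapping:
            mapping[word.lower()] = nonsense_word(rng)
        new = mapping[word.lower()]
        return new.capitalize() if word[0].isupper() else new

    return re.sub(r"[A-Za-z]+", replace, text)

print(jabberwockify("The professor explained the theory to her students."))
# Function words ("The", "the", "to", "her") and punctuation survive; the content
# words come out as nonsense strings, so only the sentence frame carries meaning.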
Comic. [Person talking to person with black hat.] PERSON 1: Historically, it refers to a ceremony to predict the weather using a rodent. But nowadays people often use it to mean “a time loop experienced by one person.” PERSON 2: …What. [caption] Easily our weirdest holiday.

Groundhog Day Meaning

xkcd.com/3202/

04.02.2026 21:35 | 👍 6942    🔁 1253    💬 57    📌 40
Preview
The unreasonable effectiveness of pattern matching: We report on an astonishing ability of large language models (LLMs) to make sense of "Jabberwocky" language in which most or all content words have been randomly replaced by nonsense strings, e.g., tr...

How about this? arxiv.org/abs/2601.114...

04.02.2026 23:16 | 👍 0    🔁 0    💬 0    📌 0
Preview
The unreasonable effectiveness of pattern matching: We report on an astonishing ability of large language models (LLMs) to make sense of "Jabberwocky" language in which most or all content words have been randomly replaced by nonsense strings, e.g., tr...

arxiv.org/abs/2601.114...

04.02.2026 23:15 | 👍 0    🔁 0    💬 1    📌 0

Fully aware that I’m opening a can of worms (and I should just find the time to read your paper in its entirety), but what is the basis of the claim that human behavior is stimulus free (or can be stimulus free)?

03.02.2026 15:25 | 👍 1    🔁 0    💬 1    📌 0
