What *should* it look like though if perception *were* being penetrated by your knowledge that Kermit's just a puppet? ;)
· 15.02.2026 03:03 · @glupyan.bsky.social

happy to share thoughts/make suggestions if you want more feedback!
· 14.02.2026 19:20

Yes, it's not just neuroscience! Scientific coding is gonna get a whole lot better and more usable (because let's face it, the status quo is pretty pitiful).
· 14.02.2026 17:16

I only get these news third-hand. We're gonna have software on the moon or something?
· 14.02.2026 17:14

would love to hear your thoughts!
· 14.02.2026 14:28

Oh, it's published! pubmed.ncbi.nlm.nih.gov/41499467/#/ Lemme know if you need a PDF.
· 14.02.2026 05:26

So this is the rub, right? What are we talking about with this AGI stuff? A theorem prover w/o common sense? A competent domain-general conversation partner that gets tripped up by some math? One of them is a lot closer to the intelligence we think is general enough (i.e., us).
· 14.02.2026 04:51

I don't know what I'm doing differently, but ChatGPT, Gemini, Claude challenge me left and right. I don't get emoji, I don't get "that's a brilliant insight"… just straight-up reasonable responses and gentle pushback (with the occasional going off the rails).
· 14.02.2026 04:04

"We leverage these traces to present the first large-scale study (129,134 projects) of the adoption of coding agents on GitHub, finding an estimated adoption rate of 15.85%–22.60%, which is very high for a technology only a few months old – and increasing."
At what point do the stochastic parroters admit that maybe (just maybe) they were wrong? arxiv.org/abs/2601.183...
· 14.02.2026 04:00
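The quoted range is easy to sanity-check with back-of-the-envelope arithmetic. A minimal sketch, assuming the bounds correspond to lower/upper detected-project counts — the counts below are reconstructed from the quoted percentages for illustration, not taken from the paper:

```python
# Back-of-the-envelope check of the quoted adoption range.
# NOTE: the detected-project counts are reconstructed from the quoted
# percentages for illustration; they are NOT from the paper itself.
TOTAL_PROJECTS = 129_134  # study size quoted in the post

def adoption_rate(detected: int, total: int = TOTAL_PROJECTS) -> float:
    """Adoption rate as a percentage of all studied projects, 2 decimals."""
    return round(100 * detected / total, 2)

low_detected = 20_468    # assumed lower-bound detection count
high_detected = 29_184   # assumed upper-bound detection count

print(adoption_rate(low_detected))   # 15.85
print(adoption_rate(high_detected))  # 22.6
```

The spread between the two bounds would then reflect uncertainty in *detecting* agent use, not sampling error: the denominator is the full population of studied projects.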
oh boy :)
· 14.02.2026 00:18

(title was a bit of an editorial clickbait thing)
· 13.02.2026 23:16

And in part inspired by Mark's essay, Andy Clark and I did another take on this at Aeon a few years after: aeon.co/essays/how-m...
· 13.02.2026 20:52

If one considers what people do as reasoning, then concluding that LLMs do not "really" reason because they're "just" pattern matching makes the incorrect assumption that human reasoning is something else entirely.
· 13.02.2026 20:00

People are talking past one another on this topic. *Human* reasoning is not sound or reliable. It's stochastic; it's context- and content-dependent. We make it sound by relying on external symbol systems (+ machines that can do what our brains can't). ...
· 13.02.2026 19:55
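The "external symbol systems + machines" point can be made concrete with a toy example: a machine exhaustively verifies a logical law that a pattern-matching mind can only spot-check. A minimal sketch — the choice of De Morgan's law as the example is mine, not the thread's:

```python
from itertools import product

def law_holds(p: bool, q: bool) -> bool:
    """De Morgan's law: not (p and q) is equivalent to (not p) or (not q)."""
    return (not (p and q)) == ((not p) or (not q))

# The machine checks every truth assignment; soundness comes from
# exhaustive mechanical checking, not from intuition.
assert all(law_holds(p, q) for p, q in product([False, True], repeat=2))
print("De Morgan's law holds for all truth assignments")
```

Two variables means only four cases, but the same brute-force pattern scales mechanically to formulas far beyond what unaided human reasoning can verify reliably.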
on this definition, humans do not have "natural" general intelligence.
· 13.02.2026 18:56

human reasoning does not meet this bar of reasoning
· 13.02.2026 18:56

My lab at UW Madison is hiring a new lab manager! I'm looking for a motivated and detail-oriented person to help run the lab's day-to-day, including our ongoing neuroimaging and behavioral studies. This is especially well suited for graduating undergrads thinking about grad school. Link below
· 12.02.2026 18:20

:) Well... I do think there's a really interesting convergence afoot in cog sci!
· 09.02.2026 20:37

Ours wanted to help set up for dinner but didn't want to carry the pasta because that was just one thing and he wanted to do 2. (You can guess how many he ended up doing.)
· 08.02.2026 13:34

For sure, and it's a fascinating development, but if you notice, what LLMs do is *partial* everything (very much like people imo): partial compositionality (e.g., Pavlick's work), ubiquitous context effects...
· 08.02.2026 04:13

See sect. 4: arxiv.org/abs/2601.114... Briefly: Leibniz & Boole were after laws of the mind. But ironically they ended up creating formalisms that allowed our pattern-matching minds to *do* math & logic (much) better. Their formalisms are useful precisely because our minds are naturally crap at it.
· 08.02.2026 01:57

💯 Have a look at @blaiseaguera.bsky.social's recent book for a good take: mitpress.mit.edu/978026204995...
· 07.02.2026 22:01

…as I'm putting together some slides citing Leibniz and Boole to make the point that AI/ML ppl are stuck in the 17th/19th centuries in their thinking about what underlies human thinking
· 06.02.2026 22:29

[emoji]
· 06.02.2026 22:26

<sigh> 👨‍🦳
· 05.02.2026 19:16

Super interesting! With the Jabberwocky task, it can't anchor on specific tokens because they're completely obfuscated, but it can anchor to high-level patterns, e.g., an inferred cluster of verb phrases w/ certain punctuation. Numbers are also useful. But as best I can tell, there isn't a must-have cue.
· 05.02.2026 19:15
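What "tokens completely obfuscated but high-level patterns preserved" might look like can be sketched as a Jabberwocky-style transform. A minimal illustration, assuming words are replaced with length-matched pronounceable nonsense while punctuation and numbers survive — the transform and letter pools are mine, not from the study being discussed:

```python
import random
import re

def jabberwockify(text: str, seed: int = 0) -> str:
    """Replace alphabetic words with length-matched nonsense strings,
    keeping punctuation, spacing, and numbers intact."""
    rng = random.Random(seed)  # seeded so stimuli are reproducible
    consonants, vowels = "bcdfglmnprstv", "aeiou"

    def nonsense(match: re.Match) -> str:
        word = match.group(0)
        # Alternate consonant/vowel so the result stays pronounceable.
        letters = [rng.choice(consonants if i % 2 == 0 else vowels)
                   for i in range(len(word))]
        out = "".join(letters)
        return out.capitalize() if word[0].isupper() else out

    # Only alphabetic runs are obfuscated; digits and punctuation pass through,
    # so sentence shape, clause boundaries, and numbers remain as cues.
    return re.sub(r"[A-Za-z]+", nonsense, text)

obfuscated = jabberwockify("The 3 dogs barked, loudly!")
# Every word is nonsense, but the comma, the "!", and the "3" survive.
```

Because word lengths, capitalization, punctuation, and numerals are preserved, a reader (or model) can still infer phrase structure without any lexical anchor — exactly the kind of high-level cue the post describes.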
Comic. [Person talking to person with black hat.] PERSON 1: Historically, it refers to a ceremony to predict the weather using a rodent. But nowadays people often use it to mean "a time loop experienced by one person." PERSON 2: …What. [caption] Easily our weirdest holiday.
Groundhog Day Meaning
xkcd.com/3202/

How about this? arxiv.org/abs/2601.114...
· 04.02.2026 23:16

Fully aware that I'm opening a can of worms (and I should just find the time to read your paper in its entirety), but what is the basis of the claim that human behavior is stimulus-free (or can be stimulus-free)?
· 03.02.2026 15:25