@danieljbutler.bsky.social
interests: software, neuroscience, causality, philosophy | ex: salk institute, u of washington, MIT | djbutler.github.io
It would be fun for a benchmark to focus on problems that are more "visual" - truths that are easy for humans to "see" but hard for them to prove formally
23.07.2025 11:36
Isn't natural language still awfully close to a formal / symbolic domain? Human mathematical intuition seems grounded in spatiotemporal relationships, not natural language.
22.07.2025 13:20
It could be called "turbulence"
17.07.2025 11:09
The Bluesky Python SDK is so cool!
14.07.2025 01:55
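For a sense of what that looks like in practice, here is a minimal sketch assuming the atproto package (pip install atproto); the handle, app password, and post text are placeholders, and exact response attributes may differ slightly between SDK versions.

```python
# Minimal sketch using the Bluesky Python SDK (the `atproto` package).
# The handle and app password below are placeholders - use your own account's values.
from atproto import Client

client = Client()
client.login("example.bsky.social", "app-password-goes-here")

# Publish a short text post from the logged-in account.
client.send_post(text="Hello from the Bluesky Python SDK!")

# Fetch the home timeline and print the text of the most recent posts.
timeline = client.get_timeline()
for item in timeline.feed[:5]:
    print(item.post.record.text)
```

Logging in with an app password and calling send_post / get_timeline is essentially all it takes to script an account, which is most of the appeal.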
Length of chain of thought does indeed correlate with difficulty - see attached
29.06.2025 09:14
I'm genuinely confused by these statements. Chain of thought length absolutely does correlate with difficulty - generally the LLM will stop thinking once it has reached a reasonable answer. Likewise in human reasoning!
29.06.2025 08:32
The number of tokens doesn't necessarily stay the same, does it? LLMs can execute algorithms and output the stored values at intermediate steps as tokens, so the number of tokens / amount of computation scales up with the difficulty of the problem (size of the input, in the case of factorization).
28.06.2025 11:09
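To make that concrete, here is a toy sketch (an illustration of the scaling argument, not of any particular model): trial-division factorization that writes each intermediate result out as its own step, the way a chain of thought spells out intermediate values. The number of emitted steps grows with the input, while the work per step stays roughly constant.

```python
# Toy illustration: emit every intermediate result of trial-division factorization
# as its own "step", the way a chain of thought spells out intermediate values.
# Larger inputs produce more steps, so output length scales with problem difficulty
# even though the compute per emitted step is roughly constant.
def factor_with_trace(n: int):
    steps = []    # the emitted "chain of thought"
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            steps.append(f"{n} is divisible by {d}; keep {d}, continue with {n // d}")
            factors.append(d)
            n //= d
        steps.append(f"{d} does not divide the remainder; try {d + 1}")
        d += 1
    if n > 1:
        steps.append(f"{n} is prime; keep it")
        factors.append(n)
    return factors, steps

for n in (91, 9991, 1000003):
    factors, steps = factor_with_trace(n)
    print(f"n={n}: {len(steps)} steps, factors={factors}")
```

On 91, 9991, and 1000003 the trace runs to roughly ten, a hundred, and a thousand steps respectively - longer "output" for harder instances, even at constant compute per step.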
But isn't it just a constant amount of compute per token? Producing more tokens involves using more time and space. Chain of thought, etc.
28.06.2025 10:47
By contrast, good explanatory scientific theories generalize to a broader set of "perturbations" than just the types of experiments that went into constructing the theory. Watson and Crick's model of DNA was not just a way to predict x-ray diffraction patterns.
25.06.2025 23:35
Totally right, you said something different. You're much more pro- this type of model learned from perturbation data.
My concern is that you end up with a causal model, yes - but the perturbations are drawn from a very constrained distribution. The ML model can more or less memorize them.
Also notable that this type of work doesn't use any of the conditional independence assumptions that are common in the causal modeling community @alxndrmlk.bsky.social
24.06.2025 17:27
@kordinglab.bsky.social argued in a recent talk that you can't learn a model from canned data that will let you simulate perturbation experiments.
bsky.app/profile/kemp...
But this type of model seems darn close.
Cool work out of @arcinstitute.org. My question is, do models like this let us perform novel in-silico experiments the way first-principles models do, or are they just a clever way of extrapolating existing experimental data from one context to another?
24.06.2025 17:15
[Image alt text: Two sets of connecting fly neurons with fine, wispy arbors.]
Cleaning up disk space, I found this image I made for someone not long after the release of the #HHMIJanelia #Drosophila hemibrain #connectome in 2020. It shows EPG neurons in pink providing inputs to PFL1 neurons in transparent grey. I'm not sure if the image was ever used.
18.06.2025 22:46
Philip did mention an MW talk from Zurek, I think
14.06.2025 13:55
Do we know if the number of steps they can perform is related to how many steps they saw in their training data? Can RL fine-tuning increase the number of steps?
10.06.2025 11:52
Does anyone know what species this is? Would love to know more about what structures play the role of nervous system and muscles
08.06.2025 22:04
Who knew that Chargaff was into this stuff
08.06.2025 14:03
Against reductionism: "Our understanding of the world is built up of innumerable layers. Each is worth exploring, as long as we do not forget that it is one of many. Knowing all there is to know about one layer (...) would not teach us much about the rest." - Erwin Chargaff
08.06.2025 12:45
Things that aren't chock-full of information-bearing molecules
07.06.2025 17:31
Because the kinds of theories we want involve phenomena that span 3-4 orders of magnitude in space (synapses vs. brains) and 6-7 orders of magnitude in time (action potentials vs. skill acquisition)?
07.06.2025 14:40
There's a good definition of computational universality (Church-Turing) - why couldn't there be one of general intelligence?
30.05.2025 13:14
If constructor theory told us something amazing *was* constructible, it might help motivate us to build it.
Conversely we could avoid wasting our time on things not even constructible in principle.
Quiet posters feed. You're welcome.
24.05.2025 20:13
To all the international students, post-docs, scientists, and other academics I've been friends with over the years - we support you, and we want you here
23.05.2025 21:13
What do you mean by "information about"?
22.05.2025 02:20
No. Burning a library destroys something. Not physical information (that's left in the heat and ash) but knowledge about the world. Whatever the fire is destroying, the brain can create "de novo". It's not conserved.
21.05.2025 13:42
Physics is also information-preserving.
So there's been no "new" information since the Big Bang.
But there must be some other sense in which new things do come into existence.
New information, no.
But new ideas, new knowledge, yes. 
Einstein didn't acquire relativity from observations, he invented it.
@annakaharris.bsky.social @philipgoff.bsky.social All our *discourse* about C is 3rd-person observable - neurons firing, vocal cords moving, etc. We expect a boring old physical story one day. Won't that story undercut panpsychism?
@seanmcarroll.bsky.social did you ever get a satisfying answer?