14.07.2025 13:52 · I think having universities pay PhD students would also give them more academic freedom. In CS labs, for instance, it's common for the PI to get funding directed toward a particular project, and PhD students are essentially paid to work on those projects. That doesn't always lead to new ideas.
13.02.2025 21:08 · Just got access to a SLURM cluster with 60 H100s (94 GB each) that are ready to rip
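A minimal sketch of driving a cluster like this from Python, assuming the submitit library; the partition name, resource counts, and the `probe()` function are placeholder choices, not details of this particular cluster.

```python
# Sketch: submit a small GPU job to SLURM via submitit and report visible GPUs.
# Partition name, GPU/CPU counts, and probe() are hypothetical placeholders.
import submitit

def probe() -> str:
    import torch  # imported inside the function so it runs on the compute node
    return f"visible GPUs: {torch.cuda.device_count()}"

executor = submitit.AutoExecutor(folder="slurm_logs")
executor.update_parameters(
    timeout_min=30,
    slurm_partition="gpu",  # placeholder partition name
    nodes=1,
    gpus_per_node=8,        # e.g. 8 of the 60 H100s
    tasks_per_node=1,
    cpus_per_task=8,
)

job = executor.submit(probe)
print(job.result())  # blocks until SLURM schedules and finishes the job
```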
11.12.2024 18:03 · To be sure, I believe in the community; I just think this is a big shift relative to just a couple of years ago, when companies only had a compute moat. Now they seem to have both a compute and a method moat. N/N
01.12.2024 03:24 · I think another interesting point here is how far industry has gotten ahead of open source/academia on building these systems. This was a whole talk trying to figure out how to reinvent the wheel. 7/N
01.12.2024 03:24 · Also, in case the bitter lesson wasn't enough, of course Rich Sutton had something to say about self-verifying AI in 2001! incompleteideas.net/IncIdeas/Key... 6/N
01.12.2024 03:24 · For example, should we ever expect an LLM to weigh in on whether P=NP? There are also more subjective domains requiring a notion of correctness, such as navigating interpersonal relations, where you can't write unit tests for every scenario you'll encounter. 5/N
01.12.2024 03:24 · For instance, if we build verifiers that are capable of verifying the math and science we currently know, will they be able to push on and generate and verify new solutions to open problems? If not, then my question is: how do we build verification systems that are not bounded by human feedback? 4/N
01.12.2024 03:24 · It seems like the system as a whole might still be bounded by the verification model's capabilities and how much they want to pay human annotators. 3/N
01.12.2024 03:24 · I wonder if this 2-model system (primary model + verifier model) will be enough to reach escape velocity (i.e., primary models that can generate new ideas or reason about ideas they haven't been exposed to before). 2/N
01.12.2024 03:24 · ICYMI, @srushnlp.bsky.social recently gave a nice talk speculating about the methods/data used to train OpenAI's o1 model. The key idea seems to be scaling up chain-of-thought (CoT) generation using auxiliary verifier models that can give feedback on the correctness of the generation. 1/N
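To make the key idea concrete, here is a minimal best-of-N sketch of the generator + verifier loop: sample several chains of thought from the primary model and keep the one the verifier scores highest. `generate_cot` and `score_correctness` are hypothetical stand-ins for real model calls, not a description of how o1 was actually trained.

```python
# Best-of-N sketch of a primary (generator) model paired with an auxiliary
# verifier model. generate_cot and score_correctness are hypothetical
# placeholders for real model calls.
import random
from typing import Callable, List, Tuple

def best_of_n(
    prompt: str,
    generate_cot: Callable[[str], str],              # primary model: prompt -> CoT answer
    score_correctness: Callable[[str, str], float],  # verifier: (prompt, answer) -> score in [0, 1]
    n: int = 16,
) -> Tuple[str, float]:
    """Sample n chains of thought and keep the one the verifier scores highest."""
    candidates: List[Tuple[str, float]] = []
    for _ in range(n):
        answer = generate_cot(prompt)
        candidates.append((answer, score_correctness(prompt, answer)))
    return max(candidates, key=lambda pair: pair[1])

# Toy usage with dummy models, just to show the control flow.
if __name__ == "__main__":
    dummy_generate = lambda p: f"candidate-{random.randint(0, 9)}"
    dummy_verify = lambda p, a: random.random()
    answer, score = best_of_n("2 + 2 = ?", dummy_generate, dummy_verify, n=8)
    print(answer, round(score, 3))
```

The loop is only as strong as `score_correctness`: if the verifier can't judge answers beyond what humans can already check, the system inherits that bound, which is the worry in the posts above.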
01.12.2024 03:24 · What about private GitHub repos? Do you think the info contained in those is already being used, or will be used, to train models?
27.11.2024 21:28
If you're a PhD or master's student interested in working on NeuroAI topics, then consider applying by Dec. 10 to be a NeuroAI intern @cshlaboratory.bsky.social
I interned with @tonyzador.bsky.social working on spiking neural networks. 10/10 experience.
www.schooljobs.com/careers/cshl...
Want to be part of the NeuroAI community at CSHL?
Applications are open for outstanding graduate students in Artificial Intelligence to spend the summer at CSHL as NeuroAI Interns.
Deadline: Dec 10th, 2024. Please spread the word!
www.schooljobs.com/careers/cshl...
@cshlaboratory.bsky.social