@ronak69.bsky.social
i believe the propaganda i read u believe the propaganda u read
x.com/ronax69

you can create before you can understand
20.04.2025 15:02

isn't deep learning also vibe coding
19.04.2025 07:12

one and others stopped and realized it will not be a good thing by default, and tried to convince the next generation, which nodded at first but ended up not actually convinced deeply enough and started racing anyway
17.04.2025 17:10

even AGI is not going to be intelligently designed.
11.04.2025 14:28

Zvi Mowshowitz wrote about that
thezvi.substack.com/i/159992000/...
right. persuading someone to believe an obvious lie is extremely difficult. but if something is actually a good deal (until it suddenly isn't, but that's complex to figure out beforehand), then you don't need much charisma.
08.04.2025 17:50

just above what you quoted: "(don't imagine them trying to do this with a clunky humanoid robot; imagine them doing it with a videoconferencing avatar of the most attractive person you've ever seen)"
08.04.2025 17:31

Eventually, reality will strike back and all of the research directions that were far from reality will fail.
At least they are researching toward a good outcome: "AIs aligned with human values".
will we always have to stop inference to send models the user's input?
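
(not how any real serving stack works, just a toy sketch of the alternative that question points at: a decode loop that checks for new user input at every step and splices it into the context instead of pausing inference until the turn ends. every name here, the queue, the fake sampler, is made up for illustration)

```python
import queue

user_inbox = queue.Queue()   # user input arriving asynchronously, mid-generation

def sample_next_token(context):
    # stand-in for a real model call; just emits a placeholder token
    return f"tok{len(context)}"

def generate(context, max_steps=8):
    for _ in range(max_steps):
        # splice in anything the user typed while the model was still generating,
        # instead of stopping inference and waiting for the turn to finish
        while not user_inbox.empty():
            context.append(("user", user_inbox.get()))
        context.append(("model", sample_next_token(context)))
    return context

user_inbox.put("actually, keep it short")
print(generate([("user", "write me a poem")]))
```
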
audio/video modality will be better at letting models improve/express their concept of time. so if agents actually need av modality, i guess it will happen in mid 2026.
i think models already have a concept of time but it is not connected to reality, sort of like the concept of complex numbers and spacetime in my mind.
also depends on how situationally aware models are. do they know that time passes between their output and the user's input?
In 2021, Daniel Kokotajlo (one of the authors) wrote "What 2026 Looks Like". You can judge for yourself how accurate the predictions were. And based on that, see how you should interpret the new predictions. www.alignmentforum.org/posts/6Xgy6C...
06.04.2025 20:32

I think the authors' aim was to write down one concrete future scenario that is roughly the median of all futures they think are possible from now. Another aim is to get people to not only talk in abstract terms, but to think about what their beliefs actually imply and make concrete predictions.
06.04.2025 20:32

That mostly only applies to the Applications/Products built on top of AI models.
The big AI model companies, however, have a stated goal of making more capable AIs. And as users are more self-aware now, they (users and companies both!) are at least slowly moving away from "Agreeable Cheerful AIs".
hold on, that is just an intermediate period, while the ability to create a new lovable art style is not yet abundant.
27.03.2025 22:11

The previously established art style getting devalued will create an incentive for the next big art style to get created?
But the current big art style creator needs to get some extra value out of it other than "the art style people love right now was created by you".
At least they can read
26.03.2025 20:10

LLMs -- The Librarians of Human Knowledge!
26.03.2025 15:11

yea i forgot they can "see" already. now i am thinking that models that can take images as an input, and also output images, will be better able to "see" than models that take images only as an input.
26.03.2025 07:43

but to use a computer as a tool, models need to understand images (what is on the screen).
26.03.2025 07:28

Ok. There is GPT 4.5 and Grok 3, but I am uncertain and I want to wait for Claude 4 to see what is going on with pretraining scaling.
Separately, I think reasoning models like R1 and O3 look promising, at least for verifiable tasks like coding and math.
agreed. what would a "proof" look like?
21.03.2025 16:24

yes, that's one possible explanation of their behaviour that you and I see. but don't treat it as a certainty, there are other explanations.
If AI is real, sooner or later you will see evidence for it. I don't have to convince you now. just keep an open mind and predict what AI will not do and wait.
the companies (and other independent people) *genuinely* believe models "think" (or will eventually).
you think they are burning money because AI is in decline etc, but they don't think so. spending huge sums of money is what one would do if one thought AI is going to get better and better and better.
models get smarter when you make them bigger. when making them bigger got expensive we found a new way to make them smarter: think before you answer.
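
(a rough way to see those two knobs as numbers. the 6*N*D training and 2*N per-token inference FLOP approximations are the usual rules of thumb; the parameter and token counts are made up for illustration)

```python
# Back-of-the-envelope: compute for "make the model bigger"
# vs. "same model, but it thinks before answering".

def train_flops(params, tokens):
    # standard rough approximation: ~6 FLOPs per parameter per training token
    return 6 * params * tokens

def inference_flops(params, prompt_tokens, reasoning_tokens, answer_tokens):
    # forward pass costs ~2 FLOPs per parameter per token
    return 2 * params * (prompt_tokens + reasoning_tokens + answer_tokens)

small, big = 7e9, 70e9   # hypothetical 7B vs 70B parameter models
print(f"{train_flops(big, 2e12) / train_flops(small, 2e12):.0f}x more training compute for the bigger model")

# thinking: the same small model, but it emits 2000 reasoning tokens before the answer
direct   = inference_flops(small, 200, 0, 100)
thinking = inference_flops(small, 200, 2000, 100)
print(f"{thinking / direct:.1f}x more compute per answer when it thinks first")
```
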
hype does not tell me if AI is real or not. if AI is in fact real and on a steep incline, of course it will be hyped!
I don't think AI is in decline. What happened ~8 months ago that you are referring to?
21.03.2025 14:09

True. But it sure is a hint.
14.03.2025 14:43

Cool.
Thinking allows you to model the world better. And Evolution stumbled upon it. When you evolve AIs to model the world better, Gradient Descent can also stumble upon Thinking. If something has a good world model, then it can't be Illusion-Thinking, because if the illusion were enough to model the world well, why didn't Evolution just do that instead?
By "think" I mean whatever I am doing. I don't know about dogs and ants.
I believe "thinking" and "illusion of thinking" are same for sufficiently powerful "thinking", but some don't, and so I asked if they believe a computer really "thinking" is even possible in principle.
Do you think AI that actually "thinks" is possible even in principle? Is the human brain a computer running a set of cognitive algorithms? Can you simulate every atom of a brain and create a digital brain?
09.03.2025 08:14

By "Theoretically" I meant that, in principle, it can output any sequence of words. Just like you can type every possible sentence. Like you can play every possible move in chess. Monkey writing Shakespeare.
It is possible but does not mean it will happen. Probability very close to 0, but not exactly 0.
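
(to put a toy number on "very close to 0, but not exactly 0": assume, purely for illustration, that each token were drawn uniformly from a ~50k-token vocabulary. that is a deliberately crude assumption, not how a trained model actually samples, but it shows the order of magnitude)

```python
import math

vocab_size = 50_000        # rough order of magnitude for an LLM vocabulary
sequence_length = 20       # one specific 20-token sentence

# probability of producing exactly that sequence under uniform sampling
log10_p = -sequence_length * math.log10(vocab_size)
print(f"p = 10^{log10_p:.0f}")   # p = 10^-94: nonzero, but it will never happen
```
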