
ronak69

@ronak69.bsky.social

i believe the propaganda i read u believe the propaganda u read x.com/ronax69

26 Followers  |  154 Following  |  65 Posts  |  Joined: 11.11.2024  |  1.8052

Latest posts by ronak69.bsky.social on Bluesky

you can create before you can understand

20.04.2025 15:02 | 👍 4    🔁 0    💬 0    📌 0

isn't deep learning also vibe coding

19.04.2025 07:12 | 👍 2    🔁 0    💬 1    📌 0

one after another, they stopped and realized it will not be a good thing by default, and tried to convince the next generation, which nodded at first but ended up not convinced deeply enough and started racing anyway

17.04.2025 17:10 | 👍 0    🔁 0    💬 0    📌 0

even AGI is not going to be intelligently designed.

11.04.2025 14:28 | 👍 0    🔁 0    💬 1    📌 0
AI #110: Of Course You Know... Yeah.

Zvi Mowshowitz wrote about that
thezvi.substack.com/i/159992000/...

09.04.2025 16:04 | 👍 3    🔁 0    💬 1    📌 0

right. persuading someone to believe an obvious lie is extremely difficult. but if something is actually a good deal (until it suddenly isn't, but that's hard to figure out beforehand), then you don't need much charisma.

08.04.2025 17:50 | 👍 1    🔁 0    💬 1    📌 0

just above what you quoted: "(don't imagine them trying to do this with a clunky humanoid robot; imagine them doing it with a videoconferencing avatar of the most attractive person you've ever seen)"

08.04.2025 17:31 | 👍 1    🔁 0    💬 1    📌 0

Eventually, reality will strike back and all of the research directions that were far from reality will fail.

At least they are researching for a good outcome: "AIs aligned with human values".

08.04.2025 14:06 | 👍 0    🔁 0    💬 0    📌 0

will we always have to stop inference to send models the user's input?

audio/video modality will be better at letting models improve/express their concept of time. so if agents actually need av modality, i guess it will happen in mid 2026.

08.04.2025 10:47 | 👍 0    🔁 0    💬 0    📌 0
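One alternative to stopping inference entirely can be sketched as a loop that checks for new user input between token steps and splices it into the context. This is a hypothetical design sketch, not any real serving API; `next_token` and the queue of user messages are stand-ins:

```python
import queue

def generate_with_interrupts(next_token, user_inputs, max_tokens=100):
    # Hypothetical sketch: instead of halting generation to accept user input,
    # poll for new input between token steps and splice it into the context.
    context = []
    for _ in range(max_tokens):
        try:
            context.append(("user", user_inputs.get_nowait()))
        except queue.Empty:
            pass  # no new user input this step; keep generating
        tok = next_token(context)
        if tok is None:  # model signals it is done
            break
        context.append(("model", tok))
    return context
```

The design choice here is that the model never "stops"; user input just becomes another event interleaved into its stream.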

i think models already have a concept of time but it is not connected to reality, sort of like the concept of complex numbers and spacetime in my mind.

also depends on how situationally aware models are. do they know that time passes between their output and the user's input?

08.04.2025 10:47 | 👍 0    🔁 0    💬 1    📌 0
What 2026 looks like - AI Alignment Forum Daniel Kokotajlo presents his best attempt at a concrete, detailed guess of what 2022 through 2026 will look like, as an exercise in forecasting. It…

In 2021, Daniel Kokotajlo (one of the authors) wrote "What 2026 Looks Like". You can judge for yourself how accurate the predictions were. And based on that, see how you should interpret the new predictions. www.alignmentforum.org/posts/6Xgy6C...

06.04.2025 20:32 | 👍 2    🔁 0    💬 0    📌 0

I think the authors' aim was to write down one concrete future scenario that is roughly the median of all futures they think are possible from now. Another aim is to get people to not only talk in abstract terms, but get them to think what their beliefs actually imply and make concrete predictions.

06.04.2025 20:32 | 👍 1    🔁 0    💬 1    📌 0

That mostly only applies to the Applications/Products based on top of AI models.

The big AI model companies, however, have a stated goal to make more capable AIs. And as users are more self-aware now, they (users and companies both!) are at least slowly moving away from "Agreeable Cheerful AIs".

31.03.2025 06:04 | 👍 3    🔁 0    💬 0    📌 0

hold on, that is just an intermediate period, when the ability to create a new lovable art style is not yet abundant.

27.03.2025 22:11 | 👍 0    🔁 0    💬 0    📌 0

The previous established art style getting devalued will create an incentive for the next big art style to get created?

But the current big art style creator needs to get some extra value out of it other than "the art style people love right now was created by you".

27.03.2025 21:53 | 👍 1    🔁 0    💬 1    📌 0

At least they can read 😌

26.03.2025 20:10 | 👍 0    🔁 0    💬 0    📌 0

LLMs -- The Librarians of Human Knowledge!

26.03.2025 15:11 | 👍 0    🔁 0    💬 1    📌 0

yea i forgot they can "see" already. now i am thinking that models that can take images as input and also output images will be better able to "see" than models that take images only as input.

26.03.2025 07:43 | 👍 1    🔁 0    💬 0    📌 0

but to use a computer as a tool, models need to understand images (what is on the screen).

26.03.2025 07:28 | 👍 1    🔁 0    💬 1    📌 0
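The dependency described above can be sketched as a minimal screen-based tool-use step. All three helpers (`capture_screen`, `model_choose_action`, `apply_action`) are hypothetical placeholders for illustration, not a real agent API:

```python
# Minimal sketch of one step of a screen-based tool-use agent (hypothetical
# helpers, not a real API). The model's only view of the computer is pixels,
# so choosing an action requires understanding the image.
def agent_step(capture_screen, model_choose_action, apply_action):
    screenshot = capture_screen()             # what is on the screen, as pixels
    action = model_choose_action(screenshot)  # image understanding happens here
    apply_action(action)                      # e.g. click, type, scroll
    return action
```

In practice this step runs in a loop until the task is done; the point is that `model_choose_action` takes an image, so a text-only model cannot fill that role.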

Ok. There are GPT 4.5 and Grok 3, but I am uncertain, and I want to wait for Claude 4 to see what is going on with pretraining scaling.

Separately, I think reasoning models like R1 and o3 look promising, at least for verifiable tasks like coding and math.

21.03.2025 21:03 | 👍 1    🔁 0    💬 1    📌 0

agreed. what would a "proof" look like?

21.03.2025 16:24 | 👍 0    🔁 0    💬 0    📌 0

yes, that's one possible explanation of the behaviour you and I see. but don't treat it as a certainty; there are other explanations.

If AI is real, sooner or later you will see evidence for it. I don't have to convince you now. just keep an open mind, predict what AI will not do, and wait.

21.03.2025 16:18 | 👍 0    🔁 0    💬 1    📌 0

the companies (and other independent people) *genuinely* believe models "think" (or eventually will).

you think they are burning money because AI is in decline etc., but they don't. spending huge sums of money is what one would do if one thought AI is going to get better and better and better.

21.03.2025 16:10 | 👍 0    🔁 0    💬 1    📌 0

models get smarter when you make them bigger. when making them bigger got expensive, we found a new way to make them smarter: think before you answer.

hype does not tell me if AI is real or not. if AI is in fact real and on a steep incline, of course it will be hyped!

21.03.2025 16:01 | 👍 0    🔁 0    💬 1    📌 0

I don't think AI is in decline. What happened ~8 months ago that you are referring to?

21.03.2025 14:09 | 👍 0    🔁 0    💬 3    📌 0

True. But it sure is a hint.

14.03.2025 14:43 | 👍 0    🔁 0    💬 0    📌 0

Cool.

Thinking allows you to model the world better, and Evolution stumbled upon it. When you evolve AIs to model the world better, Gradient Descent can also stumble upon Thinking. If something has a good world model, then it can't be mere Illusion-Thinking; otherwise, why didn't Evolution just do that instead?

09.03.2025 10:04 | 👍 0    🔁 0    💬 0    📌 0

By "think" I mean whatever I am doing. I don't know about dogs and ants.

I believe "thinking" and "illusion of thinking" are same for sufficiently powerful "thinking", but some don't, and so I asked if they believe a computer really "thinking" is even possible in principle.

09.03.2025 09:44 | 👍 0    🔁 0    💬 0    📌 0

Do you think AI that actually "thinks" is possible even in principle? Is the human brain a computer running a set of cognitive algorithms? Can you simulate every atom of a brain and create a digital brain?

09.03.2025 08:14 | 👍 0    🔁 0    💬 2    📌 0

By "Theoretically" I meant, in principle, it can output any sequence of words. Just like you can type every possible sentence. Like you can play every possible move in chess. Monkey writing Shakespeare.

It is possible but does not mean it will happen. Probability very close to 0, but not exactly 0.

09.03.2025 04:37 | 👍 0    🔁 0    💬 1    📌 0
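The "possible but probability near 0" point can be made concrete with a toy calculation. The numbers are illustrative assumptions: a 50,000-token vocabulary, uniform random sampling, and a specific 10-token target sentence:

```python
# Toy calculation (assumed numbers): probability that sampling tokens
# uniformly at random emits one specific 10-token sentence.
vocab_size = 50_000   # assumed vocabulary size
sentence_len = 10     # tokens in the target sentence

p = (1 / vocab_size) ** sentence_len
print(p)  # strictly positive, but astronomically small
```

The result is on the order of 10^-47: not exactly 0, so the event is possible in principle, yet you would never expect to see it in practice.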
