
Milos Borenovic

@milbor.bsky.social

Keeping humans in the game as AI learns to play. Once taught machines to think, now teaching humans that independent thinking matters. πŸ”οΈ

600 Followers  |  419 Following  |  17 Posts  |  Joined: 25.11.2024

Latest posts by milbor.bsky.social on Bluesky


Preview
OpenAI on X: "Announcing The Stargate Project The Stargate Project is a new company which intends to invest $500 billion over the next four years building new AI infrastructure for OpenAI in the United States. We will begin deploying $100 billion immediately. This infrastructure will secure" / X

Is it just me, or does this read like Manhattan Project II?

x.com/openai/statu...

BTW, the total cost of the Manhattan Project was less than 2bn USD (less than 30bn in today's USD).

22.01.2025 08:17 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

Seems like it's getting gravity much better now.

08.01.2025 23:44 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

I think applying past experiences to extrapolate the future when there is a non-trivial chance of a singular change is a dangerous thing to do.

Even if they're aligned, if they're smarter than us, why wouldn't we listen when they tell us what they should build for us? Smells like losing agency.

21.12.2024 04:45 β€” πŸ‘ 3    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

Help, I can't decide where or how to start thinking about this.

14.12.2024 12:11 β€” πŸ‘ 5    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

Anyone here thinking about doing a #phd? We have a healthy number of PhD studentships available for 2025/6 entry at the School of Computer Science, University of St Andrews, Scotland. These include 3.5 years of fees and a stipend, and are open to students from all over the world. Details below:

21.11.2024 08:54 β€” πŸ‘ 13    πŸ” 7    πŸ’¬ 1    πŸ“Œ 3

You've been driving at imaginary speed, in two orthogonal directions at once.

10.12.2024 14:18 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

Future of Life Institute (FLI) is looking for a Head of U.S. Policy, ideally based in Washington, D.C., to work on US AI policy. The application deadline is December 22. The salary for this role is $150k to $240k.

Apply here: jobs.lever.co/futureof-lif...

06.12.2024 15:14 β€” πŸ‘ 8    πŸ” 6    πŸ’¬ 0    πŸ“Œ 0

They would certainly be easier to find and cheaper to hire than elite AI researchers. πŸ˜‚

But yeah, data labeling is entering a renaissance, with very smart people curating the training data.

05.12.2024 17:38 β€” πŸ‘ 4    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

That one I know, I'm gonna be a dog trainer.

05.12.2024 15:00 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

AI Governance & Human Agency

05.12.2024 13:47 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

As transformative AI becomes more ubiquitous, it seems that diminishing and managing the 'great divide' will be a challenge on par with AI alignment.

If we don't get it right, we might end up as multiple races.

04.12.2024 13:40 β€” πŸ‘ 3    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0
Preview
Sorry Darwin, Your Monkey's Going Bionic. Bridging the Gap From Both Ends: A Conversation with Kristian RΓΆnn

Natural evolution vs. silicon speed: how do we keep up? Kristian's answer might surprise you πŸ€–

agencymatters.substack.com/p/sorry-darw...

#AISafety #AGI #FutureOfHumanity #AIGovernance #HumanAgency #AgencyMatters

03.12.2024 14:35 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

His book "The Darwinian Trap" introduced me to reputational markets, but his staircase metaphor for human-AI coevolution really clicked with my thoughts on preserving human agency.

03.12.2024 14:35 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

"Sorry Darwin, Your Monkey's Going Bionic" πŸ’

Fascinating chat with Kristian RΓΆnn about bridging the future human-AI intelligence gap.

03.12.2024 14:35 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

This nicely encapsulates one of the most important yet easily missed properties of LLMs.

03.12.2024 14:22 β€” πŸ‘ 2    πŸ” 1    πŸ’¬ 0    πŸ“Œ 0

No explicit "HUMANS MUST REMAIN IN CHARGE" declaration, but these guardrails tell a story: They're thinking about how to keep meaningful human control as AI systems become more autonomous.

Baby steps, but in the right direction? πŸ€”

#HumanAgency #AgencyMatters

02.12.2024 09:33 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

Looking for human agency in the EU's AI Code of Practice? πŸ”

It's there, but you have to read between the lines:

- Human oversight requirements
- Independent expert reviews
- Whistleblower protections
- Risk-based controls

#AI #AIGovernance

02.12.2024 09:33 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

What would be your typical use?

You can solve part of the problem by using more grounded ones, such as NotebookLM, but then you'd need to gather your own sources.

Another partial solution might be to encourage it to ask questions when unsure. This should reduce the hallucination rate.

29.11.2024 22:06 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0
Preview
Agency Matters | Milos Borenovic | Substack What if the greatest threat from AI isn't that it becomes too powerful, but that we become too passive? Click to read Agency Matters, a Substack publication. Launched 21 days ago.

Hello everybody! πŸ‘‹

My first post here. After years of building AI systems that reached billions, I'm now writing about keeping humans in the game while we still have a say. Thoughts on human agency in the coming age of AI: agencymatters.substack.com

#AI #AIEthics #AgencyMatters

28.11.2024 19:16 β€” πŸ‘ 2    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0
