
Leon Lang

@leon-lang.bsky.social

PhD Candidate at the University of Amsterdam. AI Alignment and safety research. Formerly multivariate information theory and equivariant deep learning. Master's degrees in both maths and AI. https://langleon.github.io/

229 Followers  |  116 Following  |  22 Posts  |  Joined: 20.11.2024

Latest posts by leon-lang.bsky.social on Bluesky


โš ๏ธ The Perils of Optimizing Learned Reward Functions: Low Training Error Does Not Guarantee Low Regret

By Lukas Fluri*, @leon-lang.bsky.social*, Alessandro Abate, Patrick Forré, David Krueger, Joar Skalse

📜 arxiv.org/abs/2406.15753

🧵 6/8

06.05.2025 14:53 | 👍 5    🔁 1    💬 1    📌 0
Modeling Human Beliefs about AI Behavior for Scalable Oversight: Contemporary work in AI alignment often relies on human feedback to teach AI systems human preferences and values. Yet as AI systems grow more capable, human feedback becomes increasingly unreliable. ...

Paper link: arxiv.org/abs/2502.21262
(4/4)

03.03.2025 15:44 | 👍 0    🔁 0    💬 0    📌 0

I give a theoretical account of what modeling the human's beliefs would mean, and explain a practical proposal for how one could try to do this, based on foundation models whose internal representations *translate to* the human's beliefs via an implicit ontology translation. (3/4)

03.03.2025 15:44 | 👍 0    🔁 0    💬 1    📌 0
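
A minimal sketch of what such an ontology translation could look like in practice; this is my own illustration, not code from the paper. The "foundation model" hidden states, the belief labels, and the proposition being evaluated are all hypothetical stand-ins: a linear probe is trained to read off the human evaluator's belief from the model's internal representation.

```python
# Hypothetical illustration: a linear probe as a crude "ontology translation"
# from a model's internal representation to a human evaluator's belief.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-in hidden states of a foundation model for 200 episodes (dim 16), and
# labels recording whether the human evaluator believed the evaluated claim held.
hidden_states = rng.normal(size=(200, 16))
believed_true = (hidden_states[:, 0] + 0.1 * rng.normal(size=200) > 0).astype(int)

# Train the probe: representation -> probability that the human holds the belief.
probe = LogisticRegression().fit(hidden_states, believed_true)

# At evaluation time, estimate the belief for a new episode's representation.
new_state = rng.normal(size=(1, 16))
p_belief = probe.predict_proba(new_state)[0, 1]
print(f"Estimated probability the human believes the claim: {p_belief:.2f}")
```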

The idea: In the robot-hand example, when the hand is in front of the ball, the human believes the ball was grasped and gives "thumbs up", leading to bad behavior. If we knew the human's beliefs, then we could assign the feedback properly: Reward the ball being grasped! (2/4)

03.03.2025 15:44 | 👍 0    🔁 0    💬 1    📌 0
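
As a toy illustration of the reassignment idea in the post above (my own sketch, not from the paper), the snippet below contrasts a naive reading of "thumbs up", which credits the state that actually occurred, with a belief-aware reading that distributes the feedback over the states the human believed had occurred. All states and probabilities are made up.

```python
# Toy robot-hand example: interpret a "thumbs up" under the human's beliefs.

# What actually happened vs. what the human (hypothetically) believes happened.
actual_state = "hand_in_front_of_ball"
human_belief = {"ball_grasped": 0.9, "hand_in_front_of_ball": 0.1}

feedback = 1.0  # thumbs up

# Naive interpretation: all credit goes to the state that actually occurred,
# so the occluding hand position gets rewarded.
naive_reward = {actual_state: feedback}

# Belief-aware interpretation: credit follows the modeled beliefs, so most of
# the reward is assigned to the ball actually being grasped.
belief_aware_reward = {state: p * feedback for state, p in human_belief.items()}

print("naive:        ", naive_reward)
print("belief-aware: ", belief_aware_reward)
```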

Brief paper announcement (longer thread might follow):

In our new paper "Modeling Human Beliefs about AI Behavior for Scalable Oversight", I propose to model a human evaluator's beliefs to better interpret their feedback, which might help with scalable oversight. (1/4)

03.03.2025 15:44 | 👍 3    🔁 0    💬 1    📌 0
Modeling Human Beliefs about AI Behavior for Scalable Oversight: Contemporary work in AI alignment often relies on human feedback to teach AI systems human preferences and values. Yet as AI systems grow more capable, human feedback becomes increasingly unreliable. ...

www.arxiv.org/abs/2502.21262

I now have this follow-up paper that goes into greater detail on how to achieve the human belief modeling, both conceptually and potentially in practice.

03.03.2025 15:41 | 👍 0    🔁 0    💬 0    📌 0

If you are attending #NeurIPS2024 🇨🇦, make sure to check out AMLab's 11 accepted papers... and to have a chat with our members there! 👩‍🔬🍻☕

Submissions include generative modelling, AI4Science, geometric deep learning, reinforcement learning and early exiting. See the thread for the full list!

🧵 1/12

09.12.2024 13:24 | 👍 25    🔁 7    💬 1    📌 0

First UAI conference in Latin America!! 🔥🔥🔥

North America and Europe, you are nice, but sometimes I also want to visit somewhere else 😅

03.12.2024 17:30 | 👍 17    🔁 4    💬 1    📌 0

I just completed "Historian Hysteria" - Day 1 - Advent of Code 2024 #AdventOfCode adventofcode.com/2024/day/1

01.12.2024 17:19 | 👍 3    🔁 0    💬 0    📌 0

I notice more "big" accounts here that follow a lot of people. The same accounts follow almost no one on Twitter. Is this motivated by a difference in the algorithms of these platforms?

01.12.2024 11:04 | 👍 0    🔁 0    💬 0    📌 0

Yet another safety researcher has left OpenAI.

Rosie Campbell says she has been "unsettled by some of the shifts over the last ~year, and the loss of so many people who shaped our culture".

She says she "can't see a place" for her to continue her work internally.

01.12.2024 00:48 | 👍 56    🔁 12    💬 3    📌 0

We are taking on a mission to track progress in AI capabilities over time.

Very proud of our team!

27.11.2024 20:38 | 👍 2    🔁 1    💬 0    📌 0

Hey hey,

I am around in the Bay Area for the next few weeks. Bay Area folks, hit me up if you want to meet up for coffee/vegan food in and around SF ☕🌯🥟

Got a major weather upgrade ☀️ from Amsterdam's insanity last week 🌀🌩️

24.11.2024 21:54 | 👍 18    🔁 2    💬 0    📌 0

Thanks for highlighting our paper! :)

25.11.2024 19:33 | 👍 1    🔁 0    💬 1    📌 0

Interesting, I didn't know such things were common practice!

24.11.2024 07:52 | 👍 1    🔁 0    💬 1    📌 0

I think such questionnaires should maybe generally contain a control group of people who did some brief (let's say 15 minutes) calibration training, just to understand what percentages even mean.

23.11.2024 22:48 | 👍 4    🔁 0    💬 1    📌 1

Are people maybe just very bad at math?
I once asked my own mom to draw what one million dollars looks like in proportion to one billion, and what she drew corresponds to roughly 150 million, off by a factor of 150.

23.11.2024 22:47 | 👍 3    🔁 0    💬 3    📌 0

Yeah, the risks are then probably more external: who creates the LLM, and do they poison the data in such a way that it associates human utterances with bad goals?

23.11.2024 22:41 | 👍 2    🔁 0    💬 0    📌 0

I actually think I (essentially?) understood this! I.e., my worry was whether the LLM could end up giving high likelihood to human utterances for goals that are very bad.

23.11.2024 22:40 | 👍 1    🔁 0    💬 1    📌 0

I see, interesting.
Is the hope basically that the LLM utters "the same things" as what the human would utter under the same goal? Is there a (somewhat futuristic...) risk that a misaligned language model might "try" to utter the human's phrase under its own misaligned goals?

23.11.2024 19:44 | 👍 3    🔁 0    💬 1    📌 0

Meet our Lab's members: staff, postdocs and PhD students! :)

With this starter pack you can easily connect with us and keep up to date with all the members' research and news 🦋

go.bsky.app/8EGigUy

21.11.2024 21:22 | 👍 25    🔁 9    💬 1    📌 0

You could possibly add me

21.11.2024 08:27 | 👍 0    🔁 0    💬 0    📌 0
The Categories Were Made For Man, Not Man For The Categories: I. "Silliest internet atheist argument" is a hotly contested title, but I have a special place in my heart for the people who occasionally try to prove Biblical fallibility by pointing …

I strongly disagree. I'd even go as far as saying that for most relevant purposes, it's fine to say mushrooms are plants. www.google.com/url?q=https:...

21.11.2024 06:49 | 👍 0    🔁 0    💬 0    📌 0

MIT undergrads from families earning less than $200K will pay no tuition fees from 2025, and undergrads from families earning less than $100K will have everything covered, including housing, dining, and a personal allowance.

news.mit.edu/2024/mit-tui...

20.11.2024 20:14 | 👍 20    🔁 1    💬 1    📌 0

I think Bluesky looks much more like Twitter than chat apps look like one another. Bluesky even has the same ordering of buttons.

20.11.2024 22:26 | 👍 2    🔁 0    💬 0    📌 0

Does anyone understand why it's so easy to clone Twitter with no IP issues?

It's hard to understand qualitative legal thresholds, but the UI looking ~exactly the same both here and on Threads intuitively seems like the kind of thing that could violate copyright if Twitter had pursued it.

20.11.2024 21:41 | 👍 3    🔁 1    💬 2    📌 0

Here :) Thanks for putting this together!

20.11.2024 15:34 | 👍 1    🔁 0    💬 0    📌 0

Hi everyone! This is AMLab :)
Looking forward to sharing our research here on 🦋!

19.11.2024 16:00 | 👍 26    🔁 5    💬 1    📌 0

Good to have you here :P

20.11.2024 12:48 | 👍 1    🔁 0    💬 0    📌 0
