
Sian Gooding

@siangooding.bsky.social

Senior Research Scientist @GoogleDeepMind working on Autonomous Assistants βœοΈπŸ€–

277 Followers  |  20 Following  |  15 Posts  |  Joined: 17.09.2023

Latest posts by siangooding.bsky.social on Bluesky

πŸ‘‹ We're building a new type of word processor at Marker, and we're hiring for React/ProseMirror engineers and full-stack AI engineers to join the team in London.

Are you an engineer who cares about writing? Or do you know someone who does?

See: writewithmarker.com/jobs

More details below πŸ‘‡

04.06.2025 17:11 β€” πŸ‘ 24    πŸ” 11    πŸ’¬ 2    πŸ“Œ 1

Sorted, thanks!

02.04.2025 22:07 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0
Don't lie to your friends: Learning what you know from collaborative self-play To be helpful assistants, AI agents must be aware of their own capabilities and limitations. This includes knowing when to answer from parametric knowledge versus using tools, when to trust tool outpu...

We all want LLMs to collaborate with humans to help them achieve their goals. But LLMs are not trained to collaborate, they are trained to imitate. Can we teach LM agents to help humans by first making them help each other?

arxiv.org/abs/2503.14481

24.03.2025 15:39 β€” πŸ‘ 56    πŸ” 20    πŸ’¬ 1    πŸ“Œ 0

You’ll collaborate with a kind, curious, research-driven teamβ€”including the brilliant @joao.omg.lol & @martinklissarov.bsky.social β€”and get to shape work at the frontier of multi-agent learning.

If that sounds like you, apply!

DM me if you're curious or have questions

02.04.2025 09:57 β€” πŸ‘ 5    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

Some big questions we’re thinking about:
1⃣How do communication protocols emerge?
2⃣What inductive biases help coordination?
3⃣How can language improve generalisation and transfer?

02.04.2025 09:57 β€” πŸ‘ 6    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

We’re interested in:
πŸ€–πŸ€– Multi-agent RL
πŸ”  Emergent language
🎲 Communication games
🧠 Social & cognitive modelling
πŸ“ˆ Scaling laws for coordination

02.04.2025 09:57 β€” πŸ‘ 4    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

The project explores how agents can learn to communicate and coordinate in complex, open-ended environmentsβ€”through emergent protocols, not hand-coded rules.

02.04.2025 09:57 β€” πŸ‘ 3    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

🚨 I’m hosting a Student Researcher @GoogleDeepMind!

Join us on the Autonomous Assistants team (led by
@egrefen.bsky.social) to explore multi-agent communicationβ€”how agents learn to interact, coordinate, and solve tasks together.

DM me for details!

02.04.2025 09:57 β€” πŸ‘ 14    πŸ” 3    πŸ’¬ 1    πŸ“Œ 0

Our full paper:
arxiv.org/pdf/2503.19711

02.04.2025 09:51 β€” πŸ‘ 3    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

Our work highlights the need for LLMs to improve in areas like action selection, self-evaluation + goal alignment to perform robustly in open-ended tasks

Implications of this work extend beyond writing assistance to autonomous workflows for LLMs in general open-ended use cases

02.04.2025 09:51 β€” πŸ‘ 2    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

Finding: LLMs can lose track of the original goal during iterative refinement, leading to "semantic drift" - a divergence from the author's intent. This is a key challenge for autonomous revision. ✍️

02.04.2025 09:51 β€” πŸ‘ 4    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

Finding: LLMs struggle to reliably filter their own suggestions. They need better self-evaluation to work effectively in autonomous revision workflows. βš–οΈ

02.04.2025 09:51 β€” πŸ‘ 3    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

Finding: Gemini 1.5 Pro produced the highest quality editing suggestions, according to human evaluators, outperforming Claude 3.5 Sonnet and GPT-4o 🦾

02.04.2025 09:51 β€” πŸ‘ 4    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

Finding: LLMs tend to favour adding content, whereas human editors remove or restructure more. This suggests LLMs are sycophantic, reinforcing existing text rather than critically evaluating it. βž•

02.04.2025 09:51 β€” πŸ‘ 4    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

Why? There are many possible solutions and no single 'right' answer. Success is difficult to gauge!

We examine how LLMs generate + select text revisions, comparing their actions to human editors. We focus on action diversity, alignment with human prefs, and iterative improvement

02.04.2025 09:51 β€” πŸ‘ 2    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

Our paper explores this by analysing LLMs as autonomous co-writers. Work done with Lucia Lopez Rivilla and @egrefen.bsky.social 🫢

Open-ended tasks like writing are a real challenge for LLMs (even powerful ones like Gemini 1.5 Pro, Claude 3.5 Sonnet, and GPT-4o).

02.04.2025 09:51 β€” πŸ‘ 4    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

New paper from our team @GoogleDeepMind!

🚨 We've put LLMs to the test as writing co-pilots – how good are they really at helping us write? LLMs are increasingly used for open-ended tasks like writing assistance, but how do we assess their effectiveness? πŸ€”

arxiv.org/pdf/2503.19711

02.04.2025 09:51 β€” πŸ‘ 20    πŸ” 8    πŸ’¬ 1    πŸ“Œ 1

you're telling me I cherry picked this example?

01.01.2025 14:27 β€” πŸ‘ 158    πŸ” 15    πŸ’¬ 2    πŸ“Œ 0

Instead of listing my publications, as the year draws to an end, I want to shine the spotlight on the commonplace assumption that productivity must always increase. Good research is disruptive and thinking time is central to high quality scholarship and necessary for disruptive research.

20.12.2024 11:18 β€” πŸ‘ 1154    πŸ” 375    πŸ’¬ 21    πŸ“Œ 57
