
Chris Russell

@cruss.bsky.social

Algorithmic governance and computer vision. Assoc Prof Oxford, Ellis fellow

211 Followers  |  605 Following  |  6 Posts  |  Joined: 19.11.2023

Latest posts by cruss.bsky.social on Bluesky


Job alert! Come and work with us at @oii.ox.ac.uk. We’re recruiting a Postdoctoral Researcher working with @bmittelstadt.bsky.social and @cruss.bsky.social. Full-time position, starts 1 October 2025. Closing date for applications: noon, 30 July. Apply today: bit.ly/3TuJlGc #hiring

09.07.2025 09:13 · 👍 5  🔁 6  💬 0  📌 0

One week left to submit your application!

Apply to work with Prof Sandra Wachter at the Hasso Plattner Institute and collaborate with me and Chris Russell at the Oxford Internet Institute, University of Oxford.

@swachter.bsky.social
@hpi.bsky.social
@cruss.bsky.social
@oii.ox.ac.uk

09.06.2025 11:18 · 👍 4  🔁 2  💬 0  📌 0
Postdoctoral Researcher (m/f/x) in Technology and Regulation

Are you interested in the governance of emergent tech?

Come & work w/ me @bmittelstadt.bsky.social & @cruss.bsky.social

We are looking for 3 Post Docs in
Law: tinyurl.com/4rbhcndp
Ethics: tinyurl.com/yc2e2km4
Computer Science/AI/ML: tinyurl.com/yr5bvnn5

Application deadline is June 15, 2025.

27.05.2025 06:45 · 👍 10  🔁 14  💬 0  📌 3

See our recent FAccT paper for an analysis of how many of these models are intended to generate non-consensual sexual imagery: arxiv.org/pdf/2505.03859

21.05.2025 18:34 · 👍 29  🔁 5  💬 0  📌 0

Still time to apply to work with me and @bmittelstadt.bsky.social and @cruss.bsky.social @oii.ox.ac.uk

15.05.2025 10:59 · 👍 3  🔁 3  💬 0  📌 0
OII | Dramatic rise in publicly downloadable deepfake image generators New Oxford study uncovers explosion of accessible deepfake AI image generation models intended for the creation of non-consensual, sexualised images of women.

New! Latest study from @oii.ox.ac.uk reveals a concerning trend: easily accessible AI tools designed to create deepfake images, primarily targeting women, are rapidly proliferating. Read more: bit.ly/4kc1iVk 1/5

07.05.2025 10:27 · 👍 8  🔁 4  💬 1  📌 1
Postdoctoral Researcher (m/f/x) in Machine Learning and Artificial Intelligence

Come & work with me @hpi.bsky.social & @bmittelstadt.bsky.social & @cruss.bsky.social @oii.ox.ac.uk

I am looking for 3 post docs on the governance of emergent tech.

CS: tinyurl.com/yr5bvnn5
Ethics: tinyurl.com/yc2e2km4
Law: tinyurl.com/4rbhcndp

Application deadline is 15.06.2025.

05.05.2025 11:47 · 👍 12  🔁 11  💬 0  📌 3
Editorial

Out now in #AIRe, the Journal of AI Law and Regulation, my new editorial discussing the state of research on fairness in AI in an increasingly hostile geopolitical climate, and the need for European leadership going forward.

Open access link: doi.org/10.21552/air...

#AI #DEI @oii.ox.ac.uk

14.04.2025 08:29 · 👍 16  🔁 5  💬 1  📌 1

The 4th Monocular Depth Estimation Challenge (MDEC) is coming to #CVPR2025, and I’m excited to join the org team! After 2024’s breakthroughs in monodepth driven by advances in generative transformer and diffusion models, this year's focus is on OOD generalization and evaluation.

21.12.2024 15:52 · 👍 22  🔁 3  💬 1  📌 1
OxonFair: A Flexible Toolkit for Algorithmic Fairness
We present OxonFair, a new open source toolkit for enforcing fairness in binary classification. Compared to existing toolkits: (i) we support NLP and Computer Vision classification as well as standard...

Anyone interested can talk to Eoin Delany at poster 5502, or check out the paper for more details arxiv.org/abs/2407.13710. Great work by my co-authors, Eoin, Zihao Fu, @swachter.bsky.social and @bmittelstadt.bsky.social

11.12.2024 11:29 · 👍 4  🔁 2  💬 0  📌 0
Diagram showing the combination of two heads.

The trick is model surgery on a validation set. We train a multi-head model: the first head solves the original task, and the other heads predict group membership using a squared loss. A weighted sum of all these heads can enforce any fairness definition, and it has the same architecture as the original net.

11.12.2024 11:29 · 👍 0  🔁 0  💬 1  📌 0
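As a rough illustration of the weighted-sum-of-heads trick described above (a minimal numpy sketch under assumed toy inputs, not the OxonFair implementation; `combined_score` and the weights are hypothetical names):

```python
import numpy as np

def combined_score(task_logit, group_logits, weights):
    """Weighted sum of the task head and the group-prediction heads.

    task_logit: score from the head trained on the original task.
    group_logits: scores from heads trained to predict group
        membership (e.g. with a squared loss).
    weights: per-head coefficients chosen on a validation set;
        varying them shifts the effective decision threshold per
        group, which is how a confusion-matrix-based fairness
        constraint can be enforced without changing the architecture.
    """
    return task_logit + np.dot(weights, group_logits)

# Toy example: two groups, soft group-membership scores.
task = 0.2
groups = np.array([0.9, 0.1])   # example mostly belongs to group 0
w = np.array([0.5, -0.5])       # boost group 0, suppress group 1
score = combined_score(task, groups, w)
decision = score > 0.0          # 0.2 + 0.45 - 0.05 = 0.6 -> positive
```

Because the surgery is just a linear combination of existing heads, the adjusted model keeps the same forward pass as the original network.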
Cartoon logo of an ox and scales

An example showing how to enforce minimum group recall in computer vision.

New fairness toolkit at #NeurIPS today. This fixes most of the problems I've run into in the field.
It is robust to overfitting, works for #NLP and computer vision, and can enforce any definition of fairness that can be written as a function of a confusion matrix. t.ly/ZpRJ-
How do we do that....

11.12.2024 11:29 · 👍 2  🔁 0  💬 1  📌 0
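A minimal sketch of what "any fairness definition written as a function of a confusion matrix" can mean in practice, here enforcing a minimum recall for one group by sweeping a validation-set threshold (a hypothetical helper, not the toolkit's API):

```python
import numpy as np

def threshold_for_min_recall(scores, labels, target_recall):
    """Highest decision threshold meeting a recall target.

    scores: classifier scores for one group's validation examples.
    labels: binary ground truth for the same examples.
    Recall is a function of the confusion matrix (TP / (TP + FN)),
    so any metric expressible from that matrix can be tuned the
    same way: evaluate it at candidate thresholds and pick one.
    """
    positives = scores[labels == 1]
    # Sweep candidate thresholds from strictest to most permissive.
    for t in sorted(set(scores), reverse=True):
        recall = np.mean(positives >= t)
        if recall >= target_recall:
            return t
    return min(scores)

scores = np.array([0.9, 0.7, 0.4, 0.2, 0.8, 0.1])
labels = np.array([1,   1,   1,   0,   0,   0])
t = threshold_for_min_recall(scores, labels, 0.99)
```

Running this per group, and keeping the most permissive threshold each group needs, is one simple way a minimum-group-recall constraint can be met on validation data.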

Sorry, not this year. Maybe next time.

01.10.2024 13:31 · 👍 0  🔁 0  💬 1  📌 0

This is a common problem with LLMs if the temperature is set to zero. It might just be that these small models need a higher temperature.

01.01.2024 12:13 · 👍 0  🔁 0  💬 0  📌 0
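A small sketch of why temperature zero makes generation deterministic and repetitive (illustrative numpy code with made-up logits; `sample_probs` is a hypothetical name):

```python
import numpy as np

def sample_probs(logits, temperature):
    """Next-token distribution from logits at a given temperature.

    temperature == 0 collapses the distribution onto the argmax
    (greedy decoding: the same prompt always yields the same
    continuation). Higher temperatures flatten the softmax,
    restoring diversity, which small models may need more of.
    """
    if temperature == 0:
        p = np.zeros_like(logits, dtype=float)
        p[np.argmax(logits)] = 1.0      # all mass on one token
        return p
    z = logits / temperature
    z = z - z.max()                     # numerical stability
    e = np.exp(z)
    return e / e.sum()

logits = np.array([2.0, 1.0, 0.5])
greedy = sample_probs(logits, 0.0)      # deterministic: one token
warm = sample_probs(logits, 1.5)        # flatter, still favors token 0
```

With `temperature=0` the model can only ever emit its single most likely token at each step, which is the repetition failure mode described above.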
