
Andrew Strait

@agstrait.bsky.social

UK AI Security Institute. Formerly: Ada Lovelace Institute, Google, DeepMind, OII.

1,387 Followers  |  837 Following  |  96 Posts  |  Joined: 03.08.2023

Latest posts by agstrait.bsky.social on Bluesky


🚨New paper🚨

From a technical perspective, safeguarding open-weight models is AI safety in hard mode. But there's still a lot of progress to be made. Our new paper covers 16 open problems.

🧡🧡🧡

12.11.2025 14:04 — 👍 17    🔁 3    💬 1    📌 2

This is such a cool paper from my UK AISI colleagues. We need more methods for building resistance to malicious tampering with open-weight models. @scasper.bsky.social and team below have offered one for reducing biorisk.

12.08.2025 12:00 — 👍 5    🔁 0    💬 0    📌 0
Chart: quarterly capital expenditures, showing hockey-stick growth in the capex of Amazon, Microsoft, Google and Meta, almost entirely on data centers; in the most recent quarter it was nearly $100 billion collectively.

The AI infrastructure build-out is so gigantic that in the past 6 months, it contributed more to the growth of the U.S. economy than /all of consumer spending/

The 'magnificent 7' spent more than $100 billion on data centers and the like in the past three months *alone*

www.wsj.com/tech/ai/sili...

01.08.2025 12:19 — 👍 826    🔁 322    💬 78    📌 270

Highly recommend for your summer beach reading.

global.oup.com/academic/pro...

24.07.2025 20:51 — 👍 12    🔁 2    💬 0    📌 0

Man, even the brocast community appears to be reading @shannonvallor.bsky.social's book.

24.07.2025 20:47 — 👍 84    🔁 11    💬 2    📌 9

Congrats to @kobihackenburg.bsky.social for producing the largest study of AI persuasion to date. So many fascinating findings. Notable that (a) current models are extremely good at persuasion on political issues and (b) post-training is far more significant than model size or personalisation.

21.07.2025 17:17 — 👍 9    🔁 7    💬 0    📌 0

Massive credit to the lead authors Christopher Summerfield, Lennart Luettgau, Magda Dubois, Hannah Rose Kirk, Kobi Hackenburg, Catherine Fist, Nicola Ding, Rebecca Anselmetti, Coz Ududec, Katarina Slama and Mario Giulianelli.

11.07.2025 14:41 — 👍 0    🔁 0    💬 0    📌 0

Ultimately, we advocate for more rigorous scientific methods. This includes using robust statistical analysis, proper control conditions, and clear theoretical frameworks to ensure the claims made about AI capabilities are credible and well-supported.

11.07.2025 14:22 — 👍 2    🔁 0    💬 1    📌 0

A key recommendation is to be more precise with our language. We caution against using mentalistic terms like 'knows' or 'pretends' to describe model outputs, as it can imply a level of intentionality that may not be warranted by the evidence.

11.07.2025 14:22 — 👍 3    🔁 2    💬 1    📌 0

For example, we look at how some studies use elaborate, fictional prompts to elicit certain behaviours. We question whether the resulting actions truly represent 'scheming' or are a form of complex instruction-following in a highly constrained context.

11.07.2025 14:22 — 👍 2    🔁 0    💬 1    📌 0

We discuss how the field can be susceptible to over-interpreting AI behavior, much like researchers in the past may have over-attributed linguistic abilities to chimps. We critique the reliance on anecdotes and a lack of rigorous controls in some current studies.

11.07.2025 14:22 — 👍 1    🔁 0    💬 1    📌 0

Our paper, 'Lessons from a Chimp,' compares current research into AI scheming with the historical effort to teach language to apes. We argue there are important parallels and cautionary tales to consider.

11.07.2025 14:22 — 👍 1    🔁 0    💬 1    📌 0

Recent studies of AI systems have identified signals that they 'scheme', i.e. covertly and strategically pursue goals misaligned with those of a human user. But are these studies following solid research practice? My colleagues at UK AISI took a look.

arxiv.org/pdf/2507.03409

11.07.2025 14:22 — 👍 4    🔁 2    💬 1    📌 1

Addressing AI-enabled crime will require coordinated policy, technical and operational responses as the technology continues to develop. Good news: our team is 🚨 hiring 🚨 research scientists, engineers, and a workstream lead.

Come join our Criminal Misuse team:

lnkd.in/eS9-Dj5i
lnkd.in/e_dqU6QF

10.07.2025 10:31 — 👍 1    🔁 0    💬 0    📌 0

Our Criminal Misuse team is focussing on three key AI capabilities that are being exploited by criminals:

- Multimodal generation
- Advanced planning and reasoning
- AI agent capabilities

10.07.2025 10:31 — 👍 0    🔁 0    💬 1    📌 0

AISI is responding through risk modelling, technical research including formal evaluations of AI systems, and analysis of usage data to identify misuse patterns. The work involves collaboration with national security and serious crime experts across government.

10.07.2025 10:31 — 👍 0    🔁 0    💬 1    📌 0
How will AI enable the crimes of the future? | AISI Work
How we're working to track and mitigate criminal misuse of AI.

New blog on the growing use of AI in criminal activities, including cybercrime, social engineering and impersonation scams. As AI becomes more widely available through consumer applications and mobile devices, the barriers to criminal misuse will decrease.

www.aisi.gov.uk/work/how-wil...

10.07.2025 10:31 — 👍 3    🔁 0    💬 1    📌 0
A screenshot of the NYT piece on chatbots with the quote: "The chatbot instructed him to give up sleeping pills and an anti-anxiety medication, and to increase his intake of ketamine, a dissociative anesthetic, which ChatGPT described as a 'temporary pattern liberator'."

this is the most dangerous shit I have ever seen sold as a product that wasn't an AR-15

14.06.2025 11:59 — 👍 158    🔁 38    💬 10    📌 2
Two Marines in combat uniforms, carrying guns, are seen detaining a young Black man in a black and white top, wearing sunglasses with AirPods in his ears.

BREAKING: US Marines deployed to Los Angeles have carried out the first known detention of a civilian, the US military confirms.

The detention was confirmed to Reuters after it shared this image with the US military.

13.06.2025 22:01 — 👍 7466    🔁 3267    💬 803    📌 723
Research Engineer - Societal Resilience (London, UK)

For those who prefer this in GenAlpha:

Fr fr it's giving lowkey GOATED research engineering vibes, slaying data pipelines and agent evals, periodt.

job-boards.eu.greenhouse.io/aisi/jobs/46...

09.06.2025 18:17 — 👍 1    🔁 0    💬 1    📌 0

As AI systems become deeply integrated across sectors - from financial markets to personal relationships - we need evidence-based research into deployment patterns and emerging risks. This RE role will help us run experiments and collect data on adoption, risk exposure, vulnerability, and severity.

09.06.2025 18:17 — 👍 2    🔁 0    💬 1    📌 0
Research Engineer - Societal Resilience (London, UK)

We're hiring a Research Engineer for the Societal Resilience team at the AI Security Institute. The role involves building data pipelines, web scraping, ML engineering, and creating simulations to monitor these developments as they happen.

job-boards.eu.greenhouse.io/aisi/jobs/46...

09.06.2025 18:17 — 👍 9    🔁 3    💬 1    📌 0
The boy who came back: the near-death, and changed life, of my son Max
It was, we were told, a case of sudden infant death syndrome interrupted. What followed would transform my understanding of parenting, disability and the breadth of what makes a meaningful life.

I wrote for the Guardian's Saturday magazine about my son Max, who changed how I see the world. Took ages. More jokes after the first bit.

Thanks Merope Mills for being the most patient and generous editor.

www.theguardian.com/lifeandstyle...

24.05.2025 07:51 — 👍 856    🔁 184    💬 127    📌 76
Grants | The AI Security Institute (AISI)
View AISI grants. The AI Security Institute is a directorate of the Department of Science, Innovation, and Technology that facilitates rigorous research to enable advanced AI governance.

Help us build a more resilient future in the age of advanced AI.

Find all the details about our Challenge Fund and Priority Research Areas for societal resilience here:

www.aisi.gov.uk/grants#chall...

#AIChallenge #ResearchFunding

23.05.2025 11:55 — 👍 2    🔁 1    💬 1    📌 0

We're also looking for:
➡️ Deeper studies into societal risk severity, vulnerability & exposure (non-robust systems, scams, overreliance on companion apps, etc.).
➡️ Downstream mitigations for 'defense in depth'.

23.05.2025 11:55 — 👍 1    🔁 0    💬 2    📌 0

We're interested in many kinds of projects, including:

➡️ Adoption & integration studies: How are different sectors & frontline workers really using advanced AI? For what tasks? How often?
➡️ Novel & creative datasets to understand real-world AI usage.

23.05.2025 11:55 — 👍 1    🔁 0    💬 1    📌 0

➡️ Emotional reliance, undue influence & AI addiction
➡️ Infosphere degradation (impacting education, science, journalism)
➡️ Agentic systems causing collusion or cascading failures
➡️ Labour market impacts & displacement
And much more.

23.05.2025 11:55 — 👍 1    🔁 0    💬 1    📌 0

What risks are we focused on? Those causing or exacerbating severe psychological, economic, & physical harm:
➡️ Overreliance on AI in critical infrastructure
➡️ AI enabling fraud or criminal misuse

23.05.2025 11:55 — 👍 1    🔁 0    💬 1    📌 0
Grants | The AI Security Institute (AISI)
View AISI grants. The AI Security Institute is a directorate of the Department of Science, Innovation, and Technology that facilitates rigorous research to enable advanced AI governance.

We're able to offer grants of up to £200k over 12 months.

We've updated our Priority Research Areas for societal resilience. Check out the kinds of research questions we're keen to fund. This list will evolve as new challenges emerge.

www.aisi.gov.uk/grants#chall...

23.05.2025 11:55 — 👍 2    🔁 1    💬 1    📌 0

🚨Funding Klaxon!🚨

Our Societal Resilience team at UK AISI is working to identify, monitor & mitigate societal risks from the deployment of advanced AI systems. But we can't do it alone. If you're tackling similar questions, apply to our Challenge Fund.

#AI #SocietalResilience #Funding

23.05.2025 11:55 — 👍 7    🔁 4    💬 1    📌 0
