My last recommendation is to support the development of evaluations for AI capabilities and risks. The AI Action Plan already includes this, but it should go one step further and consider restricting models that fail to meet industry-standard safety thresholds. (7/7)
28.07.2025 15:13
My second recommendation is to push AI companies to share safety-relevant knowledge with one another and with other relevant stakeholders. This would involve mandating reporting requirements, disclosure of unexpected capabilities in new models, and the sharing of threat intelligence. (6/7)
28.07.2025 15:13
My first recommendation is to require AI companies to adhere to their own risk management plans. Companies like OpenAI and Anthropic have already published frameworks describing their planned risk mitigations, but these need to be made legally binding to have any effect. (5/7)
28.07.2025 15:13
At the same time, AI is advancing too rapidly for government to keep up with traditional regulation. The solution is to promote industry self-regulation: make AI companies figure out the best way to keep their products safe and then make sure they actually follow through. (4/7)
28.07.2025 15:13
It's true that we need to increase AI adoption, but quite simply, people don't want to use things that aren't guaranteed to work! There are still too many hallucinations, security concerns, and other liabilities for companies to feel confident relying on AI for important tasks. (3/7)
28.07.2025 15:13
Trump's AI Action Plan promises to put AI innovation first and safety second, but this is a false dichotomy. In my new piece for The National Interest, I explain why innovation can't happen without safety, and how government can help industry regulate itself. 🧵 (1/7)
28.07.2025 15:13
These standards, if implemented, would go a long way towards mitigating the potential risks of AI and increasing public trust and confidence in using it, allowing us to realize its benefits sooner than we could otherwise. [3/4]
17.03.2025 20:34
Specifically, AISI should develop standards on topics such as model training, pre-release internal & external security testing, cybersecurity practices, if-then commitments, AI risk assessments, and processes for testing and re-testing systems as they change over time. [2/4]
17.03.2025 20:34
Earlier today, @csetgeorgetown.bsky.social published our recommendations for the U.S. AI Action Plan. One recommendation that I personally contributed was that the U.S. government should develop and adopt standards to mitigate risks from AI. What kind of standards? Read below: 🧵 [1/4]
17.03.2025 20:34
AI & emerging tech policy is going to experience rapid development over the next four years. The best way to keep up to date on the latest changes is by following my colleagues at @csetgeorgetown.bsky.social and checking out the CSET Starter Pack: bsky.app/starter-pack...
22.01.2025 23:04
Deputy Director & Fellow @korea.csis.org
- China's Weaponization of Trade (forthcoming, Jan 2026)
- Koreanists starter pack: https://go.bsky.app/U3Nm5FX
- Personal website: andysaulim.com
- Moonlighting at @citylightsofchina.bsky.social
Writing policy.ai for @csetgeorgetown.bsky.social | Mainer
AI Safety and Security. Fellow @ CSET | Georgetown. CS/AI PhD. Nerd.
AI, national security, China. Part of the founding team at @csetgeorgetown.bsky.social (opinions my own). Author of Rising Tide on substack: helentoner.substack.com
Assistant Professor & Faculty Fellow, NYU.
AI Fellow, Georgetown University.
Probabilistic methods for robust and transparent ML & AI Governance.
Prev: Oxford, Yale, UC Berkeley.
https://timrudner.com
Ticket seller, Comedian, Improvisor, Writer, Scoop, Member of the Congregation
Postdoc at Mount Sinai | Neuroscience PhD from CUNY Graduate Center working on drug discovery for Alzheimer's disease | Weill Cornell CTSC TL1 award recipient
A non-profit bringing together academic, civil society, industry, & media organizations to address the most important and difficult questions concerning AI.
White House and National Security Correspondent, The New York Times. CNN contributor. Adjunct lecturer, Kennedy School of Government, Harvard University
Tech lobbying and influence reporter @politico. Let's chat - bbordelon@politico.com.
Pentagon correspondent at The Associated Press
CNN national security correspondent. Send tips/pics of your dog(s) to Natasha.bertrand@cnn.com
Signal: Natashabertrand.12
https://www.cnn.com/profiles/natasha-bertrand-profile
Computational Linguists | Natural Language | Machine Learning
The AI Now Institute produces diagnosis and actionable policy research on artificial intelligence.
Find us at https://ainowinstitute.org/
The Information Society Project at Yale Law School is an intellectual center addressing issues at the intersection of law, tech & society. https://law.yale.edu/isp/community
NYT bestselling author of EMPIRE OF AI: empireofai.com. ai reporter. national magazine award & american humanist media award winner. words in The Atlantic. formerly WSJ, MIT Tech Review, KSJ@MIT. email: http://karendhao.com/contact.