
Mina Narayanan

@minanrn.bsky.social

Research Analyst @CSETGeorgetown | AI governance and safety | Views my own

167 Followers  |  275 Following  |  24 Posts  |  Joined: 26.11.2024

Latest posts by minanrn.bsky.social on Bluesky

Exploring AI legislation in Congress with AGORA: Risks, Harms, and Governance Strategies – Emerging Technology Observatory | Using AGORA to explore AI legislation enacted by U.S. Congress since 2020

In other words, Congress is still in the early days of governing AI but so far seems more focused on understanding and harnessing AI’s potential than addressing its downsides. Make sure to take a deeper dive into our analysis here 🧵6/6 eto.tech/blog/ai-laws...

29.07.2025 18:15 — 👍 1    🔁 0    💬 0    📌 0
Post image

Fewer legislative docs directly tackle risks or undesirable consequences from AI (such as harm to infrastructure) than propose strategies such as government support, convening, or institution-building 🧵5/6

29.07.2025 18:15 — 👍 1    🔁 0    💬 1    📌 0

Very few enactments leverage performance requirements, pilots, new institutions, or other governance strategies that place concrete requirements on AI systems or represent investments in maturing or scaling up AI capabilities 🧵4/6

29.07.2025 18:15 — 👍 1    🔁 0    💬 1    📌 0
Post image

Most of Congress’s 147 enactments focus on commissioning studies of AI systems, assessing their impacts, providing support for AI-related activities, convening stakeholders, & developing additional AI-related governance docs 🧵3/6

29.07.2025 18:15 — 👍 1    🔁 0    💬 1    📌 0

We find that Congress has enacted many AI-related laws & provisions that are focused more on laying the groundwork to harness AI’s potential – often in nat'l sec contexts – than on placing concrete demands on AI or directly tackling its specific, undesirable consequences 🧵2/6

29.07.2025 18:15 — 👍 1    🔁 0    💬 1    📌 0
Post image

Check out the second @csetgeorgetown.bsky.social @emergingtechobs.bsky.social blog from @sonali-sr.bsky.social and me, where we explore the strategies, risks, and harms addressed by AI-related laws enacted by Congress between January 2020 and March 2025 🧵1/6 eto.tech/blog/ai-laws...

29.07.2025 18:15 — 👍 6    🔁 3    💬 1    📌 0
How the White House AI plan helps, and hurts, in the race against China | While one tech advocate called the new plan “a critical component” of efforts to outpace China, another criticized it as a “Silicon Valley wishlist.”

Shared some thoughts last week on the AI Action Plan's recs around shaping state-level AI activity – essentially, the plan's attempt to pressure states to abandon AI restrictions risks hurting U.S. national security www.defenseone.com/technology/2...

29.07.2025 00:30 — 👍 2    🔁 1    💬 0    📌 0

Yesterday's new AI Action Plan has a lot worth discussing!

One interesting aspect is its statement that the federal government should withhold AI-related funding from states with "burdensome AI regulations."

This could be cause for concern.

24.07.2025 18:55 — 👍 5    🔁 3    💬 1    📌 0

Stay tuned for the second blog, which examines the governance strategies, risk-related concepts, and harms covered by this legislation! 🧵3/3

23.07.2025 13:39 — 👍 1    🔁 0    💬 0    📌 0
Post image

We find that, contrary to conventional wisdom, Congress has enacted many AI-related laws and provisions — most of which apply to military and public safety contexts 🧵2/3

23.07.2025 13:39 — 👍 1    🔁 0    💬 1    📌 0
Post image

Check out the first blog in a two-part series from @sonali-sr.bsky.social and me, where we use data from @csetgeorgetown.bsky.social @emergingtechobs.bsky.social AGORA to explore ✨AI-related legislation enacted by Congress between January 2020 and March 2025✨
eto.tech/blog/ai-laws... 🧵1/3

23.07.2025 13:39 — 👍 6    🔁 4    💬 1    📌 0

Check out the latest AGORA roundup from @emergingtechobs.bsky.social, which highlights some overlooked AI provisions in the Big Beautiful Bill!

02.07.2025 19:52 — 👍 0    🔁 0    💬 0    📌 0

The 10-year moratorium on state AI laws would hurt U.S. nat'l security & innovation if enacted. In our piece in @thehill.com, @jessicaji.bsky.social, @vikramvenkatram.bsky.social, & I argue that states support the very infrastructure needed for a vibrant U.S. AI ecosystem
thehill.com/opinion/tech...

19.06.2025 22:33 — 👍 2    🔁 1    💬 0    📌 0

Banning state-level AI regulation is a bad idea!

One crucial reason is that states play a critical role in building AI governance infrastructure.

Check out this new op-ed by @jessicaji.bsky.social, @minanrn.bsky.social, and me on this topic!

thehill.com/opinion/tech...

18.06.2025 18:52 — 👍 7    🔁 5    💬 1    📌 0

Amidst all the discussion about AI safety, how exactly do we figure out whether a model is safe?

There's no perfect method, but safety evaluations are the best tool we have.

That said, different evals answer different questions about a model!

28.05.2025 14:31 — 👍 7    🔁 3    💬 1    📌 0
AI Action Plan Database | A database of recommendations for OSTP's AI Action Plan.

@ifp.bsky.social recently published a searchable database of all AI Action Plan submissions, many of which cover topics that overlap with CSET's submission! Check out CSET's recs here: cset.georgetown.edu/publication/... and compare it to others here: www.aiactionplan.org

19.05.2025 17:14 — 👍 3    🔁 0    💬 0    📌 0
Putting Explainable AI to the Test: A Critical Look at AI Evaluation Approaches | Center for Security and Emerging Technology | Explainability and interpretability are often cited as key characteristics of trustworthy AI systems, but it is unclear how they are evaluated in practice. This report examines how researchers evaluat...

CDS Faculty Fellow @timrudner.bsky.social, with @minanrn.bsky.social & Christian Schoeberl, analyzed AI explainability evals, finding a focus on system correctness over real-world effectiveness. They call for the creation of standards for AI safety evaluations.

cset.georgetown.edu/publication/...

17.04.2025 16:05 — 👍 4    🔁 2    💬 0    📌 0
Trump Should Not Abandon March-In Rights | Moving forward with the Biden administration’s guidance could deliver lower drug prices and allow more Americans to reap the benefits of public science. In late 2023, the federal government published ...

Have you heard of the Bayh-Dole Act? It's niche, but an incredibly important factor in the U.S. innovation ecosystem!

For the National Interest, @jack-corrigan.bsky.social and I discuss a potential change that could benefit public access to medical drugs.

nationalinterest.org/blog/techlan...

28.04.2025 18:08 — 👍 6    🔁 2    💬 0    📌 0

What does the EU's shifting strategy mean for AI?

CSET's @miahoffmann.bsky.social & @ojdaniels.bsky.social have a new piece out for @techpolicypress.bsky.social.

Read it now 👇

10.03.2025 14:17 — 👍 4    🔁 4    💬 0    📌 0
CSET's Recommendations for an AI Action Plan | Center for Security and Emerging Technology | In response to the Office of Science and Technology Policy's request for input on an AI Action Plan, CSET provides key recommendations for advancing AI research, ensuring U.S. competitiveness, and max...

Check out @csetgeorgetown.bsky.social's response to the AI Action Plan RFI! We recommend that the administration support key enablers of U.S. tech prowess, including access to AI talent, foundational AI standards & evaluations, & open markets & research ecosystems cset.georgetown.edu/publication/...

17.03.2025 15:27 — 👍 4    🔁 1    💬 0    📌 0

Check out our paper on the quality of interpretability evaluations of recommender systems:

cset.georgetown.edu/publication/...

Led by @minanrn.bsky.social and Christian Schoeberl!

@csetgeorgetown.bsky.social

19.02.2025 20:45 — 👍 11    🔁 5    💬 0    📌 0

[6/6] Our findings suggest the importance of standards for AI evaluations and a capable workforce to assess the efficacy of these evaluations. If researchers understand & measure facets of AI trustworthiness differently, policies for building trusted AI systems may not work

20.02.2025 19:52 — 👍 0    🔁 0    💬 0    📌 0
Post image

[5/6] Here are the evaluation approaches we identified:

20.02.2025 19:52 — 👍 1    🔁 0    💬 1    📌 0

[4/6] We find that research papers (1) do not clearly differentiate explainability from interpretability, (2) contain combinations of five evaluation approaches, & (3) more often test if systems are built according to design criteria than if systems work in the real world

20.02.2025 19:52 — 👍 0    🔁 0    💬 1    📌 0

[3/6] Our new report examines how researchers evaluate claims about the explainability & interpretability of recommender systems – a type of AI system that often uses explanations

20.02.2025 19:52 — 👍 0    🔁 0    💬 1    📌 0

[2/6] Building trust in AI & understanding why & how AI systems work are key to adopting and building better models

20.02.2025 19:52 — 👍 0    🔁 0    💬 1    📌 0
Putting Explainable AI to the Test: A Critical Look at AI Evaluation Approaches | Center for Security and Emerging Technology | Explainability and interpretability are often cited as key characteristics of trustworthy AI systems, but it is unclear how they are evaluated in practice. This report examines how researchers evaluat...

[1/6] Discourse around AI evaluations has focused a lot on testing LLMs for catastrophic risks. In a new @csetgeorgetown.bsky.social report, Christian Schoeberl, @timrudner.bsky.social, and I explore another side of AI evals: evals of claims about the trustworthiness of AI systems

20.02.2025 19:52 — 👍 8    🔁 2    💬 1    📌 0
Will the Paris artificial intelligence summit set a unified approach to AI governance—or just be another conference? | AI innovations and governments’ preferences can make international consensus on governance at the Paris Summit challenging.

Will the Paris #AIActionSummit set a unified approach to AI governance—or just be another conference?

A new article from @miahoffmann.bsky.social, @minanrn.bsky.social, and @ojdaniels.bsky.social.

06.02.2025 15:47 — 👍 8    🔁 7    💬 0    📌 0
Will the Paris artificial intelligence summit set a unified approach to AI governance—or just be another conference? | AI innovations and governments’ preferences can make international consensus on governance at the Paris Summit challenging.

@miahoffmann.bsky.social , @ojdaniels.bsky.social, and I wrote a piece on key AI governance areas to watch in 2025 with the upcoming AI Action Summit in mind. Check it out here! thebulletin.org/2025/02/will...

07.02.2025 03:00 — 👍 5    🔁 3    💬 0    📌 0
Research Fellow - Applications | Center for Security and Emerging Technology | The Center for Security and Emerging Technology at Georgetown University (CSET) is seeking applications for a Research Fellow to support our Applications Line of Research. This role will analyze topic...

We're hiring 📢

CSET is looking for a Research Fellow to analyze topics related to the development, deployment, and operations of AI & ML tools in the national security space.

Interested or know someone who would be? Learn more and apply 👇 cset.georgetown.edu/job/research...

04.02.2025 18:34 — 👍 3    🔁 3    💬 0    📌 1
