Vikram Venkatram

@vikramvenkatram.bsky.social

Research Analyst at @CSETGeorgetown on the Biotechnology team. Georgetown Center for Security Studies and Georgetown School of Foreign Service alum.

183 Followers  |  366 Following  |  38 Posts  |  Joined: 26.11.2024

Latest posts by vikramvenkatram.bsky.social on Bluesky

CSET's Recommendations for an AI Action Plan | Center for Security and Emerging Technology In response to the Office of Science and Technology Policy's request for input on an AI Action Plan, CSET provides key recommendations for advancing AI research, ensuring U.S. competitiveness, and max...

The plan also promotes and emphasizes the importance of scientific datasets, including biological ones, in line with @csetgeorgetown.bsky.social's recommendations for the plan, which you can read here: cset.georgetown.edu/publication/..., and with other CSET work: cset.georgetown.edu/publication/....

25.07.2025 14:26 | 👍 0  🔁 1  💬 0  📌 0
How to stop bioterrorists from buying dangerous DNA The companies that sell synthesized DNA to scientists need to screen their customers, lest dangerous sequences for pathogens or toxins fall into the wrong hands.

Focusing on bio, one provision is a federal funding requirement for DNA synthesis screening, a useful tool in the toolbox for limiting biological risk.

Check out the piece @stephbatalis.bsky.social and I wrote breaking down the kinds of decisions screeners have to make: thebulletin.org/2025/04/how-...

25.07.2025 14:26 | 👍 1  🔁 1  💬 1  📌 0

More on the recent AI Action Plan! @csetgeorgetown.bsky.social work is very relevant.

25.07.2025 14:26 | 👍 1  🔁 0  💬 1  📌 0

Ultimately, though, a chilling effect on state-driven AI legislation could severely harm innovation by reducing foundational AI governance infrastructure.

The Action Plan's implementation and approach remain to be seen, but it should be careful not to nip useful state regulation in the bud.

24.07.2025 18:55 | 👍 0  🔁 0  💬 0  📌 0

The plan does clarify that restrictions shouldn't interfere with prudent state laws that don't harm innovation.
And it's true that a complex thicket of onerous state laws governing AI could make it harder for AI companies to comply, harming innovation.

24.07.2025 18:55 | 👍 0  🔁 0  💬 1  📌 0

States are better-positioned to pass these laws than the federal government in the current environment.

They can also serve as a sandbox for experimentation and debate, allowing for innovation in governance approaches. The best governance approaches can inspire other states to follow suit.

24.07.2025 18:55 | 👍 0  🔁 0  💬 1  📌 0

State laws provide a critical avenue for building governance infrastructure: things like workforce capacity, information-sharing regimes, standardized protocols, incident reporting, etc.

These help provide clarity for companies and are crucial for innovation.

24.07.2025 18:55 | 👍 0  🔁 0  💬 1  📌 0

A recent @thehill.com piece by @minanrn.bsky.social, @jessicaji.bsky.social, and me introduces the topic of governance infrastructure.

It discusses the recently proposed ban on state AI regulation, which would have gone much further and, thankfully, did not pass.

thehill.com/opinion/tech...

24.07.2025 18:55 | 👍 1  🔁 0  💬 1  📌 0

Yesterday's new AI Action Plan has a lot worth discussing!

One interesting aspect is its statement that the federal government should withhold AI-related funding from states with "burdensome AI regulations."

This could be cause for concern.

24.07.2025 18:55 | 👍 5  🔁 3  💬 1  📌 0

Really timely breakdown of today's big AI Action Plan release, by @csetgeorgetown.bsky.social's own @alexfriedland.bsky.social! Give it a read; I think it's really useful.

23.07.2025 21:11 | 👍 2  🔁 2  💬 0  📌 0

Factors like robust third-party auditing, strong information-sharing incentives, and shared resources and workforce development enhance, rather than reduce, innovation.

As such, we argue that the proposed moratorium would be counterproductive, undermining the very goals it aims to achieve.

18.06.2025 18:52 | 👍 0  🔁 0  💬 0  📌 0

These debates are worth having, but miss a crucial factor: AI governance infrastructure, which states are best-positioned to build.

This infrastructure helps achieve the moratorium's stated goals. It helps developers innovate, strengthens consumer trust, and preserves U.S. national security.

18.06.2025 18:52 | 👍 0  🔁 0  💬 1  📌 0

Proponents of this plan argue that reducing stringent regulations will speed up innovation, and that the federal government should lead in regulating AI anyway.

Opponents cite congressional gridlock, partisanship, and the lack of meaningful tech regulation as proof that state laws are needed.

18.06.2025 18:52 | 👍 0  🔁 0  💬 1  📌 0

The recent reconciliation bill, which passed the House and will face a Senate vote soon, would place a 10-year moratorium on state-level AI regulation.

Whether this is a good idea has been hotly debated.

18.06.2025 18:52 | 👍 0  🔁 0  💬 1  📌 0

Banning state-level AI regulation is a bad idea!

One crucial reason is that states play a critical role in building AI governance infrastructure.

Check out this new op-ed by @jessicaji.bsky.social, @minanrn.bsky.social, and me on this topic!

thehill.com/opinion/tech...

18.06.2025 18:52 | 👍 7  🔁 5  💬 1  📌 0

It's heartbreaking to see people dying from preventable disease.

AMR is a global problem, and people die from it everywhere. But as with many other problems, it affects the poor most harshly.

As a global community, we must fund more AMR research, and find ways to get drugs to those in need.

02.06.2025 13:24 | 👍 1  🔁 0  💬 0  📌 0
The Antimicrobial Resistance Research Landscape and Emerging Solutions | Center for Security and Emerging Technology Antimicrobial resistance (AMR) is one of the world's most pressing global health threats. Basic research is the first step towards identifying solutions. This brief examines the AMR research landscape...

Some of my work at CSET has focused on this last problem: examining the global research landscape. Over the last few decades, very few new antimicrobial drugs have been discovered, and even fewer have been innovative.

cset.georgetown.edu/publication/...

02.06.2025 13:24 | 👍 1  🔁 0  💬 1  📌 0

AMR is a multi-pronged issue. Accessibility (ensuring that all people who need antimicrobial drugs can use them), stewardship (ensuring the proper prescription and use of the drugs), and R&D (developing new drugs to fix a thin global pipeline of new ones) are all key.

02.06.2025 13:24 | 👍 1  🔁 0  💬 1  📌 0

The study focuses on carbapenem-resistant Gram-negative bacterial infections in 2019, finding that in the eight LMICs analyzed, only 6-9% of infections were treated properly.

These are treatable infections, but without access to the right antibiotics, they kill.

02.06.2025 13:24 | 👍 1  🔁 0  💬 1  📌 0

Antimicrobial resistance is a huge issue and an oft-forgotten killer. It kills more people each year than HIV/AIDS or malaria.

This article is fascinating: it points out that while much of the AMR prevention discussion focuses on overuse of antimicrobials, underuse can also be a major issue.

02.06.2025 13:24 | 👍 8  🔁 2  💬 1  📌 0

"Red-teaming" isn't a catch-all term (or methodology!) to evaluate AI safety. So, what else do we have in the toolbox?

In our recent blog post, we explore the different questions we can ask about safety, how we can start to measure them, and what it means for AIxBio. Check it out! ⬇️

28.05.2025 15:03 | 👍 5  🔁 3  💬 0  📌 0
AI Safety Evaluations: An Explainer | Center for Security and Emerging Technology Effectively evaluating AI models is more crucial than ever. But how do AI evaluations actually work? Our explainer lays out the different fundamental types of AI safety evaluations alongside their res...

Understanding the strengths and limitations of different evaluations, and avoiding under- or overstating their results, will be crucial as we navigate an evolving AI safety landscape.

Read the blog post here!
cset.georgetown.edu/article/ai-s...

28.05.2025 14:31 | 👍 2  🔁 0  💬 0  📌 0

AI safety evaluations fall into two fundamental categories: model safety evals and contextual safety evals.

The former evaluate just the model's output, in a vacuum. The latter test how models perform in a real-world context or use case.

28.05.2025 14:31 | 👍 2  🔁 0  💬 1  📌 0

Looking to understand how safety evals work, how different evals differ, and what they do and don't tell us?

Check out this new @csetgeorgetown.bsky.social blog post by @jessicaji.bsky.social, @stephbatalis.bsky.social, and me breaking down different types of AI safety evaluations!

28.05.2025 14:31 | 👍 1  🔁 0  💬 1  📌 0

Amidst all the discussion about AI safety, how exactly do we figure out whether a model is safe?

There's no perfect method, but safety evaluations are the best tool we have.

That said, different evals answer different questions about a model!

28.05.2025 14:31 | 👍 7  🔁 3  💬 1  📌 0

News like this isn't just a concern for public health practitioners - it should also be a big red flag for U.S. national security folks.

America's biodefense strategy uses robust health infrastructure to deter bad actors. Right now, we're tearing down our own defenses so adversaries don't have to.

16.05.2025 15:46 | 👍 2  🔁 1  💬 1  📌 0
America's response to measles is eroding its ability to deter biological attacks The rising death toll for a preventable disease reveals just how ill-prepared the country is to handle a malicious bioweapon.

🚨 Latest op-ed is out in Defense One!

"Dismantling critical preparedness offices, cutting infrastructure and funding, and allowing misinformation to derail the response are not just bad for healthcare - they're dangerous national security signals."

www.defenseone.com/ideas/2025/0...

01.05.2025 19:06 | 👍 4  🔁 3  💬 1  📌 1
Trump Should Not Abandon March-In Rights Moving forward with the Biden administration's guidance could deliver lower drug prices and allow more Americans to reap the benefits of public science. In late 2023, the federal government published ...

Have you heard of the Bayh-Dole Act? It's niche, but an incredibly important factor in the U.S. innovation ecosystem!

For the National Interest, @jack-corrigan.bsky.social and I discuss a potential change that could benefit public access to medical drugs.

nationalinterest.org/blog/techlan...

28.04.2025 18:08 | 👍 6  🔁 2  💬 0  📌 0

The choices these companies have to make are trickier than you might think!

For sensitive orders, some customers are clearly unqualified while others clearly have the needed expertise to use the DNA safely.

But to deal with the gray areas in-between, synthesis companies need clear guidelines.

07.04.2025 20:04 | 👍 1  🔁 0  💬 0  📌 0
How to stop bioterrorists from buying dangerous DNA The companies that sell synthesized DNA to scientists need to screen their customers, lest dangerous sequences for pathogens or toxins fall into the wrong hands.

For @thebulletin.org, @stephbatalis.bsky.social and I break down the choices that DNA synthesis companies have to make when they screen customers: deciding how much expertise is enough.

If you were in their shoes, where would you draw the line?

Check it out here!

thebulletin.org/2025/04/how-...

07.04.2025 20:04 | 👍 8  🔁 2  💬 1  📌 0