Any autocrat in the world can build a stupid killing or spying machine, writes Laura MacCleery. To escape a dystopian future, we must not strip moral frameworks from a technology we are—perhaps unwisely—designing to make decisions for us, she says.
Utility bills are far from the only issue driving resistance to hyperscale data center projects across the country.
"Overwhelmingly, the energy infrastructure powering the tech industry's rapid build-out of power-hungry data centers is dirty." - @ruddock.bsky.social via @techpolicypress.bsky.social
Last Wednesday, US tech companies gathered at the White House to sign a nonbinding, unenforceable pledge to offset data center energy costs. But energy bills are just the tip of the iceberg when it comes to the costs of the hyperscale data centers people see in their backyards, writes Jenna Ruddock.
Why was the Department of Defense willing to accept the same conditions from OpenAI that it rejected from Anthropic? If OpenAI’s agreement truly includes more meaningful safeguards, more transparency is needed to build public trust, writes Tech Policy Press fellow Jake Laperruque.
Nepal's Gen Z used TikTok, Discord & Reddit to flip their country’s political system, as rapper-turned-politician Balen Shah crushed the 6-time incumbent in last week's election. Aaradhyaa Gyawali offers an account of how events played out, and what it says about the role of technology in elections.
The European Union is scaling up defense investment and technological capacity as AI continues to become central to the EU’s vision of strategic autonomy, reports Raluca Besliu. But who decides where the line falls between civilian AI and military use, especially when regulation is lacking?
The FTC has cast age verification as an essential shield against harmful online content, effectively carving out a quiet safe harbor for tech platforms. But its solution raises urgent questions about whether it is trading one risk to children for another, Danai Nhando writes.
The framing around limiting the US military’s use of generative AI for domestic surveillance should unsettle the rest of the world, writes Kristina Irion. Countries seem woefully unprepared to respond to AI-enabled mass surveillance by another state.
Portable benefits seem like an attractive idea to address gig labor concerns, but they might not help much with basic algorithmic transparency issues, such as when a worker loses their access to work overnight through automated processes, Nakul Nagaraj writes.
Trump has invoked Iran’s past attempts to influence US elections as a justification for war. But his administration has decimated CISA and stopped the FBI from investigating foreign election interference, writes Paul M. Barrett. Is Trump building a case to intervene in future elections?
Railway and utility monopolies of the Gilded Age required decades of regulatory correction, with lasting economic damage in between. Space infrastructure is consolidating now. Janet Vertesi says we need to think outside existing competition regulation to avoid undemocratic concentrations of power.
February roundup on tech litigation from Tech Justice Law Project’s Madeline Batt. This month covers an Arizona jury holding Uber liable for a passenger’s sexual assault and New Mexico’s Attorney General taking Meta to trial, alleging its platforms are enabling child sexual exploitation.
Some of the most effective disinformation campaigns today unfold inside private messaging platforms, write Katharina Zuegel and Mariana Olaizola Rosenblat. In a new report, they lay out recommendations to address the problem without undermining encryption.
The EU is ramping up defense AI investment, while much of it sits outside the bloc’s flagship AI law, Raluca Besliu reports. As military and civilian systems increasingly overlap, a regulatory gap is emerging over how dual-use AI will be governed in Europe.
Podcast! Justin Hendrix spoke to Electronic Frontier Foundation executive director Cindy Cohn. Her new book, Privacy’s Defender: My Thirty-Year Fight Against Digital Surveillance, weaves her journey with the legal battles she's fought on behalf of whistleblowers, researchers, and everyday people.
Malicious AI agent extensions are already appearing in public marketplaces. The real risk isn’t just malware—it’s missing accountability. If a marketplace doesn’t have minimum standards to protect users, “it is not ready to distribute extensions that operate at scale,” argues Kostakis Bouzoukas.
#Billionaires aren't going to #space because they love #scifi. This is a game of Monopoly, and they are playing to Own the Pipes.
my latest for @techpolicypress.bsky.social on space-based economic capture
The Trump administration’s escalating campaign in Iran marks the beginning of America’s first war in the age of large language models. These events make clear that those who work on AI safety must confront the limits of so-called “alignment to human values,” writes Eryk Salvaggio.
For politically active billionaires and their allies in Washington, social media is becoming an instrument of political power, writes Paddy Leerssen. Broadly, a new regulatory paradigm for content moderation is emerging: the EU writes laws, the US buys shares.
The expanding war in Iran brought to the fore questions about the role of technology in armed conflict, including the controversial use of new artificial intelligence technologies. Tech Policy Press invited perspectives from experts on what they are watching for as the situation unfolds.
The Children’s Online Privacy Protection Act (COPPA) was designed to protect personal information collected from children, but the act of verifying a child’s age generates a data trail before any protection kicks in, writes Danai Nhando. Who protects the data being collected to protect children?
OpenAI says its Defense Department contract includes “red lines” on mass domestic surveillance and autonomous weapons. But the newly released terms still leave key questions about surveillance, privacy, and how these safeguards would work in practice, argues Tech Policy Press Fellow Jake Laperruque.