#EUIUpFront: @streinz.bsky.social and Anna Pisarkiewicz discuss what Europe's digital sovereignty means beyond laws.
@eui-law.bsky.social @eui-schuman.bsky.social
🎥 loom.ly/7qpznT4
@streinz.bsky.social
Prof of Law & Regulatory Theory @eui-law.bsky.social @eui-schuman.bsky.social @eui-eu.bsky.social 📚 Researching European/global data/tech law & governance
Here's the full list of organisations Trump is withdrawing from. Just to highlight one: Trump is withdrawing from the ILC, showing the US's disengagement from international law. www.whitehouse.gov/presidential...
08.01.2026 02:50 — 👍 25 🔁 13 💬 3 📌 2
We're withdrawing from the International Law Commission??? This whole list is absolutely insane.
www.whitehouse.gov/presidential...
The EU readies tougher tech enforcement in 2026 as Trump warns of retaliation, but how does this New Year's resolution square with the EU digital omnibus dismantling the digital rulebook? @teresaribera.ec.europa.eu @bmoens.bsky.social @digitalmario.bsky.social giftarticle.ft.com/giftarticle/...
04.01.2026 12:06 — 👍 27 🔁 11 💬 0 📌 2
Abstract: Under the banner of progress, products have been uncritically adopted or even imposed on users — in past centuries with tobacco and combustion engines, and in the 21st with social media. For these collective blunders, we now regret our involvement or apathy as scientists, and society struggles to put the genie back in the bottle. Currently, we are similarly entangled with artificial intelligence (AI) technology. For example, software updates are rolled out seamlessly and non-consensually, Microsoft Office is bundled with chatbots, and we, our students, and our employers have had no say, as it is not considered a valid position to reject AI technologies in our teaching and research. This is why in June 2025, we co-authored an Open Letter calling on our employers to reverse and rethink their stance on uncritically adopting AI technologies. In this position piece, we expound on why universities must take their role seriously to a) counter the technology industry's marketing, hype, and harm; and to b) safeguard higher education, critical thinking, expertise, academic freedom, and scientific integrity. We include pointers to relevant work to further inform our colleagues.
Figure 1. A cartoon set-theoretic view of various terms (see Table 1) used when discussing the superset AI (black outline, hatched background): LLMs are in orange; ANNs are in magenta; generative models are in blue; and finally, chatbots are in green. Where these intersect, the colours reflect that: e.g. generative adversarial network (GAN) and Boltzmann machine (BM) models are in the purple subset because they are both generative and ANNs. In the case of proprietary closed-source models, e.g. OpenAI's ChatGPT and Apple's Siri, we cannot verify their implementation and so academics can only make educated guesses (cf. Dingemanse 2025). Undefined terms used above: BERT (Devlin et al. 2019); AlexNet (Krizhevsky et al. 2017); A.L.I.C.E. (Wallace 2009); ELIZA (Weizenbaum 1966); Jabberwacky (Twist 2003); linear discriminant analysis (LDA); quadratic discriminant analysis (QDA).
Table 1. Below some of the typical terminological disarray is untangled. Importantly, none of these terms are orthogonal nor do they exclusively pick out the types of products we may wish to critique or proscribe.
Protecting the Ecosystem of Human Knowledge: Five Principles
Finally! 🤩 Our position piece: Against the Uncritical Adoption of 'AI' Technologies in Academia:
doi.org/10.5281/zeno...
We unpick the tech industry's marketing, hype, & harm; and we argue for safeguarding higher education, critical thinking, expertise, academic freedom, & scientific integrity.
1/n
Has anyone done good work on privacy-preserving social graph portability? So that you could, say, bring your Insta graph over to Bluesky without revealing info to either side about what your graph is?
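One family of techniques sometimes proposed for questions like this is private set intersection (PSI): each side blinds its contact list so that only shared entries can be matched, and neither side sees the other's full graph. Below is a minimal Diffie-Hellman-style sketch in Python, purely for illustration; the names (`Party`, `hash_to_group`), the example handles, and the toy group parameters are my own assumptions, not an existing library or any protocol Bluesky or Instagram actually supports.

```python
import hashlib
import secrets

# Toy modulus: 2**521 - 1 is a Mersenne prime. A real deployment would use an
# elliptic-curve group plus padding, rate limits, and abuse protections.
P = 2**521 - 1

def hash_to_group(handle: str) -> int:
    """Hash a handle into the multiplicative group mod P (illustrative only)."""
    digest = hashlib.sha512(handle.encode("utf-8")).digest()
    return pow(int.from_bytes(digest, "big"), 2, P) or 2  # avoid the zero element

class Party:
    """Holds one side's follow list and a private blinding exponent."""
    def __init__(self, handles):
        self.handles = list(handles)
        self.secret = secrets.randbelow(P - 2) + 1  # private exponent

    def blind_own(self):
        # Send H(x)^a mod P for each of my handles; unreadable without my exponent.
        return [pow(hash_to_group(x), self.secret, P) for x in self.handles]

    def blind_other(self, blinded):
        # Raise the other side's blinded items to my exponent: H(x)^(a*b) mod P.
        return [pow(v, self.secret, P) for v in blinded]

# Usage: double-blinded values match iff the underlying handles match, so the
# parties learn only the membership of the intersection, not each other's lists.
insta_side = Party(["@carol", "@dave", "@erin"])   # hypothetical follow lists
bsky_side = Party(["@dave", "@frank"])

double_a = set(bsky_side.blind_other(insta_side.blind_own()))
double_b = set(insta_side.blind_other(bsky_side.blind_own()))
print(len(double_a & double_b), "shared contact(s)")  # -> 1
```

Even with PSI, metadata such as set sizes, identifier formats across platforms, and query timing still leaks, which is part of why the question above remains genuinely open.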
09.09.2025 13:57 — 👍 11 🔁 2 💬 3 📌 0
Watch the EU's flagship speech on one of four US tech platforms, three of which are actively trying to torch the EU's digital regulations.
Oh and EBS.
On First View: Pietro Ortolani, @cgoanta.bsky.social, @sarahvahed.bsky.social, and Alan Sanfey test some behavioural assumptions in the DSA. People are strange when you're a stranger.
www.cambridge.org/core/journal...
🇪🇺 For today’s #SOTEU to succeed, @vonderleyen.ec.europa.eu must go beyond Draghi’s recipe. Address citizens’ real concerns—housing, cost of living—while making EU more independent: bolder stance on US/Russia/China, Ukraine peace talks, suspend EU-Israel ties www.reuters.com/business/von...
10.09.2025 06:25 — 👍 3 🔁 1 💬 1 📌 1
Last year, Colorado signed a first-of-its-kind AI measure into law, but the state recently held a special session where lawmakers engaged in frenzied negotiations over whether to expand or dilute its protections. Tech Policy Press spoke to two Colorado Sun reporters closely tracking the talks.
01.09.2025 18:15 — 👍 4 🔁 2 💬 0 📌 0
In German: I have written a short piece about how the EU could build social media as a public infrastructure: www.sz-dossier.de/gastbeitraeg...
29.08.2025 07:33 — 👍 22 🔁 5 💬 0 📌 1
The Bundesnetzagentur is set to take over a large share of AI oversight. Around it, however, a mosaic of further competences is planned. This emerges from the Digital Ministry's draft bill, which we are publishing.
netzpolitik.org/2025/referen...
1/Great conversation w/ Daniel Kurtz-Phelan @foreignaffairs.com on the weaponization of the world economy. As economic weapons proliferate, is the US or the world ready? Not if you decapitate the expertise needed to navigate this new world.
www.foreignaffairs.com/podcasts/ris...
New paper on a new topic in @policyr.bsky.social
I analyse the EU's AI Act and its place in the global governance of AI.
Examining the economy of AI + the contents of the Act, I conclude that expecting a ‘Brussels effect’ is neither apt nor useful.
….
policyreview.info/articles/ana...
Remember: This is not about regulation. It is about the future of democracy and national security in Europe. Viewing it as a deal/regulatory tradeoff will not end well.
www.ft.com/content/5820...
🌍 Looking for summer reading?
Explore our Citizenship Lit Database for the latest research ⚡
From non-resident citizens' vote choice in plebiscites, to refugees' paths to naturalisation in Germany, citizenship trials in India, postcolonial legacies in the Caribbean & more
📖: tinyurl.com/5n7pr623
Abstract: Eventually, many scholarly approaches that mix a descriptive programme with a normative agenda face a delicate question: what if the thing written about for years does not (fully) transpire? Today, public law is a crucial discursive reference point in international politics, and global constitutionalism is a widely known scholarly approach thereof. However, despite prevalent talk about constitutional principles in global governance, aspirations of a liberal world order gave way to confrontation, tribalism, and cynicism. Simply put, the promise of global constitutionalism has not fully materialized. So, what can digital constitutionalism, a new scholarly approach that mixes description with desire, learn from global constitutionalism? Apart from several already learned fundamentals, this chapter focuses on what digital constitutionalism might learn from global constitutionalism's current moment of introspection. Concretely, the chapter presents four lessons, all of which build on critiques of global constitutionalism and which may, in the future, be relevant for digital constitutionalism as well. These lessons are: a stronger empirical footing, a broader focus that includes administrative dimensions, fewer constitutional metaphors, and more disciplinary and scholarly diversity.
Digital Constitutionalism has become a widespread analytical framework and linguistic toolbox for studying digitization, particularly in Europe.
In the recently published Oxford Handbook on Digital Constitutionalism, I reflect on some of its discursive continuities with...
A paperback copy of the book Ways of Being by James Bridle.
The most inspiring read of my summer. James Bridle's Ways of Being is a beautiful and thought-provoking essay on the entanglement of humans, nature and technologies, and ideas about what intelligence is and how to build “smart” machines.
#STS #AI
A photo of a night market. Text: Open House New York / Scavenger Hunts
Just want to give props to Open House New York, which has been organizing themed city-wide scavenger hunts — focusing on water infrastructure, public works, and libraries — for the past several years!
ohny.org/project/scav...
Incredible scenes as the University of Warwick refuses to answer an FOI on the cost of an advertorial photo feature on their chief marketing officer in Vogue Singapore because it is a 'trade secret' www.whatdotheyknow.com/request/cost...
24.08.2025 11:35 — 👍 32 🔁 24 💬 4 📌 3
Questions to ask yourself
As GenAI poses to be a revolutionary tool that can change the academic space and beyond, it is important for you to understand why and how you intend to use these new, powerful tools. These are a few questions to consider and note that the answers to these questions will vary for each person.
• Is using a GenAI-based tool helping me learn more and think better?
• Is using a GenAI-based tool enabling or hindering my mastery of the stated course objectives?
• Is the content I generate accurate and verifiable? Is it free of biases that might harm other groups of society?
• How will I treat content that might have been generated using a GenAI-based tool?
• Is using a GenAI-based tool equitable to my peers in my course?
• How can my actions in using a GenAI-based tool lead to the greater good of society?
Understand that your usage of GenAI-based tools can give you the means to better not just yourself, but also society as a whole, and there is an ethical responsibility towards doing so.
The University of Michigan is now claiming that students have an ‘ethical responsibility’ to use AI.
20.08.2025 12:40 — 👍 547 🔁 173 💬 140 📌 431
Any EU media policy nerds flying this sky?
22.08.2025 08:57 — 👍 2 🔁 1 💬 0 📌 0
📱@eui-law.bsky.social researcher Michael Fitzgerald unpacks the gap between EU tech regulations and how platforms actually respond.
By analysing cases involving Facebook and YouTube, his research shows why regulating Big Tech is so complex & where solutions might lie.
⚖️ https://loom.ly/VXbI9X0
STS. You’re talking about STS. Small but mighty. Also, a lot of modern media studies.
19.08.2025 13:18 — 👍 5 🔁 1 💬 0 📌 0
Convenience AI
Sabina Leonelli & Alexander Martin Mussgnug
Abstract: This paper considers the mundane ways in which AI is being incorporated into scientific practice today, and particularly the extent to which AI is used to automate tasks perceived to be boring, "mere routine" and inconvenient to researchers. We label such uses as instances of "Convenience AI" — that is, situations where AI is applied with the primary intention to increase speed and minimize human effort. We outline how attributions of convenience to AI applications involve three key characteristics: (i) an emphasis on speed and ease of action, (ii) a comparative element, as well as (iii) a subject-dependent and subjective quality. Using examples from medical science and development economics, we highlight epistemic benefits, complications, and drawbacks of Convenience AI along these three dimensions. While the pursuit of convenience through AI can save precious time and resources as well as give rise to novel forms of inquiry, our analysis underscores how the uncritical adoption of Convenience AI for the sake of shortcutting human labour may also weaken the evidential foundations of science and generate inertia in how research is planned, set up and conducted, with potentially damaging implications for the knowledge being produced. Critically, we argue that the consistent association of Convenience AI with the goals of productivity, efficiency, and ease, as often promoted also by companies targeting the research market for AI applications, can lower critical scrutiny of research processes and shift focus away from appreciating their broader epistemic and social implications.
5. Today I read a paper by @sabinaleonelli.bsky.social and Alexander Mussgnug that I think illustrates this point perfectly.
philsci-archive.pitt.edu/24891/1/Phil...
1. The philosophy of science sometimes gets an unearned reputation as a purely academic exercise that offers little by way of concrete tools for advancing research.
This is wrong.
And today, as we grapple with how AI is changing the nature of scientific activity, it's desperately wrong.
A thread on transcription and AI. One reason I don't use AI to transcribe (besides the fact that AI is hugely detrimental to the environment & is coming for our jobs) is that it can't tell the difference between original documents & new notes added by staff members to whom documents were sent. 🗃️ 1/?
18.08.2025 01:45 — 👍 191 🔁 62 💬 7 📌 7
"Critical data would become inaccessible, websites would go dark, and essential state services like hospital IT systems would be thrown into chaos," says Robin Berjon, a digital governance specialist who advises EU policymakers.
www.bbc.com/news/article...
1/Just back from Europe and exactly this is the problem -- no strategy. Europe keeps letting the US set the terms on Ukraine/trade. Without a clear alternative and without changing facts on the ground, Europe will keep getting rolled.
www.nytimes.com/2025/08/16/w...
Text "United States Department of Agriculture. Transporting Watermelons in Bulk and Bin by Truck." Illustration of a semi truck with a watermelon as its trailer.
All reports in this thread are from the collections of Northwestern University's Transportation Library. Materials we've digitized can generally be found in HathiTrust. Learn more and search our catalog here: www.library.northwestern.edu/libraries-co...
29.01.2024 16:12 — 👍 1003 🔁 265 💬 30 📌 111