
Internet Exchange

@index.internet.exchangepoint.tech.ap.brid.gy

Feminist perspectives on digital justice and tech [bridged from https://internet.exchangepoint.tech/ on the fediverse by https://fed.brid.gy/ ]

11 Followers  |  0 Following  |  46 Posts  |  Joined: 25.03.2025

Latest posts by index.internet.exchangepoint.tech.ap.brid.gy on Bluesky

Is It True Nobody Reads the T&Cs?

_By Lucy Perdon, adapted for the IX audience from her original Substack article._

In 2014, a group of security researchers set up a free Wi-Fi hotspot in and around busy London stations to conduct a novel experiment. Before connecting to the hotspot, members of the public had to accept the Terms and Conditions (T&Cs) for using the service. In return for free Wi-Fi, the user agreed to assign their first-born child to the Wi-Fi provider “for the duration of eternity”. Referred to in the T&Cs as the “_Herod Clause_”, it was a stunt to highlight the lack of awareness of public Wi-Fi security issues, and the fact that nobody reads the small print.

Also known as Terms of Service, Terms of Use, User Agreements or Service Agreements, it is true that most are a case of clicked-accept-but-didn’t-read. According to a _2024 Ofcom survey_, between half and two-thirds of users reported signing up to online platforms without reading the T&Cs. No wonder, with Microsoft’s combined _T&Cs_ and _privacy policy_ in Europe clocking in at a hefty 22,360 words, estimated to take at least two hours to read.

Not only are they lengthy, they’re also complex. In 2018, the BBC undertook a _readability test_ of the T&Cs and privacy policies of 15 popular websites and found they were written at a university reading level and were more complicated than Charles Dickens’ _A Tale of Two Cities_. This is a problem, especially given that some of these websites allow users as young as 13.

### An Ethical AND a Legal Mess

This alone could be a breach of data protection rules, which require a clear explanation of how companies are using data.
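As an aside on the readability point: reading-level tests like the one the BBC ran are usually variants of the Flesch-Kincaid grade formula, which scores text by average sentence length and syllables per word. A minimal sketch in Python, using a crude vowel-group syllable heuristic (the BBC's exact methodology isn't specified here, so this is illustrative only):

```python
import re

def flesch_kincaid_grade(text: str) -> float:
    """Estimate the U.S. school grade level needed to read `text`.

    Standard FK formula: 0.39*(words/sentence) + 11.8*(syllables/word) - 15.59.
    Syllables are approximated by counting runs of vowels, which is rough
    but good enough to show why T&Cs score at 'university level'.
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)

    def syllables(word: str) -> int:
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    total_syllables = sum(syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * total_syllables / len(words)
            - 15.59)
```

Legalese scores far higher than plain language: a short declarative sentence lands around grade school level, while a clause full of "notwithstanding" and "in perpetuity" lands well past undergraduate reading level.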
Article 12 of the GDPR says that communications to individuals about their data must be presented in a “concise, transparent, intelligible and easily accessible form, using clear and plain language, in particular for any information addressed specifically to a child”. Expectations are further outlined by the UK Information Commissioner’s Office (ICO) _Age Appropriate Design Code_, likely to be updated in light of increased protection for children under the _Data (Use and Access) Act 2025_, which came into force on June 19th.

There have been attempts to address overly complex T&Cs, and contracts generally, in order to protect consumers from signing up to something they don’t understand. The UK Consumer Rights Act 2015 includes a _transparency requirement_, mandating that the terms of consumer contracts are expressed in plain language. On the government side, the US Plain Writing Act of 2010 required federal agencies to communicate in clear language and mandated that all staff undergo “plain language” training. The goal was to ensure government documents are easy to read, enhancing public understanding and trust in institutions.

### So Who Does Read the T&Cs?

Eagle-eyed readers of T&Cs are likely digital rights advocates and journalists, mainly looking out for what data is being collected and with whom it is shared, which has led to some memorable discoveries. When the first smart TVs with voice recognition went on sale in 2015, Samsung’s policies warned users: “If your spoken words include personal or other sensitive information, that information will be among the data captured and transmitted to a third party.” This was interpreted by many to read “IT’S OUT OF CONTROL, RUN FOR YOUR LIVES.” _Warning people_ not to have sensitive conversations in front of the TV gave the impression that Samsung was not fully in control of its own creation.

A good catch by journalist Samantha Cole in 2022 forced the period tracking app _Stardust_ to change its privacy policy.
At a time of _heightened tension_ and increased reproductive surveillance just after Roe v. Wade was overturned, when women feared their menstrual data could be used to investigate or prosecute abortions, the company seemed happy to hand over data on periods to law enforcement without a warrant. Being tone deaf to your customer base is a theme we will return to. And in 2023, the _Mozilla Foundation_ found in the small print of some connected car policies that carmakers had gone full voyeur and collected information about the driver’s sex life, taking all the fun out of back-seat canoodling.

### In the AI Era, T&Cs Are Receiving Renewed Scrutiny

Now, with the majority of tech companies figuring out how to capitalise on the vast amount of data collected over many years and pivot their business models to cash in on the AI goldrush, updates to T&Cs reveal clues about future business plans and test the limits of what users are willing to accept. On July 1st, WeTransfer notified customers about an update to its T&Cs regarding licensing, which included utilizing user content for possible new technologies such as “_machine learning models that enhance our content moderation process_”. This was widely interpreted to mean that WeTransfer could capture any content transferred over the WeTransfer platform and use it to train its AI models. For free, as the clause also specified there would be no compensation for this.

WeTransfer is a platform that allows very large digital files to be sent quickly and is used by the creative industries to send design and illustration files, photos and videos. The creative industries are VERY sensitive to their work being used to train AI models, and this update came across as tone deaf to the concerns of WeTransfer's core customer base. The backlash was instant after the offending clause was shared widely on _social media_, with users making it clear this was not OK and that they would be shuttering their accounts.
WeTransfer apologized and amended its policy to be clearer, _saying_:

> _“Such a feature [machine learning for content moderation] hasn’t been built or used in practice, but it was under consideration for the future. To avoid confusion, we’ve removed this reference.”_

Online conferencing platform Zoom tried something similar back in 2023 when it updated its T&Cs to allow the company to use customer data to train AI models “with no opt out”, and rolled it back after intense _customer backlash_.

A misunderstanding when T&Cs and privacy policies are written in language inaccessible to the average user is one thing, but are companies legally allowed to use the data they have collected to train AI models? The implications for copyright are currently being played out in court in both the US and the UK. Once again, data protection is having to do some of the heavy lifting, where the use of data for training AI hinges on consent. Recent examples show users are not clear what they are consenting to, and they don’t like what they see. The flood of AI tools coming to market, particularly agents, will demand access to a wide range of personal information and data points, and users need to be crystal clear about what this means. The ICO has already had a word with Microsoft regarding the “Recall” tool embedded into Co-Pilot after the _“privacy nightmare”_ of it taking screenshots of a user’s desktop every few seconds.

### Tick the Box to Do Better

Does it have to be like this? Volunteer projects like _“Terms of Service; Didn't Read”_ (ToS;DR) provide legal analysis of T&Cs and privacy policies and assign a score. _Ranking Digital Rights_ provides an annual index and analysis of the transparency and trends of the world's largest platforms. But it is on the companies themselves to commit to using plain language and to stop hiding controversial elements of the business in the T&Cs on the presumption that it provides legal cover and no one will notice anyway.
Legal protection for a wide range of liabilities relating to a wide range of services does not lend itself to simplicity in online platforms’ T&Cs. But lengthy and complex T&Cs can lead to confusion and misunderstanding among users, which is ultimately bad for business. Investing in co-creation of terms of service, where users play a role from beginning to end and those most impacted are centered, could go some way to improving understanding and trust. We are still in a space of resistance, figuring out what AI means in our lives: what it will give and what it will take. Giving users a raw deal might work in the short term, but could backfire in the longer term as competition increases. Time to tick that box to accept we can do better.

* * *

### IEEE Just Standardised Inclusive Language 🎉 🥳

IEEE SA has published a new standard, IEEE 3400-2025, aimed at improving the use of inclusive language in its technical documents. It covers everything from policies and procedures to code and machine-readable formats. The standard provides guidance for identifying and replacing outdated or non-inclusive terms, along with a list of preferred alternatives. Created by the Inclusive Language Working Group, it’s part of a broader push to make technical communication clearer and more respectful.

**Support the Internet Exchange**

If you find our emails useful, consider becoming a paid subscriber! You'll get access to our members-only Signal community where we share ideas, discuss upcoming topics, and exchange links. Paid subscribers can also leave comments on posts and enjoy a warm, fuzzy feeling. Not ready for a long-term commitment? You can always leave us a tip.
Become A Paid Subscriber

* * *

### From the Group Chat 👥 💬

_This week in our Signal community, we talked about:_

OpenAI’s _announcement_ of its most advanced open-weight models, allowing developers, governments, and nonprofits to run and customize AI systems on their own infrastructure, framed as supporting “democratic values” and “the global buildout of AI on US-led rails.” But should countries aiming to build sovereign LLMs be concerned if the foundation model is Gemini, Llama, or ChatGPT? And is OpenAI still co-opting the word “open” without following the _Open Source Initiative’s definition_ of open source to troll them? We’re just asking questions.

* * *

## This Week's Links

### Open Social Web

* The next major version of Ghost has arrived: the 6.0 release has upgrades and improvements including built-in distribution across federated platforms and a native analytics suite. _https://ghost.org/changelog/6_
* At FediCon 2025, Evan S. Prodromou gives a keynote talk on Connecting the Social Web. _https://spectra.video/w/eg35Kne91oEHcgyfvQWMhM_

### Internet Governance

* Senator Edward J. Markey released a discussion draft of legislation that would address the national security risks posed by ByteDance’s ownership of TikTok without banning it. _https://www.markey.senate.gov/news/press-releases/senator-markey-releases-discussion-draft-of-legislation-to-keep-tiktok-online-and-protect-national-security_
* Downloads of VPNs have surged in Britain since new age-verification rules, designed to prevent children from accessing pornography and harmful content, came into force. _https://www.ft.com/content/915d380a-7d1f-46b3-9dcb-536e832114fd?shareType=nongift_
* Does your AI model fall under the EU AI Act? As of August 2, 2025, new rules apply to general-purpose AI (GPAI) models, with some exemptions for research and open-source projects. This guide helps developers understand their obligations and how to comply.
_https://huggingface.co/blog/yjernite/eu-act-os-guideai_
* The report Funding Europe’s Open Digital Infrastructure outlines a roadmap for an EU Sovereign Tech Fund to support critical open source technologies key to Europe's digital sovereignty, cybersecurity, and competitiveness. _https://eu-stf.openforumeurope.org_
* The internet depends on aging and expanding undersea infrastructure, while the submarine cable industry faces challenges in maintenance, security, and sustainability. This report by TeleGeography and Infra-Analytics, with contributions from Tim Stronge, Alan Mauldin, and Michael Ruddy, explores what’s needed to keep the world connected beneath the waves. _https://www2.telegeography.com/hubfs/LP-Assets/Ebooks/The%20Future%20of%20Submarine%20Cable%20Maintenance_%20Trends%2c%20Challenges%2c%20and%20Strategies.pdf_
* Canada’s algorithmic impact assessments reveal major accountability gaps, with limited compliance, no civil society input, and a tendency to frame negative impacts positively, find Ana Brandusescu & Renée E. Sieber. _https://link.springer.com/article/10.1007/s44206-025-00221-7_
* Since April 7, 2024, China’s Great Firewall (GFW) has been blocking QUIC connections to certain domains, despite the encryption of QUIC handshake packets. This study investigates how the GFW censors QUIC. _https://gfw.report/publications/usenixsecurity25/data/paper/quic-sni.pdf_
* The W3C Advisory Board has published the document Vision for W3C, which articulates W3C’s core vision for the Web, as a W3C Statement. _https://www.w3.org/news/2025/vision-for-w3c-is-a-w3c-statement_
* Meta is replacing fact-checkers with a crowdsourced notes system, but early tests show it rarely works. _https://www.washingtonpost.com/technology/2025/08/04/meta-fact-check-community-notes-test-facebook-instagram_
* ETSI advances new standards to support trusted, interoperable, and AI-ready data sharing across sectors.
_https://www.etsi.org/newsroom/press-releases/2569-etsi-moves-forward-with-standards-for-trusted-interoperable-data-ecosystem_
* President Trump's administration has instructed US diplomats in Europe to lobby against the Digital Services Act, which it says stifles free speech and imposes costs on US tech companies. _https://www.reuters.com/sustainability/society-equity/rubio-orders-us-diplomats-launch-lobbying-blitz-against-europes-tech-law-2025-08-07_
* A new EU-backed research project will assess how Pornhub, XVideos, and XNXX comply with the Digital Services Act, focusing on user safety, content moderation, and performer protections. _https://ec.europa.eu/info/funding-tenders/opportunities/portal/screen/opportunities/tender-details/149afd0b-26bf-435f-a95d-0c0f94e6de6d-EXA_

### Digital Rights

* Israel used Microsoft’s Azure cloud to store and analyze millions of Palestinians’ phone calls daily, enabling mass surveillance and targeted military actions from a system built with the tech giant’s support. Satya Nadella reportedly even met with the commander of Israel’s military surveillance agency. _https://www.theguardian.com/world/2025/aug/06/microsoft-israeli-military-palestinian-phone-calls-cloud_
* A jury found that Meta violated the California Invasion of Privacy Act when it recorded the sensitive health information of millions of women through the period tracking app Flo. _https://www.courthousenews.com/meta-violated-privacy-law-jury-says-in-menstrual-data-fight_
* The Trump administration is pushing for access to state-held data from food stamps, Medicaid, and voter rolls. Critics fear it could be used to surveil immigrants and political opponents.
CDT’s Elizabeth Laird is quoted saying information held by the states “is the largest piece of that [data] puzzle.” _https://www.nytimes.com/2025/08/01/upshot/trump-states-data-privacy.html_
* A forthcoming USENIX Security 2025 paper investigates the growing ecosystem of cheap, accessible, and heavily commercialized AI “nudification” tools that use generative AI to create non-consensual nude and sexual images, primarily of women. _https://www.usenix.org/publications/loginonline/tools-and-tolls-ai-nudification-1_
* A new report from Digitally Right and OONI sheds light on the July–August 2024 internet shutdowns in Bangladesh. _https://ooni.org/post/2025-bangladesh-report/_

### Technology for Society

* Wikipedia editors adopted a new policy giving administrators the authority to quickly delete AI-generated articles that meet certain criteria, helping manage the growing AI slop problem. _https://www.404media.co/wikipedia-editors-adopt-speedy-deletion-policy-for-ai-slop-articles_
* Gone are the days when Google, Apple, Meta and Netflix were the dream destinations for tech workers. It’s the shut-up-and-grind era as tech giants age into large bureaucracies, writes Kate Conger. _https://www.nytimes.com/2025/08/04/technology/tech-jobs-silicon-valley-changes.html_
* Google, OpenAI, Meta and venture capitalists, many of whom had once forsworn involvement in war, have embraced the military-industrial complex, writes Sheera Frenkel. _https://www.nytimes.com/2025/08/04/technology/google-meta-openai-military-war.html_
* Meta brought AI bots to rural Colombia through its apps, but instead of improving education, they're now being blamed for a decline in student exam performance. _https://restofworld.org/2025/colombia-meta-ai-education_
* A new report from Project Liberty argues that data co-operatives offer a democratic, community-driven alternative to today’s centralized digital economy.
_https://www.thenews.coop/as-the-tech-revolution-accelerates-project-liberty-calls-for-mutualised-data_
* The Tucson city council voted unanimously against bringing the massive, water-devouring Project Blue data center, tied to tech giant Amazon, into city limits. _https://azluminaria.org/2025/08/06/tucson-city-council-rejects-project-blue-amid-intense-community-pressure_
* At IETF 123, PhD students Haarika Manda and Hendrik Cech were awarded the Internet Research Task Force’s Applied Networking Research Prize for impactful work in applied internet research. _https://www.ietf.org/blog/ietf123-anrp_
* Escape newsletter inbox chaos and algorithmic surveillance by building your own enshittification-proof newspaper with RSS from the writers you already read, suggests Molly White. _https://www.citationneeded.news/curate-with-rss_

### Privacy and Security

* AI chatbots lack the privacy protections we take for granted in therapy or medicine, raising urgent questions about encryption, data use, and model safety, writes Celine Liu. _https://celinexinyiliu.substack.com/p/what-i-learnt-this-week-c96?r=5odpxh_
* A guide to making use of disposable email addresses to protect yourself from unwanted spam and phishing attempts in your personal email inbox, online tracking, and other forms of data abuse. _https://privacyinternational.org/guide-step/5534/guide-making-use-disposable-email-addresses_

### Upcoming Events

* The EFF Benefit Poker Tournament is back for DEF CON 33! Your buy-in is paired with a donation to support EFF’s mission to protect online privacy and free expression for all. **August 8, 12pm PT. Las Vegas, NV.** _https://www.eff.org/event/betting-your-digital-rights-eff-benefit-poker-tournament-def-con-33_

### Careers and Funding Opportunities

**United States**

* Internet Society: Foundation Vice President of Philanthropy.
**Washington, DC (Remote).** _https://internetsociety.bamboohr.com/careers/294_
* American Association of People with Disabilities: Technology Policy Associate. **Washington, DC or Remote US.** _https://ats.rippling.com/en-GB/aapd-jobs/jobs/329e56b7-d03a-4a11-a4fa-9b28666e6d86_
* Consumer Reports: Lead AI/ML Engineer. **Remote US.** _https://job-boards.greenhouse.io/consumerreports/jobs/4531209007_
* Partnership on AI (PAI): Head of Corporate Governance, Risk, and Responsible Practice. **Remote US or CA.** _https://ats.rippling.com/en-GB/pai-job-board/jobs/af43b214-f5ab-4f9b-ae02-b34d8136fcb5_

**Global**

* UNESCO: Consultant (AI for the Public Sector). **Paris, France.** _https://careers.unesco.org/job/Paris-Consultant-%28AI-for-the-Public-Sector%29/825474702_
* Digital Futures Lab:
  * Senior Research Manager – Technology Policy & AI. **Goa, India.** _https://www.digitalfutureslab.in/careers/senior-research-manager-technology-policy-and-ai_
  * Consultant: Capacity Strengthening Lead – Responsible AI. **Location Flexible.** _https://www.digitalfutureslab.in/careers/consultant-capacity-strengthening-lead-responsible-ai_
* Dutch Government (CIO Rijk): Deputy CTO Rijk / Coordinating Policy Officer. **The Hague, Netherlands.** _https://www.werkenvoornederland.nl/vacatures/plaatsvervangend-cto-rijk-coordinerend-beleidsmedewerker-s14-BZK-2025-0527_
* Current AI: CEO. **Paris, France.** _https://www.currentai.org/vacancy-ceo-current-ai_
* The Better Information Project: Executive Assistant. **Remote Europe.** _https://careers.meliorefoundation.org/en/postings/84913943-292e-469e-83db-f9ad1d7de64e_
* Access Now: Senior Coordinator, Community Support & Accessibility – Limited-term Contractor. **Multiple Locations.** _https://accessnow.bamboohr.com/careers/226_
* Tech Equity: Chief Program Officer. **Remote.** _https://techequity.us/our-team/chief-program-officer_
* New_ Public: Marketing Consultant, Local Lab.
**Remote.** _https://newpublic.org/jobs/marketing-consultant_

### Opportunities to Get Involved

* Stand up for children’s rights in the digital classroom! Children aged 13–17 and based in the UK can join the EdTech Youth Advisory Board to shape research, influence policy, and ensure tech in schools works for students, not against them. **August 15.** _https://forms.office.com/pages/responsepage.aspx?id=FXmI014-yUuCgsvn4qzHYHFfpY5n5SZEjoTXEC93lthURjVKM1hFSElXM0k3TVhCTElZVUpaS1FXUiQlQCN0PWcu&route=shorturl_
* Apply to join the next cohort of India’s Digital Defenders Network and receive training, support, and opportunities to lead strategic litigation to protect online freedoms. **India.** Apply by **August 17.** _https://ddn.sflc.in_
* The Call for Proposals for RightsCon 2026 is now open until **September 12**. _https://rightscon.secure-platform.com/a_

_What did we miss? Please send us a reply or write to editor@exchangepoint.tech._

💡 Want to see some of our week's links in advance? Follow us on Mastodon, Bluesky or LinkedIn, and don't forget to forward and share!
07.08.2025 14:27 — 👍 0    🔁 0    💬 0    📌 0
Original post on internet.exchangepoint.tech

A preview of some of the public interest technology careers we're featuring in Thursday's edition of IX:

1️⃣ UNESCO: Consultant (AI for the Public Sector). Paris, France. https://careers.unesco.org/job/Paris-Consultant-%28AI-for-the-Public-Sector%29/825474702

2️⃣ Internet Society: Foundation Vice […]

05.08.2025 17:30 — 👍 0    🔁 0    💬 0    📌 0
Four Responses on the Future of Internet Governance Submitted as part of the WSIS+20 review process, which marks twenty years since the original World Summit on the Information Society.
31.07.2025 17:58 — 👍 0    🔁 0    💬 0    📌 0
Original post on internet.exchangepoint.tech

🧵 A preview of links 🔗 from tomorrow's IX:
1️⃣ The Iran-Israel conflict exposed a dangerous new phase of warfare, where rapidly advancing AI-generated propaganda outpaces the tools and expertise needed to detect it, write Mahsa Alimardani and Sam Gregory for the Carnegie Endowment for […]

30.07.2025 16:10 — 👍 0    🔁 0    💬 0    📌 0
Big Tech Redefined the Open Internet to Serve Its Own Interests Big Tech companies have redefined terms like “openness” and “free expression” to support business models built on centralization and data monetization.
24.07.2025 13:47 — 👍 0    🔁 0    💬 0    📌 1
Original post on internet.exchangepoint.tech

3️⃣ Hackers are turning DNS records, the internet’s basic address book, into covert malware delivery channels, exploiting a blind spot in cybersecurity defenses that even advanced tools often overlook, reports Dan Goodin in Ars Technica […]

22.07.2025 22:02 — 👍 0    🔁 0    💬 0    📌 0
Present but Silent: Rethinking the Role of African Governments in Internet Governance The crisis at AFRINIC has sparked renewed debate about internet governance in Africa and rightly so.

2️⃣ As AFRINIC teeters on the edge of collapse, the deafening silence from African governments raises a critical question: in a continent increasingly shaped by digital infrastructure, where are the public stewards of the internet? From Alice Munyua. https://substack.com/inbox/post/168774762

22.07.2025 22:01 — 👍 0    🔁 0    💬 0    📌 0
Original post on internet.exchangepoint.tech

A preview of links 🔗 from this coming Thursday’s edition of IX!

1️⃣ Rural hospitals, schools, and essential services are under siege from cyberattacks. Not because they’re behind, but because public policy has left them unprotected, writes Nicole Tisdale for Aspen Digital […]

22.07.2025 22:01 — 👍 1    🔁 0    💬 0    📌 0
The UK Online Safety Act: Ofcom’s Age Checks Raise Concerns

_By Audrey Hingle (with input from Mallory Knodel)_

Under the _UK’s Online Safety Act_, websites and apps that host adult content must implement “highly effective” age checks starting next week to prevent users under 18 from accessing that content. This includes not just pornography websites, but _also major social media platforms_. Restricting children’s access to harmful content such as pornography or material promoting self-harm is a noble aim. But as with many seemingly straightforward tech policies, the reality is more complicated. Age verification mandates may look like a common-sense safeguard. In practice, they threaten to create serious privacy, security, and access risks; not just for kids, but for everyone. Mandates for strict age verification could ultimately do more harm than good.

## Age Checks Raise Technical and Human Rights Concerns

Ofcom has listed seven _age verification methods_ that platforms may use, but each presents serious flaws that undermine privacy, equity, and effectiveness.

1. _**Facial age estimation** – you show your face via photo or video, and technology analyses it to estimate your age._ While promoted as a scalable and user-friendly tool, facial age estimation is riddled with accuracy issues. These systems _often perform poorly_ for children, teens, and people of color, especially under variable lighting conditions or with low-quality cameras. Beyond accuracy, the approach raises deep privacy concerns. Facial scans are considered _sensitive biometric data_ under the GDPR. Requiring children to submit this data in order to access online content is not only invasive but could normalize surveillance.
It also puts sensitive data at risk of leaks or misuse, potentially compromising children's digital safety for years to come.

2. _**Open banking** – you give permission for the age-check service to securely access information from your bank about whether you are over 18. The age-check service then confirms this with the site or app._ While secure in theory, this approach creates a troubling precedent: using financial data as a proxy for age. It assumes access to a bank account, excluding unbanked users and those without access to financial services, including many teens, low-income individuals, and undocumented people. Using banking data for age checks also creates a significant privacy intrusion. Financial records reveal more than just age, and users may not fully understand or consent to how their data is accessed and processed. Once shared, this sensitive information can be misused or breached.

3. _**Digital identity services** – these include digital identity wallets, which can securely store and share information that proves your age in a digital format._ While convenient, these tools present major risks. Because they make sharing personal information easier, they may encourage platforms to collect more data than necessary, violating _the data minimization principle_ under the GDPR. They also expand the "threat surface" for breaches. If a phone storing a digital ID is lost, stolen, or compromised by malware, all the sensitive data inside becomes vulnerable. This is particularly risky for children and teens who may not understand the consequences of poor digital hygiene.

4. _**Credit card age checks** – you provide your credit card details and a payment processor checks if the card is valid. As you must be over 18 to obtain a credit card, this shows you are over 18._ Credit cards are not reliable indicators of age.
Many children use cards issued to them by their parents. From a privacy standpoint, providing credit card information or other financial data just to access a website is an overreach: these details reveal more than age and increase the risk of fraud and identity theft. Low-income users and those without access to credit may also be excluded altogether.

5. _**Email-based age estimation** – you provide your email address, and technology analyses other online services where it has been used – such as banking or utility providers – to estimate your age._ While less overtly invasive than biometric methods, this technique still raises significant concerns. It relies on third-party linkages that may be opaque to users, and it grants outsized power to email providers or data brokers who can infer age from digital behavior. This model introduces serious data protection and consent issues: users may not know what data is being collected, how it’s being analysed, or whether inferences are accurate. Like institutional email requirements, it is both overinclusive and underinclusive. It may exclude users with limited digital footprints (young people, shared-device users, or those who rely on family email accounts) and misclassify others based on incidental associations with adult-linked services. The result is a high risk of false positives and arbitrary denials of access, especially for marginalised or low-income users. Rather than offering trustworthy verification, it may deepen inequities while expanding surveillance infrastructure.

6. _**Mobile network operator age checks** – you give your permission for an age-check service to confirm whether or not your mobile phone number has age filters applied to it. If there are no restrictions, this confirms you are over 18._
This might sound like a streamlined solution, but it introduces serious privacy and power-concentration issues. It gives telecom companies and operating-system providers like Apple and Google greater control over user data and online access. It also opens the door to more targeted advertising based on verified age, which undermines user privacy. Since this approach is tied to a specific phone number or device, it may not apply to web-based services or shared devices, resulting in inconsistent enforcement.

7. _**Photo-ID matching** – this is similar to a check where you show a document. For example, you upload an image of a document that shows your face and age, and an image of yourself at the same time – these are compared to confirm the document is yours._ While common in some financial and legal settings, this level of identity verification is excessive for general online use. Requiring users to upload government-issued identity documents, credit cards, or other hard documentation to verify age introduces serious privacy, equity, and safety concerns. These documents typically contain much more information than is necessary to confirm a user’s age, such as full name, address, photo, and physical characteristics like height and weight. Processing and storing this data creates an expanded attack surface for data breaches and increases the risk of surveillance or misuse. These risks are not hypothetical: in 2023 alone, _over 17 billion personal records were compromised worldwide_. It also conflicts with data minimization principles outlined in privacy laws like the _GDPR_ and _CPRA_. For individuals such as migrants, low-income users and youth who do not possess formal ID, this method becomes a barrier to participation online.

### There Are Better Solutions than Age Checks

Policymakers and platforms should pursue approaches grounded in privacy, technical feasibility, and human rights.
Age verification alone will not eliminate online harms, and when overused, it can introduce new risks to user autonomy, safety, and inclusion. Below are five principles that can help build a safer internet for young people without compromising core digital rights. 1. **Embrace privacy by design** Reducing harm to children online requires a holistic, incremental, and collaborative approach. This includes designing with privacy in mind from the start. Most general-use social media platforms already include some form of age gating. A common method is self-reported age: when users sign up, they enter their birthdate. This approach is imperfect. People can lie. But it minimizes data collection and reflects a growing industry-wide _trend toward privacy-first design_. Platforms should continue to support and improve self-declared age systems, combined with strong default protections for teen accounts. For example, limiting data visibility, exposure to recommendations, and interactions from unknown users can provide meaningful protection without requiring intrusive identity checks. _Recent transparency reporting_ from Australia’s eSafety Commissioner reinforces this direction, highlighting that meaningful progress in protecting children online requires coordinated efforts across industry, government, families, and communities, rather than relying solely on rigid enforcement or technical mandates, and that _no single solution_ is suitable for all contexts. A privacy-respecting model would reserve invasive verification for the most sensitive use cases, where the risks of harm are high and enforcement is proportionate. Even then, platforms should adopt the strongest possible data minimization and security practices. 2. **Use content controls, not identity checks** _User agency_ is a key pillar of modern safety strategies. Platforms are increasingly being designed around _user-controlled content moderation_, especially for older teens. 
These tools can also support younger users, even if they are using the platform in violation of its stated minimum age policy. Filters, message controls, muting, blocking, and flagging mechanisms allow users to tailor their experiences and reduce exposure to harmful content. Such tools can also be applied to community enforcement, for example by flagging content from creators who violate community guidelines. _This strategy improves user safety_ without expanding surveillance or collecting unnecessary personal data. 3. **Avoid one-size-fits-all mandates** Blanket laws that require all users to verify their age often harm those already at the margins. Undocumented people, unbanked users, individuals with disabilities, and those who lack formal IDs may be excluded altogether. _Age verification systems are often inaccurate, vulnerable to circumvention, and disproportionately impact those with the least access to government services._ In practice, these laws are also difficult to apply consistently across platforms and user histories. Longstanding users who signed up before any regulation was introduced may not respond to new prompts to disclose their age. If platforms deactivate or restrict these accounts, they risk significant harm to users' digital social lives and support networks. Placing such accounts on hold until the user “ages up” is one possibility, but it introduces ethical and legal complications that most technical proposals fail to address. 4. **Focus on bad actors, not just access** The majority of harm to children online stems not from falsified age information, but from abusive behavior, algorithmically amplified content, or lack of effective moderation. Rather than gatekeeping access based on age, platforms and regulators should invest in detecting and mitigating these underlying risks. 
Strengthening trust and safety teams, building better reporting tools, and auditing algorithms for systemic harms can all reduce exposure to dangerous content or predatory behavior. 5. **Address platform incentives by restricting ads targeted to youth** Instead of allowing platforms to use age inference to fine-tune ad delivery and increase their profits, we could place restrictions on platforms based on these same inference tools. Specifically, we could prevent companies from targeting ads to users under 18 or under 13. This solution may also prove more durable and compelling for platforms, because any unauthorized minor then becomes a user who adds cost by using the service but generates no ad revenue. ## Conclusion Ofcom’s own website advises users to "_exercise a degree of caution and judgement_" when handing over personal data, yet its framework effectively forces users to share sensitive information in order to use the internet. Mandatory age checks may be intended to protect young people, but they could backfire. By building systems that collect more personal data, we risk undermining core principles of modern privacy law like user consent, data minimization, and proportionality. Children should not have to surrender their identity to participate online. If mishandled or breached, these systems could harm not just their privacy, but their long-term trust in digital life. * * * ### 🎶 IETF 123 (as simple as TCP/IP) 🎶 The Internet Engineering Task Force is back for IETF 123, taking place **July 20–26 in Madrid and online**. With sessions on post-quantum cryptography, routing, real-time communications, and human rights, it’s one of the best places to see how the technical future of the internet is taking shape. Things kick off with the Hackathon and registration on Saturday, July 19, followed by newcomer sessions and working groups starting Sunday. 
Most sessions are open and available online, so it’s easy to drop in and follow the discussions that interest you. IX’s Mallory Knodel is co-chairing the Human Rights Protocol Considerations Research Group with Sofia Celi. Their session will feature talks from Chantal Joris of ARTICLE 19 on the constraints and considerations of legal frameworks in armed conflict, a speaker from Ainita.net on censorship in Iran, and Maria Farrell, author of "We Need to Rewild the Internet," on **Monday, July 21 at 5pm (UTC+2)**. If you’re attending in person, stop by and say hello, or join remotely. See The Agenda **Support the Internet Exchange** If you find our emails useful, consider becoming a paid subscriber! You'll get access to our members-only Signal community where we share ideas, discuss upcoming topics, and exchange links. Paid subscribers can also leave comments on posts and enjoy a warm, fuzzy feeling. Not ready for a long-term commitment? You can always leave us a tip. Become A Paid Subscriber ## This Week's Links ### Open Social Web * A group of European technology entrepreneurs has unveiled the Eurosky initiative, a project to create infrastructure for social media offerings and reduce reliance on US tech giants. It plans to use a decentralized moderation platform, similar to the one behind Bluesky. _https://www.reuters.com/business/media-telecom/european-project-eurosky-aims-reduce-reliance-us-tech-giants-2025-07-15_ ### Internet Governance * The European Commission’s proposed Digital Networks Act could jeopardize core principles of internet governance in Europe: net neutrality, regulatory independence, and equitable connectivity, warn civil society groups and ARTICLE 19. _https://www.article19.org/resources/eu-the-dna-of-europes-connectivity-at-stake_ * A new risk analysis warns that adversaries can indirectly undermine critical infrastructure by weaponizing disinformation to manipulate public behavior. 
_https://onlinelibrary.wiley.com/doi/10.1111/risa.70062?af=R_ * On the _Tech Won't Save Us_ podcast, Paris Marx is joined by Laleh Khalili to discuss how the United States uses its control of key technologies to shift global power dynamics, and how that specifically plays out in the Middle East. https://techwontsave.us/episode/284_how_the_us_weaponizes_tech_in_the_middle_east_w_laleh_khalili * A new report by the Knight-Georgetown Institute shows how the US government’s proposed Google Chrome divestiture is technically feasible. _https://kgi.georgetown.edu/research-and-commentary/technical-feasibility-of-divesting-google-chrome_ * People with disabilities and those living in poverty or with serious health conditions are being left in bureaucratic limbo due to digital exclusion caused by the Department for Work and Pensions’ unchecked roll-out of technologies, finds Amnesty International. _https://www.amnesty.org/en/latest/news/2025/07/uk-governments-unchecked-use-of-tech-and-ai-systems-leading-to-exclusion-of-people-with-disabilities-and-other-marginalized-groups_ * For the fifth time, China has blocked the Wikimedia Foundation from becoming a permanent observer to the World Intellectual Property Organization. _https://wikimediafoundation.org/news/2025/07/09/china-block-wikimedia-wipo_ * Useful! If you’re based in the UK, Ofcom’s mobile checker shows which network offers the best 4G or 5G signal where you need it most. _https://www.ofcom.org.uk/mobile-coverage-checker_ ### Digital Rights * AI Forensics and eight other civil society groups have filed a formal DSA complaint against 𝕏, alleging unlawful use of sensitive personal data for targeted ads. _https://mailchi.mp/aiforensics/joint-statement-on-suspected-dsa-violations-by?e=dcad64b917_ * India’s growing manosphere is fuelling gendered disinformation, online abuse, and anti-feminist narratives that threaten democratic discourse and women’s rights, warns Rohini Lakshané. 
_https://genderit.org/feminist-talk/challenging-gendered-disinformation-indias-manosphere-requires-systemic-change_ * ProPublica has obtained the blueprint for the Trump administration’s unprecedented plan to turn over IRS records to Homeland Security in order to speed up the agency’s mass deportation efforts. _https://www.propublica.org/article/trump-irs-share-tax-records-ice-dhs-deportations_ * Just hours before delivering her keynote at the UN’s AI for Good Global Summit, AI ethics researcher and _friend of the newsletter_ Abeba Birhane was pressured to censor slides referencing Palestine, genocide, and Big Tech. _https://thebulletin.org/2025/07/ai-for-good-with-caveats-how-a-keynote-speaker-was-censored-during-an-international-artificial-intelligence-summit_ * A new study from CDT finds that content moderation systems are failing speakers of low-resource languages in the Global South. _https://cdt.org/insights/content-moderation-in-the-global-south-a-comparative-study-of-four-low-resource-languages_ ### Technology for Society * Fed up with ChatGPT, dozens of organizations in Latin America have partnered to develop a large language model that better understands their cultural and linguistic nuances. _https://restofworld.org/2025/chatgpt-latin-america-alternative-latamgpt_ * AI is being weaponized to enforce austerity and dismantle democracy in the U.S., particularly under the Trump administration’s alliance with Big Tech, warns Kevin De Liban. _https://www.techpolicy.press/austerity-intelligence_ * Amid growing stigma and internal hierarchies in the digital sex industry, young German OnlyFans creators are developing nuanced strategies to protect their identities and challenge societal norms, finds Swana Schuchmann. 
_https://onlinelibrary.wiley.com/doi/10.1111/gwao.70010_ * A Columbia dropout who used AI to cheat his way into Big Tech internships is now leading a VC-backed startup that promises to “cheat on everything.” _https://www.theatlantic.com/technology/archive/2025/07/ai-radicalization-civil-war/683460_ * New research from Maximilian Pieper argues that data isn’t just something digital. It’s a real, material process that reduces people and the planet to useful bits, much like mining or factory work. _https://link.springer.com/article/10.1007/s00146-025-02444-1_ * Femtech companies and women’s health organisations are routinely censored on social media for using basic anatomical terms, an example of how platform policies continue to sexualise and suppress women’s bodies, writes Lucy Purdon. https://courageeverywhere.substack.com/p/why-cant-you-say-vagina-on-social * Cory Doctorow and Maria Farrell celebrate Open Rights Group's 20th anniversary in a conversation about surveillance capitalism and the ‘enshittification’ of digital platforms. https://mas.to/@cubicgarden/114864040608743301 ### Privacy and Security * Ireland’s upcoming age verification rules for social media are poorly conceived and dangerously invasive. These measures may require users to upload identity documents or biometric data just to access platforms, posing serious threats to privacy and enabling surveillance capitalism, warns Simon McGarr. _https://www.thegist.ie/the-gist-age-verification-is-an-epic-fail_ * An elite Chinese cyberspy group hacked at least one state’s National Guard network for nearly a year, the Department of Defense has found. _https://www-nbcnews-com.cdn.ampproject.org/c/s/www.nbcnews.com/news/amp/rcna218648_ * A recent audit from the US Department of Justice has exposed severe vulnerabilities in the FBI's cybersecurity measures, which directly contributed to the deaths of key informants in the high-profile El Chapo investigation. 
_https://www.secureworld.io/industry-news/fbi-breach-deaths-el-chapo_ * The Spanish government is using Huawei to manage and store judicially authorized wiretaps in the country, despite concerns about how the Chinese government could compel Huawei to assist Beijing with its own intelligence activities. _https://therecord.media/spain-awards-contracts-huawei-intelligence-agency-wiretaps_ ### Upcoming Events * Deep dive session: Vulnerability Handling. **July 22, 1:00pm CET. Online.** _https://www.stan4cra.eu/event-details/deep-dive-session-vulnerability-handling_ * At Black Hat USA 2025, join Ronald Deibert for a keynote on the history of _The Citizen Lab_, their investigations into the abuse of mercenary spyware and other sleuthing stories, and what keeps him up at night these days (answer: a lot). **August 6, 1:30pm PT. Las Vegas, NV.** _https://www.blackhat.com/us-25/briefings/schedule/#keynote-chasing-shadows-chronicles-of-counter-intelligence-from-the-citizen-lab-48196_ * You can now register for W3C TPAC 2025, W3C's major event of the year, which gathers the community for thought-provoking discussions and coordinated work to advance the invaluable work of its groups. **28 October / 10–14 November. Kobe, Japan & online.** https://www.w3.org/2025/11/TPAC ### Careers and Funding Opportunities * Cira: Public Affairs Specialist. **Ottawa, CA (Hybrid)**. _https://cira.bamboohr.com/careers/292_ * Equality Fund: Request for Proposal: Feminist Digital Security and Holistic Protection Training. **Remote.** _https://equalityfund.bamboohr.com/careers/103?source=aWQ9MTA=_ * The Alan Turing Institute: Open Source AI Fellowship Call 2025. **London, UK.** _https://www.turing.ac.uk/work-turing/open-source-ai-fellowship-call-2025_ ### Opportunities to Get Involved * #PrivacyCamp25: Registrations open and call for sessions deadline! CFP closes July 21. Privacy Camp 25 takes place September 30. 
**Brussels, BE.**_https://privacycamp.eu/2025/07/15/privacycamp25-registrations-open-and-call-for-sessions-deadline_ _What did we miss? Please send us a reply or write to_ _editor@exchangepoint.tech_ _._ 💡 Want to see some of our week's links in advance? Follow us on Mastodon, Bluesky or LinkedIn, and don't forget to forward and share! ##
17.07.2025 12:28 — 👍 0    🔁 0    💬 0    📌 0
If It Breaks Wikipedia, It’s Probably Bad Policy: One Simple Test to Try Before Regulating the Internet.
10.07.2025 13:37 — 👍 0    🔁 0    💬 0    📌 0
Original post on internet.exchangepoint.tech

@Mallory@techpolicy.social is speaking at The Tech People Want Online Summit.
Join us July 8-9 for a global conversation about building people-centered technology that actually serves communities. We'll explore open tech, ethical AI, and sustainable solutions that prioritize public interest over […]

08.07.2025 07:51 — 👍 1    🔁 1    💬 0    📌 0
Google Walks Back Cookie Privacy Protections. Google’s reversal on third-party cookies underscores how, when privacy and profit collide, the needs of advertisers continue to shape the web’s most widely used browser.
03.07.2025 16:03 — 👍 0    🔁 1    💬 0    📌 0

3️⃣ How much investment would it take to build a competitive, independent browser, in the context of all this talk on digital sovereignty? asks Tara Tarakiyee. https://tarakiyee.com/digital-sovereignty-in-practice-web-browsers-as-a-reality-check

02.07.2025 16:31 — 👍 0    🔁 0    💬 0    📌 0

2️⃣ As the EU advances its 'twin transition' agenda, Carsten Horn and Ulrike Felt find the supposed harmony between digital and green goals rests on fragile, tech-driven optimism rather than ecological reality. https://www.tandfonline.com/doi/full/10.1080/1523908X.2025.2515225

02.07.2025 16:31 — 👍 0    🔁 0    💬 0    📌 0
Original post on internet.exchangepoint.tech

A sample of links 🔗 from tomorrow's edition of Internet Exchange 🧵

1️⃣ In the era of Trump 2.0, tariffs have become a powerful bargaining chip, with the vast American market leveraged to pressure trading partners, like Europe, into removing long-standing tariff and non-tariff barriers. In a […]

02.07.2025 16:30 — 👍 0    🔁 0    💬 0    📌 0
The UK Struggles to Balance AI Innovation and Creative Protection. The UK, a global hub for both AI and the arts, struggles to balance tech innovation and protecting a creative sector increasingly threatened by AI trained on copyrighted works.
26.06.2025 15:44 — 👍 0    🔁 0    💬 0    📌 0
The Future of Interoperability is Private @ IGF 2025. This month, the Social Web Foundation is joining the UN’s 20th annual conference on the internet.
19.06.2025 15:54 — 👍 0    🔁 1    💬 0    📌 0
WhatsApp is getting ads using personal data from Instagram and Facebook. Meta is expanding its ads business on WhatsApp using your data from Instagram and Facebook.

3️⃣ Meta has introduced two new features on Threads to enhance integration with the fediverse: https://noyb.eu/en/whatsapp-getting-ads-using-personal-data-instagram-and-facebook

Want to see all this week's links? Subscribe to IX: https://internet.exchangepoint.tech/#/portal/signup

18.06.2025 17:25 — 👍 0    🔁 0    💬 0    📌 0
Original post on internet.exchangepoint.tech

2️⃣ Meta has announced it will begin showing ads on WhatsApp, using personal data from Facebook and Instagram to do so. This move deepens the integration of WhatsApp into Meta’s advertising ecosystem and raises legal concerns under EU law. From noyb.eu […]

18.06.2025 17:25 — 👍 0    🔁 0    💬 0    📌 0
Original post on internet.exchangepoint.tech

A preview of links 🔗 from tomorrow's edition of IX! 🧵

1️⃣ A new report from Michael Weinberg and GLAM-E Lab reveals that AI scraping bots are overwhelming the servers of libraries, archives, museums, and galleries by aggressively collecting training data for AI models.
Report […]

18.06.2025 17:24 — 👍 0    🔁 0    💬 0    📌 0
AI, Bias and the Courts. AI is reshaping how courts make decisions, but without transparency and accountability, these tools risk amplifying bias, eroding civil rights, and undermining public trust in the justice system. This is a summary of Mallory Knodel’s keynote at the Michigan Judges Conference.
12.06.2025 14:00 — 👍 0    🔁 1    💬 0    📌 0
Original post on internet.exchangepoint.tech

3️⃣ Unchecked AI use in journalism risks press freedom, source confidentiality, and public trust. Derechos Digitales highlights these dangers while also pointing to concrete efforts to promote ethical and rights-respecting practices […]

11.06.2025 15:06 — 👍 0    🔁 0    💬 0    📌 0
Original post on internet.exchangepoint.tech

2️⃣ A growing data privacy crisis is unfolding across the U.S. public sector, from IRS data being shared with ICE to AI-powered visa surveillance and pressure on states to hand over food assistance records, write Reem Suleiman, Esra'a Al Shafei, and Brian Hofer […]

11.06.2025 15:05 — 👍 0    🔁 0    💬 0    📌 0

A preview of links 🔗 from tomorrow's edition of IX! 🧵

1️⃣ At the 2025 FediForum conference, three new apps were introduced to strengthen the open social web: Bonfire Social, Channel.org and Bounce. https://www.theverge.com/news/680895/fediverse-fediforum-2025-open-social-web-apps

11.06.2025 15:05 — 👍 0    🔁 0    💬 0    📌 0
Rethinking Robots: Why Visual Representation of AI Matters. The images we use to depict AI, from robots, to blue brains and cascading code, are more than just clichés. They shape public understanding, feed myths and undermine meaningful engagement. Better Images of AI is working to change that.
05.06.2025 17:24 — 👍 0    🔁 0    💬 0    📌 0
Original post on internet.exchangepoint.tech

2️⃣ Microsoft denied cutting off services to the International Criminal Court (ICC) after the email account of its chief prosecutor, Karim Khan, was sanctioned by a Trump executive order and reportedly “disconnected.” […]

04.06.2025 17:09 — 👍 0    🔁 0    💬 0    📌 0

A preview of links 🔗 from tomorrow's edition of IX!

1️⃣ Coworker.org’s new report “Little Tech Goes Global” reveals how venture-backed startups are rapidly spreading AI-powered workplace surveillance across the Global South. https://home.coworker.org/little-tech-goes-global

04.06.2025 17:08 — 👍 0    🔁 0    💬 0    📌 0
Who Is Organizing the Tech Workforce? A few weeks ago in the IX community, we were talking about the tech workforce, particularly in light of recent mass layoffs and protests over Big Tech’s role in supporting Israel’s occupation of Gaza. There was a time when employees at the world’s most powerful tech companies could influence major decisions about ethics, government contracts, and product development. But those ties seem weaker now. Why? So we asked some of those workers to share their story. In our main story today: the answer we got from a group of workers organizing with No Tech for Apartheid. _But first..._ ### Mallory Knodel Joins Yerevan Dialogue IX's Mallory Knodel attended the second edition of the Yerevan Dialogue, held May 26–27, 2025 in partnership with Armenia’s Ministry of Foreign Affairs. Taking place in Yerevan, a regional leader for internet freedom in the South Caucasus, the event focused on the theme _Navigating the Unknown_, exploring shifting geopolitics, peace and security, AI politics, connectivity, and trade. Mallory spoke on the panel _Deep Dive Into the Unknown: Exploring the Depths of Artificial Intelligence_ and also contributed to a Freedom Online Coalition side event on _Digital Public Infrastructure and Human Rights_. 
## This Week's Links ### Related To Our Main Story: Decentralizing Power in Tech * Microsoft employees discovered that emails with a variety of terms related to Gaza and Palestine have been blocked internally after employee protests. _https://www.theverge.com/tech/672312/microsoft-block-palestine-gaza-email_ * A U.S. sanctions order forced Microsoft to cut off the International Criminal Court’s accounts, crippling war crimes investigations and exposing global civil society’s dangerous reliance on American tech. _https://www.thedissident.news/international-civil-societys-tech-stack-is-in-extreme-danger_ * Mayfirst introduces _Cutting the Cord_, an initial road map for reducing the movement's dependence on Big Tech by growing our autonomous technology ecosystem. _https://mayfirst.coop/en/post/2025/cutting-the-cord_ * Gig worker unions in India are using electoral politics to push for legal protections, signaling the adoption of direct political interventionism as a bargaining strategy for gig and platform workers’ rights. _https://journals.sagepub.com/doi/10.1177/10245294251339391_ ### Internet Governance * A study examines the evolving landscape of digital sovereignty through the lens of Threema. _https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5192550_ * Is a social media ban the most efficient way of keeping children safe online? Guernsey Data Protection Commissioner Brent Homan sits down with Australian Privacy Commissioner Carly Kind to tackle this and many other thorny privacy-related issues. _https://www.odpa.gg/project-bijou/the-bijou-lecture/the-bijou-lecture-2025_ * This article seeks to explain the notion of “glue” in the Domain Name System (DNS). Why is glue needed in the DNS? Doesn’t it hold by itself? What are the consequences of this gluing? Is it mandatory? 
_https://www.afnic.fr/en/observatory-and-resources/expert-papers/dns-records-a-sticky-subject_ * The dominance of Amazon, Google, and Microsoft in the cloud computing market poses risks to national security, innovation, and public oversight. _https://www.ft.com/content/5c930686-9119-402d-8b9b-4c3f6233164e_ * With the fall of Assad’s regime, Syrians now face a historic opportunity to rebuild their digital future. Doing so will require lifting sanctions, overhauling repressive laws, and restoring shattered internet infrastructure. _https://www.accessnow.org/syria-sanctions-digital-future_ * At the IETF 122 meeting in Bangkok, the Internet Architecture Board (IAB) held a special session about Internet Governance – specifically, the upcoming World Summit on the Information Society (WSIS) 20-year review. _https://www.ietf.org/blog/technical-community-involvement-in-internet-governance_ * This report explores how the global rise of AI and cloud-driven data centers impacts economies, environments, and communities, drawing on case studies from five countries to examine power dynamics and advocacy needs. _https://www.themaybe.org/research/data-center-report-where-cloud-meets-cement_ * Brazil is handing out generous incentives for data centers, but what it stands to gain is still unclear. _https://www.techpolicy.press/brazil-is-handing-out-generous-incentives-for-data-centers-but-what-it-stands-to-gain-from-it-is-still-unclear_ * The 2025 State of Open Infrastructure Report by Invest in Open Infrastructure (IOI) is an in-depth annual assessment of the sustainability, funding, and global context surrounding the tools and systems that underpin open research and scholarship. _https://investinopen.org/state-of-open-infrastructure-2025/sooi-foreword-2025_ * The 13th annual African School on Internet Governance (AfriSIG) took place from 23 to 28 May in Dar es Salaam, Tanzania. 
_https://afrisig.org/afrisig-2025_ * The IPTC has released a set of guidelines expressing best practices that publishers can follow to express the fact that they reserve data-mining rights on their copyrighted content. https://iptc.org/news/iptc-publishes-best-practice-guidance-on-generative-ai-opt-out-for-publishers ### Digital Rights * 89 civil society groups, companies, and experts are urging the European Commission to rethink the Protect EU strategy, warning it threatens end-to-end encryption and could make Europeans less safe. _https://www.globalencryption.org/2025/05/joint-letter-on-the-european-internal-security-strategy-protecteu_ * A controversial site-blocking order in Spain, initiated by LaLiga and Telefónica to combat piracy of live sports streams, has sparked a significant backlash due to its sweeping collateral damage. _https://torrentfreak.com/constitutional-court-urged-to-end-piracy-blockades-now-hurting-millions-250519_ * Related: The ICANN Security and Stability Advisory Committee (SSAC) revisits DNS blocking, warning that it’s often misused or poorly implemented by policymakers. https://itp.cdn.icann.org/en/files/security-and-stability-advisory-committee-ssac-reports/sac127-dns-blocking-revisited-16-05-2025-en.pdf * Regulators are investigating whether Media Matters colluded with advertisers, following Elon Musk’s lawsuit accusing the group of trying to harm X’s ties with ad partners. _https://www.nytimes.com/2025/05/22/technology/ftc-investigates-media-matters.html_ * Techno-diplomacy, inside the battleground for your right to connect, and the decisions about who controls the infrastructure that makes the internet work. _https://pitg.gitlab.io/news/2025/05/02/connectivity-spectrum.html_ ### Technology for Society * The Observatory on Social Media is excited to introduce three new tools that will make it easier to study and engage with social media data: NewsBridge, Barney’s Tavern, and OSoMeNet. 
_https://osome.iu.edu/research/blog/three-new-tools-to-explore-social-media_ * Trump isn’t just attacking the press as an authoritarian. He’s treating it like a business rival, using deceptive litigation tactics to undercut competition. _https://freedom.press/issues/trump-attacks-the-press-not-just-as-an-authoritarian-but-as-a-business-rival_ * Some signs of AI model collapse begin to reveal themselves, writes Steven J. Vaughan-Nichols, who predicts that general purpose AI could start getting worse. _https://www.theregister.com/2025/05/27/opinion_column_ai_model_collapse_ * PeerTube is raising funds to grow its mobile app and decentralised video platform that offers a community-driven alternative to Big Tech’s dominance in video sharing. https://support.joinpeertube.org * A new paper explores how government use of AI may erode citizens’ sense of recognition and respect, shifting the focus from system performance to the deeper impacts on the citizen-state relationship. _https://osf.io/preprints/socarxiv/pua4s_v1_ * Starlink is stitching together a pan-African footprint by targeting niche, high-value users across multiple countries. _https://www.semafor.com/article/05/21/2025/starlink-is-stitching-together-a-pan-african-strategy-in-small-bytes_ * A guide to understanding and navigating the increasingly complex and challenging job market in “Responsible Tech.” _https://alltechishuman.org/all-tech-is-human-blog/navigating-the-complex-field-of-responsible-tech_ * Announcing the launch of the Coalition on Digital Impact (CODI), a new alliance working to make the Internet accessible in every language. _https://www.businesswire.com/news/home/20250520781786/en/Coalition-on-Digital-Impact-Launches-Global-Alliance-to-Create-a-Multilingual-Internet_ * After more than four decades as a writer and technologist, Bill Thompson reflects on his lifelong work on the internet. 
_https://www.linkedin.com/pulse/writing-internet-bill-thompson-0wdoe_ * Flipboard reaffirms its commitment to truth, quality journalism, and human-curated content moderation in an era dominated by algorithmic amplification and misinformation. _https://about.flipboard.com/inside-flipboard/flipboard-commitment-to-truth-and-quality-journalism_ ### Privacy and Security * New report from Noor examines how digital spaces across the SWANA region are being weaponized by states and aligned actors to enforce authoritarian control, suppress dissent, and target marginalized groups. _https://wearenoor.org/fascism-in-practice-digital-spheres-as-landscape-by-moussa-saleh_ * Failures in cybersecurity practices at a software company that helps federal agencies manage investigations and FOIA requests allowed two convicted hackers to delete databases, according to internal documents. _https://www.bloomberg.com/news/articles/2025-05-21/security-failures-behind-us-contractor-s-data-breach_ * Hacker who breached communications app used by Trump national security adviser Mike Waltz earlier this month intercepted messages from a broader swathe of American officials than has previously been reported. _https://www.reuters.com/world/us/hacker-who-breached-communications-app-used-by-trump-aide-stole-data-across-us-2025-05-21_ * The NCSC and DSIT have collaborated with ETSI on a new standard designed to protect AI systems from evolving cyber threats, setting a benchmark for AI security. _https://www.ncsc.gov.uk/blog-post/new-etsi-standard-protects-ai-systems-from-evolving-cyber-threats_; Read more about the standard and how AI systems might help against evolving cyber threats. 
_https://www.nature.com/articles/s41599-025-04464-0_ ### Upcoming Events * First meeting of the Telecommunication Standardization Advisory Group (TSAG), **May 26-30, Geneva, Switzerland.**_https://www.itu.int/md/T25-TSAG-COL-0001/en_ * Detecting Unwanted Trackers IETF meeting, note that you need a datatracker account to participate, but otherwise participation is available to all. **June 6, 12pm ET. Online.** _https://meetecho-interims.ietf.org/client/?group=a0bc4167-d962-4b1d-924b-35672622f700_ * Inaugural meeting of ITU trustworthy AI testing and knowledge platform at AI for Good Global Summit. **July 9, 15:30 CEST. Geneva, Switzerland.**_https://aiforgood.itu.int_ * Derechos Digitales is holding a webinar: "Fundamental rights for working on platforms." **June 3, 11:00 EST. Online.**https://www.derechosdigitales.org/25280/webinario-y-campana-global-derechos-fundamentales-para-el-trabajo-en-plataformas ### Career Opportunities * EFF is hiring a Policy and Research Staff Technologist. **San Francisco, CA.** _https://www.paycomonline.net/v4/ats/web.php/jobs/ViewJobDetails?clientkey=28620672D234BF368306CEB4A2746667&job=262470_ * New_ Public is hiring a Staff UX Researcher, Local Lab. **U.S. or Canada. Remote.**https://newpublic.org/jobs/staff-researcher * People Vs BigTech is recruiting an Executive Director. **UK or EU. Remote.**https://peoplevsbig.tech/job-advertisement-people-vs-bigtech-executive-director * Internet Systems Consortium (ISC) is hiring for two remote roles. **Remote.** * Open Source Software Engineer for Stork _https://isc.recruitee.com/o/stork-dev_ * QA Engineer for Stork _https://isc.recruitee.com/o/stork-qa_ * Digital Action is hiring a Communication Manager. **Remote.** _https://digitalaction.co/join-our-team_ * Hasso Plattner Institute is looking for a Postdoctoral Researcher (m/f/x) in Technology and Regulation. 
**Potsdam, Germany.** _https://jobs.plattnerfoundation.org/HPI/job/Potsdam-Postdoctoral-Researcher-%28mfx%29-in-Technology-and-Regulation-14482/1158083155_ * Cognizant is hiring a Responsible AI Governance and Compliance Lead. **San Francisco, CA.**_https://careers.cognizant.com/us-en/jobs/00063669401/senior-director-responsible-ai-governance-and-compliance-lead_ * Amgen is recruiting a Director of Responsible AI. **Lisbon, Portugal.**_https://careers.amgen.com/en/job/lisbon/director-of-responsible-ai/87/81177628480_ * Warner Bros. Discovery is hiring a Lead - Responsible AI. **Hyderabad, India.** _https://warnerbros.wd5.myworkdayjobs.com/en-US/global/job/Lead---Responsible-AI--RAI-_R000093294_ * Thomson Reuters is looking for a Manager, AI Enablement. **Toronto, Canada.**_https://careers.thomsonreuters.com/us/en/job/JREQ191162/Manager-Responsible-AI_ * Thomson Reuters is also recruiting a Digital Director. **New York, NY.**_https://careers.thomsonreuters.com/us/en/job/JREQ191561/Digital-Director-Reuters_ * Salesforce is recruiting for two roles, both offer flexible working in **San Francisco or Palo Alto, CA. New York, NY. Seattle or Bellevue, WA.** * Senior or Principal Data Scientist - Technical AI Ethicist _https://salesforce.wd12.myworkdayjobs.com/External_Career_Site/job/California---San-Francisco/Senior-Technical-AI-Ethicist---AI-Red-Teamer_JR268693_ * Lead Applied Research Scientist - Responsible AI _https://salesforce.wd12.myworkdayjobs.com/External_Career_Site/job/California—San-Francisco/Lead-Applied-Research-Scientist—Responsible-AI_JR268691_ * Pinterest is hiring a Staff Machine Learning Engineer - Responsible AI. **Palo Alto or San Francisco, CA or Remote.**_https://www.pinterestcareers.com/jobs/6371685/staff-machine-learning-engineer-responsible-ai/?gh_jid=6371685_ * Rivian is hiring a Lead, AI Governance. **Irvine or Palo Alto, CA. Plymouth, MI. 
Normal, IL or Atlanta, GA.**_https://careers.rivian.com/careers-home/jobs/23989?lang=en-us_ * Microsoft is looking for a Senior AI Security Researcher. **Redmond, WA.**_https://jobs.careers.microsoft.com/global/en/job/1819983/Senior-AI-Security-Researcher_ ### **Funding and Opportunities to Get Involved** * The Spyware Accountability Initiative (SAI) is offering grants to support civil society organizations, journalists, and others working to investigate, expose, and prevent the abuse of commercial spyware. Apply by **June 13.**https://stopspyware.fund/apply * Are you interested in pursuing a Masters or PhD in an AI-related field but unsure where to start? Apply to Black In AI’s Emerging Leaders In AI Grad Prep Program by **July 3.**_https://docs.google.com/forms/d/e/1FAIpQLSfBzvzj6ySaMNG85Eav5m1IflEB_uaroZ2jmmSHEVh8zRXLLQ/viewform_ * If you are part of a research community at a UK university, consider taking this survey on the impact of the U.S. government deleting research data. _https://forms.office.com/pages/responsepage.aspx?id=sAafLmkWiUWHiRCgaTTcYfKo8CxWpglJkWOtNlgusW9URjRCMlFOSUQzS1pVTEhNRTJZWU9PSkZDSC4u &route=shorturl_ * Bread&Net is the region’s leading annual unconference on digital rights. Their call for proposals is now open. _https://breadandnet.secure-platform.com/site_ * A new ITU training programme will equip indigenous and rural African community leaders with the technical and policy skills needed to build and sustain their own ICT networks. https://share2.apc.org/s/sg6gmFb87p5rsm6 _What did we miss? Please send us a reply or write to_ _editor@exchangepoint.tech_ _._ 💡 Want to see some of our week's links in advance? Follow us on Mastodon, Bluesky or LinkedIn. ## The Myth of a Progressive Silicon Valley _By workers organizing with No Tech for Apartheid_ _Silicon Valley used to be progressive_. At least, that’s what the tech oligarchy would like us to think. 
During the industry’s ascent, Big Tech execs like Sheryl Sandberg encouraged people to “ _bring your authentic self to work_,” and Google prided itself on its since-discarded motto: “ _Don’t be evil_.” Tech workers with a conscience looking to build for good felt empowered to influence company policies by organizing petitions and walkouts. But the days when billionaire tech executives feigned care for their workers are long gone. Silicon Valley’s pivot from “woke identity politics” to realpolitik was on full display at President Trump’s inauguration, where top executives from Google, Amazon, Meta, and Apple were in prominent attendance. Despite Google cofounder Sergey Brin and CEO Sundar Pichai _attending and speaking_ at employee-organized walkouts in protest of anti-immigration executive orders during Trump’s first presidency in 2017, eight years later, we watched _Google donate $1 million_ to Trump’s inauguration fund, _joining other companies_ hoping for a more favorable regulatory regime. Among them: fossil fuel giants, crypto firms, and a vaping industry trade group. Is this a departure from Big Tech’s “progressive values”—or simply a revelation of what has always been true? ### Tech's Increasing Retaliation Some of us have become all too familiar with Silicon Valley’s real political alliances. We bore the brunt of them. In April 2024, Google workers organizing with _No Tech for Apartheid_ (NOTA) staged _sit-ins_ at company offices to protest how our labor was being used to support the genocide in Gaza, to demand an end to the _harassment_ and _discrimination_ of our Palestinian, Muslim, and Arab colleagues, and to demand executives address the workplace _health and safety crisis_ caused by _Project Nimbus_, a $1.2 billion contract that provides cloud computing services and AI tools to the Israeli military and government. 
In response, Google illegally fired 50 of its workers in what amounted to mass retaliation—including many who had not participated directly in the protest. ### Deepening Military Ties In the year since this prominent display of direct, collective worker action, Google has only deepened its commitment to military contracting. Three months ago, in order to take advantage of the federal contracts available via the U.S. Department of Defense, Google _abandoned_ its pledge not to build AI for weapons or surveillance. In the months since, Google has _acquired Israeli cloud security start-up_ Wiz for $32 billion, _pursued partnerships_ with U.S. Customs and Border Protection to update towers by Israeli war contractor Elbit Systems with AI at the U.S.-Mexico border, _launched an AI partnership_ with the largest war profiteer in the world, Lockheed Martin, and _announced a Google Cloud collaboration_ with the tech defense contracting company Palantir (whose focus includes _“making America more lethal”_). ### Why the Shift? Companies like Google and Microsoft have always viewed workers as a means to achieve capital and favorable stock prices for their shareholders rather than as living and breathing humans whose time, effort, and humanity shape the fabric of the company. This has revealed itself slowly through quiet, media-suppressed _retaliation against dissenting workers_, then rapidly with _yearly mass layoffs of thousands of employees_ at the behest of shareholder and investor interest even when market conditions didn’t justify such job losses. What may have mattered to these companies in the past—a positive PR campaign, user satisfaction in their products—was only pursued to the extent that it improved shareholder perception and boosted the stock price. So why the shift? Big Tech has revealed the hand it was always playing, as well as its cost: leadership answers to shareholders and investors. 
If investors care more about the prospect of holding a definitive stake in the defense industry, about being crowned the winner of cloud infrastructure as well as of the AI race—which has now become synonymous with the global arms race—than about the voice of workers, or even the violation of basic human dignities, then tech leadership will always fall in line. Big tech oligarchs have assessed the risk and made the move to be more transparent about their allegiance to the fascist state. This alignment between billionaire capitalists and state power is a defining marker of imperialism. With the job market difficult for engineers, workers are more easily scared into keeping their heads down even as their employers become transparent about their immorality. However, it’s critical to note that these are ripe conditions in which to organize mass movements. Through political education, the high-income working class must come to terms with the fact that they are disposable to capitalism, and wield collective power to turn these conditions around. When workers become active agents in the structure of the corporation, the structure of the corporation becomes theirs, and they are able to do with it what they will. As long as workers do not make this shift, capitalist broligarchs will continue deepening their ties to self-destructive capitalist systems and enacting violence on class-oppressed and marginalized people. ### Organizing Against Oppression We, the workers organizing with No Tech for Apartheid, understand that we are living in a time shaped by a tech oligarchy that wields unprecedented technological, political, and capitalist power. In the past, Google was careful to protect its public image, and negative press could serve as a check on its actions. But today, legacy media outlets manufacture consent for the genocide of Palestinians, making “bad press” a far less effective tool. 
Tech corporations like Google feel less pressure to respond to public backlash when their reward comes from securing partnerships with dominant, oppressive powers. No Tech for Apartheid centers our labor organizing on Palestine––it is our refusal to build technology to facilitate apartheid and genocide that draws clear lines of resistance against contracts with the U.S. Department of Defense, Customs and Border Protection, Lockheed Martin, Palantir, and the UAE. Moreover, normalizing tech labor in service of necropolitical power sets the stage for worsening workplace conditions, such as the end of Diversity, Equity, and Inclusion initiatives, and persistent mass layoffs. In fact, a closer look at this moment reveals that tech giants have undermined collective power since the industry’s inception: _union busting tactics_, emphasis on _exclusionary and classist prestige_, and atomized, _multi-tier labor forces_ have all been designed to keep us, as workers, from developing coalitions and building community. As Big Tech’s strategies repeatedly fail to prevent worker organizing, these companies increasingly resort to repressive tactics to silence, intimidate, and repress workers of conscience, especially when those same workers threaten what they value most: endless, free flowing capital. In "No Shortcuts: Organizing for Power in the New Gilded Age," Jane McAlevey writes: > “In the organizing approach, specific injustice and outrage are the immediate motivation, but the primary goal is to transfer power from the elite to the majority, from the 1 percent to the 99 percent... [organizing] relies on mass negotiations to win, rather than the closed-door deal making typical of both advocacy and mobilizing. Ordinary people help make the power analysis, design the strategy, and achieve the outcome. 
They are essential and they know it.” No Tech for Apartheid is organizing a mass movement of workers prepared to demand just, ethical work and care for the common good in our workplaces, and who are prepared to withhold our labor through collective power if our demands are not met. This is what guides our strategy: a focus on deep base-building by talking to fellow workers in one-on-one conversations and through tabling at our workplaces. We listen to their concerns, and make them our own, knowing that we are driven by the same collective struggle. We are building a movement that steps off the feeds of social media and into the spaces where our labor takes place. ### Reclaiming Our Collective Power We must reclaim our shared agency united as workers, and in doing so remind ourselves that our labor is not only our power––but also our responsibility. Tech workers must resist complicity. We invite all tech workers to join our cause: to remain steadfast in the belief of a world where workers are liberated from oppression. Building trust among the masses begins with each of us, as we strengthen our ties to one another in meaningful, lasting ways. Organizing is simply about building genuine relationships through which we can build collective action. The next time you speak to your coworkers, have a vulnerable conversation about the world. Ask them what they care about in the workplace, and ask yourself how you can show them the ways it connects to a shared struggle. This is how we resist the rise of the techno-fascist state, and reclaim what is ours: our time, our creativity, and our labor, in solidarity with those who have much to lose, and the world to gain: like farmworkers and migrants organizing under the thumb of the U.S. detention-deportation machine, and our Palestinian siblings and martyrs united in the struggle for liberation. 💡 Please forward and share! ##
29.05.2025 18:48 — 👍 0    🔁 0    💬 0    📌 0
Original post on internet.exchangepoint.tech

A preview of links 🔗 from tomorrow's edition of IX highlights the urgent need to reduce global reliance on Big Tech infrastructure.

1️⃣ A U.S. sanctions order forced Microsoft to cut off the International Criminal Court’s accounts, crippling war crimes investigations […]

28.05.2025 05:28 — 👍 0    🔁 0    💬 0    📌 0
Preview
QUIC: The Battle That Never Was New research from postdoctoral researchers Clément Perarnaud and Francesca Musiani on the QUIC protocol reveals how tech giants like Google are reshaping internet infrastructure through standard-setting, raising fresh questions about power, governance, and digital sovereignty. Need a QUIC explainer? Check out this WIP from the authors of _How the Internet Really Works_. _But first..._ ### From Software to Society—Openness in a changing world The new study from Open Knowledge Foundation “From Software to Society: Openness in a Changing World” by Dr. Henriette Litta and Peter Bihr takes stock and looks to the future: What does openness mean in the digital age? Is the concept still up to date? The study traces the development of Openness and analyses current challenges. It is based on interviews with experts, including IX contributor Mallory Knodel, and extensive literature research. The key insights at a glance are: * Give Openness a purpose. * Protect Openness by adding guard rails. * Open innovation and infrastructure need investments. * Openness is not neutral. * Market domination needs to be curtailed. Read The Report ### Correction This is our 81st newsletter, and our first correction. Last week, we mistakenly said that this CPDP workshop was happening in Vaud, Switzerland. Oops. Like the rest of CPDP, it’s in Brussels, Belgium. Spot an error? Send us a reply or write to editor@exchangepoint.tech. **Support the Internet Exchange** If you find our emails useful, consider becoming a paid subscriber! You'll get access to our members-only Signal community where we share ideas, discuss upcoming topics, and exchange links. Paid subscribers can also leave comments on posts and enjoy a warm, fuzzy feeling. Not ready for a long-term commitment? 
You can always leave us a tip. Become A Paid Subscriber **Thank you for being a paid member!** We would like to invite you to our members-only Signal community where we share ideas, discuss upcoming topics, and exchange links. The link below will get you in for this week: Join Now ## This Week's Links ### **Internet Governance** * A few tech giants are gaining control over subsea cables and satellites that power the internet. ARTICLE 19 is launching a new initiative to investigate the impact. _https://www.article19.org/resources/wired-and-orbited-reclaiming-infrastructure-for-a-resilient-internet_ * South Korea’s online platform bill is drawing U.S. backlash, as the U.S. continues to use tariffs as a way to push back on regulations it perceives to be non-tariff barriers to digital trade. _https://www.lawfaremedia.org/article/south-korea-s-digital-regulation-proposal-sparks-u.s.-pushback_ * ICANN is the latest organization to remove references to “diversity” and “inclusion” from its website. _https://domainincite.com/31049-icann-kills-off-diversity-and-inclusion_ * Open Rights Group analyses how the Online Safety Act and Ofcom’s guidance are reshaping online speech through stricter rules on content moderation and age verification. _https://www.openrightsgroup.org/publications/how-to-fix-the-online-safety-act-a-rights-first-approach_ * The EU is moving forward on an antitrust cooperation agreement with the U.K., providing regulators with a clear framework for collaboration on competition investigations. _https://www.wsj.com/world/europe/eu-moves-forward-on-antitrust-cooperation-with-u-k-255dfc83?reflink=desktopwebshare_permalink&st=pv8UiA_ * More than 100 organizations are raising alarms about a provision in the “one big, beautiful” bill that would hamstring the regulation of AI systems. _https://edition.cnn.com/2025/05/19/tech/house-spending-bill-ai-provision-organizations-raise-alarm_ * A summary and analysis of the U.S. 
TAKE IT DOWN Act (2025) provided by the IFTAS Trust & Safety Library, a resource aimed at supporting volunteer moderators and administrators in the Fediverse. _https://connect.iftas.org/library/legal-regulatory/take-it-down-act-2025-usa_ * Fighting disinformation by taking down content misses the point. Today’s disinformation spreads through coordinated campaigns that exploit platform design, write Rohit Kumar and Paavi Kulshreshth. _https://indianexpress.com/article/opinion/columns/disinformation-in-the-digital-age-cannot-be-fought-by-taking-down-content-10017122_ ### **Digital Rights** * Trump’s latest immigration crackdown harnesses AI surveillance to sidestep due process. _https://thebulletin-org.cdn.ampproject.org/c/s/thebulletin.org/2025/05/trumps-immigration-crackdown-is-built-on-ai-surveillance-and-disregard-for-due-process/amp_ * A student has made a tool that scans for users writing certain keywords on Reddit and assigns those users a so-called “radical score,” before deploying an AI-powered bot to automatically engage with the users to de-radicalize them. _https://www.404media.co/student-makes-tool-that-identifies-radicals-on-reddit-deploys-ai-bots-to-engage-with-them_ * Seven civil society groups have launched a landmark legal case in Kenya to challenge internet shutdowns as unconstitutional violations of digital rights. _https://blog.bake.co.ke/2025/05/14/bake-6-other-organizations-challenge-internet-shutdowns-in-kenya-in-landmark-public-interest-case_ * SMEX’s latest newsletter covers digital rights in Syria, an open letter to Samsung, and a reflection on the state of press freedom in the MENA. _https://mailchi.mp/smex/digital-rights-in-syria-an-open-letter-to-samsung-and-a-reflection-on-the-state-of-press-freedom-in-the-mena_ * Skyline International for Human Rights condemns a proposed aid system in Gaza that would require Palestinians to undergo biometric scans, such as facial recognition, in exchange for food, water, and medical supplies. 
_https://skylineforhuman.org/en/news/details/819/biometrics-for-food-a-dangerous-shift-from-humanitarian-relief-to-coercive-surveillance_ * The US embassy in Zambia has warned its citizens to be wary of a new "intrusive" cyber-security law. _https://www.bbc.co.uk/news/articles/cj451xd0ezwo_ ### **Technology for Society** * A New Social unveils Bridgy Fed Config for seamless cross-platform setup, and launches a Patreon to help sustain the future of the open social web. _https://blog.anew.social/bridgy-fed-config-patreon_ * Across the Western Balkans, governments are increasingly weaponising surveillance and censorship tools to silence dissent and control digital spaces. _https://balkaninsight.com/2025/04/29/surveillance-and-censorship-worsening-in-western-balkans-birn-report_ * As openness faces co-optation and crisis, a new study argues it must be redefined and defended as a political, not just technical, value. _https://okfn.de/blog/2025/05/from-software-to-society-openness-in-a-changing-world_ * This essay collection asks: Can we reimagine AI to serve environmental justice and creativity instead of accelerating harm and erasing human expression? _https://libraopen.lib.virginia.edu/public_view/3n203z326_ * The tech industry’s greatest disruptor is its own workers. Lucy Purdon explains why they need our support. _https://courageeverywhere.substack.com/p/the-tech-industrys-greatest-disruptor_ * Elon Musk struck a series of undisclosed business deals across the Gulf while accompanying Donald Trump on a high-profile diplomatic tour. _https://www.nytimes.com/2025/05/20/world/middleeast/gulf-deal-making-spree-also-benefited-elon-musk-and-his-family.html_ * A new report examines Kenya’s push to digitize migrant identification, revealing how bureaucratic barriers to ID access shape refugees’ ability to integrate and access essential services. 
_https://www.cariboudigital.net/publication/integration-without-identification_ * The High Court of Kenya officially ruled that Worldcoin’s collection and processing of biometric data in the country was unconstitutional. _https://www.linkedin.com/feed/update/urn:li:activity:7328018153991520258/_ * New report finds that Spanish-speaking data workers in Latin America use informal digital networks to cope with low pay and precarity on global tech platforms. _https://onlinelibrary.wiley.com/doi/10.1111/ntwe.12340_ ### **Privacy and Security** * Signal doesn't recall, and now you can (not), too. Signal Desktop now includes “Screen security” designed to prevent your device from capturing screenshots of your Signal chats on Windows, and any app can copy their solution to flag content as under digital rights management (DRM) by default. https://signal.org/blog/signal-doesnt-recall * Researchers scraped 2 billion public Discord messages for academic use, raising serious privacy concerns and potential violations of Discord’s policies. _https://www.404media.co/researchers-scrape-2-billion-discord-messages-and-publish-them-online_ * AI and virtual reality are creating new frontiers for misogyny, and tech companies are looking the other way. _https://www.newstatesman.com/culture/books/book-of-the-day/2025/05/misogyny-in-the-metaverse_ * Google’s Jigsaw unit unveils new encryption standards to close long-standing privacy gaps in DNS and TLS, protecting billions from domain-level surveillance. _https://medium.com/jigsaw/a-more-private-internet-encryption-standards-hit-new-milestones-c239ede23eaf_ * New document from the World Wide Web Consortium (W3C) provides definitions for privacy and related concepts that are applicable worldwide as well as a set of privacy principles that should guide the development of the web as a trustworthy platform. 
_https://www.w3.org/TR/2025/STMT-privacy-principles-20250515_ * W3C has officially published Verifiable Credentials 2.0 as a W3C Standard, providing a more secure, privacy-respecting, and interoperable framework for issuing and verifying digital credentials. _https://www.w3.org/press-releases/2025/verifiable-credentials-2-0_ * The company behind the Signal clone used by at least one Trump administration official was breached earlier this month. The hacker says they got in thanks to a basic misconfiguration. https://www.wired.com/story/how-the-signal-knock-off-app-telemessage-got-hacked-in-20-minutes ### **Upcoming Events** * Streaming now, Ian Bruce is a portrait artist, fascinated by what Elon Musk is doing to our online and offline worlds. Join him as he spends a week, alongside special guests, exploring who Musk is as he paints him. **May 20-23. Online.**_https://www.youtube.com/live/uogmc-cKPIw_ * FediForum brings together the leading thinkers and doers who build this new Open Social Web. **June 5-7. Online.**_https://fediforum.org/2025-06_ * The ICANN83 Policy Forum schedule is now live! The hybrid event will accommodate both in-person and virtual participation. **June 9–12. Prague, Czech Republic and Online.**_https://www.icann.org/en/announcements/details/icann83-schedule-now-available-19-05-2025-en_ * Join DemocracyNext for a conversation on how AI can support, not replace, the scaling of democratic deliberation across five key dimensions. **June 17, 4:00pm BST.**_https://www.linkedin.com/events/fivedimensionsofscalingdemocrat7330556694038507521_ * Webinar hosted by the AI Standards Hub and the AI Quality Infrastructure Consortium exploring how conformity assessment schemes and international standards like ISO/IEC 42006 support compliance with the EU AI Act and broader AI safety. **June 20, 12:30 pm BST. 
Online.**_https://aistandardshub.org/events/the-role-of-conformity-assessment-and-quality-infrastructure-in-supporting-ai-safety-and-regulatory-compliance_ * The annual Internet Governance Forum is coming up **23-27 June in Norway**. It's a UN event that is free to attend and facilitates remote participation. https://www.igf2025.no * South to South (S2S) is a program that aims to strengthen the connection between artificial intelligence accountability reporting and civil society engagement, contributing to a more informed governance of AI in the Global South. Three peer learning circles are currently planned: **Southeast Asia: June 25-27. Africa: July 9-11. Latin America: July 23-25. Online.**_https://pulitzercenter.org/journalism/initiatives/south-to-south_ ### **Careers and Funding Opportunities** * Applications are now open to join the Knight-Georgetown Institute (KGI) and the Institute for Technology Law & Policy (Tech Institute) as a postdoc fellow. Rolling applications, interviews begin **mid-May**. _https://kgi.georgetown.edu/postdoctoral-fritz-fellow-digital-competition-policy-research_ * The Internet Society is hiring a Director of Monitoring, Evaluation and Learning. _https://internetsociety.bamboohr.com/careers/289_ * Open Knowledge Foundation is looking for an AI Strategy Consultant and an AI Technical Expert. _https://okfn.org/en/jobs_ * Phoenix R&D, a European messaging technology company, is hiring a Freelance Junior Product Manager. _https://join.com/companies/phoenix/14160490-freelance-junior-product-manager-all-genders-part-time_ * Three Fully-funded Interdisciplinary PhD Positions on Governance by Data Infrastructure at University of Amsterdam. 
_https://www.academictransfer.com/en/jobs/351725/three-fully-funded-interdisciplinary-phd-positions-on-governance-by-data-infrastructure_ ### **Opportunities to Get Involved** * Hard Art is a cultural collective of artists, activists, and scientists standing in solidarity in the face of climate and democratic collapse. They're starting a new creative movement. Join them. _https://hardart.metalabel.com/introducing-hard-art_ * A unique online course on threat sharing and digital forensics, taught from a feminist and human rights perspective. Apply by **June 15.**_https://www.digitaldefenders.org/online-course-threat-sharing-and-digital-forensics-from-a-feminist-and-human-rights-perspective_ ### Book Recommendations: * _Capitalism and Its Critics A History: From the Industrial Revolution to AI._ From Luddites to degrowth activists, a new history explores capitalism’s global evolution through the eyes of its fiercest critics. _https://bookshop.org/p/books/capitalism-and-its-critics-a-history-john-cassidy/20374711?affiliate=112451_ * _Fatal Abstraction: Why the Managerial Class Loses Control of Software._ In practice, few of the systems we looked to with such high hopes have lived up to their fundamental mandate. https://bookshop.org/p/books/fatal-abstraction-why-the-managerial-class-loses-control-of-software-darryl-campbell/21479418?affiliate=112451 _What did we miss? Please send us a reply or write to_ _editor@exchangepoint.tech_ _._ 💡 Want to see some of our week's links in advance? Follow us on Mastodon, Bluesky or LinkedIn. 
## QUIC, The Battle That Never Was: A Case Of Infrastructuring Control Over Internet Traffic _By Clément Perarnaud and Francesca Musiani_ _Recent research from Clément Perarnaud and Francesca Musiani explores the under-studied arena of Internet standardisation by focusing on the adoption process and global deployment of QUIC (pronounced “quick”), arguably one of the most consequential transport standards that the IETF has released in recent history. The authors demonstrate how this process is reshaping global power over the internet, with potentially long-term consequences for internet governance. Below they discuss the key findings of their paper._ _QUIC_ is a transport-layer network protocol. In other words, it defines how two endpoints—such as a user's device and a web server—can establish a connection and communicate on the Internet. The Transmission Control Protocol (TCP), introduced in the 1970s as part of the original suite of internet protocols, has long been the standard way of moving data across the internet. Initially designed by Google, QUIC is often presented as an alternative to TCP. It provides a new technical architecture to communicate and encrypt data through the network. ### What Can We Learn from QUIC? QUIC is a case study in how powerful tech companies like Google are reshaping who controls the core infrastructure of the internet. QUIC offers a way to look at recent changes in the balance of power between “Big Tech” actors, other actors of the internet industry, and states. The protocol’s development highlights how private companies take part in the making of internet standards, how the growing influence of a few dominant actors impacts standard-setting, and how technical debates over standardisation design details can end up shifting power and decision-making in internet governance. 
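QUIC's spread is already visible to ordinary clients: a server that supports HTTP/3, which runs over QUIC, typically advertises this through the Alt-Svc response header defined in RFC 7838 (for example, `h3=":443"; ma=86400`). As a minimal illustration of how a client might read such an advertisement (a sketch, not a full RFC 7838 parser):

```python
def parse_alt_svc(header: str) -> dict[str, str]:
    """Map each advertised protocol ID in an Alt-Svc header to its authority.

    E.g. 'h3=":443"; ma=86400, h2=":443"' -> {"h3": ":443", "h2": ":443"}.
    A server advertising "h3" is offering HTTP/3, which runs over QUIC.
    """
    services = {}
    for entry in header.split(","):
        # Each entry looks like: proto="authority"; param=value; ...
        first = entry.strip().split(";")[0]  # drop parameters like ma=86400
        if "=" not in first:
            continue  # e.g. the special value "clear" carries no alternative
        proto, authority = first.split("=", 1)
        services[proto.strip()] = authority.strip().strip('"')
    return services

advertised = parse_alt_svc('h3=":443"; ma=86400, h2=":443"')
print("h3" in advertised)  # → True: this server accepts HTTP/3 over QUIC
```

In practice, a browser that sees `h3` in Alt-Svc may race a QUIC connection against the existing TCP one and switch if it succeeds, which is one reason QUIC deployment grew largely invisibly to end users.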
Because QUIC was first created by Google, its standardisation process offers a way to look closely at how Google can use its influence within the Internet Engineering Task Force (IETF), one of the leading bodies responsible for developing and maintaining global Internet standards. When bringing QUIC to the IETF in the mid-2010s, Google came with a sophisticated working solution that it had already tested at scale and widely deployed across services like Chrome and YouTube. Yet success in the IETF usually doesn’t happen alone; it requires allies and coalitions. We examined how Google was able to gain the support of internet companies, network operators and state officials alike, despite early concerns that QUIC might hurt their business models, technical systems or security practices.

### The Controversial Debate about Internet Consolidation

While QUIC is often celebrated for the many innovations it brings to the transport layer (including encryption), its deployment raises important questions about who stands to benefit most from its use, and how it may accelerate the ongoing concentration of power in Internet architecture. Internet giants seem far better equipped to fully benefit from QUIC’s speed and performance gains at scale. Companies such as Google, Meta, Apple, Alibaba, Amazon, and Microsoft have all developed their own QUIC implementations—which vary depending on their use cases and design choices—but are expected to share the core of the IETF specifications to ensure interoperability. Though adoption is far from impossible technically, smaller actors may find it harder or less beneficial to adopt QUIC. Currently, the limited adoption by smaller actors seems linked in part to the lack of technical support and regular updates needed to implement QUIC in _many server environments_. There is also limited awareness of, and even limited appetite for, adopting a protocol that could affect the stability and security of companies’ private networks, whether those companies are large or small.
Despite QUIC’s advantages, this means that many actors will continue relying on TCP, even if that comes with slower performance than major services like YouTube can achieve. It also shows why QUIC will not completely replace TCP in internet traffic: both protocols are likely to coexist in the medium term, given the mixed incentives, motivations and resources of the many actors on the internet.

Our research shows that QUIC’s implications go beyond the internet industry; they matter for states themselves. Some of the new features introduced by QUIC could be interpreted as a challenge to the ability of states, or national network operators, to analyze traffic and communications.

### QUIC and (Digital) Sovereignties

This brings us to the final point, which relates to digital sovereignty and the role of states in shaping internet standards. QUIC was formally adopted at a time when many countries in the world, including in the EU, were advocating for their _“digital sovereignty”_, calling for more control over their digital infrastructure and standards. The analysis of how QUIC was developed shows that state actors were largely absent from these discussions. Yet states were still “there”, somehow: states’ control over networks often came up as a central issue in debates, and was regularly used as justification both for limiting and for expanding encryption of internet traffic. Our research highlights that the success of QUIC’s standardisation process partly lies in Google’s successful pre-emption of states’ security concerns in the design of the protocol itself, to ensure limited opposition to QUIC’s global deployment.

The case of QUIC raises broader questions, such as: can “digital sovereignty”—understood as the ability to control infrastructure, data, and technology with minimal dependencies—exist not only for states, but also for private actors? If so, have large technology companies taken one more step towards their own digital sovereignty?
To what extent are digital sovereignty claims becoming an explicit part of IETF discussions and, more generally, of standardisation processes?

For more, the open access article is available here: _https://journals.sagepub.com/doi/10.1177/14614448251336438_

💡 Please forward and share!
22.05.2025 14:55 — 👍 0    🔁 1    💬 0    📌 0