ISL Exec Dir, Lisa LeVasseur's Harvard Carr-Ryan Center fellowship paper on digital product safety is out! Congrats, Lisa! @lalevasseur.bsky.social
www.hks.harvard.edu/centers/carr...
We have successfully fought for product safety repeatedly in the past and we can do so now. We can no longer live in this ongoing amnesiac fog, thinking that these digital products are somehow exempt from basic product safety.
Stay tuned..... 3/3
We can and must start coalescing around a vision of measurably safe digital product behavior. You can call it "privacy by design" or "safety by design" or "ethical AI by design" or whatever, but what people want and deserve are reasonably safe digital products. 2/3
19.02.2026 19:09
Industry loves that the war for safer digital products is being waged on multiple disconnected fronts. It hampers accountability and makes the goal of achieving actually safe digital products more expensive and difficult. 1/3
19.02.2026 19:09
we need to talk about that Ring Super Bowl ad
10.02.2026 20:18
"Perhaps the most substantive change is that the US government has achieved its own kind of βgolden shareβ of unfettered TikTok user data. In short, it might be the worst of all possible worlds."
By @lalevasseur.bsky.social, Irene Knapp, and Bryce Simpson
We're a bit late here, but we started some research on TikTok last year and shelved it due to other priorities. For Privacy Day, in light of the announced sale of US-related TikTok assets to the new TikTok USDS Joint Venture LLC, we release the updated piece: internetsafetylabs.org/blog/researc...
31.01.2026 01:07
We love to see it!
Community-created harm reduction infrastructure to contest the alarming integration of AI agents into the Windows operating system (which is currently the most reckless deployment environment) ♥️
github.com/zoicware/Rem...
Screenshot of the Microsoft Word Options menu open to the "Advanced" tab, highlighting a default file called "AI Agent AI Spy.docx" for two options.
What fresh Copilot AI hell is being foisted upon us once again? All of the settings in MS Word Advanced that apply at a file level apparently default to [the obnoxiously truthfully named?] "AI Agent AI Spy.docx".
This is exhausting.
@meredithmeredith.bsky.social and @udbhav-tiwari.bsky.social 's CCC talk was so inspiring I wrote a thing about it: internetsafetylabs.org/blog/insight...
15.01.2026 16:07
while reusing existing definitions as much as possible.
Join us for the fun!
internetsafetylabs.org/get-involved...
/fin
The purpose of these new resources is to better align technologists with legal terminology (and vice versa), to identify strengths/weaknesses in terminology in existing AI laws, and to arrive at a disambiguated and ISL-approved list of terms and definitions, /3
10.12.2025 23:58
Calling all word-nerds: we're updating our glossary of terms in our open-to-the-public Software Safety Standards Panel.
We've synthesized definitions from extant AI and other laws worldwide to arrive at a clear and complete list of terms. /1
Historically, we've mainly focused on 2 and 3. We really must get keener about codifying acceptable and unacceptable risk in digital product behavior. /fin
24.11.2025 21:13
4️⃣ Governance must cover at least 3 things (in no particular order): (1) constraints on digital product behavior, (2) constraints on human behavior and use of digital products (cybercrime), and (3) constraints on corporate behavior in building digital products. /7
24.11.2025 21:13
So when we're dealing with Digital Product Safety, a responsible manufacturer must keep aware of both ends of the [not really a] spectrum, and first and foremost, the design-based behaviors. /6
24.11.2025 21:13
3️⃣ Digital products possess the unique quality of being capable of inflicting harms and hazards through their independent action (i.e., without human involvement): the products BEHAVE in a hazardous or harmful manner. In addition, humans can use digital products in a harmful manner. /5
24.11.2025 21:13
Digital products are riddled with hazards that can become harms. Direct, immediate harms, e.g. suicide coaching by chatbots, are presently lower frequency, but the dumpster-fire accelerant of so-called "AI" technologies seems to be increasing immediately harmful digital product behavior. /4
24.11.2025 21:13
Note that the majority of behaviors ISL tracks in our safety labels are more accurately understood as hazards: they are necessary conditions that, when combined with other conditions, can and will result in harm to the person using the tech. /3
24.11.2025 21:13
2️⃣ Note the change in language from "Programmatic Harm" to "Design-based hazards and harms". We think this lines up better with language used in litigation, and also more accurately covers both hazards and harms baked into digital products by design. /2
24.11.2025 21:13
We've just refined our depiction of digital product hazards and harms. Some key takeaways:
1️⃣ There are two actors capable of inflicting harm or generating hazards when it comes to digital products: the products themselves and humans who weaponize digital products. /1
Have you checked out App Microscope yet? App Microscope is a useful tool that displays safety labels for mobile applications, with over 1700 apps studied from our previous K-12 EdTech Safety Benchmark.
Check it out here: appmicroscope.org
We've conducted extensive research into school tech safety and compiled a set of recommendations covering app, web, and school technology practices. Dive into more of our Privacy Recommendations for EdTech stakeholders in the link below:
internetsafetylabs.org/resources/re...
At Internet Safety Labs, we're committed to making the digital world a safer place. If you find our work valuable, consider supporting our mission! Your donations help us continue to provide critical insights and transparency in software safety.
Donate here: internetsafetylabs.org/donate
Have you checked out App Microscope yet? App Microscope is a useful tool that displays safety labels for mobile applications, with over 1700 apps studied from our two K-12 EdTech Safety Benchmarks.
Check it out here: appmicroscope.org
Helpful Hint from ISL: If you see a "Do Not Sell My Information" button on a website's homepage, it means that company IS selling your data. If you're a California resident, be sure to always click that button and opt out. If you're NOT a California resident, at least you know there's a risk.
16.09.2025 16:16
ATTENTION: EdTech Developers! The new school year is here and it's the perfect time to ensure your tech is safer for students!
Dive into more of our Privacy Recommendations for EdTech stakeholders in the link below:
internetsafetylabs.org/resources/re...
Want to help set the standard for safer software? Join our Software Safety Standards Panel!
We're gathering experts to define and uphold essential safety standards for software. If you're ready to dive in, this panel is for you.
Get involved here: internetsafetylabs.org/get-involved/
Did you know that 23% of school apps used by K12 students include ads? 13% include retargeting ads directed at children.
This is one of the many findings within our previous US EdTech Benchmark report:
internetsafetylabs.org/wp-content/u...
Did you know most school apps used by K12 students are unsafe for children? That's just the tip of the iceberg!
Deep dive into our research findings in our previous US K12 EdTech Product Safety Benchmark Report:
internetsafetylabs.org/resources/re...