Big thanks to @shelbygrossman.bsky.social, @riana.bsky.social, @det.bsky.social, @sarashah19.bsky.social & @stamos.org for almost a year of work interviewing stakeholders and researching these issues; also @noupside.bsky.social, @jcperrino.bsky.social and Elena Cryst for helping bring it to fruition.
We argue these issues would be best addressed by a concerted effort to massively uplift NCMEC's technical and analytical capabilities, which will require the cooperation of platforms, NCMEC, law enforcement and, importantly, the U.S. Congress.
Through interviews with 66 respondents, we explain why.
1. Many online platforms submit low-quality reports.
2. NCMEC has faced challenges rapidly implementing technological improvements that would aid law enforcement triage.
3. Legal constraints on NCMEC and U.S. law enforcement have implications for efficiency.
It is well known that law enforcement is overwhelmed by the volume of CyberTipline reports. Our contribution is to show that law enforcement officers feel unable to accurately prioritize the reports most likely to lead to the rescue of a child being abused.
If U.S. platforms discover child sexual abuse material, federal law requires they report it to the CyberTipline, which is run by the National Center for Missing and Exploited Children, a nonprofit. NCMEC then forwards the reports to law enforcement.
The CyberTipline is the main line of defense for children who are exploited on the internet. It leads to the rescue of children and the arrest of abusers. Yet many believe the entire system does not always live up to its potential. Our new report explores why.
NEW REPORT:
The Strengths and Weaknesses of the Online Child Safety Ecosystem: Perspectives from Platforms, NCMEC, and Law Enforcement on the CyberTipline and How to Improve It
What can policymakers do to address AI-generated child abuse images? @riana.bsky.social authored a new must-read @lawfare.bsky.social white paper.
As a result of the investigation:
- The image URLs for abuse material were reported to authorities, and action is being taken to remove this content across the internet.
- The datasets have been temporarily taken down by the nonprofit developer to address safety concerns.
A new investigation by SIO Chief Technologist David Thiel (@det.bsky.social) found more than 1,000 instances of externally validated child sexual abuse material in a dataset used to train popular AI image generation models.
We are excited to launch a grant funding program for researchers studying trust and safety issues outside of the North American or Western European context.
Have an idea? Applications are due January 30, 2024.
Thanks to all who made the second annual Trust and Safety Research Conference a huge success!
- SOLD OUT with nearly 500 attendees
- 27 sessions spanning research on AI, content moderation, and mental health
- 10 new Journal of Online Trust and Safety articles
- Great food, drink, and community
Billions of people use encryption to protect their online data, payments, and communications.
Riana Pfefferkorn and ISOC's Callum Voge argue that online safety regulations can protect people without targeting encryption.
@riana.bsky.social
@internetsociety.bsky.social
Happening now!
You're on Bluesky, so you know social media is increasingly decentralized.
With opportunity and growth, unfortunately, comes abuse. Our new primer offers tips for addressing unique safety challenges in decentralized spaces.
By @stamos.org & @sarashah19.bsky.social
A new issue of the Journal of Online Trust and Safety is hot off the cloud servers with research on privacy, deepfakes, crowdsourced fact checking, and what influences online searches.
Shelby Grossman joined the Tech Policy Press podcast to discuss research finding that current AI tools can generate persuasive propaganda articles.
@shelbygrossman.bsky.social / @techpolicypress.bsky.social
Alex Stamos (@stamos.bsky.social) tells the New York Times it is possible for Apple to do more in the fight against child sexual exploitation while balancing privacy and safety.
YouTube rabbit holes are rare, but SIO scholar Ronald Robertson finds the platform can still help alternative and extremist channels build audiences.
cyber.fsi.stanford.edu/io/news/stud...