
Ignacio Cofone

@ignaciocofone.bsky.social

Prof of Law & Regulation of AI at @oxfordlawfac.bsky.social @ox.ac.uk. Fellow @reubencollege.bsky.social. Book: "The Privacy Fallacy" (2023). Likes tech, dogs, and sustainable industrial policy

996 Followers  |  232 Following  |  81 Posts  |  Joined: 17.11.2024

Posts by Ignacio Cofone (@ignaciocofone.bsky.social)

Grok, Deepfakes, and the Collapse of the Content/Capability Distinction The Grok case suggests that effective AI regulation may come not from comprehensive AI-specific frameworks, but from applying existing harm-based laws to new capabilities.

The UK and France’s response to the Grok deepfake case suggests that effective AI regulation may not come from comprehensive AI-specific frameworks, but from the proper application of existing harm-based approaches to new capabilities, writes @ignaciocofone.bsky.social:

09.02.2026 14:08 — 👍 27    🔁 13    💬 0    📌 1

Thank you!

08.01.2026 14:10 — 👍 0    🔁 0    💬 0    📌 0

Can’t wait to read this!

For our part, the @lco-cdo.bsky.social 2024 Consumer Protection Project recommended Ontario regulate consumer notice to include “market contexts”: plain-language descriptions of systems & real risks, i.e. “structural uncertainties.” See pp. 33-36: www.lco-cdo.org/wp-content/u...

08.01.2026 05:11 — 👍 1    🔁 1    💬 0    📌 0

TLDR: people agree to data practices while valuing privacy because risk is indeterminate at the time of agreement

07.01.2026 23:07 — 👍 0    🔁 0    💬 0    📌 0

Regulators: treat consent as contingent on uncertainty reduction, with notices that focus on risks rather than technical aspects. Shift focus to redesigning the decision environment: away from default settings and towards whether it makes harms legible

07.01.2026 23:07 — 👍 1    🔁 0    💬 1    📌 0

This shifts the problem from self-control to information conditions, which operate as a market failure. Because structural uncertainty drives agreement contrary to preferences, good laws reduce uncertainty and keep choices flexible

07.01.2026 23:07 — 👍 0    🔁 0    💬 1    📌 0
The Privacy Paradox Is A Misnomer: Data Under Structural Uncertainty The infamous privacy paradox refers to the apparent inconsistency between people's stated concern for privacy and their readiness to disclose personal information...

Happy to share “The Privacy Paradox is a Misnomer: Data Under Structural Uncertainty” (GTLJ 2026), which empirically shows that uncertainty about downstream data uses and consequences, rather than unstable or contradictory preferences, drives the so-called privacy paradox papers.ssrn.com/sol3/papers....

07.01.2026 23:07 — 👍 7    🔁 0    💬 3    📌 0
Taxonomizing Synthetic Data for Law Synthetic data is increasingly important in data usage and AI design, creating novel legal and policy dilemmas. All too often, discussions of synthetic data tre...

ISP Fellow @ignaciocofone.bsky.social publishes in Iowa Law Review about "Taxonomizing Synthetic Data for Law"

papers.ssrn.com/sol3/papers....

07.11.2025 19:13 — 👍 3    🔁 1    💬 0    📌 0

Norway’s Court of Appeal just upheld the historic fine against Grindr for (unlawfully) sharing its users’ data with third parties. It’s an important step towards treating inferences as personal data (app-level identifiers as processing that reveals sexual orientation) www.datatilsynet.no/contentasset...

24.10.2025 14:30 — 👍 0    🔁 0    💬 0    📌 0

Some implications: privacy risks include both leakage and group-based inferences; data quality depends on valid assumptions; competition effects vary by type. Regulators should check the ground-truth claims that synthetic data encodes when differentiating among types

10.10.2025 16:40 — 👍 0    🔁 0    💬 0    📌 0

Ground-truth taxonomy based on G&L: (1) transformed data modifies collected data for an end use; (2) augmented data adds to collected data from modeled structure often to improve fidelity; (3) simulated data is generated from background models rather than records

10.10.2025 16:40 — 👍 0    🔁 0    💬 1    📌 0
Taxonomizing Synthetic Data for Law Synthetic data is increasingly important in data usage and AI design, creating novel legal and policy dilemmas. All too often, discussions of synthetic data tre...

Happy to share our new piece with Katherine Strandburg & Nicholas Tilmes, β€œTaxonomizing Synthetic Data for Law.” It engages Gal & Lynskey’s excellent article & centers the role of ground-truth assumptions. The key q is how creation methods encode claims about the world ssrn.com/abstract=555...

10.10.2025 16:40 — 👍 1    🔁 1    💬 1    📌 0

As many know, @bjard.bsky.social and I have been drafting a Technology Law coursebook for a few years. We've used it to teach classes at three institutions, including Yale Law School, and others have used chapters in their techlaw classes.

We're excited to share the current version more broadly!

02.09.2025 16:45 — 👍 13    🔁 7    💬 2    📌 1

Always read @ignaciocofone.bsky.social, including accidental legal history.

11.09.2025 14:33 — 👍 4    🔁 1    💬 0    📌 0
Generative AI Regulation in the US and Canada Abstract: The US and Canada regulate generative AI in different ways for the public and private sectors. They both have federal frameworks that set AI-specif...

Glad to see this chapter published. I always found history of law quite interesting, and I never thought I would accidentally do it by writing in 2023-2024 a chapter focused on two now dead pieces of legislation! academic.oup.com/edited-volum...

11.09.2025 14:08 — 👍 2    🔁 0    💬 0    📌 1

What a nice surprise to find this review of The Privacy Fallacy in the Society for Technical Communication's journal by Donald Riccomini. Thankful to the reviewer for engaging with the book and its claim that we need a new type of accountability www.jstor.org/stable/27373...

08.09.2025 10:21 — 👍 2    🔁 0    💬 0    📌 0
Protecting Consumers in a Post-Consent World | Stanford Law Review In Charting a New Course on Digital Consumer Protection at the Federal Trade Commission, former FTC Chair Lina Khan and her co-authors Samuel Levine a...

My article "Protecting Consumers in a Post-Consent World," about how we can broaden antitrust and consumer protection to deal with the fact that we have abandoned notice and consent in contract law, is now published in the Stanford Law Review Online.

www.stanfordlawreview.org/online/prote...

28.08.2025 22:30 — 👍 22    🔁 9    💬 1    📌 1
Opinion: Don’t hate ChatGPT-5. Your chatbot is not your friend The reaction to OpenAI’s new GPT-5 system shows the risks of making a chatbot too humanlike

As with other recent news of deaths by suicide, this shows that CSR in AI requires building products that avoid fostering addiction and are less parasocial. Reducing sycophancy and downplaying the illusion of personality mitigate the risks of unhealthy AI reliance
www.theglobeandmail.com/business/com...

28.08.2025 15:52 — 👍 0    🔁 0    💬 0    📌 0

New op-ed: Many users were distraught over OpenAI’s shift from GPT-4o to GPT-5, describing the change as “losing a friend” or a “therapist.” The enormous online reaction shows that, when language models sound convincingly human, people start to forget they aren't.

28.08.2025 15:52 — 👍 1    🔁 0    💬 1    📌 0

Big thank you to @frankpasquale.mastodon.social.ap.brid.gy for his generous review of The Privacy Fallacy in the Journal of Law & Political Economy. He lays out the difficult questions involved in valuing harm & distributing compensation, in which LPE can play a role escholarship.org/uc/item/4jz9...

21.08.2025 12:07 — 👍 2    🔁 0    💬 0    📌 0
Article abstract

I'm delighted to share that my article, AI and Doctrinal Collapse, is forthcoming in Stanford Law Review! Draft at papers.ssrn.com/sol3/papers.....

17.08.2025 15:25 — 👍 31    🔁 5    💬 3    📌 0

🎉 We are delighted to announce that Lionel Smith, Professor of Comparative Law at the University of Oxford, has been elected a Fellow of the British Academy 👏

He joins a community of over 1,800 scholars who share a commitment to advancing the humanities and social sciences.

👇 shorturl.at/3Bkyg

21.07.2025 10:13 — 👍 1    🔁 1    💬 0    📌 0
Moral Crumple Zones: Cautionary Tales in Human-Robot Interaction (pre-print) As debates about the policy and ethical implications of AI systems grow, it will be increasingly important to accurately locate who is responsible when agency i...

I often think of this intentional stylistic choice as the opposite of Madeleine Elish’s “moral crumple zone” papers.ssrn.com/sol3/papers....

19.07.2025 21:08 — 👍 6    🔁 1    💬 0    📌 0
The Privacy Fallacy: Harm and Power in the Information Economy by Ignacio Cofone “Our privacy is besieged by tech companies,” laments Ignacio Cofone, Law Professor and privacy aficionado, in The Privacy Fallacy: Harm and Power in the Information Economy. In an enlightening yet h...

D’Souza highlights something key: AI makes new inferences that amplify harms we can’t meaningfully consent to or control. So “navigating the artificially intelligent world of the near future”, as he puts it, is a call for new forms of accountability digitalcommons.schulichlaw.dal.ca/cjlt/vol22/i...

18.07.2025 12:42 — 👍 0    🔁 0    💬 0    📌 0

Grateful to Christopher D’Souza for his thoughtful review of The Privacy Fallacy in the Canadian Journal of Law & Technology. He argues that our privacy regime is, as he says, “not only outdated, but untenable” in the face of current data practices.

18.07.2025 12:42 — 👍 1    🔁 0    💬 1    📌 0
The Privacy Fallacy: Harm and Power in the Information Economy by Ignacio Cofone “Our privacy is besieged by tech companies,” laments Ignacio Cofone, Law Professor and privacy aficionado, in The Privacy Fallacy: Harm and Power in the Information Economy. In an enlightening yet h...

CDS highlights something crucial: AI makes new types of inferences that amplify harms we can’t meaningfully consent to or control. “Navigating the artificially intelligent world of the near future”, as he puts it, is a call for data accountability digitalcommons.schulichlaw.dal.ca/cjlt/vol22/i...

17.07.2025 22:08 — 👍 0    🔁 0    💬 0    📌 0

Check out recorded talks from our Law & Tech Speaker Series: lnkd.in/eBZ2prw7

In “Why AI Requires Rebuilding Privacy Law”, Prof. @ignaciocofone.bsky.social explains why AI demands shifting privacy law from individual control to harm prevention.

👉 www.youtube.com/watch?v=lsQV...

03.07.2025 17:34 — 👍 2    🔁 1    💬 0    📌 0
Table of contents

New draft from me, “Privacy and Disinformation,” on why privacy law, not speech or economic regulation, is the best path forward for fighting social media disinformation.

Read the draft here (and later finalized in UC Law Journal): papers.ssrn.com/sol3/papers....

25.06.2025 22:47 — 👍 101    🔁 28    💬 3    📌 3
Notable Privacy Books: A Journey Through History In this essay, I discuss notable privacy books from the 1960s to 2020s – seven decades and more than 400 books. I briefly explain why each book is note...

What an amazing resource by @daniel-solove.bsky.social. A history of privacy books.

Makes a perfect summer reading list.

papers.ssrn.com/sol3/papers....

12.06.2025 13:22 — 👍 9    🔁 5    💬 0    📌 0

Helpful article for this, @thomaskadri.bsky.social's Platforms as Blackacres www.uclalawreview.org/platforms-as...

05.06.2025 11:20 — 👍 1    🔁 0    💬 0    📌 0