Christine Haskell, PhD

@dativeworks.bsky.social

Digital Transformation | Strategy | Adaptive Skill Building @ Scale | Decaffeinated Irritated Optimist…social media is for giving opinions away like kittens.

97 Followers  |  156 Following  |  62 Posts  |  Joined: 16.11.2024

Latest posts by dativeworks.bsky.social on Bluesky

Preview
The Less You Know About AI, the More You Are Likely to Use It AI can seem magical to those with low AI literacy, a new study finds. That, in turn, might make them more willing to try it.

You don’t say.

02.09.2025 15:08 — 👍 692    🔁 197    💬 11    📌 17
Preview
‘It's Just a Mess:' 23 People Explain How Tariffs Have Suddenly Ruined Their Hobby "The real kick in the teeth is no matter how much manufacturing is brought back to the US these items will never be made in the USA. There is no upside."

Talked to 23 people who suddenly have to pay huge tariffs or otherwise cannot get chainmail from Pakistan, yarn from France, retro computers from Japan, metal music from the Netherlands, DVDs from Germany, cosplay supplies, sunscreen, etc.

www.404media.co/its-just-a-m...

09.09.2025 14:12 — 👍 2906    🔁 1178    💬 88    📌 170
An illustration of me, and the headline: "AI agents are coming for your privacy, warns Meredith Whittaker
The Signal Foundation’s president worries they will also blunt competition and undermine cyber-security"

To put it bluntly, the path currently being taken towards agentic AI leads to an elimination of privacy and security at the application layer. It will not be possible for apps like Signal—the messaging app whose foundation I run—to continue to provide strong privacy guarantees, built on robust and openly validated encryption, if device-makers and OS developers insist on puncturing the metaphoric blood-brain barrier between apps and the OS. Feeding your sensitive Signal messages into an undifferentiated data slurry connected to cloud servers in service of their AI-agent aspirations is a dangerous abdication of responsibility.

Happily, it’s not too late. There is much that can still be done, particularly when it comes to protecting the sanctity of private data. What’s needed is a fundamental shift in how we approach the development and deployment of AI agents. First, privacy must be the default, and control must remain in the hands of application developers exercising agency on behalf of their users. Developers need the ability to designate applications as “sensitive” and mark them as off-limits to agents, at the OS level and otherwise. This cannot be a convoluted workaround buried in settings; it must be a straightforward, well-documented mechanism (similar to Global Privacy Control) that blocks an agent from accessing our data or taking actions within an app.

Second, radical transparency must be the norm. Vague assurances and marketing-speak are no longer acceptable. OS vendors have an obligation to be clear and precise about their architecture and what data their AI agents are accessing, how it is being used and the measures in place to protect it.
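
A minimal sketch of the kind of mechanism described above, assuming a hypothetical OS-level broker that checks an app's self-declared "sensitive" flag before an AI agent can touch it; the names (AppManifest, AgentAccessBroker, request_access) and app identifiers are illustrative only and do not correspond to any real OS API.

```python
# Hypothetical sketch only: apps declare themselves off-limits to agents,
# and an OS-level broker enforces that default and logs every decision.
from dataclasses import dataclass, field


@dataclass(frozen=True)
class AppManifest:
    app_id: str
    sensitive: bool = True  # privacy is the default; developers opt apps in


@dataclass
class AgentAccessBroker:
    """Mediates every agent request for app data or in-app actions."""
    manifests: dict[str, AppManifest]
    audit_log: list[str] = field(default_factory=list)  # transparency: record every decision

    def request_access(self, agent_id: str, app_id: str, action: str) -> bool:
        manifest = self.manifests.get(app_id)
        allowed = manifest is not None and not manifest.sensitive
        self.audit_log.append(
            f"agent={agent_id} app={app_id} action={action} allowed={allowed}"
        )
        return allowed


if __name__ == "__main__":
    broker = AgentAccessBroker({
        "org.signal.messenger": AppManifest("org.signal.messenger", sensitive=True),
        "com.example.weather": AppManifest("com.example.weather", sensitive=False),
    })
    print(broker.request_access("assistant-1", "org.signal.messenger", "read_messages"))  # False
    print(broker.request_access("assistant-1", "com.example.weather", "read_forecast"))   # True
```

The point of the sketch is the default: unless a developer has explicitly opted an app in, the broker denies the agent and records the decision for audit.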

📣 NEW -- In The Economist, discussing the privacy perils of AI agents and what AI companies and operating systems need to do--NOW--to protect Signal and much else!

www.economist.com/by-invitatio...

09.09.2025 11:44 — 👍 881    🔁 283    💬 11    📌 31
Preview
AI Darwin Awards Show AI’s Biggest Problem Is Human The AI Darwin Awards is a list of some of the worst tech failures of the year and it’s only going to get bigger.

The AI Darwin Awards are here to catalog the damage that happens when humanity’s hubris meets AI’s incompetence. The simple website contains a list of the dumbest AI disasters from the past year and calls for readers to nominate more.

🔗 www.404media.co/ai-darwin-aw...

09.09.2025 16:36 — 👍 173    🔁 60    💬 1    📌 2
Preview
Reality Is Ruining the Humanoid Robot Hype It takes more than building a humanoid robot to build a humanoid robot product.

so the trouble with humanoid robots as the next great tech hype is just that they're shit, they don't work and they have no applications

spectrum.ieee.org/humanoid-rob...

13.09.2025 10:37 — 👍 110    🔁 25    💬 11    📌 8

"Consider the implications if ChatGPT started saying “I don’t know” to even 30% of queries ... Users accustomed to receiving confident answers to virtually any question would likely abandon such systems rapidly."

13.09.2025 06:50 — 👍 772    🔁 167    💬 18    📌 16
Preview
Narrative Control in a Disoriented Age 1/3 The Author is the Architect

We’re not just losing the plot.
We’re living inside systems that rewrite it in real time.
When language is optimized, meaning becomes fragile.
Narrative isn’t just a story; it’s control.

#NarrativePower #Disinformation #SemanticCollapse #DataGovernance
christinehaskell.substack.com/p/narrative-...

27.05.2025 15:06 — 👍 1    🔁 0    💬 0    📌 0

This is messed up.

It is also often the story of automation—labor costs don’t go away, they’re shifted to someone whose labor isn’t compensated. See, e.g. Elish & @cariatida.bsky.social on grocery scanners.

27.05.2025 14:30 — 👍 21    🔁 14    💬 0    📌 0
Preview
We did the math on AI’s energy footprint. Here’s the story you haven’t heard. The emissions from individual AI text, image, and video queries seem small—until you add up what the industry isn’t tracking and consider where it’s heading next.

Noting that our AI footprint today is likely the smallest it will ever be, @technologyreview.com offers a comprehensive — and sobering — analysis of how much energy the AI industry uses and where it's headed. www.technologyreview.com/2025/05/20/1...

27.05.2025 14:30 — 👍 18    🔁 14    💬 0    📌 1
Preview
Civitai Ban of Real People Content Deals Major Blow to the Nonconsensual AI Porn Ecosystem Citing pressure from payment processors and new legislation, a critical resource for producing nonconsensual content bans AI models depicting the likeness of real people.

This is a pretty big change to how the entire nonconsensual AI-generated content ecosystem works. I don't think this would have happened without our reporting or support from our subscribers www.404media.co/civitai-ban-...

27.05.2025 14:24 — 👍 136    🔁 39    💬 3    📌 1
Preview
ICE Taps into Nationwide AI-Enabled Camera Network, Data Shows Flock's automatic license plate reader (ALPR) cameras are in more than 5,000 communities around the U.S. Local police are doing lookups in the nationwide system for ICE.

SCOOP: ICE, HSI, and DHS are getting side-door access to the nationwide system of Flock license plate cameras by asking local police to perform lookups for them, new public records show.

ICE does *not* have a contract to use this surveillance tool itself.

www.404media.co/ice-taps-int...

27.05.2025 13:40 — 👍 1130    🔁 728    💬 35    📌 125
Preview
Nick Clegg says asking artists for use permission would ‘kill’ the AI industry Paul McCartney, Elton John and others signed an open letter.

Nick Clegg says asking artists for use permission would ‘kill’ the AI industry

26.05.2025 13:40 — 👍 4486    🔁 865    💬 3659    📌 8933

So true!

27.05.2025 14:53 — 👍 0    🔁 0    💬 0    📌 0
Handwritten quote on a white sheet with an illustrated face at the bottom and pink hearts around the text. The quote reads:
"If you love a book, write a nice review. It gives the author encouragement for bad days when they want to take up scorpion petting."
— Liana Brooks

Illustration and lettering by Debbie Ridpath Ohi. A paintbrush lies across the page on a wooden surface.

If you love a book, write a nice review. It gives the author encouragement for bad days when they want to take up scorpion petting. - @lianabrooks.bsky.social

#WriterSky #BookSky #KidLit

26.05.2025 12:00 — 👍 6011    🔁 877    💬 84    📌 37
Preview
The Sustainability Liability No One's Pricing Into AI 1/3 Where Risk Gets Mispriced

We made the system.
Then gave it the wheel.

The worst part?
Nobody feels responsible anymore.

(New essay out. It’s about what Steinbeck saw before we did.) christinehaskell.substack.com/p/the-sustai... #SystemDesign
#AIGovernance #LeadershipReflection #EthicalAI #DesignedAccountability

27.05.2025 14:13 — 👍 1    🔁 0    💬 0    📌 0
Preview
This is what a digital coup looks like “We are watching the collapse of the international order in real time, and this is just the start,” says investigative journalist Carole Cadwalladr. In a searing talk, she decries the rise of the “bro...

Watch this video of @carolecadwalla.bsky.social’s new TED Talk. TED has been a cheerleader for the technologies that have destroyed democracy and our social fabric, and Carole does not hold back. She is so courageous … and speaks the truth.

www.ted.com/talks/carole...

10.04.2025 12:10 — 👍 150    🔁 63    💬 9    📌 7
Post image

Sentence of the year: “Critics once described Brexit as the greatest act of economic self-harm by a Western country in the post-World War II era. It may now be getting a run for its money across the Atlantic.” www.nytimes.com/2025/04/13/w...

13.04.2025 13:31 — 👍 278    🔁 68    💬 10    📌 2
China halts critical exports of certain rare earth minerals and magnets [NYT report]

We are all fucked.

Thank you, POTUS, for giving China an excuse to shut down the export of rare earth minerals and magnets, furthering your efforts to drive us all back to the Stone Age.

13.04.2025 20:58 — 👍 394    🔁 94    💬 20    📌 4
Post image

The government knows an enormous amount about you in the data systems DOGE has sought to access.

Just for starters:
www.nytimes.com/2025/04/09/u...

09.04.2025 15:01 — 👍 391    🔁 190    💬 23    📌 40
Preview
So You Want to Be a Dissident? A practical guide to courage in Trump’s age of fear.

Courage is contagious. But how do we catch it?

For months, @amifieldsmeyer.bsky.social and I have been asking dissidents and activists from around the world how we can topple authoritarianism.

We assembled their lessons into a field guide to courage.

www.newyorker.com/news/the-wee...

12.04.2025 12:39 — 👍 663    🔁 285    💬 25    📌 31
Preview
'I grew up assuming democracy was inevitable': 'Handmaid's Tale' actor on the current moment Emmy Award-winning actor Bradley Whitford joins Morning Joe to discuss the first three episodes of the final season of 'The Handmaid's Tale' and to talk about his previous work in 'The West Wing'.

"I grew up assuming democracy was inevitable"

Emmy Award-winning actor Bradley Whitford on the current political climate

11.04.2025 21:34 — 👍 508    🔁 69    💬 22    📌 6

“What did they have that eludes their lesser-known heirs today? The answer is that Kissinger & Brzezinski were immigrants. Newcomers often value America’s freedoms more than its native-born and are statistically far likelier to start companies, win Nobel Prizes & launch schools of thought.”
~Ed Luce

12.04.2025 13:03 — 👍 1417    🔁 188    💬 2    📌 17
Preview
Doctor Behind Award-Winning Parkinson’s Research Among Scientists Purged From NIH Leading scientists at the National Institutes of Health, the US’s leading medical research agency, were swept up Tuesday in the Trump administration's latest firing blitz.

NEW: The doctor behind breakthrough Parkinson’s research was among the scientists purged from the National Institutes of Health, the US’s leading medical research agency. www.wired.com/story/doctor...

02.04.2025 00:13 — 👍 3501    🔁 1943    💬 138    📌 200

I love that you tried though

01.04.2025 12:56 — 👍 0    🔁 0    💬 1    📌 0

Schumer is a nudge.

01.04.2025 12:55 — 👍 0    🔁 0    💬 1    📌 0

This show was…sublime in its portrayal of Human Resources. The persistent ethical question: if our memories make us who we are, are these innies new people, created as slaves whose sole purpose is to work? I mean, it reminded me of 90s tech culture.

28.03.2025 03:47 — 👍 2    🔁 0    💬 0    📌 0
Preview
AI Accountability Starts with Government Transparency | TechPolicy.Press Government transparency on AI use is not just a bureaucratic exercise—it is a fundamental component of maintaining public trust, writes Clara Langevin.

Government transparency on AI use is not just a bureaucratic exercise—it is a fundamental component of maintaining public trust, writes Clara Langevin.

20.03.2025 15:06 — 👍 31    🔁 9    💬 3    📌 1
Preview
Bubble Trouble An AI bubble threatens Silicon Valley, and all of us.

Some astonishing numbers in here:
-OpenAI loses $2 for every $1 it makes
-OpenAI projects annual losses of $14 billion by 2026
-To break even OpenAI needs to increase revenue 25x in just 5 years
-33% of VC portfolios are committed to AI
-5 AI-heavy stocks account for 29% of the S&P 500's value

25.03.2025 12:36 — 👍 2733    🔁 1154    💬 92    📌 347

How many times are we gonna see this same story, but with the newspaper name changed

26.03.2025 19:12 — 👍 36    🔁 8    💬 2    📌 0

Can’t help but admire this administration’s commitment to transparency

26.03.2025 23:33 — 👍 187    🔁 36    💬 2    📌 2
