
Thomas Dietterich

@tdietterich.bsky.social

Safe and robust AI/ML, computational sustainability. Former President AAAI and IMLS. Distinguished Professor Emeritus, Oregon State University. https://web.engr.oregonstate.edu/~tgd/

7,606 Followers  |  490 Following  |  947 Posts  |  Joined: 22.09.2023

Latest posts by tdietterich.bsky.social on Bluesky

This Hacker Conference Installed a Literal Anti-Virus Monitoring System
At New Zealand's Kawaiicon cybersecurity convention, organizers hacked together a way for attendees to track CO2 levels throughout the venue, even before they arrived.

Conference centers (and other public venues) should be installing these. I suppose we could also crowdsource this data if someone could make a wearable Bluetooth version. www.wired.com/story/this-h...

21.11.2025 16:14 — 👍 2    🔁 1    💬 0    📌 0
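
A wearable version would mostly be a broadcasting problem: a sensor advertising its reading over Bluetooth Low Energy while nearby phones passively log whatever beacons they hear. A minimal Python sketch of the listening side, assuming a hypothetical beacon that advertises under a name like "CO2-xxxx" and packs its ppm reading into the manufacturer data (the `bleak` library is real; the device and its payload format are invented for illustration):

```python
# Sketch: passively log readings from hypothetical wearable CO2 beacons.
# Assumed (invented) format: device name "CO2-xxxx", ppm as a 16-bit
# little-endian value at the start of the manufacturer data payload.
import asyncio
import struct

from bleak import BleakScanner  # pip install bleak

def on_advert(device, adv):
    if device.name and device.name.startswith("CO2-"):
        for payload in adv.manufacturer_data.values():
            if len(payload) >= 2:
                (ppm,) = struct.unpack_from("<H", payload)
                print(f"{device.address} {device.name}: {ppm} ppm")

async def main(seconds: float = 30.0) -> None:
    scanner = BleakScanner(detection_callback=on_advert)
    await scanner.start()
    await asyncio.sleep(seconds)  # listen passively, then stop
    await scanner.stop()

if __name__ == "__main__":
    asyncio.run(main())
```

Crowdsourcing the venue map would then just mean uploading these timestamped readings along with a coarse location.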

Monarch butterfly tracks in today’s NYT.

Gift link: www.nytimes.com/2025/11/17/s...

20.11.2025 23:36 — 👍 215    🔁 48    💬 10    📌 10

I'm told arXiv received an LLM-generated fake Lean proof. Authors don't even know how to check their "proofs" using Lean.

21.11.2025 06:07 — 👍 2    🔁 0    💬 0    📌 0
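
For anyone unfamiliar with Lean: checking a proof is mechanical, not a matter of taste. A minimal Lean 4 sketch of what that check looks like; if the file compiles, the kernel has verified the proof, and a fabricated proof term fails at exactly this step:

```lean
-- If `lean` accepts this file, the kernel has checked the proof.
-- An LLM-fabricated proof term would fail to elaborate here.
theorem my_add_comm (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b

-- `#check` reports the statement's type; compiling the file checks the proof.
#check my_add_comm
```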

I am by no means a prominent public intellectual, but my inbox is increasingly filled with messages from people who have been convinced by sycophantic chatbots that they have discovered revolutionary theories that entirely upend our scientific understanding of the universe.

21.11.2025 02:49 — 👍 3827    🔁 668    💬 134    📌 134

If it is garbage, then you don't care if it is paywalled, right?

21.11.2025 06:02 — 👍 0    🔁 0    💬 1    📌 0
Opinion | The Sad and Dangerous Reality Behind ‘Her’

I agree that emotional addiction to chatbots is the number one risk of AI today. Here is a gift link to an important OpEd in the NYTimes:
www.nytimes.com/2025/11/17/o...

20.11.2025 05:36 — 👍 15    🔁 7    💬 0    📌 2

Or maybe a well-designed independent audit

19.11.2025 03:17 — 👍 0    🔁 0    💬 0    📌 0

There are plenty of narrow AI systems that exceed human performance. Example: AlphaFold for protein folding.

Even a simple calculator beats humans at arithmetic.

Properly deployed, AI can help us address many important problems.

18.11.2025 06:12 — 👍 2    🔁 0    💬 1    📌 0
We Can Now Track Individual Monarch Butterflies. It’s a Revelation.

Love this new technology from Cellular Tracking Technology for tracking Monarch butterflies. Great NYTimes story.
www.nytimes.com/2025/11/17/s...

18.11.2025 06:08 — 👍 17    🔁 1    💬 1    📌 0

There has been a Promethean thread throughout the history of AI. Bezos is bringing it out into the open.

18.11.2025 05:46 — 👍 1    🔁 0    💬 0    📌 0

Yes, maybe that’s the fix. It still feels a bit slippery

17.11.2025 17:41 — 👍 1    🔁 0    💬 0    📌 0

Good question. I've thought a bit about this but can't decide. If memory is free, you could remember everything as you suggest. But otherwise, the rules of cache management would apply, I guess.

17.11.2025 06:56 — 👍 1    🔁 0    💬 1    📌 0
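
To make "the rules of cache management" concrete: the standard policy is to bound capacity and evict the least-recently-used entry, as in `functools.lru_cache`. A minimal Python sketch of that policy applied to an agent-style memory (the class and names are my illustration, not anything from the thread):

```python
# Sketch: bounded memory with least-recently-used (LRU) eviction.
from collections import OrderedDict

class LRUMemory:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self._store = OrderedDict()  # insertion order doubles as recency order

    def remember(self, key, value) -> None:
        if key in self._store:
            self._store.move_to_end(key)      # re-storing counts as a use
        self._store[key] = value
        if len(self._store) > self.capacity:
            self._store.popitem(last=False)   # evict the stalest entry

    def recall(self, key):
        if key not in self._store:
            return None                       # forgotten (or never stored)
        self._store.move_to_end(key)          # recalling also counts as a use
        return self._store[key]

mem = LRUMemory(capacity=2)
mem.remember("a", "first fact")
mem.remember("b", "second fact")
mem.recall("a")                  # touch "a", so "b" is now least recent
mem.remember("c", "third fact")  # over capacity: evicts "b"
assert mem.recall("b") is None and mem.recall("a") == "first fact"
```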

Yes. I define learning as an increase in the knowledge of the system. At a minimum, that requires memory. Memory without generalization is "rote learning". Speaking from experience, maybe we should call generalization without memory a kind of "senior moment"?

16.11.2025 20:07 — 👍 3    🔁 0    💬 1    📌 0
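
The split is easy to demonstrate. A toy Python sketch (my illustration): a rote learner stores exact pairs and fails on anything unseen, while a fitted rule answers unseen inputs, and once fitted it can even discard the examples it learned from:

```python
# Sketch: rote learning (pure memory) vs. generalization (a fitted rule).
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]  # roughly y = 2x

# Rote learner: remembers exact pairs, has no answer for unseen inputs.
rote = dict(zip(xs, ys))
print(rote.get(2.0))   # 3.9  (memorized)
print(rote.get(2.5))   # None (never seen it)

# Generalizing learner: least-squares slope through the origin.
slope = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
print(slope * 2.5)     # close to 5.0 on an input it never saw

# Generalization without memory: once `slope` is computed, the individual
# examples can be thrown away -- the "senior moment" regime.
```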

I generally agree with your analysis here. I didn't intend my comment as an attack or as "whataboutism", but rather to have exactly this discussion. Thank you

16.11.2025 19:55 — 👍 0    🔁 0    💬 0    📌 0

Sorry for my confusing analogy.

15.11.2025 23:25 — 👍 0    🔁 0    💬 0    📌 0

I can give instructions to "agentic" LLM systems and they will execute them. That is a form of programming. I don't think the vendor of such systems is liable for every single token that is produced or action that is taken. But the vendor should be responsible for harms caused by LLM flaws.

15.11.2025 23:25 — 👍 2    🔁 0    💬 4    📌 0

The question is how does the voting public understand the slogan. In the 60s, some folks interpreted "Peace Now" as "Let's withdraw from Vietnam" and others interpreted it as "Let's unilaterally disarm".

15.11.2025 22:58 — 👍 1    🔁 0    💬 1    📌 0

Machine learning is a mimicry technology, so of course these LLMs mimic us. But that is not necessarily evidence about the nature of general intelligence (i.e., intelligence more general than ours).

15.11.2025 22:53 — 👍 2    🔁 0    💬 0    📌 0

I agree that bad medical advice is the fault of the vendor, not the user.

15.11.2025 02:12 — 👍 0    🔁 0    💬 0    📌 0

OpenAI in the ChatGPT case, as well as all of these “agentic” systems that are coming on the market.

15.11.2025 02:10 — 👍 0    🔁 0    💬 1    📌 0

Oh, I worry! (But I still fly.)

15.11.2025 02:08 — 👍 0    🔁 0    💬 0    📌 0

There is a gray zone where the user tells an AI system to commit a crime (and it does). Under what conditions is the AI vendor an accessory to the crime? @rcalo.bsky.social ? Is this in your book?

15.11.2025 02:06 — 👍 1    🔁 0    💬 1    📌 0

Some nuance is required. If I write a computer program that prints out something libelous, for example, the vendor is not liable. But if a compiler bug causes someone to be harmed, the vendor should be liable.

15.11.2025 02:06 — 👍 2    🔁 0    💬 3    📌 0

So many nonsense ad hoc pipelines could be prevented by requiring that they work on synthetic data.

I tend to think of experiments as special cases of inference, since most of the problems I work on cannot be studied in experiments. But I get that many researchers see experiments as the base analogy.

10.11.2025 12:41 — 👍 61    🔁 10    💬 3    📌 0
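
One concrete form of the requirement: before a pipeline touches real data, run it on data simulated from a known model and check that it recovers the planted answer. A minimal Python sketch of that sanity check, with a least-squares step standing in for whatever the real pipeline does:

```python
# Sketch: validate an analysis pipeline on synthetic data with known truth.
import random

def synthesize(n: int, true_slope: float, noise: float, seed: int = 0):
    """Simulate data from a known model: y = true_slope * x + Gaussian noise."""
    rng = random.Random(seed)
    xs = [rng.uniform(0.0, 10.0) for _ in range(n)]
    ys = [true_slope * x + rng.gauss(0.0, noise) for x in xs]
    return xs, ys

def pipeline_estimate(xs, ys) -> float:
    """Stand-in for the pipeline under test (least-squares slope)."""
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

# If the pipeline cannot recover a planted slope from clean synthetic data,
# its output on real data means nothing.
true_slope = 2.5
xs, ys = synthesize(n=1000, true_slope=true_slope, noise=0.5)
estimate = pipeline_estimate(xs, ys)
assert abs(estimate - true_slope) < 0.1, f"pipeline failed: {estimate:.3f}"
print(f"recovered slope {estimate:.3f} (planted {true_slope})")
```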

We are not "banning" reviews; we are just requiring peer review first. Good review articles are important for the field!

08.11.2025 18:57 — 👍 21    🔁 2    💬 0    📌 1
Chinese Satellite Destruction Stirs Debate | Arms Control Association

Russia and China combined have created twice as much debris as the US. China notoriously blew up a satellite as a test. It would be supreme justice if it were Chinese-sourced debris that struck the Chinese spacecraft.
www.armscontrol.org/act/2007-03/...

08.11.2025 02:43 — 👍 1    🔁 0    💬 0    📌 1

“Archival” and “workshop” don't usually go together. If you can provide evidence of strong peer review, that’s the key. You may need to do that in an appeal, as we don't have any mechanism in our submission system for providing such evidence.

05.11.2025 21:40 — 👍 0    🔁 0    💬 0    📌 0

You still should fix the first paragraph. We will be releasing review articles and position papers, but only after they have passed peer review.

04.11.2025 19:01 — 👍 0    🔁 0    💬 0    📌 0

This is a very good point. It is one of the reasons why I think generic chatbots should probably be outlawed.

04.11.2025 00:19 — 👍 13    🔁 1    💬 0    📌 0

Here is a good use: LLMs as proof assistants in mathematical research.

Here is a bad use: Automated synthesis of misinformation for social media.

It *is* a new technology; people are trying to figure out how to use it both for good and for ill. Not all technology has a "use" when it is invented.

03.11.2025 03:31 — 👍 18    🔁 0    💬 1    📌 0
