
Christopher Kanan

@chriskanan.bsky.social

AI Scientist | Professor | Techno-Optimist chriskanan.com

238 Followers  |  70 Following  |  19 Posts  |  Joined: 16.11.2024

Latest posts by chriskanan.bsky.social on Bluesky

Preview
Retiring “AGI”: Two Paths for Intelligence. Why today’s systems will transform work but will not yield human-like minds

"AI doomer" media gets clicks and listens, but there is no logical reason to believe that today's frontier LLMs will "wake up" and become the terminator. The major problem is that “AGI” is an overloaded term. We should retire it: syntheticminds.substack.com/p/retiring-a...

07.10.2025 13:41 — 👍 0    🔁 0    💬 0    📌 0
Preview
Chris Kanan, University of Rochester - Can we teach AI to learn like humans? - On University of Rochester Week: Human intelligence and artificial intelligence learn differently, but can that change? Chris Kanan, associate professor of computer science at the Hajim School of Engi...

I was featured on the Academic Minute, which airs on over 70 radio stations around the US and Canada. In the segment, I introduce continual learning for the non-AI audience: academicminute.org/2025/02/chri...

#ai #academicminute #continuallearning

20.02.2025 18:43 — 👍 1    🔁 0    💬 0    📌 0

To be clear, I don’t believe we should halt AI progress. Higher education must adapt. But I worry that most universities, already overwhelmed by ongoing crises, lack the agility and foresight to make the tough decisions needed to survive.

13.02.2025 12:39 — 👍 1    🔁 0    💬 0    📌 0
Preview
AI & The Existential Crisis Facing Higher Education I’ve been a professor working on the frontiers of AI for about a decade. I consider training and mentoring the next generation of engineers and scientists to be a great privilege, and I think…

As a professor working at the frontiers of AI, I’ve grown increasingly concerned about the cataclysmic impact AI could have on college enrollments in the coming decades—on top of the decline already underway for other reasons.

https://buff.ly/4hDInSq

#HigherEducation #AI #EnrollmentCrisis

13.02.2025 12:39 — 👍 2    🔁 0    💬 2    📌 0

Given that roughly half of the academic AI papers published in our top-tier conferences come from Chinese universities, barring researchers from downloading code or weights developed by Chinese institutions would catastrophically impair AI research in the USA.

02.02.2025 16:08 — 👍 2    🔁 0    💬 0    📌 0
Preview
DeepSeek fallout: GOP Sen Josh Hawley seeks to cut off all US-China collaboration on AI development This week the U.S. tech sector was routed by the Chinese launch of DeepSeek, and Sen. Josh Hawley is putting forth legislation to prevent that from happening again.

A proposed AI bill would, based on my read (and ChatGPT's), make it illegal in the USA to download AI code or weights created by Chinese companies, universities, etc. This is catastrophically shortsighted.

https://buff.ly/4hpjTfC

Bill: https://buff.ly/40Vyd9J

02.02.2025 16:08 — 👍 3    🔁 0    💬 1    📌 0

The only barrier is having access to the right kind of chips, and DeepSeek figured out how to use the chips they have more effectively. The lessons from DeepSeek about how to use FP8 will let AI teams worldwide get more out of NVIDIA's newer chips.

27.01.2025 14:54 — 👍 1    🔁 0    💬 1    📌 0
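For readers curious what FP8 training looks like in practice, the usual route on Hopper-class GPUs is NVIDIA's Transformer Engine, which wraps supported layers in an FP8 autocast context. The sketch below is a minimal illustration, assuming the transformer_engine package and an FP8-capable GPU; the recipe settings are placeholder values, not DeepSeek's actual configuration.

```python
import torch
import transformer_engine.pytorch as te
from transformer_engine.common.recipe import DelayedScaling, Format

# Minimal FP8 forward/backward pass with NVIDIA Transformer Engine.
# Requires an FP8-capable GPU (H100 or newer); recipe values are illustrative.
fp8_recipe = DelayedScaling(
    fp8_format=Format.HYBRID,   # E4M3 for the forward pass, E5M2 for gradients
    amax_history_len=16,
    amax_compute_algo="max",
)

layer = te.Linear(1024, 1024).cuda()
x = torch.randn(32, 1024, device="cuda", dtype=torch.bfloat16)

with te.fp8_autocast(enabled=True, fp8_recipe=fp8_recipe):
    y = layer(x)                # matmuls run in FP8 with per-tensor scaling
y.float().sum().backward()      # gradients flow through the FP8 layer
```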

A huge percentage of the AI PhD students trained in the USA are Chinese; nationwide, only about 30% of our AI PhD students are domestic. We aren't getting enough domestic applicants for AI PhDs in the USA. Why people think China wouldn't have AI expertise confuses me.

27.01.2025 14:54 — 👍 0    🔁 0    💬 1    📌 0

I can see why Microsoft stock would be impacted by this news due to their OpenAI investment, but I really don't get the others. DeepSeek used FP8 on NVIDIA's chips to get a big boost in training, among other things, but I think this fear is overblown.

27.01.2025 14:54 — 👍 3    🔁 0    💬 1    📌 0

There are too many unknowns to justify using a fixed compute-based threshold. Policymakers should focus on regulating specific high-risk AI applications, similar to how the FDA regulates AI software as a medical device.

16.01.2025 21:23 — 👍 1    🔁 0    💬 0    📌 0

Lastly, many trying to scale LLMs beyond systems like GPT-4 have hit diminishing returns, shifting their focus to test-time compute. This involves using more compute to "think" about responses during inference rather than in model training, and the regulation does not address this trend at all.

16.01.2025 21:23 — 👍 1    🔁 0    💬 1    📌 0
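One of the simplest forms of test-time compute is best-of-N sampling: draw several candidate answers and keep the one a verifier scores highest. The sketch below is only an illustration of the idea; generate and score are hypothetical stand-ins for an LLM sampler and a reward or verifier model.

```python
from typing import Callable, List

def best_of_n(prompt: str,
              generate: Callable[[str], str],
              score: Callable[[str, str], float],
              n: int = 8) -> str:
    """Spend extra compute at inference: sample n answers, keep the best-scoring one.

    `generate` and `score` are hypothetical callables standing in for an LLM
    sampler and a reward/verifier model; no training compute is involved.
    """
    candidates: List[str] = [generate(prompt) for _ in range(n)]
    return max(candidates, key=lambda answer: score(prompt, answer))
```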

It is unlikely that AI progress will remain tied to inefficient transformer-based models trained on massive datasets.

16.01.2025 21:23 — 👍 0    🔁 0    💬 1    📌 0

Second, the 10^26 operations threshold appears to be based on what may be required to train future large language models using today’s methods. However, future advances in algorithms and architectures could significantly reduce the computational demands for training such models.

16.01.2025 21:23 — 👍 0    🔁 0    💬 1    📌 0

The current regulation seems misguided for several reasons. First, it assumes that scaling models automatically leads to something dangerous. This is a flawed assumption, as simply increasing model size and compute does not necessarily result in harmful capabilities.

16.01.2025 21:23 — 👍 0    🔁 0    💬 1    📌 0

The newly proposed US export-control rule keyed to the amount of compute used to train AI systems is open for comment. I don't think it makes much sense. It puts export controls on AI models trained with over 10^26 "operations." Here is the link: www.federalregister.gov/documents/20...

#ai #regulation

16.01.2025 21:23 — 👍 0    🔁 0    💬 1    📌 0
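For a sense of scale, here is a back-of-the-envelope calculation of what 10^26 operations means, using the common 6·N·D approximation for dense-transformer training FLOPs. The hardware numbers (per-GPU throughput, utilization, cluster size) are illustrative assumptions, not figures from the proposed rule.

```python
# Rough scale of the proposed 10^26-operation threshold.
# All hardware and model numbers are illustrative assumptions.

THRESHOLD_OPS = 1e26

def training_flops(params: float, tokens: float) -> float:
    """Common approximation: dense-transformer training costs ~6 * N * D FLOPs."""
    return 6.0 * params * tokens

# A hypothetical 1-trillion-parameter model trained on 15 trillion tokens
# lands just under the threshold.
example_run = training_flops(1e12, 15e12)      # ≈ 9e25 FLOPs

# Wall-clock time to reach 10^26 ops on an assumed 10,000-GPU cluster,
# ~1e15 FLOP/s peak per GPU and ~40% sustained utilization.
PEAK_FLOPS_PER_GPU = 1e15
UTILIZATION = 0.4
N_GPUS = 10_000

seconds = THRESHOLD_OPS / (PEAK_FLOPS_PER_GPU * UTILIZATION * N_GPUS)
print(f"Example training run: {example_run:.1e} FLOPs")
print(f"10^26 ops on this cluster: ~{seconds / 86_400:.0f} days")   # ~290 days
```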
Preview
Cognitive Science Speaker Series | Continual Learning for Vision and Multi-Modal Large Language Models | Events | RIT Speaker: Christopher Kanan, Ph.D. Title: Continual Learning for Vision and Multi-Modal Large Language Models Short Bio: Christopher Kanan is an Associate Professor of Computer Science at the Universit...

I'm looking forward to visiting RIT on Friday. I'll be giving a talk on my lab's recent work on large-scale deep learning systems for continual learning and on using continual learning to overcome linguistic forgetting in multi-modal LLMs.

www.rit.edu/events/cogni...

12.01.2025 15:53 — 👍 1    🔁 0    💬 0    📌 0
INSIGHT can produce accurate segmentations using only slide-level labels. The two images on the left show the input image and the ground truth segmentations (not used for training). The right images show the pixel-wise predictions produced by INSIGHT.

We’re excited to share INSIGHT, which integrates interpretability directly into its architecture, enabling classification and weakly supervised segmentation without pixel-level annotation.

Web: zhangdylan83.github.io/ewsmia/
arXiv: arxiv.org/abs/2412.02012

#AI #medicalAI #radiology #pathology

14.12.2024 18:52 — 👍 2    🔁 0    💬 0    📌 0
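For readers unfamiliar with how slide-level labels can yield patch-level predictions, the sketch below shows a generic attention-based multiple-instance-learning model: it is trained only with a slide label, yet its per-patch attention weights can be read out as a coarse heatmap. This is not the INSIGHT architecture (see the arXiv paper for that); it is just a minimal illustration of the weak-supervision idea.

```python
import torch
import torch.nn as nn

class AttentionMIL(nn.Module):
    """Generic attention-MIL sketch: slide-level training, patch-level heatmap."""

    def __init__(self, feat_dim: int = 512, hidden: int = 128, n_classes: int = 2):
        super().__init__()
        self.attn = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.Tanh(), nn.Linear(hidden, 1)
        )
        self.classifier = nn.Linear(feat_dim, n_classes)

    def forward(self, patch_feats: torch.Tensor):
        # patch_feats: (num_patches, feat_dim) embeddings of tiles from one slide
        scores = self.attn(patch_feats)                  # (num_patches, 1)
        weights = torch.softmax(scores, dim=0)           # per-patch attention
        slide_feat = (weights * patch_feats).sum(dim=0)  # weighted pooling
        logits = self.classifier(slide_feat)             # slide-level prediction
        return logits, weights.squeeze(-1)               # weights ≈ weak heatmap

model = AttentionMIL()
patches = torch.randn(1000, 512)        # e.g., tile embeddings from a WSI encoder
logits, heatmap = model(patches)
loss = nn.functional.cross_entropy(logits.unsqueeze(0), torch.tensor([1]))
loss.backward()                         # only the slide label supervises training
```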

I agree, but I don't think it was because I was a student. I think it is because of how enormous the conferences have become. I had a ton of fun at CoLLAs-2024, but it was single-track and only had a few hundred people vs 10-20k people.

21.11.2024 18:06 — 👍 1    🔁 0    💬 1    📌 0

The Call for Papers for CoLLAs 2025, the premier venue for continual and lifelong learning research in AI, is out: lifelong-ml.cc

The Abstract Deadline is Feb 21, 2025. It will be held in Philadelphia in August.

#continuallearning #deeplearning #lifelonglearning #ai #collas2025

16.11.2024 15:01 — 👍 2    🔁 0    💬 0    📌 0
