Matt Seybold

@mattseybold.bsky.social

American Vandal Pod | Prof of AmLit & Twain Studies + Director of Media Studies, Elmira College | Resident Scholar @MarkTwain.bsky.social | Political Economy of Mass Media TheAmericanVandal.substack.com MattSeybold.com buymeacoffee.com/americanvandalpod

9,614 Followers  |  2,046 Following  |  6,240 Posts  |  Joined: 01.06.2023

Posts by Matt Seybold (@mattseybold.bsky.social)

I know exactly what you mean.

04.03.2026 05:08 — 👍 1    🔁 0    💬 0    📌 0

Dammit. That's never good.

04.03.2026 05:04 — 👍 3    🔁 0    💬 1    📌 0

Well, you did come up! Somebody said something about believing there was something inherently relational about education. And I was like, "TMC says that's not just belief. There's science to support it!"

04.03.2026 03:53 — 👍 4    🔁 0    💬 0    📌 0

I underestimated how excited people would be to talk about Canvas. It was great.

04.03.2026 03:50 — 👍 2    🔁 0    💬 1    📌 0
Post image

I'm on the big screen.

04.03.2026 03:25 — 👍 38    🔁 2    💬 6    📌 0

Not yet, but it's part of a book in progress.

04.03.2026 03:23 — 👍 0    🔁 0    💬 0    📌 0

I'm going to try to grab a version for the pod. Tech willing.

03.03.2026 14:13 — 👍 10    🔁 0    💬 1    📌 0

I have decided to give a talk today almost entirely about the history of the Canvas LMS. Wish me luck.

03.03.2026 13:50 — 👍 50    🔁 0    💬 4    📌 0

On my way to a hotel in Utah. I have now collected 45 of the 52 states (and what should be states, DC & PR).

03.03.2026 05:54 — 👍 6    🔁 0    💬 0    📌 0
Post image

03.03.2026 00:12 — 👍 20    🔁 0    💬 0    📌 1

The Looooong C18

02.03.2026 15:56 — 👍 2    🔁 0    💬 1    📌 0

These cybersecurity boot camps are not run by the same "micro-university" contractor I wrote about last year, but they seem to be using a similar "powered by" model. And the following applies:

02.03.2026 15:55 — 👍 10    🔁 3    💬 0    📌 0

A great example of why we need, as @annieabrams.bsky.social has said, more C19 (and C18) U.S. literature professors in the literacy, ed policy, and EdTech debates.

02.03.2026 15:15 — 👍 7    🔁 0    💬 1    📌 2
The hyperscalers sit at the intersection of infrastructure and applications, and many have already monetized AI indirectly through higher cloud demand. But the return hurdles ahead are substantial. Achieving a 10% return on current AI investments could require $650 billion in annual revenue, or $35 from every iPhone user monthly. While that may be feasible, it's a high bar that assumes sustained, broad-based adoption.

*$35 from every iPhone user monthly*

02.03.2026 02:28 — 👍 72    🔁 20    💬 9    📌 14
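The quoted figures invite a quick back-of-envelope check: at $35 per iPhone user per month, a $650 billion annual target implies roughly 1.5 billion paying users. A minimal sketch (the implied user count is my inference from the two quoted numbers, not stated in the post):

```python
# Sanity check of the quoted hyperscaler figures:
# $650B/year target revenue at $35 per iPhone user per month.
target_annual_revenue = 650e9  # dollars per year (from the quote)
per_user_monthly = 35          # dollars per user per month (from the quote)

# How many users would need to pay $35/month to hit the annual target?
implied_users = target_annual_revenue / (per_user_monthly * 12)
print(f"Implied paying iPhone users: {implied_users / 1e9:.2f} billion")
# → Implied paying iPhone users: 1.55 billion
```

That is in the rough neighborhood of the global iPhone installed base, which is why the quote frames it as "every iPhone user" paying monthly.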
Is AI already killing people by accident? The writer Tyler Austin Harper (of The Atlantic, etc.) sent me a thread this morning, asking whether a mistargeting yesterday that killed nearly 150 school children in Iran could have been the result ...

Quote from post by @garymarcus.bsky.social.

01.03.2026 22:13 — 👍 8    🔁 1    💬 1    📌 0
There is a second problem, beyond the technical. The technical problem is that current AI simply isn't reliable; mistakes will absolutely be made. Some will cost lives, some will cost many lives. Some may lead to further escalation (a mass killing of school children could well do that); in the worst case, a series of escalations triggered by AI mistakes could lead to a nuclear war. Given the current status in the Middle East, this concern is not merely academic.
The moral problem is that militaries may well wish to use AI to cloak moral responsibility. One can, for example, use an AI tool to select targets, and blame the AI. It is important to realize that real choices are made at the front end, by those who use the AI. How many civilian casualties are acceptable? What error rate is permissible?
AI can follow a set of criteria (with more or less precision depending on the quality of the algorithms and data), but humans set those criteria. In my own view the biggest problem with the algorithms targeting Gaza was not necessarily the algorithms per se (about which not much may be public) but the decision to tolerate a large number of civilian casualties as part of the targeting.


"Cloaking moral responsibility" is one of the only things I would fully trust AI to do at present, & that may be all the power elite want it for anyway.

01.03.2026 22:11 — 👍 27    🔁 7    💬 4    📌 1
"You have to use it. You have to trust it.": Forced Adoption of AI is The Subtext of Davos
Authoritarianism is the AI Bailout; But If AI Doesn't Need a Bailout, Does it Need Authoritarianism?

[From listening to recorded conversation between Nadella + Fink]: they've "largely given up on organic adoption by consumers. They have moved on to a new dream of forced adoption mandated by government and managerial coercion." @mattseybold.bsky.social
theamericanvandal.substack.com/p/you-have-t...

01.03.2026 21:04 — 👍 8    🔁 2    💬 1    📌 0

Depending on how much other labor is required to apply, I might consider just submitting with no letter. Senior people do this for our residency sometimes & we have never used it to disqualify them.

01.03.2026 20:00 — 👍 5    🔁 0    💬 2    📌 0

I'm pretty sure you (named prof from flagship) are exactly the riff-raff archives want to associate themselves with.

01.03.2026 19:45 — 👍 1    🔁 0    💬 1    📌 0

Everybody hates every part of this process.

01.03.2026 19:09 — 👍 20    🔁 0    💬 1    📌 0

Too many "middle ground" AI arguments—"I have concerns, too, but we have to adapt"—proceed from what is to me a peculiar embrace of "inevitability" which seems to be magical thinking, a way of depoliticizing the political, of self-soothing in the face of an overwhelming challenge.

01.03.2026 16:41 — 👍 414    🔁 102    💬 15    📌 15

Far be it from me to blame SF women for judging dudes reading tech books.

But I've been thinking about this as emblematic of a broader assumption that reading a biography implies idolizing the subject.

When I read biographies, they are almost always about subjects towards whom I feel ambivalent at best.

01.03.2026 16:51 — 👍 10    🔁 0    💬 1    📌 0

I was reading "Going Infinite" on a plane, a plane leaving SF, a week ago, & the lady sitting across from me asked, "Isn't that about the Crypto King?"

I said, "Yes." And she felt the need to reply, firmly, "He was NOT a good man."

I agreed, though I could tell she didn't fully believe me, which…

01.03.2026 16:51 — 👍 3    🔁 0    💬 1    📌 0
Video thumbnail

Nothing good to say, so here's a video of my dog catching a frisbee.

01.03.2026 14:55 — 👍 23    🔁 0    💬 1    📌 0

Assassinating the sickly 86-year-old fundamentalist leader of an unpopular regime facing regular internal protests seems dumb.

01.03.2026 13:13 — 👍 54    🔁 8    💬 2    📌 1

he wants to drop the bomb, he has always wanted to drop it

the apocalypse with which he associates it is an alternative he welcomes

an alternative to the future in which his power, his privilege, and, most importantly, his pleasure, no longer exist

28.02.2026 20:15 — 👍 164    🔁 27    💬 5    📌 2

Give education a vaccine for Silicon Valley thought leaders, Bill.

28.02.2026 15:09 — 👍 17    🔁 0    💬 0    📌 0

I will never be an anti-vaxxer, but I have a certain sympathy for those who got there from the starting point of profoundly not giving a shit about what Bill Gates thinks.

28.02.2026 15:06 — 👍 27    🔁 1    💬 2    📌 0

In the newly restored version of Pudd'nhead Wilson, based on Twain's original "unbisected" manuscript, a pair of conjoined twins run against one another for elected office. And I feel like reading this might be the cure for thinking the problem of the two-party system is division & the solution is centrism.

28.02.2026 15:02 — 👍 20    🔁 4    💬 0    📌 0

One of the most insane things about ed-reform is that a very small collection of people are able to be wrong again and again and again with zero repercussions. In fact, every failure is a chance for them to come up with a new solution and write/sell a book about it.

28.02.2026 13:49 — 👍 50    🔁 10    💬 5    📌 1