Dr. Jay

@mckupo.bsky.social

**Assistant Professor** of Cognitive Science at Carleton College. Figure-Ground thinking, AI Ethics, Embodied Cognition, Tetsugaku (philosophy).

541 Followers 1,498 Following 151 Posts Joined Jul 2023
4 months ago

📣 I am hiring a postdoc! aial.ie/hiring/postd...

applications from suitable candidates that are passionate about investigating the use of genAI in public service operations with the aim of keeping governments transparent and accountable are welcome

pls share with your networks

147 162 3 9
4 months ago

apparently a lot of people need to hear this: harmful practices that violate fundamental rights are not a matter of ethics or morality. please don’t frame it as “unethical”. the “ethics” lens undermines the fact that it is unacceptable under any condition. not a matter of debate

72 24 1 1
6 months ago
Abstract: Under the banner of progress, products have been uncritically adopted or even imposed on users — in past centuries with tobacco and combustion engines, and in the 21st with social media. For these collective blunders, we now regret our involvement or apathy as scientists, and society struggles to put the genie back in the bottle. Currently, we are similarly entangled with artificial intelligence (AI) technology. For example, software updates are rolled out seamlessly and non-consensually, Microsoft Office is bundled with chatbots, and we, our students, and our employers have had no say, as it is not considered a valid position to reject AI technologies in our teaching and research. This is why in June 2025, we co-authored an Open Letter calling on our employers to reverse and rethink their stance on uncritically adopting AI technologies. In this position piece, we expound on why universities must take their role seriously to a) counter the technology industry’s marketing, hype, and harm; and to b) safeguard higher education, critical thinking, expertise, academic freedom, and scientific integrity. We include pointers to relevant work to further inform our colleagues.

Figure 1. A cartoon set-theoretic view of various terms (see Table 1) used when discussing the superset AI (black outline, hatched background): LLMs are in orange; ANNs are in magenta; generative models are in blue; and finally, chatbots are in green. Where these intersect, the colours reflect that; e.g. generative adversarial network (GAN) and Boltzmann machine (BM) models are in the purple subset because they are both generative and ANNs. In the case of proprietary closed-source models, e.g. OpenAI’s ChatGPT and Apple’s Siri, we cannot verify their implementation, and so academics can only make educated guesses (cf. Dingemanse 2025). Undefined terms used above: BERT (Devlin et al. 2019); AlexNet (Krizhevsky et al. 2017); A.L.I.C.E. (Wallace 2009); ELIZA (Weizenbaum 1966); Jabberwacky (Twist 2003); linear discriminant analysis (LDA); quadratic discriminant analysis (QDA).

Table 1. Below, some of the typical terminological disarray is untangled. Importantly, none of these terms are orthogonal, nor do they exclusively pick out the types of products we may wish to critique or proscribe.

Protecting the Ecosystem of Human Knowledge: Five Principles
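The set relationships in the Figure 1 caption can be sketched as plain Python sets. This is only an illustrative reading of the caption: the membership assignments are my interpretation, and for closed-source models like ChatGPT and Siri the caption itself says placement is an educated guess.

```python
# Superset AI and its (overlapping, non-orthogonal) subsets, per the Figure 1 caption.
ai = {"ELIZA", "A.L.I.C.E.", "Jabberwacky", "ChatGPT", "Siri",
      "BERT", "AlexNet", "GAN", "BM", "LDA", "QDA"}

anns = {"BERT", "AlexNet", "GAN", "BM", "ChatGPT"}          # artificial neural networks
generative = {"GAN", "BM", "LDA", "QDA", "ChatGPT"}         # generative models
llms = {"BERT", "ChatGPT"}                                  # large language models
chatbots = {"ELIZA", "A.L.I.C.E.", "Jabberwacky", "ChatGPT", "Siri"}

# GANs and Boltzmann machines sit in the generative-AND-ANN intersection
# (the purple region in the figure).
assert {"GAN", "BM"} <= (generative & anns)

# Every coloured region is contained in the AI superset.
assert all(subset <= ai for subset in (anns, generative, llms, chatbots))
```

The point the caption makes survives the sketch: the subsets overlap rather than partition AI, so no single term (e.g. "chatbot" or "LLM") exclusively picks out the products one might wish to critique.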

Finally! 🤩 Our position piece: Against the Uncritical Adoption of 'AI' Technologies in Academia:
doi.org/10.5281/zeno...

We unpick the tech industry’s marketing, hype, & harm; and we argue for safeguarding higher education, critical
thinking, expertise, academic freedom, & scientific integrity.
1/n

3,790 1,897 110 390
5 months ago
Comment by Tom Dietterich on a LinkedIn post reading:

"You can't "test-in quality" in engineering; you can't "review-in quality" in research. We need incentives for people to do better research. Our system today assumes that 75% of submitted papers are low quality, and it is probably right (I'll bet it is higher). If this were a manufacturing organization, a 75% defect rate would result in bankruptcy.

Imagine a world in which you could have an AI system check the correctness/quality of your paper. If your paper passed that bar, then it could be published (say, on arXiv). Subsequent human review could assess its importance to the field. 

In such a system, authors would be incentivized to satisfy the AI system. This will lead to searching for exploits in the AI system. A possible solution is to select the AI evaluator at random from a large pool and limit the number of permitted submissions. I imagine our colleagues in mechanism design can improve on this idea."

Original:
https://www.linkedin.com/feed/update/urn:li:activity:7381685800549257216/?commentUrn=urn%3Ali%3Acomment%3A(activity%3A7381685800549257216%2C7382628060044599296)&dashCommentUrn=urn%3Ali%3Afsd_comment%3A(7382628060044599296%2Curn%3Ali%3Aactivity%3A7381685800549257216)
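For concreteness, Dietterich's randomized-evaluator idea can be sketched in a few lines. Everything here is hypothetical: the evaluators are stand-in callables rather than real AI systems, and `max_submissions` is an illustrative cap, not a number he proposes.

```python
import random
from collections import defaultdict

class ReviewLottery:
    """Sketch of the proposal: draw an AI evaluator at random from a
    large pool and cap how many times an author may resubmit, so that
    authors cannot cheaply search for exploits in any one evaluator."""

    def __init__(self, evaluators, max_submissions=3):
        self.evaluators = list(evaluators)   # pool of evaluator callables
        self.max_submissions = max_submissions
        self.attempts = defaultdict(int)     # submissions used per author

    def submit(self, author, paper):
        if self.attempts[author] >= self.max_submissions:
            raise RuntimeError("submission limit reached")
        self.attempts[author] += 1
        # Random draw: the author cannot know in advance which evaluator
        # they must satisfy, which blunts targeted gaming of one system.
        evaluator = random.choice(self.evaluators)
        return evaluator(paper)
```

The two levers are exactly the ones named in the comment: randomization over a pool, and a hard limit on attempts; a real mechanism-design treatment would have to price both.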

Here's a rule of thumb: If "AI" seems like a good solution, you are probably both misjudging what the "AI" can do and misframing the problem.

>>

498 107 20 16
5 months ago

UNTIL IT’S DONE, Ep. 4: Sylvia Rivera

In the 1970s, queer New Yorkers had been pushed to the margins of NYC. Our trans neighbors faced immense cruelty. But in Sylvia Rivera, they found a champion.

As we combat Trump’s politics of darkness, her legacy can light the path forward.

21,679 6,198 398 1,162
5 months ago

we wrote this over 5 yrs ago

dl.acm.org/doi/abs/10.1...

65 17 1 0
5 months ago

As a cognitive scientist, I confirm that we don't know how humans think.

2 1 0 0
5 months ago

as a cognitive scientist, I confirm

80 14 0 2
5 months ago
Cover page, Table 1, and Table 2 of Guest, O., & van Rooij, I. (2025, October 4). Critical Artificial Intelligence Literacy for Psychologists. https://doi.org/10.31234/osf.io/dkrgj_v1

New preprint 🌟 Psychology is core to cognitive science, and so it is vital we preserve it from harmful frames. @irisvanrooij.bsky.social & I use our psych and computer science expertise to analyse and craft:

Critical Artificial Intelligence Literacy for Psychologists. doi.org/10.31234/osf...

🧵 1/

439 164 12 72
5 months ago
Find your own voice: against LLM slop in academic writing – The Ideophone

Since you ask: we don't need tools that reduce the art of academic writing to average authorless output.
Find your own voice: ideophone.org/find-your-ow...

Also, the efficiency frame is suspect. We don't need more papers, faster; we need slow science. osf.io/preprints/os...

7 2 0 0
5 months ago
An interoceptive model of energy allostasis linking metabolic and mental health Interactions between metabolic interoception and regulation may drive comorbidity between mental and metabolic ill-health.

What drives the bidirectional relationship between metabolic and mental ill-health?

Read our new metabolic psychiatry paper, “An interoceptive model of energy allostasis linking metabolic and mental health” www.science.org/doi/10.1126/... led by @saramehrhof.bsky.social @hugofleming.bsky.social

55 21 5 2
5 months ago
The TESCREAL bundle: Eugenics and the promise of utopia through artificial general intelligence | First Monday

the ideology is well documented in Gebru & Torres's paper

The TESCREAL bundle: Eugenics and the promise of utopia through artificial general intelligence

www.firstmonday.org/ojs/index.ph...

20 7 1 0
5 months ago

i would rather read your imperfectly written, typo-riddled, idiosyncratic ideas that show you have read my papers and are familiar with my lab’s work than a grammatically perfect and generic genAI-written application/email … every damn time

138 20 1 3
5 months ago

Getting close to 50k views, and I'm wondering: is it just that everybody is scared to say this and pleased I did? Because if there are so many of us who agree (trust me, I'd know if 1k people disagreed with me, let alone 50k), why are we letting AI ruin our universities?

Together we can turn back the tide.

329 103 19 2
5 months ago
60 violations in 50 days: Inside ICE’s giant tent facility at Ft. Bliss As the Trump Administration rushes to open massive makeshift holding centers nationwide, one former official called the list of violations at Fort Bliss among the worst she’s ever seen.

One immigrant detained at Fort Bliss was given psychotropic medication with no record of consent. Another was placed on suicide watch, with no record of anyone actually watching them.

A new investigation reveals horrific violations at the Fort Bliss detention facility, even by ICE’s own standards.

696 375 22 16
5 months ago

this is so good. our paper is mentioned (or rather quoted extensively) from ~25 mins onwards

22 5 0 0
5 months ago

feel welcome to read our paper: firstmonday.org/ojs/index.ph...

56 12 3 4
5 months ago

LIVE NOW!🔥

We have our fellow East Coast friends @brujajagaming.bsky.social and @k0ppk0pp.bsky.social here today to play a chaotic game of casual commander!

Thank you to our sponsors @dragonshield.bsky.social & @moxfield.com !

Don’t forget to like & subscribe! RT to share! #magicthegathering #edh

47 13 3 7
5 months ago

but let's focus on the *potential* benefits...

34 13 1 1
5 months ago
Blog 1: All-round is the new excellent | Radboud University Recent years have seen a lot of buzz around Recognition & Rewards. Everyone welcomes improvement in how we recognize and reward scholars, but we rarely hear about one of the most palpable side effects...

recognisable — see e.g. 'Allround is the new excellent' from a while back www.ru.nl/en/staff/new... (part of a series of blog posts a bunch of us wrote from inside the continental European system)

5 1 0 0
6 months ago

Cursed like every start of the academic year but extra

15 6 0 0
6 months ago

“She also detailed the strategy of “credentialism,” by which women hoped that if they accrued enough credentials, their gender became irrelevant. (It did not.)”

In my experience my gender became more relevant the higher I got in the academic hierarchy. Misogynist attacks get worse too

108 32 2 2
6 months ago

Sam Altman poking a laptop and asking it to hurry up

98 9 3 0
6 months ago

See also: roleplaying games.

33 4 0 0
6 months ago

About that... we audited the open source status of Lumo and found it came in rock bottom in the EU Open Source AI Index osai-index.eu/news/lumo-pr... — consider sharing more details to rise through the openness ranks, @proton.me 🫣

#OpenSource #OpenWashing #lumo

29 13 2 0
6 months ago
LEGO Stops Shipping Individual Bricks to United States After Trump's Tariffs The popular Pick a Brick service let LEGO fanatics get the exact piece they wanted. It’s no longer available to US customers.

Trump take LEGO

www.404media.co/lego-stops-s...

2,482 974 68 248
6 months ago
JOSEPH WEIZENBAUM
COMPUTER POWER AND HUMAN REASON
FROM JUDGMENT TO CALCULATION

I finally read computer scientist Joseph Weizenbaum’s 1976 classic “Computer Power and Human Reason.”

This book deserves a massive revival in our current age of grotesque and largely thoughtless AI creep into everything:

1,101 282 48 49
6 months ago
What could be more obvious than the fact that, whatever intelligence a computer can muster, however it may be acquired, it must always and necessarily be absolutely alien to any and all authentic human concerns?
The very asking of the question, "What does a judge (or a psychiatrist) know that we cannot tell a computer?" is a monstrous obscenity. That it has to be put into print at all, even for the purpose of exposing its morbidity, is a sign of the madness of our times.
Computers can make judicial decisions, computers can make psychiatric judgments. They can flip coins in much more sophisticated ways than can the most patient human being. The point is that they ought not be given such tasks. They may even be able to arrive at "correct" decisions in some cases, but always and necessarily on bases no human being should be willing to accept.
There have been many debates on "Computers and Mind." What I conclude here is that the relevant issues are neither technological nor even mathematical; they are ethical. They cannot be settled by asking questions beginning with "can." The limits of the applicability of computers are ultimately statable only in terms of oughts. What emerges as the most elementary insight is that, since we do not now have any ways of making computers wise, we ought not now to give computers tasks that demand wisdom.

There’s an enormous amount of stuff in this book I’d like to highlight, but start with:

“What emerges as the most elementary insight is that, since we do not now have any ways of making computers wise, we ought not now to give computers tasks that demand wisdom.”

1,093 309 17 17
6 months ago

Just FYI I really enjoyed the book that is being “critiqued” here. You might like it too. bookshop.org/p/books/the-...

32 5 3 0
6 months ago

Somehow they brought the convo to Dostoevsky AND Disco Elysium and it was like hitting a piñata of nerdery.

56 2 2 0