Sepehr Razavi

@srazavi.bsky.social

Doctoral student @ox.ac.uk and Member of Social Computation and Representation Lab - https://www.socrlab.net/people

187 Followers  |  413 Following  |  75 Posts  |  Joined: 22.09.2023

Latest posts by srazavi.bsky.social on Bluesky

Proudly published with @andreaeyleen.bsky.social:

A metatheory of classical and modern connectionism. doi.org/10.1037/rev0...

We touch on what has been up with connectionism as a framework for computational modelling – & for everything it seems these days with AI and LLMs – pre-2010 vs post.

1/n

17.10.2025 12:53 – 👍 56    🔁 19    💬 3    📌 4
Abstract: Under the banner of progress, products have been uncritically adopted or even imposed on users – in past centuries with tobacco and combustion engines, and in the 21st with social media. For these collective blunders, we now regret our involvement or apathy as scientists, and society struggles to put the genie back in the bottle. Currently, we are similarly entangled with artificial intelligence (AI) technology. For example, software updates are rolled out seamlessly and non-consensually, Microsoft Office is bundled with chatbots, and we, our students, and our employers have had no say, as it is not considered a valid position to reject AI technologies in our teaching and research. This is why in June 2025, we co-authored an Open Letter calling on our employers to reverse and rethink their stance on uncritically adopting AI technologies. In this position piece, we expound on why universities must take their role seriously to a) counter the technology industry’s marketing, hype, and harm; and to b) safeguard higher education, critical thinking, expertise, academic freedom, and scientific integrity. We include pointers to relevant work to further inform our colleagues.

Figure 1. A cartoon set theoretic view on various terms (see Table 1) used when discussing the superset AI (black outline, hatched background): LLMs are in orange; ANNs are in magenta; generative models are in blue; and finally, chatbots are in green. Where these intersect, the colours reflect that, e.g. generative adversarial network (GAN) and Boltzmann machine (BM) models are in the purple subset because they are both generative and ANNs. In the case of proprietary closed source models, e.g. OpenAI’s ChatGPT and Apple’s Siri, we cannot verify their implementation and so academics can only make educated guesses (cf. Dingemanse 2025). Undefined terms used above: BERT (Devlin et al. 2019); AlexNet (Krizhevsky et al. 2017); A.L.I.C.E. (Wallace 2009); ELIZA (Weizenbaum 1966); Jabberwacky (Twist 2003); linear discriminant analysis (LDA); quadratic discriminant analysis (QDA).

Table 1. Below some of the typical terminological disarray is untangled. Importantly, none of these terms are orthogonal nor do they exclusively pick out the types of products we may wish to critique or proscribe.

Protecting the Ecosystem of Human Knowledge: Five Principles

Finally! 🤩 Our position piece: Against the Uncritical Adoption of 'AI' Technologies in Academia:
doi.org/10.5281/zeno...

We unpick the tech industry’s marketing, hype, & harm; and we argue for safeguarding higher education, critical thinking, expertise, academic freedom, & scientific integrity.
1/n

06.09.2025 08:13 – 👍 3242    🔁 1652    💬 100    📌 285

It’s even a bit bizarre (not to say nonsensical) to read consciousness into the Turing Test, given that Turing explicitly rejects a counter-argument from consciousness as not being measurable in the 1950 paper.

17.10.2025 19:32 – 👍 1    🔁 0    💬 0    📌 0

Congrats Kenny!

13.10.2025 19:19 – 👍 1    🔁 0    💬 0    📌 0

Bonus right outside the frame: a St George’s Cross the size of a squash court

07.10.2025 08:44 – 👍 3    🔁 0    💬 0    📌 0
Post image

Confederate flag in plain sight in affluent Britain…. 2025!

07.10.2025 08:42 – 👍 4    🔁 0    💬 1    📌 0

What a fantastic opportunity, consider applying!

30.09.2025 23:36 – 👍 2    🔁 0    💬 0    📌 0
Preview
Environment watchdog buried report on lead in children’s blood to placate mining companies, emails show
Documents tabled in NSW parliament show state agency took four years to publish report and told miners it would be put online ‘quietly’ but EPA says it was released to community earlier

This is a complete dereliction of responsibility from all involved. www.theguardian.com/australia-ne...

22.09.2025 01:41 – 👍 6    🔁 1    💬 0    📌 1
Post image
18.09.2025 21:30 – 👍 19    🔁 4    💬 1    📌 0

Thanks for your interest, Liberty :) Joe should be in touch!

18.09.2025 22:57 – 👍 1    🔁 0    💬 0    📌 0

This is happening right now!

18.09.2025 12:01 – 👍 3    🔁 0    💬 1    📌 0

This is happening in two days! Cannot wait for this talk, please consider joining us

16.09.2025 15:49 – 👍 3    🔁 1    💬 0    📌 0

Successful British acculturation is seeing an Adrian Chiles headline and thinking « I see where he’s coming from »

11.09.2025 16:49 – 👍 0    🔁 0    💬 0    📌 0

Hi Zahra, thanks for your interest ;) Would you mind either sharing your email address here or sending me a quick line at mert5045@ox.ac.uk?

11.09.2025 10:58 – 👍 0    🔁 0    💬 1    📌 0

Feel free to reach out to me, or better yet, send an email to @joebarnby.com to receive a link to this talk. Looking forward to seeing many of you then :)

11.09.2025 08:08 – 👍 0    🔁 0    💬 1    📌 0

This work provides a mechanistic bridge between subjective beliefs about agency and their impact on learning and well-being.

11.09.2025 08:06 – 👍 1    🔁 0    💬 1    📌 0

These studies also explore how agency-modulated reinforcement learning is represented in the brain, how it changes across development, how it is associated with early-life adversity, and how it is related to mental health symptoms.

11.09.2025 08:05 – 👍 0    🔁 0    💬 1    📌 0

In this talk, Dr Dorfman will present a series of studies demonstrating that agency beliefs can modulate the extent to which individuals learn from positive relative to negative outcomes and that this process can be explained by a novel Bayesian reinforcement learning model.

11.09.2025 08:05 – 👍 0    🔁 0    💬 1    📌 0
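
For readers wondering what "agency-modulated reinforcement learning" can look like in practice, below is a minimal illustrative Python sketch: a simple asymmetric learning-rate update in which an agency belief scales how strongly positive versus negative prediction errors are learned from. This is not Dr Dorfman's Bayesian model; the function name, parameters, and values here are hypothetical.

def update_value(value, reward, agency_belief, lr_pos=0.3, lr_neg=0.3):
    """One Rescorla-Wagner-style value update in which agency_belief (a
    probability in [0, 1] that the outcome was self-caused) amplifies
    learning from positive prediction errors and dampens learning from
    negative ones."""
    prediction_error = reward - value
    if prediction_error >= 0:
        lr = lr_pos * agency_belief            # credit good outcomes more under high agency
    else:
        lr = lr_neg * (1.0 - agency_belief)    # discount bad outcomes attributed to oneself
    return value + lr * prediction_error

# Toy usage: the same reward sequence learned under high vs low agency beliefs.
for agency in (0.9, 0.1):
    v = 0.0
    for r in [1, 0, 1, 1, 0]:
        v = update_value(v, r, agency_belief=agency)
    print(f"agency={agency}: learned value {v:.3f}")

In the work described in this thread, the asymmetry is instead captured within a Bayesian reinforcement learning model in which agency beliefs are themselves inferred rather than fixed.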

Agency beliefs have a substantial impact on mental health, but the mechanisms through which agency and mental health interact are unclear.

11.09.2025 08:05 – 👍 0    🔁 0    💬 1    📌 0

However, it is often impossible to know for certain whether we have control over the environment, so we must instead make inferences and form beliefs about our agency.

11.09.2025 08:05 – 👍 0    🔁 0    💬 1    📌 0

In order to navigate an uncertain world, we must make flexible predictions for maximizing rewards, minimizing punishments, and guiding future behavior. These predictions are most accurate, and feedback most useful, when our own actions are responsible for the consequences we receive.

11.09.2025 08:04 – 👍 0    🔁 0    💬 1    📌 0
Post image

Next week, we have the distinct pleasure of (e-)hosting the fantastic @hayleydorfman.bsky.social at the SoCR Lab's Invited Talk Series. Dr Dorfman will be presenting some of her recent work on agency-modulated reinforcement learning 🧵👇

11.09.2025 08:04 – 👍 9    🔁 2    💬 1    📌 2
Preview
Sepehr Razavi, Michael Moutoussis, Peter Dayan, Nichola Raihani, Vaughan Bell & Joseph Barnby, Pseudo-approaches lead to pseudo-explanations: reply to Corlett et al. - PhilPapers

"Assuming that functional specialisation necessarily implies an β€˜encapsulated module’ is a widely recognised error even in evolutionary accounts."

05.09.2025 17:26 – 👍 5    🔁 3    💬 0    📌 0

What a brilliant group of people to work with! Looking forward to expanding some of the crucial ideas only touched upon here. Feel free to share your thoughts, insights, etc.

05.09.2025 07:31 – 👍 4    🔁 0    💬 0    📌 0

📖 Our letter of reply to 'pseudosocial' cognition is now out in @cp-trendscognsci.bsky.social

www.sciencedirect.com/science/arti...

Led by the talented @srazavi.bsky.social + written with @vaughanbell.bsky.social, Peter Dayan, @nicholaraihani.bsky.social, Michael Moutoussis

#NeuroPsychSky

05.09.2025 07:15 – 👍 16    🔁 9    💬 0    📌 1
Computational postdoc ad for KCL funded by the Wellcome Trust on the NEPTUNE project

🧠 We're hiring a computational postdoc!

3+ years with me & @mitulamehta.bsky.social on @wellcometrust.bsky.social funded social cognition/paranoia research at the IoPPN.

Lead & develop computational work, collaborate with experimentalists on psychosis/THC data.

DM for details! lnkd.in/eCMy9Jf5

03.09.2025 07:13 – 👍 41    🔁 36    💬 4    📌 1

Academic publishing in the humanities really is all a crapshoot at the end of the day no matter how many improvements we try to find

26.08.2025 17:37 – 👍 1    🔁 0    💬 0    📌 0

Your paper looks interesting and I am sorry you’re going through this. Unsolicited advice: to my non-expert eyes, what is missing is a sense of what hangs on this (even for Spinoza exegesis).

26.08.2025 05:19 – 👍 1    🔁 0    💬 1    📌 0

Congrats Chanelle :)

22.08.2025 13:18 – 👍 1    🔁 0    💬 1    📌 0

Congratulations! Looking forward to reading this :)

21.08.2025 18:05 – 👍 1    🔁 0    💬 1    📌 0
