
Artificial Intelligence Research Centre, CitAI.

@citai.bsky.social

We are an interdisciplinary team at City, University of London specialising in the development of novel AI techniques, AGI, and XAI, with a keen interest in their legal, ethical and social impact. https://cit-ai.net/

1,169 Followers  |  40 Following  |  37 Posts  |  Joined: 31.07.2023

Posts by Artificial Intelligence Research Centre, CitAI. (@citai.bsky.social)

arxiv.org/abs/2602.07519

14.02.2026 10:30 — 👍 5    🔁 3    💬 0    📌 0
PALMS: Pavlovian Associative Learning Models Simulator Simulations are an indispensable step in the cycle of theory development and refinement, helping researchers formulate precise definitions, generate models, and make accurate predictions. This paper i...

New preprint and simulator of associative learning attentional models. Have fun! 👁️
arxiv.org/abs/2602.07519
cal-r.org/index.php?id...
#simulation #associative_learning #attention

10.02.2026 14:49 — 👍 16    🔁 13    💬 2    📌 1
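For readers new to the area, the family of Pavlovian associative learning models that simulators like PALMS cover starts from error-driven updates such as Rescorla–Wagner. A minimal sketch (parameter names and values here are illustrative, not taken from the paper):

```python
def rescorla_wagner(trials, alpha=0.3, beta=1.0, lambda_us=1.0):
    """Return associative strength V after each conditioning trial.

    alpha: salience of the conditioned stimulus (illustrative value)
    beta: learning rate tied to the unconditioned stimulus (illustrative)
    lambda_us: asymptotic strength the US supports (illustrative)
    """
    v = 0.0
    history = []
    for _ in range(trials):
        v += alpha * beta * (lambda_us - v)  # error-driven update: learn from surprise
        history.append(v)
    return history
```

Each trial moves V a fraction of the remaining prediction error, so learning is fast early and slows as V approaches the asymptote; attentional models of the kind PALMS simulates extend this by letting stimulus salience itself change with experience.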

Also, including me.

10.02.2026 19:14 — 👍 3    🔁 2    💬 1    📌 0
Access Now - A call to EU legislators: protect rights and reject the call to delete transparency safeguard in AI Act We, the undersigned organisations and individuals, urge you in the strongest possible terms to reject the deletion of the Article 49(2) transparency safeguard for high-risk AI systems that is proposed...

We are demanding that EU co-legislators reject attempts in the AI Omnibus to remove a key transparency safeguard from the AI Act.

We cannot open a loophole that would let providers exempt themselves from the AI Act’s high-risk requirements with no transparency. www.accessnow.org/press-releas...

11.02.2026 09:47 — 👍 304    🔁 112    💬 4    📌 4
Table 1
A non-comprehensive list of different (not mutually exclusive) meanings of the word AI, including AI as idea, AI as a type of system, AI as a field of study, and AI as institution(al unit).


The term ‘Artificial Intelligence’ (AI) means many things to many people (see Table 1). (...) One meaning of ‘AI’ that seems often forgotten these days is one that played a crucial role in the birth of cognitive science as an interdiscipline in the 1970s and ’80s." 2/n

16.08.2024 19:42 — 👍 190    🔁 47    💬 7    📌 12

This is NOT what AI was about
"What can researchers do if they suspect that their manuscripts have been peer reviewed using artificial intelligence (AI)?"

02.12.2025 19:43 — 👍 2    🔁 1    💬 0    📌 0

This -> "We have deprived the youth of the joys of discovery, the thrill to feel yourself close to an answer, and the wonderful, rewarding feeling of a moment of lucidity after hard work."

01.12.2025 09:15 — 👍 11    🔁 4    💬 0    📌 0
Scripto_Asinine or sound minds?

Perhaps of interest to some of you.
"Quick societal adoption of tools often reflects a demand driven by business interests or culture. Reflecting on generative AI, especially Large Language Models, I believe there is a strong lifestyle generation component that nurtures..

cal-r.org/mondragon/S8...

30.11.2025 15:44 — 👍 7    🔁 5    💬 0    📌 2
Advancing the Biological Plausibility and Efficacy of Hebbian Convolutional Neural Networks The research presented in this paper advances the integration of Hebbian learning into Convolutional Neural Networks (CNNs) for image processing, syst…

Ablation study and resulting optimal architecture that considerably improves recent research on CNN-Hebbian learning integration with competition mechanisms.
with @julian-jn.bsky.social
www.sciencedirect.com/science/arti...

09.06.2025 17:41 — 👍 8    🔁 4    💬 0    📌 2
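For context, a minimal sketch of Hebbian learning with winner-take-all competition, the general kind of mechanism the paper integrates into CNNs. The shapes, learning rate, and exact update rule below are illustrative assumptions, not the authors' method:

```python
import numpy as np

def hebbian_wta_update(W, x, lr=0.01):
    """One Hebbian update with winner-take-all competition.

    W: (n_units, n_inputs) weight matrix; x: (n_inputs,) input sample.
    Only the most active unit learns, moving its weights toward the input
    (an instar-style rule that keeps weights bounded without a teacher signal).
    """
    y = W @ x                     # unit activations
    winner = np.argmax(y)         # competition: only the winner learns
    W = W.copy()
    W[winner] += lr * y[winner] * (x - W[winner])
    return W
```

The competition step is what makes plain Hebbian growth stable and makes units specialise on different input patterns, which is the property the paper's ablation study probes at CNN scale.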
EU set to water down landmark AI act after Big Tech pressure Commission proposes pauses to provisions in digital rule book

Surprise, surprise. Here we go.

“The European Commission is proposing a pause to parts of its landmark artificial intelligence laws amid intense pressure from Big Tech companies and the US government.”

www.ft.com/content/af6c...

07.11.2025 14:03 — 👍 21    🔁 16    💬 4    📌 4
Against the Uncritical Adoption of AI Technologies in Academia by Guest et al. (2025)
YouTube video by librebel

✨ This is wonderful 🎬 🍿

Librebel on YouTube reads out our position paper:

Guest, O., Suarez, … & van Rooij, I. (2025). Against the Uncritical Adoption of 'AI' Technologies in Academia. Zenodo. doi.org/10.5281/zeno...

www.youtube.com/watch?v=cJNO... @olivia.science @marentierra.bsky.social

08.11.2025 00:22 — 👍 39    🔁 15    💬 2    📌 1
AI Is Hollowing Out Higher Education Olivia Guest & Iris van Rooij urge teachers and scholars to reject tools that commodify learning, deskill students, and promote illiteracy.

“Ultimately, the collective strategy of AI companies threatens to deskill precisely those people who are essential for society to function (…) automation of knowledge and culture by private companies is a worrying prospect – conjuring dystopian and outright fascistic scenarios.” — @olivia.science

17.10.2025 22:51 — 👍 438    🔁 223    💬 6    📌 26
Algebras of actions in an agent's representations of the world Learning efficient representations allows robust processing of data, data that can then be generalised across different tasks and domains, and it is t…

A quick summary for busy people:

We propose a mathematical framework for learning representations by extracting the algebra of transformations of worlds from the agent's perspective.
1/n
www.sciencedirect.com/science/arti...

11.09.2025 17:40 — 👍 30    🔁 12    💬 2    📌 1
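A toy illustration of the starting point: treating an agent's actions as transformations of world states and studying the algebra their composition forms. Here the "world" is artificially simplified to a 4-state cycle whose actions are quarter-turn rotations (composing as the cyclic group Z4); the paper's framework extracts such structure from the agent's perspective rather than assuming it, so this is only a sketch of the notion:

```python
def apply_action(state, turns):
    """Act on a 4-state cyclic world by a number of quarter turns."""
    return (state + turns) % 4

def compose(a, b):
    """Composition of two rotation actions is again a rotation (closure)."""
    return (a + b) % 4
```

The point of the framework is that properties like closure, identity, and inverses of the action set constrain what a good learned representation of the world must look like.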
Abstract: Under the banner of progress, products have been uncritically adopted or
even imposed on users — in past centuries with tobacco and combustion engines, and in
the 21st with social media. For these collective blunders, we now regret our involvement or
apathy as scientists, and society struggles to put the genie back in the bottle. Currently, we
are similarly entangled with artificial intelligence (AI) technology. For example, software updates are rolled out seamlessly and non-consensually, Microsoft Office is bundled with chatbots, and we, our students, and our employers have had no say, as it is not
considered a valid position to reject AI technologies in our teaching and research. This
is why in June 2025, we co-authored an Open Letter calling on our employers to reverse
and rethink their stance on uncritically adopting AI technologies. In this position piece,
we expound on why universities must take their role seriously to a) counter the technology
industry’s marketing, hype, and harm; and to b) safeguard higher education, critical
thinking, expertise, academic freedom, and scientific integrity. We include pointers to
relevant work to further inform our colleagues.


Figure 1. A cartoon set theoretic view on various terms (see Table 1) used when discussing the superset AI
(black outline, hatched background): LLMs are in orange; ANNs are in magenta; generative models are
in blue; and finally, chatbots are in green. Where these intersect, the colours reflect that, e.g. generative adversarial network (GAN) and Boltzmann machine (BM) models are in the purple subset because they are
both generative and ANNs. In the case of proprietary closed source models, e.g. OpenAI’s ChatGPT and
Apple’s Siri, we cannot verify their implementation and so academics can only make educated guesses (cf.
Dingemanse 2025). Undefined terms used above: BERT (Devlin et al. 2019); AlexNet (Krizhevsky et al.
2017); A.L.I.C.E. (Wallace 2009); ELIZA (Weizenbaum 1966); Jabberwacky (Twist 2003); linear discriminant analysis (LDA); quadratic discriminant analysis (QDA).


Table 1. Below some of the typical terminological disarray is untangled. Importantly, none of these terms
are orthogonal nor do they exclusively pick out the types of products we may wish to critique or proscribe.


Protecting the Ecosystem of Human Knowledge: Five Principles


Finally! 🤩 Our position piece: Against the Uncritical Adoption of 'AI' Technologies in Academia:
doi.org/10.5281/zeno...

We unpick the tech industry’s marketing, hype, & harm; and we argue for safeguarding higher education, critical thinking, expertise, academic freedom, & scientific integrity.
1/n

06.09.2025 08:13 — 👍 3776    🔁 1889    💬 110    📌 389

Some pictures of this enlightening event organised by @hcid.city and @citai.bsky.social

05.09.2025 19:14 — 👍 3    🔁 1    💬 1    📌 0
Algebras of actions in an agent's representations of the world Learning efficient representations allows robust processing of data, data that can then be generalised across different tasks and domains, and it is t…

Available open access.

www.sciencedirect.com/science/arti...

21.08.2025 15:41 — 👍 6    🔁 3    💬 0    📌 1

I am pleased to announce that this paper has been accepted for publication in Artificial Intelligence (AIJ)! 😊

arxiv.org/abs/2310.01536

11.08.2025 13:43 — 👍 8    🔁 4    💬 1    📌 1
Algebras of actions in an agent's representations of the world In this paper, we propose a framework to extract the algebra of the transformations of worlds from the perspective of an agent. As a starting point, we use our framework to reproduce the symmetry-base...

New version available. The readability of the paper has been greatly improved. Enjoy!
arxiv.org/abs/2310.01536

29.07.2025 09:30 — 👍 1    🔁 1    💬 0    📌 1
Professor Margaret Boden - Human-level AI: Is it Looming or Illusory?
YouTube video by CSER Cambridge

Did you all know that Margaret Boden founded the first School of Cognitive and Computing Sciences in 1987?

No?

Well, now you do 😌

Btw, listen to this great talk by her. I'll be adding this to the resources for our 1st-year students in Intro to AI.

www.youtube.com/watch?v=wPRA...

01.09.2024 19:21 — 👍 86    🔁 26    💬 9    📌 3
The Margaret Boden Lecture - Lecture One by Professor Margaret Boden (Sussex)
YouTube video by Future of Intelligence

Inaugural Lecture of the Margaret Boden Lecture series at the Leverhulme Centre for the Future of Intelligence

www.youtube.com/watch?v=zNr2...

28.07.2025 16:18 — 👍 4    🔁 4    💬 0    📌 0

Those who were lucky enough to know her personally can only share her defence of the university as a place for fundamental and speculative research and debate, and her commitment to challenging the powers that be.
🌸

2/2

28.07.2025 15:58 — 👍 1    🔁 0    💬 0    📌 0
Obituary: Professor Maggie Boden The University of Sussex mourns the loss of Professor Margaret (Maggie) Boden, a pioneering figure in cognitive science and artificial intelligence.

Sad news. RIP Maggie Boden.
Maggie was an AI pioneer and a truly interdisciplinary scholar, integrating philosophy, psychology, and computer science in her work – worth emphasising in an era of massive, data-hungry GenAI architectures.
1/n
staff.sussex.ac.uk/news/article...

28.07.2025 15:57 — 👍 1    🔁 0    💬 1    📌 0
24.07.2025 14:51 — 👍 2    🔁 2    💬 0    📌 0
Comics & AI: Critical Prompts A multidisciplinary conference on the future of comics, technology, and creativity

You are invited to a one-day multidisciplinary conference on the future of comics, technology, and creativity. Abstracts (200 words) + bios (100 words) due: 10 July 2025. Don’t miss this opportunity to rethink comics and AI with a multidisciplinary community! #CFP: comicsandai.org #ComicsStudies

15.05.2025 06:07 — 👍 17    🔁 13    💬 2    📌 4
12.05.2025 17:29 — 👍 2    🔁 0    💬 0    📌 0
CitAI-Seminars: Website of the Artificial Intelligence Research Centre, CitAI. Seminars.

#CitAI_Seminars
Tuesday, 22-04 (Zoom, 16:00 BST): Marin Lujak (Artificial Intelligence Research Group) on 'Scalable, efficient and distributed multi-agent coordination of automated agricultural vehicle fleets'.
If interested, please get in touch with the organisers.
More info: cit-ai.net/seminars.html

11.04.2025 14:23 — 👍 0    🔁 0    💬 0    📌 0
The AI Continent Action Plan The European Union is committed and determined to become a global leader in Artificial Intelligence, a leading AI continent.

The EU has just announced a plan to establish Europe’s leadership in AI.
digital-strategy.ec.europa.eu/en/library/a...

Five key points:

1. Creating a solid computing infrastructure: strengthening the network of AI Factories and establishing resource-efficient Gigafactories

1/n

09.04.2025 13:50 — 👍 8    🔁 2    💬 1    📌 0
A representational framework for learning and encoding structurally enriched trajectories in complex agent environments The ability of artificial intelligence agents to make optimal decisions and generalise them to different domains and tasks is compromised in complex scenarios. One way to address this issue has focuse...

New preprint with @corinacatarau.bsky.social and
@ealonso.bsky.social from @citai.bsky.social:
A representational framework for learning and encoding structurally enriched trajectories in complex agent environments
arxiv.org/abs/2503.13194

18.03.2025 14:52 — 👍 11    🔁 9    💬 2    📌 1
How we are pioneering artificial intelligence applications in public health The UK Health Security Agency (UKHSA) is harnessing the power of artificial intelligence (AI) to address health security challenges. Here are 3 examples of projects that demonstrate how we're using cu...

AI in health security:

1- LLMs to speed up qualitative analysis in public health research.
2- AI to detect foodborne illness outbreaks.
3- AI to make public health guidance more consistent.
ukhsa.blog.gov.uk/2025/03/14/h...

14.03.2025 14:16 — 👍 1    🔁 0    💬 0    📌 0
Andrew Barto and Richard Sutton are the recipients of the 2024 ACM A.M. Turing Award for developing the conceptual and algorithmic foundations of reinforcement learning. In a series of papers beginning in the 1980s, Barto and Sutton introduced the main ideas, constructed the mathematical foundations, and developed important algorithms for reinforcement learning — one of the most important approaches for creating intelligent systems.

Congratulations to Rich Sutton and Andrew Barto!
www.acm.org/media-center...

05.03.2025 16:52 — 👍 1    🔁 0    💬 0    📌 0
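For readers who want the flavour of what Barto and Sutton's foundations enable, here is a hedged sketch of tabular Q-learning on a toy 5-state chain. The environment and hyperparameters are illustrative, not drawn from their papers:

```python
import random

def q_learning(episodes=300, alpha=0.1, gamma=0.9, eps=0.2, seed=0):
    """Learn action values on a 5-state chain: start at state 0, goal at state 4."""
    rng = random.Random(seed)
    n_states, goal = 5, 4
    Q = [[0.0, 0.0] for _ in range(n_states)]   # Q[state][action]; 0 = left, 1 = right
    for _ in range(episodes):
        s = 0
        while s != goal:
            if rng.random() < eps or Q[s][0] == Q[s][1]:
                a = rng.randrange(2)             # explore, and break ties randomly
            else:
                a = 0 if Q[s][0] > Q[s][1] else 1
            s2 = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
            r = 1.0 if s2 == goal else 0.0       # reward only on reaching the goal
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])  # TD update
            s = s2
    return Q
```

The temporal-difference update in the inner loop, learning values from the difference between predicted and bootstrapped returns, is the core idea the Turing Award citation recognises.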