Lorraine Hope

@lorrainehope.bsky.social

Professor of Applied Cognitive Psychology at University of Portsmouth, UK. Special interest in memory performance and memory elicitation techniques. Views own.

831 Followers 552 Following 37 Posts Joined Oct 2023
3 days ago

This is astonishingly brilliant! And basically a government information film 👏

0 0 0 0
6 days ago

I’ll be there - all going to plan!

2 0 0 0
6 months ago
Abstract: Under the banner of progress, products have been uncritically adopted or even imposed on users — in past centuries with tobacco and combustion engines, and in the 21st with social media. For these collective blunders, we now regret our involvement or apathy as scientists, and society struggles to put the genie back in the bottle. Currently, we are similarly entangled with artificial intelligence (AI) technology. For example, software updates are rolled out seamlessly and non-consensually, Microsoft Office is bundled with chatbots, and we, our students, and our employers have had no say, as it is not considered a valid position to reject AI technologies in our teaching and research. This is why, in June 2025, we co-authored an Open Letter calling on our employers to reverse and rethink their stance on uncritically adopting AI technologies. In this position piece, we expound on why universities must take their role seriously to a) counter the technology industry’s marketing, hype, and harm; and to b) safeguard higher education, critical thinking, expertise, academic freedom, and scientific integrity. We include pointers to relevant work to further inform our colleagues.

Figure 1. A cartoon set-theoretic view of various terms (see Table 1) used when discussing the superset AI (black outline, hatched background): LLMs are in orange; ANNs are in magenta; generative models are in blue; and chatbots are in green. Where these intersect, the colours reflect that: e.g., generative adversarial network (GAN) and Boltzmann machine (BM) models are in the purple subset because they are both generative and ANNs. In the case of proprietary closed-source models, e.g. OpenAI’s ChatGPT and Apple’s Siri, we cannot verify their implementation, and so academics can only make educated guesses (cf. Dingemanse 2025). Undefined terms used above: BERT (Devlin et al. 2019); AlexNet (Krizhevsky et al. 2017); A.L.I.C.E. (Wallace 2009); ELIZA (Weizenbaum 1966); Jabberwacky (Twist 2003); linear discriminant analysis (LDA); quadratic discriminant analysis (QDA).

Table 1. Below, some of the typical terminological disarray is untangled. Importantly, none of these terms are orthogonal, nor do they exclusively pick out the types of products we may wish to critique or proscribe.

Protecting the Ecosystem of Human Knowledge: Five Principles

Finally! 🤩 Our position piece: Against the Uncritical Adoption of 'AI' Technologies in Academia:
doi.org/10.5281/zeno...

We unpick the tech industry’s marketing, hype, & harm; and we argue for safeguarding higher education, critical thinking, expertise, academic freedom, & scientific integrity.
1/n

3,790 1,897 110 390
3 months ago
Post image

AI presents a fundamental threat to our ability to use polls to assess public opinion. Bad actors who infiltrate panels can flip close election polls for less than the cost of a Starbucks coffee. Models will also infer and confirm hypotheses in experiments. Current quality checks fail.

211 97 4 26
3 weeks ago

Hard recommend for the Mystery AI Hype Theater 3000 podcast - although you will end up sitting with your head in your hands, maybe crying a bit…

7 3 0 0
3 weeks ago

Imagine believing that using text-generating machines to perform clinical assessments & replace expert advice from real, qualified humans could improve health care.

Such machines will deskill experts & surely kill.

And you can bet the machines & their makers won’t be held accountable.

15 8 0 0
3 weeks ago

Bureaucratic benchmarks are soul-crushing because they leave a big gap between what we care about and what can be measured.

When we forget about the things we actually care about (like making interesting discoveries) and we write worse papers to get more publications, the metric eats the value.

11 3 0 0
1 month ago

If your study is framed as asking whether "AI" does X as well as humans do, it's fundamentally misguided and I'd argue not scientifically sound.

A short 🧵>>

200 54 5 5
1 month ago
Preview
AI is not a peer, so it can’t do peer review
If we still believe that science is a vocation grounded in argument, curiosity and care, we can’t delegate judgement to machines, says Akhil Bhardwaj

'to treat peer review as a throughput problem is to misunderstand what is at stake. Review is not simply a production stage in the research pipeline; it is one of the few remaining spaces where the scientific community talks to itself.' 1/3

367 156 6 20
1 month ago

I hate this. I hate that scholars and teachers are supposed to be digital fraud experts. I hate that this part of their job description is becoming larger and larger. I hate the widening distrust. I hate a culture that aggressively devalues the curiosity and humility required for ongoing learning.

183 74 5 0
2 months ago
Title page with abstract: 

The current AI hype cycle, combined with Psychology’s various crises, makes for a perfect storm. Psychology, on the one hand, has a history of weak theoretical foundations, a neglect of computational and formal skills, and a hyperempiricist privileging of experimental tasks and testing for effects. Artificial Intelligence, on the other hand, has a history of conflating artifacts with theories of cognition, or even with minds themselves, and its engineering offspring likes to move fast and break things. Many of our contemporaries now want to combine the worst of these two worlds. What could possibly go wrong? Quite a lot. Does this mean that Psychology and Artificial Intelligence had best part ways? Not at all. There are very fruitful ways in which the two disciplines can interact and theoretically contribute to Cognitive Science, for instance, by studying the scope and limits of computational models of human cognition. But to reap the fruits, one needs to understand how to steer clear of potential traps.

Will do a brief thread with highlights:

“Many of our contemporaries now want to combine the worst of these two worlds [i.e., Psychology and Artificial Intelligence].

What could possibly go wrong?

Quite a lot.”

2/🧵

67 17 2 0
2 months ago

It is EXHAUSTING not only being made responsible for coming up with new kinds of assignments for our students; it's also tedious reading op-eds that suggest the core problem is a crisis in teaching. But, as Chris and I lay out here, this isn't a crisis in teaching; it's an attack on learning.

1,677 508 13 23
2 months ago
66 21 1 0
2 months ago

The latest QRP (although it goes well beyond ‘questionable’ and straight into the realm of junk data fraud IMHO): LLM-hacking

0 0 0 0
2 months ago

Good luck drawing reliable conclusions from the answers that Qualtrics' AI model provides to your survey questions... bsky.app/profile/joac...

27 7 0 1
2 months ago
title: Cheap science, real harm: the cost of replacing human participation with synthetic data

author: Abeba Birhane

abstract: Driven by the goals of augmenting diversity, increasing speed, and reducing cost, the use of synthetic data as a replacement for human participants is gaining traction in AI research and product development. This talk critically examines the claim that synthetic data can “augment diversity,” arguing that this notion is empirically unsubstantiated, conceptually flawed, and epistemically harmful. While speed and cost-efficiency may be achievable, they often come at the expense of rigour, insight, and robust science. Drawing on research from dataset audits, model evaluations, Black feminist scholarship, and complexity science, I argue that replacing human participants with synthetic data risks producing both real-world and epistemic harms at worst, and superficial knowledge and cheap science at best.

I wrote this brief talk on why “augmenting diversity” with LLMs is empirically unsubstantiated, conceptually flawed, and epistemically harmful. A nice surprise to see the organisers have made it public.

synthetic-data-workshop.github.io/papers/13.pdf

826 258 20 10
2 months ago
Preview
Memory, Misinformation, and the Need to Replicate
Suppose you and a friend witness a car crash.

Delighted that my grant proposal with Anita Eerland, Verbs and Eyewitness Testimony: A Multilab Registered Replication Report, has been funded by @NWO (Dutch Research Council) through OpenScience.nl. Excited to get started on the project I describe here.

rolfzwaan.substack.com/p/memory-mis...

13 5 0 0
2 months ago

Very interesting - and look forward to reading. Do you have any thoughts about the extent to which there might be a developmental angle to trait-like over-confidence?

1 0 1 0
2 months ago
Post image

🚨 Now out in Psych Science 🚨

We report an adversarial collaboration (with @donandrewmoore.bsky.social) testing whether overconfidence is genuinely a trait

The paper was led by Jabin Binnendyk & Sophia Li (who is fantastic and on the job market!). Free copy here: journals.sagepub.com/eprint/7JIYS...

127 41 8 6
2 months ago
Preview
UK to re-join Erasmus+ – here are six benefits of the European exchange scheme

Erasmus+ is an accessible and well-supported programme.

17 4 0 3
2 months ago
Preview
The Underclass Is in Session
What do we see when we view the structure of academic labor as it is, not as we wish it to be?

"an ever-widening gap between those who do the work and those who administer it. And an even larger gap exists between those tasked with most of the teaching and those who do most of the budgeting."
www.aaup.org/underclass-s...
#Highered #PhDchat #research #teaching #academicsky

19 5 0 0
1 year ago
A broken record (vinyl music album)

We know the drivers of research waste in academia are

⚠️Pressure to maximize papers and PhD students
⚠️Endless demands on time due to poor management
⚠️Stakeholders don’t insist on robust quality systems to underpin mission-critical work

Solutions that don't address these are pointless.

58 24 2 0
3 months ago

“Berg's point is that AI doesn't merely automate tasks — it automates the very processes through which people develop their skills.”

86 28 1 2
3 months ago

The most precious commodity you have is your attention. You don’t have to waste it on poor-faith debates or arguments with strangers if you don’t think they’ll be productive. You can prioritize the things that matter to you and make your life richer.

11,818 3,003 131 196
3 months ago

I feel you, ancient Mongolian ceramic hedgehog. I feel you.

2,316 937 19 9
3 months ago

It’s widely known (and, I think, pretty uncontroversial) that learning requires effort — specifically, if you don’t have to work at getting the knowledge, it won’t stick.

Even if an LLM could be trusted to give you correct information 100% of the time, it would be an inferior method of learning it.

5,622 1,587 88 46
3 months ago

Absolutely this…there are still many predators evading their comeuppance, including in my own field.

2 0 0 0
3 months ago
Post image

New paper by Sean Westwood:

With current technology, it is impossible to tell whether survey respondents are real or bots. Among other things, this makes it easy for bad actors to manipulate outcomes. No good news here for the future of online survey research.

776 391 41 126
3 months ago

This seems bad on like 15 different fronts

8 2 1 1
3 months ago

New paper by @emilyspearing.bsky.social et al. out now in the Journal of Environmental Psychology

Black Summer Arson: Examining the Impact of Climate Misinformation and Corrections on Reasoning

doi.org/10.1016/j.je...

2 2 1 0