This is astonishingly brilliant! And basically a government information film 👏
I’ll be there - all going to plan!
Finally! 🤩 Our position piece: Against the Uncritical Adoption of 'AI' Technologies in Academia:
doi.org/10.5281/zeno...
We unpick the tech industry’s marketing, hype, & harm; and we argue for safeguarding higher education, critical
thinking, expertise, academic freedom, & scientific integrity.
1/n
AI presents a fundamental threat to our ability to use polls to assess public opinion. Bad actors who infiltrate panels can flip close election polls for less than the cost of a Starbucks coffee. Models will also infer and confirm hypotheses in experiments. Current quality checks fail to catch this.
Hard recommend for the Mystery AI Hype Theatre 3000 Podcast - although you will end up sitting with your head in your hands, maybe crying a bit…
Imagine believing that using text-generating machines to perform clinical assessments & replace expert advice from real, qualified humans could improve health care.
Such machines will deskill experts & surely kill.
And you can bet the machines & their makers won’t be held accountable.
Bureaucratic benchmarks are soul-crushing because they leave a big gap between what we care about and what can be measured.
When we forget about the things we actually care about (like making interesting discoveries) and we write worse papers to get more publications, the metric eats the value.
If your study is framed as asking whether "AI" does X as well as humans do, it's fundamentally misguided and I'd argue not scientifically sound.
A short 🧵>>
'to treat peer review as a throughput problem is to misunderstand what is at stake. Review is not simply a production stage in the research pipeline; it is one of the few remaining spaces where the scientific community talks to itself.' 1/3
I hate this. I hate that scholars and teachers are supposed to be digital fraud experts. I hate that this part of their job description is becoming larger and larger. I hate the widening distrust. I hate a culture that aggressively devalues the curiosity and humility required for ongoing learning.
Will do a brief thread with highlights:
“Many of our contemporaries now want to combine the worst of these two worlds [i.e., Psychology and Artificial Intelligence].
What could possibly go wrong?
Quite a lot.”
2/🧵
It is EXHAUSTING not only being made responsible for coming up with new kinds of assignments for our students; it's also tedious reading op-eds that suggest the core problem is a crisis in teaching. But, as Chris and I lay out here, this isn't a crisis in teaching; it's an attack on learning.
The latest QRP (although it goes well beyond ‘questionable’ and straight into the realm of junk data fraud IMHO): LLM-hacking
Good luck drawing reliable conclusions from the answers that Qualtrics' AI model provides to your survey questions... bsky.app/profile/joac...
I wrote this brief talk on why “augmenting diversity” with LLMs is empirically unsubstantiable, conceptually flawed, and epistemically harmful. A nice surprise to see the organisers have made it public:
synthetic-data-workshop.github.io/papers/13.pdf
Delighted that my grant proposal with Anita Eerland, Verbs and Eyewitness Testimony: A Multilab Registered Replication Report, has been funded by @NWO (Dutch Research Council) through OpenScience.nl. Excited to get started on the project I describe here.
rolfzwaan.substack.com/p/memory-mis...
Very interesting - and look forward to reading. Do you have any thoughts about the extent to which there might be a developmental angle to trait-like over-confidence?
🚨 Now out in Psych Science 🚨
We report an adversarial collaboration (with @donandrewmoore.bsky.social) testing whether overconfidence is genuinely a trait
The paper was led by Jabin Binnendyk & Sophia Li (who is fantastic and on the job market!) Free copy here: journals.sagepub.com/eprint/7JIYS...
"an ever-widening gap between those who do the work and those who administer it. And an even larger gap exists between those tasked with most of the teaching and those who do most of the budgeting."
www.aaup.org/underclass-s...
#Highered #PhDchat #research #teaching #academicsky
We know the drivers of research waste in academia are
⚠️Pressure to maximize papers and PhD students
⚠️Endless demands on time due to poor management
⚠️Stakeholders don't insist on robust quality systems to underpin mission-critical work
Solutions that don't address these are pointless.
“Berg's point is that AI doesn't merely automate tasks — it automates the very processes through which people develop their skills.”
The most precious commodity you have is your attention. You don’t have to waste it on poor-faith debates or arguments with strangers if you don’t think they’ll be productive. You can prioritize the things that matter to you and make your life richer.
I feel you, ancient Mongolian ceramic hedgehog. I feel you.
It’s widely known (and, I think, pretty uncontroversial) that learning requires effort — specifically, if you don’t have to work at getting the knowledge, it won’t stick.
Even if an LLM could be trusted to give you correct information 100% of the time, it would be an inferior method of learning it.
Absolutely this…there are still many predators evading their comeuppance, including in my own field.
new paper by Sean Westwood:
With current technology, it is impossible to tell whether survey respondents are real or bots. Among other things, this makes it easy for bad actors to manipulate outcomes. No good news here for the future of online survey research.
This seems bad on like 15 different fronts
New paper by @emilyspearing.bsky.social et al. out now in the Journal of Environmental Psychology
Black Summer Arson: Examining the Impact of Climate Misinformation and Corrections on Reasoning
doi.org/10.1016/j.je...