Adam Brandt's Avatar

Adam Brandt

@adambrandt.bsky.social

Senior Lecturer in Applied Linguistics at Newcastle University (UK). Uses #EMCA to research social interaction, and particularly how people communicate with (and through) technologies, such as #conversationalAI.

735 Followers  |  207 Following  |  43 Posts  |  Joined: 03.12.2023  |  2.2766

Latest posts by adambrandt.bsky.social on Bluesky

Anyone in the north-east area early next month may be interested in this workshop. Using #EMCA methods to explore with the public how to create safe spaces for stories of racist experiences to be heard.

10.10.2025 13:37 | 👍 1    🔁 0    💬 0    📌 0
Post image

This month we spoke to Professor Steven Bloch, expert in speech & language therapy and conversation analysis at @uclpals.bsky.social. We find out more about how he got into this fascinating field of research.
www.ucl.ac.uk/brain-scienc...

08.10.2025 10:08 | 👍 3    🔁 2    💬 0    📌 0
Preview
Sage Journals: Discover world-class research. Subscription and open access journals from Sage, the world's leading independent academic publisher.

A (now-defunct) AI bot makes calls to restaurants etc.

It was apparently pretty successful at call openings, second summonses, uh(m)s that precede a reason for the call, and other-initiated self-repair.

Authors: test actions, not intelligence. #EMCA

journals.sagepub.com/doi/full/10....

08.10.2025 07:51 | 👍 5    🔁 4    💬 0    📌 0
AGF 2026 | IDS

#EMCA alert!
The 25th Conference on Discourse and Conversation Analysis is taking place 25-27 March 2026 in Mannheim, Germany.

The theme is Technology Use and Social Interaction, and it has a fantastic lineup of keynote speakers and workshop leads!
www.ids-mannheim.de/aktuell/vera...

09.10.2025 14:59 | 👍 13    🔁 8    💬 0    📌 0
Preview
Enduring Connections and New Directions: Qualitative Research in Communication Differences and Disorders

The Journal of Interaction Research in Communication Disorders is now Qualitative Research in Communication Differences and Disorders!
A great home for #EMCA research on what is often termed 'atypical interaction'.
Find out more in this editorial:
utppublishing.com/doi/10.3138/...

09.10.2025 14:52 | 👍 6    🔁 2    💬 0    📌 0
Preview
The conversational action test: Detecting the artificial sociality of artificial intelligence - Saul Albert, William Housley, Rein Ove Sikveland, Elizabeth Stokoe, 2025. Drawing on the "Voigt-Kampff Empathy Test" – a science fiction version of Turing's famous thought experiment – we propose the Conversational Action Test (CAT): a ne...

@saulalbert.bsky.social @lizstokoe.bsky.social
Rein Ove Sikveland

☀️☀️☀️☀️

journals.sagepub.com/doi/full/10....

06.10.2025 17:34 | 👍 10    🔁 5    💬 0    📌 1
Meme showing a worker labelled "academic staff" digging a hole in the ground while 10 others, labelled with management titles such as "Director of Human Resources", look on. The caption underneath reads "The only way we can cut costs is to reduce the number of academic staff..."

A meme for the modern university...

29.09.2025 20:44 | 👍 289    🔁 111    💬 8    📌 9
Image of event poster with speakers 

18:15 – 18:45 | Angus Addlesee – Applied Scientist at Amazon
🤖 Deploying an LLM-Based Conversational Agent on a Hospital Robot
Real-world challenges and insights from deploying a voice-enabled robot in a clinical setting.

18:50 – 19:15 | Tatiana Shavrina, PhD – Research Scientist at Meta
🔬 From conversational AI to autonomous scientific discovery with AI Agents
The talk will give an overview of the current challenges and new opportunities for LLMs and LLM-based agents.

19:15 – 19:30 | Short Break ☕

19:30 – 19:55 | Lorraine Burrell – Conversation Design Lead at Lloyds Banking
Talk TBA

20:00 – 20:30 | Alan Nichol – Co-founder & CTO at Rasa
🔧 Why Tool Calling Breaks Your AI Agents – and What to Do Instead
Explore the pitfalls of tool use in agent design and how to avoid them.


#EMCA folks - if you're working on #chatbots or other conversational technologies and are in London on 16th July, this event looks fantastic!

#Conversational #AI Meetup London, Weds 16th July, 18.00.

Including a talk on LLM-Based conversational agents in hospital robots

🔗 lu.ma/5zzqtt33

20.06.2025 07:13 | 👍 4    🔁 1    💬 1    📌 0
Post image

🚨Save the date!🚨
🗣️ Spread the word! 🗣️

The next ICOP-L2 conference will be held at Newcastle University on 🗓️ 24-26 August 2026 🗓️
icopl2.org

More details on plenary speakers, workshops, the call for abstracts, and more, coming soon!

#EMCA #L2interaction

06.06.2025 11:19 | 👍 3    🔁 2    💬 0    📌 0

Thank you Liz! ❤️

29.05.2025 10:36 | 👍 1    🔁 0    💬 0    📌 0

Super interesting (and useful) Special Section of ROLSI on all things ethics and data collection for #EMCA research.

Well done (and thank you!) to all involved in putting this together.

28.05.2025 08:46 | 👍 7    🔁 1    💬 1    📌 0
Preview
Sign the Petition End unnecessary redundancies at Newcastle University

Please read and kindly consider signing in support of academics at Newcastle University (including from our team in Applied Linguistics & Communication) who are facing the threat of redundancy this summer:
www.change.org/p/end-unnece...

23.05.2025 11:01 | 👍 8    🔁 7    💬 0    📌 1

We are looking for CLAN and ELAN users interested in converting 1 or 2 transcripts to the DOTE format. We have been testing a Python script over the last couple of days - and it would be interesting to try it with some "real" data. Please get in touch. #DOTE #ELAN #CLAN #transcription #EMCA #VIDEO

23.05.2025 11:35 | 👍 4    🔁 5    💬 2    📌 0
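For anyone curious what a conversion like the one mentioned above involves, here is a minimal, hypothetical sketch of the ELAN side of it: it reads the time-aligned annotations out of an .eaf file (which is plain XML) with Python's standard library and prints them as tab-separated rows. The DOTE import format itself isn't described in the post, so the tab-separated output is only a stand-in intermediate step, not the actual script being tested; the file name eaf_to_rows.py and all identifiers below are illustrative.

```python
# Hypothetical sketch (eaf_to_rows.py): extract time-aligned utterances
# from an ELAN .eaf file as a first step towards another transcript format.
# Assumptions: only ALIGNABLE_ANNOTATIONs are used, and a tab-separated
# "start  end  tier  text" listing is an acceptable intermediate output.

import sys
import xml.etree.ElementTree as ET


def eaf_to_rows(path):
    """Yield (start_ms, end_ms, tier_id, text) for each aligned annotation."""
    root = ET.parse(path).getroot()

    # TIME_ORDER maps time-slot IDs to millisecond offsets.
    slots = {
        ts.get("TIME_SLOT_ID"): int(ts.get("TIME_VALUE"))
        for ts in root.iter("TIME_SLOT")
        if ts.get("TIME_VALUE") is not None
    }

    for tier in root.iter("TIER"):
        tier_id = tier.get("TIER_ID", "")
        for ann in tier.iter("ALIGNABLE_ANNOTATION"):
            start = slots.get(ann.get("TIME_SLOT_REF1"))
            end = slots.get(ann.get("TIME_SLOT_REF2"))
            text = (ann.findtext("ANNOTATION_VALUE") or "").strip()
            if start is None or end is None or not text:
                continue  # skip unaligned or empty annotations
            yield start, end, tier_id, text


if __name__ == "__main__":
    # Print one tab-separated row per annotation, ordered by start time.
    for start, end, tier_id, text in sorted(eaf_to_rows(sys.argv[1])):
        print(f"{start}\t{end}\t{tier_id}\t{text}")
```

Running it as `python eaf_to_rows.py recording.eaf` would print one row per aligned annotation, sorted by start time; CLAN (.cha) files use the plain-text CHAT format and would need a separate parser.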

This sounds fantastic!

11.04.2025 11:34 | 👍 3    🔁 0    💬 0    📌 0
Preamble

All research at our institution, from ideation and execution to analysis and reporting, is bound by the Netherlands Code of Conduct for Research Integrity. This code specifies five core values that organise and inform research conduct: Honesty, Scrupulousness, Transparency, Independence and Responsibility.

One way to summarise the guidelines in this document is to say they are about taking these core values seriously. When it comes to using Generative AI in or for research, the question is if and how this can be done honestly, scrupulously, transparently, independently, and responsibly.

A key ethical challenge is that most current Generative AI undermines these values by design [3–5; details below]. Input data is legally questionable; output reproduces biases and erases authorship; fine-tuning involves exploitation; access is gated; versioning is opaque; and use taxes the environment.

While most of these issues apply across societal spheres, there is something especially pernicious about text generators in academia, where writing is not merely an output format but a means of thinking, crediting, arguing, and structuring thoughts. Hollowing out these skills carries foundational risks.

A common argument for Generative AI is a promise of higher productivity [5]. Yet productivity does not equal insight, and when kept unchecked it may hinder innovation and creativity [6, 7]. We do not need more papers, faster; we rather need more thoughtful, deep work, also known as slow science [8–10].

For these reasons, the first principle when it comes to Generative AI is to not use it unless you can do so honestly, scrupulously, transparently, independently and responsibly. The ubiquity of tools like ChatGPT is no reason to skimp on standards of research integrity; if anything, it requires more vigilance.


A year ago our faculty commissioned & adopted guidance on GenAI and research integrity. Preamble below, pdf at osf.io/preprints/os..., text also at ideophone.org/generative-a...

Key to these guidelines is a values-first rather than a technology-first approach, based on the Netherlands Code of Conduct for Research Integrity.

09.04.2025 09:45 | 👍 86    🔁 45    💬 6    📌 4
Preview
Bill Gates: Within 10 years, AI will replace many doctors and teachers – humans won't be needed 'for most things'. Over the next decade, advances in artificial intelligence will mean that humans will no longer be needed "for most things" in the world, says Bill Gates.

It's patently false, but notice that, to these people, "humans won't be needed" is identified as the desired future rather than the deeply dystopian vision that it actually is.

27.03.2025 18:38 | 👍 2275    🔁 595    💬 209    📌 474
Preview
Education Committee announces session on higher education funding - Committees - UK Parliament. Education Committee Chair Helen Hayes MP has today announced a deep dive evidence session examining funding issues in the higher education sector.

Better than nothing, hopefully a start:
committees.parliament.uk/committee/20...

14.03.2025 13:31 | 👍 2    🔁 0    💬 0    📌 0

My reading of it would definitely be Alexa's first reading (we use 'us' that way round these parts, and I can imagine someone saying this with this meaning here, although I can't say it's common).

07.03.2025 11:13 | 👍 2    🔁 0    💬 0    📌 0

I'm sorry you had that experience, but "No cabs" is a beautiful, almost poetic, ending (for us as readers - hope you didn't end up having to walk!).

27.02.2025 13:33 | 👍 2    🔁 0    💬 0    📌 0

No we didn't, but that's a lesson learned for next time.

23.02.2025 21:31 | 👍 1    🔁 0    💬 0    📌 0

As promised, here are the slides I shared with students to convince them to NOT use chatGPT and other artificial stupidity.

TL;DR? AI is evil, unsustainable and stupid, and I'd much rather they use their own brains, make their own mistakes, and actually learn something. 🪄

23.02.2025 13:45 | 👍 5609    🔁 2106    💬 236    📌 114

Our 'Late Breaking Work' submission for CHI2025 in Yokohama has sadly been rejected. Some positive comments from reviewers, but rejected on the grounds of not enough statistical data, lack of detail about ethical approval, and lack of detail about the #EMCA analytic process (is it thematic analysis?) 🤦🏻‍♂️

22.02.2025 20:58 | 👍 1    🔁 0    💬 2    📌 0

Aside from the obvious quality of observation and argument, I'm always impressed by the work of #LSE (and Liz!) in how they present their ideas in an engaging and interesting way, for all audiences. If only other institutions aspired to such standards of academic engagement.

11.02.2025 17:44 | 👍 6    🔁 0    💬 0    📌 0

What a brilliant new edition of the ISCA newsletter - a nice reminder that there are so many fantastic conferences and seminars covering many areas of #EMCA / #ILEMCA. Looking forward to seeing what 2025 brings our way!

Direct link to the newsletter:
www.conversationanalysis.org/members-foru...

10.02.2025 15:09 | 👍 5    🔁 1    💬 0    📌 0

Hats off to anyone in UK academia right now who is turning up for work, marking essays, meeting students, giving lectures, holding seminars, being there for colleagues and also facing the threat of redundancy, voluntary or compulsory. It's a grim and surreal time #UKhigherEd

03.02.2025 20:30 | 👍 116    🔁 19    💬 1    📌 0
Preview
Humans, Machines, Language - 2025 conference Find us on the Sociolinguistic Events Calendar: https://baal.org.uk/slxevents/

Another excellent-sounding conference, aiming to bring together language researchers and people from the tech industry.
Abstract submission deadline tomorrow:
sites.google.com/view/humans-...

30.01.2025 09:41 | 👍 4    🔁 0    💬 0    📌 0

Thank you for sharing these, Gene - they are wonderful.
In this one, aside from your fantastic and generous explanation, I am struck by the intelligence and curiosity in the student's email (alongside their wonderful formulations - 'what is going down', 'throw out a research project idea').

29.01.2025 11:07 | 👍 0    🔁 0    💬 0    📌 0

Well that's certainly another way, although the advantages are probably narrower than the other two.

29.01.2025 11:04 | 👍 1    🔁 0    💬 0    📌 0

Hoping it's a case of both of the above for you, Charles!

29.01.2025 11:00 | 👍 1    🔁 0    💬 1    📌 0

@adambrandt is following 20 prominent accounts