
Adam Brandt

@adambrandt.bsky.social

Senior Lecturer in Applied Linguistics at Newcastle University (UK). Uses #EMCA to research social interaction, and particularly how people communicate with (and through) technologies, such as #conversationalAI.

727 Followers  |  203 Following  |  40 Posts  |  Joined: 03.12.2023

Latest posts by adambrandt.bsky.social on Bluesky

Image of event poster with speakers 

18:15 – 18:45 | Angus Addlesee — Applied Scientist at Amazon
🤖 Deploying an LLM-Based Conversational Agent on a Hospital Robot
Real-world challenges and insights from deploying a voice-enabled robot in a clinical setting.

18:50 – 19:15 | Tatiana Shavrina, PhD — Research Scientist at Meta
🔬 From conversational AI to autonomous scientific discovery with AI Agents
The talk will give an overview of current challenges and new opportunities for LLMs and LLM-based agents.

19:15 – 19:30 | Short Break ☕

19:30 – 19:55 | Lorraine Burrell — Conversation Design Lead at Lloyds Banking
Talk TBA

20:00 – 20:30 | Alan Nichol — Co-founder & CTO at Rasa
🔧 Why Tool Calling Breaks Your AI Agents—and What to Do Instead
Explore the pitfalls of tool use in agent design and how to avoid them.


#EMCA folks - if you're working on #chatbots or other conversational technologies and are in London on 16th July, this event looks fantastic!

#Conversational #AI Meetup London, Weds 16th July, 18.00.

Including a talk on LLM-Based conversational agents in hospital robots

🔗 lu.ma/5zzqtt33

20.06.2025 07:13 — 👍 4    🔁 1    💬 1    📌 0
Post image

🚨Save the date!🚨
🗣️Spread the word! 🗣️

The next ICOP-L2 conference will be held at Newcastle University on 🗓️24-26 August 2026🗓️
icopl2.org

More details on plenary speakers, workshops, the call for abstracts, and more, coming soon!

#EMCA #L2interaction

06.06.2025 11:19 — 👍 2    🔁 2    💬 0    📌 0

Thank you Liz! ❤️

29.05.2025 10:36 — 👍 1    🔁 0    💬 0    📌 0

Super interesting (and useful) Special Section of ROLSI on all things ethics and data collection for #EMCA research.

Well done (and thank you!) to all involved in putting this together.

28.05.2025 08:46 — 👍 6    🔁 1    💬 1    📌 0
Sign the Petition End unnecessary redundancies at Newcastle University

Please read and kindly consider signing in support of academics at Newcastle University (including from our team in Applied Linguistics & Communication) who are facing the threat of redundancy this summer:
www.change.org/p/end-unnece...

23.05.2025 11:01 — 👍 7    🔁 7    💬 0    📌 1

We are looking for CLAN and ELAN users interested in converting 1 or 2 transcripts to the DOTE format. We have tested a Python script the last couple of days - and it would be interesting to try with some "real" data. Please get in touch. #DOTE #ELAN #CLAN #transcription #EMCA #VIDEO

23.05.2025 11:35 — 👍 4    🔁 5    💬 2    📌 0

This sounds fantastic!

11.04.2025 11:34 — 👍 3    🔁 0    💬 0    📌 0
Preamble

All research at our institution, from ideation and execution to analysis and reporting, is bound by the Netherlands Code of Conduct for Research Integrity. This code specifies five core values that organise and inform research conduct: Honesty, Scrupulousness, Transparency, Independence and Responsibility.

One way to summarise the guidelines in this document is to say they are about taking these core values seriously. When it comes to using Generative AI in or for research, the question is if and how this can be done honestly, scrupulously, transparently, independently, and responsibly.

A key ethical challenge is that most current Generative AI undermines these values by design [3–5; details below]. Input data is legally questionable; output reproduces biases and erases authorship; fine-tuning involves exploitation; access is gated; versioning is opaque; and use taxes the environment.

While most of these issues apply across societal spheres, there is something especially pernicious about text generators in academia, where writing is not merely an output format but a means of thinking, crediting, arguing, and structuring thoughts. Hollowing out these skills carries foundational risks.

A common argument for Generative AI is a promise of higher productivity [5]. Yet productivity does not equal insight, and left unchecked it may hinder innovation and creativity [6, 7]. We do not need more papers, faster; rather, we need more thoughtful, deep work, also known as slow science [8–10].

For these reasons, the first principle when it comes to Generative AI is to not use it unless you can do so honestly, scrupulously, transparently, independently and responsibly. The ubiquity of tools like ChatGPT is no reason to skimp on standards of research integrity; if anything, it requires more vigilance.


A year ago our faculty commissioned & adopted guidance on GenAI and research integrity. Preamble below, pdf at osf.io/preprints/os..., text also at ideophone.org/generative-a...

Key to these guidelines is a values-first rather than a technology-first approach, based on NL code of research conduct

09.04.2025 09:45 — 👍 86    🔁 45    💬 8    📌 4
Bill Gates: Within 10 years, AI will replace many doctors and teachers—humans won't be needed 'for most things' Over the next decade, advances in artificial intelligence will mean that humans will no longer be needed "for most things" in the world, says Bill Gates.

It's patently false, but notice that "humans won't be needed" to these people is identified as the desired future rather than the deeply dystopian vision that it actually is.

27.03.2025 18:38 — 👍 2294    🔁 599    💬 211    📌 485
Education Committee announces session on higher education funding - Committees - UK Parliament Education Committee Chair Helen Hayes MP has today announced a deep dive evidence session examining funding issues in the higher education sector. 

Better than nothing, hopefully a start:
committees.parliament.uk/committee/20...

14.03.2025 13:31 — 👍 2    🔁 0    💬 0    📌 0

My reading of it would definitely be Alexa's first reading (we use 'us' that way round these parts, and I can imagine someone saying this with this meaning here, although can't say it's common).

07.03.2025 11:13 — 👍 2    🔁 0    💬 0    📌 0

I'm sorry you had that experience, but "No cabs" is a beautiful, almost poetic, ending (for us as readers - hope you didn't end up having to walk!).

27.02.2025 13:33 — 👍 2    🔁 0    💬 0    📌 0

No we didn't, but that's a lesson learned for next time.

23.02.2025 21:31 — 👍 1    🔁 0    💬 0    📌 0

As promised, here are the slides I shared with students to convince them to NOT use chatGPT and other artificial stupidity.

TL;DR? AI is evil, unsustainable and stupid, and I'd much rather they use their own brains, make their own mistakes, and actually learn something. 🪄

23.02.2025 13:45 — 👍 5630    🔁 2105    💬 243    📌 110

Our 'Late Breaking Work' submission for CHI2025 in Yokohama has sadly been rejected. Some positive comments from reviewers, but rejected on the grounds of not enough statistical data, lack of details about ethical approval, and lack of detail about #EMCA analytic process (is it thematic analysis?) 🤦🏻‍♂️

22.02.2025 20:58 — 👍 1    🔁 0    💬 2    📌 0

Aside from the obvious quality of observation and argument, I'm always impressed by the work of #LSE (and Liz!) in how they present their ideas in an engaging and interesting way, for all audiences. If only other institutions aspired to such standards of academic engagement.

11.02.2025 17:44 — 👍 6    🔁 0    💬 0    📌 0

What a brilliant new edition of the ISCA newsletter - a nice reminder that there are so many fantastic conferences and seminars covering many areas of #EMCA / #ILEMCA. Looking forward to seeing what 2025 brings our way!

Direct link to the newsletter:
www.conversationanalysis.org/members-foru...

10.02.2025 15:09 — 👍 5    🔁 1    💬 0    📌 0

Hats off to anyone in UK academia right now who is turning up for work, marking essays, meeting students, giving lectures, holding seminars, being there for colleagues and also facing the threat of redundancy, voluntary or compulsory. It's a grim and surreal time #UKhigherEd

03.02.2025 20:30 — 👍 117    🔁 19    💬 1    📌 0
Humans, Machines, Language - 2025 conference Find us on the Sociolinguistic Events Calendar: https://baal.org.uk/slxevents/

Another excellent-sounding conference, aiming to bring together language researchers and people from the tech industry.
Abstract submission deadline tomorrow:
sites.google.com/view/humans-...

30.01.2025 09:41 — 👍 4    🔁 0    💬 0    📌 0

Thank you for sharing these, Gene - they are wonderful.
In this one, aside from your fantastic and generous explanation, I am struck by the intelligence and curiosity in the student's email (alongside their wonderful formulations - 'what is going down', 'throw out a research project idea').

29.01.2025 11:07 — 👍 0    🔁 0    💬 0    📌 0

Well that's certainly another way, although the advantages are probably more narrow than the other two.

29.01.2025 11:04 — 👍 1    🔁 0    💬 0    📌 0

Hoping it's a case of both of the above for you, Charles!

29.01.2025 11:00 — 👍 1    🔁 0    💬 1    📌 0

Thank you Liz. It seems to go with the territory at the moment. There have been many universities before us and sadly there will surely be more after us.

27.01.2025 08:21 — 👍 2    🔁 0    💬 0    📌 0

NU employs over 6000 people, so it's probably more accurate to say around 5950 jobs are 'at risk'.

27.01.2025 08:20 — 👍 1    🔁 0    💬 1    📌 0

"At risk" may imply this might not happen, but senior mgmt have confirmed they will be making '£20m of salary savings' (300 FTE) by 31 July. When we reach what they're calling 'the CR phase', decisions about academics will be based on research grant success and number of students on programmes.

27.01.2025 08:12 — 👍 3    🔁 1    💬 3    📌 0

I just stumbled across an old course handout. I thought I might share it here.

25.01.2025 01:26 — 👍 22    🔁 9    💬 1    📌 4
EMCA PGR Training at the University of Liverpool, March 2025 - emcawiki

Michael Mair, Phil Brooker and Chris Elsey are running a two-day, in-person foundational introduction to ethnomethodology and ethnomethodological conversation analysis on the 3rd and 4th March this year. Further details and registration here: emcawiki.net/EMCA_PGR_Tra... #emca #sociology

23.01.2025 14:01 — 👍 15    🔁 7    💬 0    📌 0
Post image

Mälardalen INteraction and Didactics (MIND) Research Group's data session programme is now out! Please get in touch with us if you want to attend our hybrid data sessions: mindresearchgroup.org/contact/ #CA #conversationanalysis #interaction #discourse #teaching #learning #education

20.01.2025 13:29 — 👍 8    🔁 9    💬 0    📌 0

We are very proud to release 360mash - a lightweight and completely free software tool for anonymisation.

16.01.2025 08:57 — 👍 13    🔁 8    💬 0    📌 0
