
Matt Groh

@mattgroh.bsky.social

Assistant professor at Northwestern Kellogg | human-AI collaboration | computational social science | affective computing

996 Followers  |  133 Following  |  42 Posts  |  Joined: 08.08.2023

Latest posts by mattgroh.bsky.social on Bluesky

Post image

On my way to @ic2s2.bsky.social in Norrköping!! Super excited to share this year's projects in the HAIC lab revealing how (M)LLMs can offer insights into human behavior & cognition

More at human-ai-collaboration-lab.kellogg.northwestern.edu/ic2s2

See you there!

#IC2S2

21.07.2025 08:25 — 👍 4    🔁 0    💬 0    📌 0

Thanks! I imagine we'd see similar results in the Novelty Challenge: when experts are reliable, we can fine-tune LLMs to be reliable, but experts may only be reliable in some disciplines/settings and less reliable in others.

Very cool challenge!!

18.06.2025 13:33 — 👍 0    🔁 0    💬 0    📌 0

When are LLMs-as-judge reliable?

That's a big question for frontier labs and it's a big question for computational social science.

Excited to share our findings (led by @aakriti1kumar.bsky.social!) on how to address this question for any subjective task & specifically for empathic communications

17.06.2025 15:23 — 👍 6    🔁 1    💬 1    📌 0

Thank you for sharing your brilliance, quirks, and wisdom. I started reading your work after coming across your Aeon article on Awe many years ago, and I feel inspired every time I read what you write.

29.05.2025 14:17 — 👍 5    🔁 0    💬 0    📌 0
Preview
When Put to the Test, Are We Any Good at Spotting AI Fakes? For the most part, yes! And the more we look, the better we get.

Awesome write-up in Kellogg Insight on our paper published at #CHI2025 this week!

insight.kellogg.northwestern.edu/article/are-...

01.05.2025 19:10 — 👍 8    🔁 4    💬 0    📌 0
negrain.bsky.social

And follow negrain.bsky.social, who just joined Bluesky today!

25.04.2025 15:18 — 👍 1    🔁 0    💬 1    📌 0
CHI 2025 - Characterizing Photorealism and Artifacts in Diffusion Model-Generated Images
YouTube video by Negar Kamali

If you're curious to learn more, say hi to Negar Kamali at #CHI2025 and check out the video

Awesome collaboration with Karyn, @aakriti1kumar.bsky.social, Angelos, @jessicahullman.bsky.social

Video: youtu.be/PL_ggNzMd-o?...
Preprint: arxiv.org/pdf/2502.11989
CHI: dl.acm.org/doi/10.1145/...

25.04.2025 15:15 — 👍 3    🔁 1    💬 1    📌 0
Video thumbnail

This taxonomy offers a shared language (see our how-to guide on arXiv for many examples) to help people better communicate what looks or feels off.

It's also a framework that can generalize to multimedia.

Consider this: what do you notice about her legs at the 16s mark?

25.04.2025 15:15 — 👍 2    🔁 0    💬 1    📌 0
Post image

Based on generating thousands of images, reading the AI-generated images and digital forensics literatures (and social media and journalistic commentary), and analyzing 30k+ participant comments, we propose a taxonomy for characterizing diffusion model artifacts in images.

25.04.2025 15:15 — 👍 1    🔁 0    💬 1    📌 0
Post image Post image Post image Post image

Scene complexity, artifact types, display time, and human curation of AI-generated images all play significant roles in how accurately people distinguish real and AI-generated images.

25.04.2025 15:15 — 👍 1    🔁 0    💬 1    📌 0
Post image Post image

We examine photorealism in generative AI by measuring people's accuracy at distinguishing 450 AI-generated and 150 real images

Photorealism varies from image to image and person to person

83% of AI-generated images are identified as AI at rates better than random chance would predict

25.04.2025 15:15 — 👍 1    🔁 0    💬 1    📌 0
Post image

💡 New paper at #CHI2025 💡

Large-scale experiment with 750k observations addressing:

(1) How photorealistic are today's AI-generated images?

(2) What features of images influence people's ability to distinguish real/fake?

(3) How should we categorize artifacts?

25.04.2025 15:15 — 👍 16    🔁 4    💬 2    📌 0
Preview
When combinations of humans and AI are useful: A systematic review and meta-analysis - Nature Human Behaviour Vaccaro et al. present a systematic review and meta-analysis of the performance of human–AI combinations, finding that on average, human–AI combinations performed significantly worse than the best of ...

Agreed with your observation that disciplinary perspectives can be too narrow-minded on this problem and forget the big picture of both sides

www.nature.com/articles/s41... does a really nice job systematically reviewing the Human-AI collaboration literature across a bunch of different domains

03.04.2025 14:18 — 👍 9    🔁 3    💬 0    📌 0

At a high level, it depends on:

- human expertise
- human understanding of what the AI system is capable of
- quality of AI explanations
- task-specific potential for cognitive biases and satisficing constraints to influence humans
- instance-specific potential for OOD data to influence AI

03.04.2025 14:18 — 👍 3    🔁 0    💬 1    📌 0
Qualtrics Survey | Qualtrics Experience Management

NICO Intake form: kellogg.qualtrics.com/jfe/form/SV_...

03.04.2025 13:42 — 👍 0    🔁 0    💬 0    📌 0
Post image

📣 📣 Postdoc Opportunity at Northwestern

Dashun Wang and I are seeking a creative, technical, interdisciplinary researcher for a joint postdoc fellowship between our labs.

If you're passionate about Human-AI Collaboration and Science of Science, this may be for you! 🚀

Please share widely!

02.04.2025 13:00 — 👍 5    🔁 4    💬 1    📌 1

You're welcome!! Def makes makers who move between both worlds feel very seen

02.04.2025 12:58 — 👍 1    🔁 0    💬 0    📌 0
Maker's Schedule, Manager's Schedule

Impressive on the 20-minute bits approach!

I definitely need 4 hour windows for productive, creative work.

Paul Graham's essay on the Maker/Manager schedule (paulgraham.com/makersschedu...) offers some tips on creating schedules for roles where one is both a Maker and a Manager.

29.03.2025 20:13 — 👍 2    🔁 0    💬 1    📌 0
Post image Post image

V2 of the Human and Machine Intelligence course 😊🤖🧠 is in the books!

So many fantastic discussions as we witnessed the frontier of AI shift even further into hyperdrive ✨

Props to the students for all the hard work, and big thanks to the teaching assistants and guest speakers 🙏

20.03.2025 00:53 — 👍 3    🔁 0    💬 0    📌 0

and present evidence that perception is more than simply transforming light into representations of objects and their features; perception also automatically extracts relations between objects!

07.03.2025 16:01 — 👍 3    🔁 0    💬 0    📌 0
Post image Post image

What is perception? What do we really see when we look at the world?

And why does the amodal completion illusion lead us to see a super long reindeer in the image on the right?

This week @chazfirestone.bsky.social joined the NU CogSci seminar series to address these fundamental questions

07.03.2025 16:01 — 👍 13    🔁 1    💬 1    📌 2

well said!

08.01.2025 14:41 — 👍 6    🔁 0    💬 0    📌 0

PDF with links here: mattgroh.com/pdfs/annual_...

01.01.2025 18:08 — 👍 7    🔁 0    💬 0    📌 0
Post image

2024 marks the official launch of the Human-AI Collaboration Lab, so I wrote a one-page letter to introduce the lab, share highlights, and begin a lab tradition of reflecting on the year and sharing what we're working on in an easy-to-digest annual letter for friends and colleagues.

31.12.2024 18:54 — 👍 13    🔁 2    💬 1    📌 0
Preview
Closing Health Disparities: A.I. in Medicine · She's Thinking Podcast · Episode

Fun to join Ellie, Joanna, and Naira on the She's Thinking podcast to talk about our research on human-AI collaboration in medicine

And really cool to hear Dr. Katie Fraser's research on AI for early detection of neurodegenerative diseases in the first half of the episode!

open.spotify.com/episode/3X5H...

12.12.2024 16:54 — 👍 3    🔁 0    💬 0    📌 0
Preview
Copy of W25_MORS950_Human_and_Machine_Intelligence.docx Human and Machine Intelligence (MORS 950) Professor Matt Groh | he/him/his | matthew.groh@kellogg.northwestern.edu Section 31 | Evanston Section 81 | Chicago Office Hours: Schedule available on Ca...

I'm teaching my second iteration of "Human and Machine Intelligence" 🧠🤖 for Kellogg MBAs.

I updated the syllabus with a couple of 2024 books + new lectures and readings.

What else do you think MBA students should be reading on this topic?

docs.google.com/document/d/1...

11.12.2024 21:56 — 👍 12    🔁 0    💬 0    📌 0
Preview
Real vs. fake: Can you spot AI-generated images? Some images generated by artificial intelligence have become so convincingly real that there is no surefire way to spot the fakes. But experts say there are still things we can try to detect fakes.

"Media literacy is super awesome. But it needs to extend to AI literacy," says @mattgroh.bsky.social in this story ‡️

We agree, which is why we have a page dedicated to teaching about #AI, with resources including quizzes & an infographic: newslit.org/ai/

www.fox47news.com/politics/dis...

10.12.2024 22:08 — 👍 18    🔁 10    💬 0    📌 3
Post image

New paper out in @ScienceMagazine! In 8 studies (multiple platforms, methods, time periods) we find: misinformation evokes more outrage than trustworthy news, when it does it's shared more + ppl are less likely to read before sharing. w/ @killianmcl1 @Klonick @mollycrockett 🧵👇

28.11.2024 19:06 — 👍 3572    🔁 1170    💬 129    📌 139
Post image

Welcome Bluesky followers!

A 🧡 about my JMP: Using language to generate hypotheses

This paper explores how language shapes behavior. Our contribution, however, is not in testing specific hypotheses; it's in generating them (using #LLMs + #ML + #BehSci)

But how exactly?
www.rafaelmbatista.com/jmp/

15.11.2024 16:50 — 👍 50    🔁 12    💬 3    📌 1

Neat! And your example is a great showcase of one of the many possible definitions of domain expertise.

22.11.2024 20:01 — 👍 1    🔁 0    💬 0    📌 0
