
Yevgeni Berzak

@whylikethis.bsky.social

Assistant Prof. at the Technion. Computational Psycholinguistics, NLP, Cognitive Science. https://lacclab.github.io/

736 Followers  |  551 Following  |  20 Posts  |  Joined: 07.10.2023

Latest posts by whylikethis.bsky.social on Bluesky


It's officially been 75 years since the proposal of the Turing Test, a good time to bring up 'The Minimal Turing Test':

www.sciencedirect.com/science/arti...

02.10.2025 15:37 | 👍 28  🔁 6  💬 1  📌 0

if any

11.09.2025 05:23 | 👍 5  🔁 0  💬 0  📌 0

my friend/colleague Frank Jäkel wrote a book on AI. I sadly don't know German, but I happily know Frank, and I've heard him talk about this for a while now; just on that basis I'd recommend the German speakers in the audience check it out

24.08.2025 15:50 | 👍 16  🔁 3  💬 0  📌 2
ACL 2025 Tutorial: Eye Tracking and NLP

We had a lot of fun delivering the Eye Tracking and NLP tutorial at ACL! The slides are available on the tutorial website acl2025-eyetracking-and-nlp.github.io

20.08.2025 07:28 | 👍 3  🔁 0  💬 0  📌 0

Help us record firefly flashes! 👇🙏

25.06.2025 21:32 | 👍 65  🔁 55  💬 5  📌 0

Out now in TiCS, something I've been thinking about a lot:

"Physics vs. graphics as an organizing dichotomy in cognition"

(by Balaban & me)

relevant for many people, related to imagination, intuitive physics, mental simulation, aphantasia, and more

authors.elsevier.com/a/1lBaC4sIRv...

02.06.2025 12:51 | 👍 151  🔁 45  💬 5  📌 13
OECS thematic collections.

If you haven't been looking recently at the Open Encyclopedia of Cognitive Science (oecs.mit.edu), here's your reminder that we are a free, open access resource for learning about the science of mind.

Today we are launching our new Thematic Collections to organize our growing set of articles!

30.05.2025 00:18 | 👍 417  🔁 183  💬 10  📌 7
Post-Doctoral position - Department of Linguistics, University of California, Davis is hiring. Apply now!

I'm hiring a postdoc to start this fall! Come work with me? recruit.ucdavis.edu/JPF07123

30.05.2025 01:30 | 👍 25  🔁 25  💬 0  📌 1

Data and documentation: github.com/lacclab/OneS...

Preprint: osf.io/preprints/ps...

Exciting recent work with OneStop from our lab (more on this soon!!): github.com/lacclab/OneS...

29.05.2025 11:12 | 👍 2  🔁 0  💬 0  📌 0

๐Ÿ‘๏ธโ€๐Ÿ—จ๏ธ 4 sub-corpora: ๐Ÿ“– reading for comprehension, ๐Ÿ”Ž๐Ÿ“– information seeking, ๐Ÿ“–๐Ÿ“– repeated reading, ๐Ÿ”Ž๐Ÿ“–๐Ÿ“– information seeking in repeated reading.

๐Ÿ‹๐Ÿฝ Text difficulty level manipulation: reading original and simplified texts.

👌 High-quality recordings with an EyeLink 1000 Plus eye tracker.

29.05.2025 11:12 | 👍 2  🔁 0  💬 1  📌 0

👥 360 participants (English L1) & 152 hours of eye movement recordings - more data than all the publicly available English L1 eye tracking corpora combined!

๐Ÿ—ž๏ธ 30 newswire articles in English (162 paragraphs) with reading comprehension questions and auxiliary text annotations.

29.05.2025 11:12 | 👍 0  🔁 0  💬 1  📌 0

👀 📖 Big news! 📖 👀
Happy to announce the release of the OneStop Eye Movements dataset! 🎉 🎉
OneStop is the product of over 6 years of experimental design, data collection and data curation.
github.com/lacclab/OneS...

29.05.2025 11:12 | 👍 9  🔁 3  💬 1  📌 0
Reading Research Quarterly | ILA Literacy Journal | Wiley Online Library: Recent research on the use of eye movements to predict performance on reading comprehension tasks suggests that while eye movements may be used to measure comprehension, the relationship between eye-...

New paper! We show that eye movements during normal reading (no extra task) are effective at predicting reading comprehension as measured by recall. Both early and late eye-movement measures are key. This research was led by the amazing Diane Mézière.

ila.onlinelibrary.wiley.com/doi/10.1002/...

22.05.2025 09:50 | 👍 6  🔁 2  💬 0  📌 0
Sentence processing workshop, May 27, 2025

In person (no streaming/zoom) sentence processing workshop at Potsdam with Tal Linzen, Brian Dillon, Titus von der Malsburg, Oezge Bakay, William Timkey, Pia Schoknecht, Michael Vrazitulis, and Johan Hennert:

vasishth.github.io/sentproc-wor...

22.05.2025 07:06 | 👍 6  🔁 2  💬 0  📌 0
A schematic of our method. On the left are shown Bayesian inference (visualized using Bayes' rule and a portrait of the Reverend Bayes) and neural networks (visualized as a weight matrix). Then, an arrow labeled "meta-learning" combines Bayesian inference and neural networks into a "prior-trained neural network", described as a neural network that has the priors of a Bayesian model – visualized as the same portrait of Reverend Bayes but made out of numbers. Finally, an arrow labeled "learning" goes from the prior-trained neural network to two examples of what it can learn: formal languages (visualized with a finite-state automaton) and aspects of English syntax (visualized with a parse tree for the sentence "colorless green ideas sleep furiously").

🤖🧠 Paper out in Nature Communications! 🧠🤖

Bayesian models can learn rapidly. Neural networks can handle messy, naturalistic data. How can we combine these strengths?

Our answer: Use meta-learning to distill Bayesian priors into a neural network!

www.nature.com/articles/s41...

1/n

20.05.2025 19:04 | 👍 154  🔁 43  💬 4  📌 1
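The core idea of the post above (use meta-learning to distill Bayesian priors into a trained model) can be illustrated with a toy, dependency-free sketch. Everything here is an assumption for illustration, not the paper's setup: a two-point prior over coin biases, short flip episodes, and a count-based lookup table standing in for the neural network. The point is only that a predictor meta-trained on tasks sampled from a prior converges to that prior's Bayesian posterior predictive.

```python
import random
from collections import defaultdict

random.seed(0)

# Toy prior over coin biases (illustrative, not from the paper)
PRIOR = {0.2: 0.5, 0.8: 0.5}

# "Meta-training": sample a coin from the prior, flip it, and record for
# each observed (heads-so-far, flips-so-far) state whether the NEXT flip
# came up heads. A lookup table stands in for the neural network.
counts = defaultdict(lambda: [0, 0])  # state -> [next-was-heads, total]
for _ in range(200_000):
    theta = random.choices(list(PRIOR), weights=list(PRIOR.values()))[0]
    flips = [random.random() < theta for _ in range(6)]
    heads = 0
    for t in range(5):
        counts[(heads, t)][0] += flips[t]
        counts[(heads, t)][1] += 1
        heads += flips[t]

def predictive(heads, flips):
    """Meta-learned estimate of P(next flip = heads | heads, flips)."""
    nh, n = counts[(heads, flips)]
    return nh / n

def bayes_predictive(heads, flips):
    """Exact Bayesian posterior predictive under PRIOR, for comparison."""
    num = sum(p * th**heads * (1 - th)**(flips - heads) * th
              for th, p in PRIOR.items())
    den = sum(p * th**heads * (1 - th)**(flips - heads)
              for th, p in PRIOR.items())
    return num / den
```

After meta-training, `predictive(3, 3)` closely tracks `bayes_predictive(3, 3)`: the table has absorbed the prior without ever representing it explicitly, which is the distillation idea in miniature.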
On the left is a probabilistic context-free grammar (PCFG). On the right is an image of the Transformer architecture. There are arrows going back and forth between the PCFG and the Transformer, showing how the assignment goes back and forth between them.

Made a new assignment for a class on Computational Psycholinguistics:
- I trained a Transformer language model on sentences sampled from a PCFG
- The students' task: Given the Transformer, try to infer the PCFG (w/ a leaderboard for who got closest)

Would recommend!

1/n

02.05.2025 15:30 | 👍 21  🔁 3  💬 1  📌 0
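The assignment pipeline above starts by sampling sentences from a PCFG to build the Transformer's training corpus. A minimal sketch of that sampling step, using a hypothetical toy grammar (the class's actual grammar is not public):

```python
import random

# Hypothetical toy PCFG: nonterminal -> list of (right-hand side, probability)
PCFG = {
    "S":   [(("NP", "VP"), 1.0)],
    "NP":  [(("Det", "N"), 0.7), (("N",), 0.3)],
    "VP":  [(("V", "NP"), 0.5), (("V",), 0.5)],
    "Det": [(("the",), 0.6), (("a",), 0.4)],
    "N":   [(("dog",), 0.5), (("cat",), 0.5)],
    "V":   [(("sees",), 0.5), (("sleeps",), 0.5)],
}

def expand(symbol):
    """Recursively expand a symbol; anything not in PCFG is a terminal."""
    if symbol not in PCFG:
        return [symbol]
    rules, weights = zip(*PCFG[symbol])
    rhs = random.choices(rules, weights=weights, k=1)[0]
    words = []
    for s in rhs:
        words.extend(expand(s))
    return words

def sample_sentence():
    return " ".join(expand("S"))

random.seed(0)
corpus = [sample_sentence() for _ in range(5)]
```

Training a language model on such a corpus and then inferring the rule probabilities back from the model's next-word distributions is the students' half of the exercise.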

Check out our new work on introspection in LLMs! 🔍

TL;DR: we find no evidence that LLMs have privileged access to their own knowledge.

Beyond the study of LLM introspection, our findings inform an ongoing debate in linguistics research: prompting (e.g. grammaticality judgments) ≠ probability measurement!

12.03.2025 17:43 | 👍 50  🔁 7  💬 0  📌 1

it's only Consciousness if it comes from the Consciousness region of the brain; otherwise it's just sparkling attention

11.03.2025 12:36 | 👍 30  🔁 3  💬 3  📌 1

new preprint on Theory of Mind in LLMs, a topic I know a lot of people care about (I care. I'm part of people):

"Re-evaluating Theory of Mind evaluation in large language models"

(by Hu* @jennhu.bsky.social , Sosa, and me)

link: arxiv.org/pdf/2502.21098

06.03.2025 13:33 | 👍 93  🔁 28  💬 5  📌 6

Out today in Nature Machine Intelligence!

From childhood on, people can create novel, playful, and creative goals. Models have yet to capture this ability. We propose a new way to represent goals and report a model that can generate human-like goals in a playful setting... 1/N

21.02.2025 16:29 | 👍 135  🔁 40  💬 5  📌 4

Hello! I'm looking to hire a post-doc, to start this Summer or Fall.

It'd be great if you could share this widely with people you think might be interested.

More details on the position & how to apply: bit.ly/cocodev_post...

Official posting here: academicpositions.harvard.edu/postings/14723

13.02.2025 14:07 | 👍 109  🔁 88  💬 3  📌 3
EvLab Our research aims to understand how the language system works and how it fits into the broader landscape of the human mind and brain.

Our language neuroscience lab (evlab.mit.edu) is looking for a new lab manager/FT RA to start in the summer. Apply here: tinyurl.com/3r346k66 We'll start reviewing apps in early Mar. (Unfortunately, MIT does not sponsor visas for these positions, but OPT works.)

05.02.2025 14:43 | 👍 30  🔁 20  💬 0  📌 0
The 3rd Workshop on Eye Movements and the Assessment of Reading Comprehension, June 5–7, 2025, University of Stuttgart

The 3rd Workshop on Eye Movements and the Assessment of Reading Comprehension will take place on June 5–7, 2025 at the University of Stuttgart!
Submit an abstract by March 1st and join us!
tmalsburg.github.io/Comprehensio...

03.02.2025 12:15 | 👍 4  🔁 1  💬 0  📌 0
Troland Research Award – NAS: Two Troland Research Awards of $75,000 are given annually to recognize unusual achievement by early-career researchers (preferably 45 years of age or younger) and to further empirical research within ...

So excited to receive the Troland Award!! Huge congrats to the other winner, Nick Turk-Browne! And TY, as always, to my mentors & nominators, to my amazing labbies past & present, and to all the wonderful and supportive colleagues in our broader scientific community. <3 www.nasonline.org/award/trolan...

23.01.2025 17:50 | 👍 220  🔁 21  💬 34  📌 0

Fantastic resource!

23.01.2025 17:30 | 👍 0  🔁 0  💬 0  📌 0
Speech and Language Processing

Happy New Year everyone! Jim and I just put up our January 2025 release of Speech and Language Processing! Check it out here: web.stanford.edu/~jurafsky/sl...

12.01.2025 20:44 | 👍 151  🔁 50  💬 1  📌 1
Postdoctoral Research Associate (Fixed Term) - Job Opportunities - University of Cambridge Postdoctoral Research Associate (Fixed Term) in the MRC Cognition and Brain Sciences Unit at the University of Cambridge.

postdoc opportunity in @alexwoolgar.bsky.social and my lab, based in Cambridge UK! seeking someone with excellent analytical skills to join our project using time-resolved human neuroimaging to study receptive language processing in non-speaking autistic individuals 🧠✨

www.jobs.cam.ac.uk/job/48835/

14.01.2025 01:20 | 👍 62  🔁 37  💬 2  📌 2
Expanding the Toolkit: Large Language Models in Humanities Research Call for Papers: Expanding the Toolkit: Large Language Models in Humanities Research.

Reminder that we are looking for papers using LLMs for humanities research, for a special issue of the Computational Humanities Research Journal.

Deadline January 31st!

#NLP #DigitalHumanities #CulturalAnalytics

08.01.2025 17:32 | 👍 105  🔁 58  💬 5  📌 1
02.01.2025 10:09 | 👍 2  🔁 0  💬 0  📌 0

New potential side effect of participating in an eyetracking study in our lab - curly eyelashes #thingswedoforscience

02.01.2025 10:09 | 👍 4  🔁 0  💬 2  📌 0
