Alexander Hoyle

@alexanderhoyle.bsky.social

Postdoctoral fellow at ETH AI Center, working on Computational Social Science + NLP. Previously a PhD in CS at UMD, advised by Philip Resnik. Internships at MSR, AI2. he/him. On the job market this cycle! alexanderhoyle.com

2,414 Followers  |  491 Following  |  266 Posts  |  Joined: 05.09.2023

Posts by Alexander Hoyle (@alexanderhoyle.bsky.social)

I have, and he is almost certainly incorrect about Anthropic's profitability

05.03.2026 13:34 — 👍 1    🔁 0    💬 1    📌 0

I'm so sorry to hear this. Your work is consistently thoughtful, distinctive, and impactful. I have so appreciated your mentorship and perspective, and am sure you will find a place where your voice is valued

03.03.2026 21:46 — 👍 1    🔁 0    💬 1    📌 0

hoping to develop something like this :)

23.02.2026 13:49 — 👍 2    🔁 0    💬 1    📌 0
Philosophy of Language and Computation II, Spring 2025 | Rycolab This graduate class, partly taught like a seminar, is designed to help you understand the philosophical underpinnings of modern work in natural language processing (NLP), most of which centered around...

You might find this interesting rycolab.io/classes/phil...

20.02.2026 06:53 — 👍 2    🔁 0    💬 0    📌 0

Already a thing!

www.cs.umd.edu/~jbg//docs/2...
aclanthology.org/2023.acl-lon...
(I’m sure there’s more)

05.02.2026 17:53 — 👍 3    🔁 0    💬 1    📌 0

I appreciate the shout-out to our work showing that LDA tends to work better, although it would have been nice to include it as a comparison :)

28.01.2026 09:54 — 👍 5    🔁 0    💬 1    📌 0

I thought these were deeper explorations of LM "understanding" than the Stochastic Parrots paper:
direct.mit.edu/coli/article...
julianmichael.org/blog/2020/07...
direct.mit.edu/coli/article...

Also see Will Merrill’s work for more formal treatments

22.01.2026 22:43 — 👍 3    🔁 0    💬 0    📌 0

Yeah, that denominator was what I was wondering about, since my understanding is that it's usually something like 5-10 onsites per season?

I did a "toe-dipping" last cycle (first year of postdoc), and I wasn't sure how to consider the proportion of onsites I got relative to screenings

20.01.2026 14:53 — 👍 1    🔁 0    💬 2    📌 0

I was discussing first rounds with a colleague recently---do you have a sense of the typical "conversion" rate to onsites?

[Also, sorry for dropping into your mentions for the second time today]

20.01.2026 14:07 — 👍 1    🔁 0    💬 1    📌 0

Does it create good diagrams?

20.01.2026 11:58 — 👍 0    🔁 0    💬 1    📌 0

Oh amazing! Do you have the datasets you used to test it? I’ve been playing around with a project related to this topic

20.01.2026 10:45 — 👍 3    🔁 0    💬 2    📌 0

This is super great---a nice, illustrative collection of measurement issues. Thanks for sharing

19.01.2026 14:38 — 👍 1    🔁 0    💬 0    📌 0

FWIW, autocompletion models are worse because they are cheap and run all the time. The larger models/integrated systems (like Claude Code) are significantly better

09.01.2026 12:51 — 👍 6    🔁 0    💬 2    📌 0

I think you are relying on two assumptions: (a) they used a commercial LLM to begin with and (b) that commercial LLMs train on all data that is passed to them

I don't know about (a), but (b) is not the case when using the API endpoints for the major providers

platform.openai.com/docs/concepts

08.01.2026 16:22 — 👍 0    🔁 0    💬 0    📌 0

Sincere question: how do you distinguish your paper being passed to a commercial LLM from your paper being indexed by commercial search engines?

08.01.2026 16:10 — 👍 0    🔁 0    💬 0    📌 0

It’s also weirdly dependency-averse (maybe it has been RLHF’d away from using outdated/old libraries?). I was vibe coding an iOS app and it reimplemented a full photo editor. I know nothing about the Swift ecosystem, but I figured there had to have been some existing library (there is of course)

08.01.2026 10:40 — 👍 1    🔁 0    💬 0    📌 0

Now I'm late to replying---thanks, this means a lot coming from you! For future work, we're looking at ways to apply it to mech interp, but it's still in the early stages. I'm also interested in the "reverse" problem: transformations that improve similarities along a given dimension (e.g., syntax)

07.01.2026 11:25 — 👍 0    🔁 0    💬 1    📌 0

Happy to be in the second round!

03.01.2026 09:33 — 👍 2    🔁 0    💬 0    📌 0

Was just browsing "For You" and didn't expect to see one of ours here! Some here I haven't seen, thanks for the pointers :)

02.01.2026 20:51 — 👍 2    🔁 0    💬 0    📌 0

Obsidian does calendar management?

02.01.2026 15:00 — 👍 1    🔁 0    💬 1    📌 0

Interesting—what's the difference from Claude Code/Codex?

29.12.2025 01:55 — 👍 0    🔁 0    💬 1    📌 0

Ah I see you wrote about this in the original thread, apologies!

19.12.2025 16:27 — 👍 1    🔁 0    💬 1    📌 0

That's fair, the author is definitely putting forward a position. Your argument is that cog sci anticipated these developments somewhat, or? (May not have gathered it from the thread)

19.12.2025 16:10 — 👍 0    🔁 0    💬 1    📌 0
The Case That A.I. Is Thinking ChatGPT does not have an inner life. Yet it seems to know what it’s talking about.

Just read a nice New Yorker piece that interviews mostly cognitive scientists/neuroscientists who are (mostly) very impressed:

www.newyorker.com/magazine/202...

19.12.2025 09:59 — 👍 4    🔁 0    💬 1    📌 0

It’s unfortunate because I’ve found LDA (mallet/tomotopy, not gensim) is actually very good for tweets

09.12.2025 11:20 — 👍 1    🔁 0    💬 1    📌 0

my poor heart 💔

09.12.2025 11:19 — 👍 1    🔁 0    💬 0    📌 0

Realizing neither of you have John Williams' Stoner, which is a beautiful book

06.12.2025 21:27 — 👍 0    🔁 0    💬 0    📌 0

I enjoyed Changing Places, and there are two more in the "campus trilogy". I don't recall much about the third (if I even got to it?), but Small World was fun---more in the left quadrant

06.12.2025 21:24 — 👍 1    🔁 0    💬 1    📌 0

It's funny, when I've spoken to some ML people---including, just recently, Dave Blei himself---the question is often, "who's still using topic models?" Meanwhile in DH/Social science, it's "please, enough topic models!"

06.12.2025 21:04 — 👍 2    🔁 1    💬 0    📌 0

Some collections of work in these directions:

openreview.net/pdf?id=gJcEM...
arxiv.org/abs/2210.13382
iclr.cc/virtual/2025...
www.worldmodelworkshop.org

06.12.2025 13:36 — 👍 0    🔁 0    💬 0    📌 0