Sorry, no, it's an in-person full-time role.
03.12.2025 01:45

Hiring researchers & engineers to work on
• building reliable software on top of unreliable LLM primitives (sketch below)
• statistical evaluation of real-world deployments of LLM-based systems
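As a flavor of what "reliable software on unreliable LLM primitives" can mean in practice, here is a minimal sketch, not from the lab itself: wrap the raw model call in schema validation and bounded retries. The `call_llm` stub and the required-keys contract are hypothetical.

```python
import json

def call_llm(prompt: str) -> str:
    """Hypothetical LLM primitive: returns a raw string with no guarantees."""
    raise NotImplementedError

def reliable_extract(prompt: str, required_keys: set[str], max_tries: int = 3) -> dict:
    """Wrap an unreliable LLM call in JSON validation + bounded retries."""
    last_err: Exception | None = None
    for _ in range(max_tries):
        raw = call_llm(prompt)
        try:
            obj = json.loads(raw)
            if required_keys <= obj.keys():
                return obj  # output parsed and satisfies the contract
            last_err = ValueError(f"missing keys: {required_keys - obj.keys()}")
        except json.JSONDecodeError as err:
            last_err = err
    raise RuntimeError(f"LLM output never validated after {max_tries} tries") from last_err
```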
I'm speaking about this on two NeurIPS workshop panels:
🗓️ Saturday: Reliable ML Workshop
🗓️ Sunday: LLM Evaluation Workshop
More details here:
cims.nyu.edu/taur/postdoc...
Interfolio link to apply coming soon! Feel free to email me in the meantime, following the instructions there.
📢 Postdoc position 📢
I'm recruiting a postdoc for my lab at NYU! Topics include LM reasoning, creativity, limitations of scaling, AI for science, & more! Apply by Feb 1.
(Different from NYU Faculty Fellows, which are also great but less connected to my lab.)
Link in 🧵
Two brief advertisements!
TTIC is recruiting both tenure-track and research assistant professors: ttic.edu/faculty-hiri...
NYU is recruiting faculty fellows: apply.interfolio.com/174686
Happy to chat with anyone considering either of these options
Unfortunately I won't be at #COLM2025 this week, but please check out our work being presented by my collaborators/advisors!
If you are interested in evals of open-ended tasks/creativity please reach out and we can schedule a chat! :)
Find my students and collaborators at COLM this week!
Tuesday morning: @juand-r.bsky.social and @ramyanamuduri.bsky.social's papers (find them if you missed the session!)
Wednesday pm: @manyawadhwa.bsky.social's EvalAgent
Thursday am: @anirudhkhatry.bsky.social's CRUST-Bench oral spotlight + poster
Excited to present this at #COLM2025 tomorrow! (Tuesday, 11:00 AM poster session)
06.10.2025 20:40

Check out this feature about AstroVisBench, our upcoming NeurIPS D&B paper about code workflows and visualization in the astronomy domain! Great testbed for the interaction of code + VLM reasoning models.
25.09.2025 20:43

Picture of the UT Tower taken by me on my first day at UT as a postdoc in 2023!
News!
I will return to UT Austin as an Assistant Professor of Linguistics this fall, and join its vibrant community of Computational Linguists, NLPers, and Cognitive Scientists!
Excited to develop ideas about linguistic and conceptual generalization (recruitment details soon!)
Great to work on this benchmark with astronomers in our NSF-Simons CosmicAI institute! What I like about it:
(1) focus on data processing & visualization, a "bite-sized" AI4Sci task (not automating all of research)
(2) eval with VLM-as-a-judge (possible with strong, modern VLMs)
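For concreteness, a rough sketch of the VLM-as-a-judge pattern from point (2), assuming the OpenAI Python client; the model name, prompt, and `judge_plot` helper are illustrative, not the benchmark's actual harness.

```python
import base64
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def judge_plot(image_path: str, reference_caption: str) -> str:
    """Ask a VLM to grade a generated plot against a reference description."""
    with open(image_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode()
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder judge model
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Does this plot show the following? Answer yes/no "
                         f"with a one-line reason.\n\nExpected: {reference_caption}"},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{b64}"}},
            ],
        }],
    )
    return resp.choices[0].message.content
```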
The end of US leadership in science, technology, and innovation.
All in one little table.
A tremendous gift to China, courtesy of the GOP.
nsf-gov-resources.nsf.gov/files/00-NSF...
Super excited Marin is finally out! Come see what we've been building! Code/platform for training fully reproducible models end-to-end, from data to evals. Plus a new high quality 8B base model. Percy did a good job explaining it on the other place. marin.community
x.com/percyliang/s...
Check out Anirudh's work on a new benchmark for C-to-Rust transpilation! 100 realistic-scale C projects, plus target Rust interfaces + Rust tests that let us validate the transpiled code beyond what prior benchmarks allow.
23.04.2025 18:37

Meet CRUST-Bench, a dataset for C-to-Rust transpilation for full codebases 🛠️
A dataset of 100 real-world C repositories across various domains, each paired with:
🦀 Handwritten safe Rust interfaces.
🧪 Rust test cases to validate correctness.
🧵 [1/6]
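A toy sketch of how transpiled code could be validated against such interfaces and tests; the `validate_transpilation` helper and file layout are hypothetical, not CRUST-Bench's real harness.

```python
import subprocess
from pathlib import Path

def validate_transpilation(repo_dir: str, generated_rust: dict[str, str]) -> bool:
    """Drop model-generated Rust into a scaffolded repo and run its tests.

    `generated_rust` maps relative paths (e.g. "src/parser.rs") to the model's
    transpiled source; the repo already contains the handwritten interfaces
    and the test cases that check correctness.
    """
    root = Path(repo_dir)
    for rel_path, source in generated_rust.items():
        target = root / rel_path
        target.parent.mkdir(parents=True, exist_ok=True)
        target.write_text(source)
    # The provided Rust tests act as the correctness oracle.
    result = subprocess.run(["cargo", "test"], cwd=root,
                            capture_output=True, text=True)
    return result.returncode == 0
```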
Check out Manya's work on evaluation for open-ended tasks! The criteria from EvalAgent can be plugged into LLM-as-a-judge or used for refinement. Great tool with a ton of potential, and there's LOTS to do here for making LLMs better at writing!
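A minimal sketch of the "plug criteria into LLM-as-a-judge" idea; `score_against_criteria` and the passed-in `call_llm` function are hypothetical, and EvalAgent's actual pipeline is more involved.

```python
def score_against_criteria(response: str, criteria: list[str], call_llm) -> dict[str, bool]:
    """Check a response against externally derived criteria, one judge call each."""
    verdicts = {}
    for criterion in criteria:
        prompt = (
            "You are grading a piece of writing.\n"
            f"Criterion: {criterion}\n"
            f"Text:\n{response}\n\n"
            "Does the text satisfy the criterion? Answer YES or NO."
        )
        verdicts[criterion] = call_llm(prompt).strip().upper().startswith("YES")
    return verdicts
```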
22.04.2025 16:30

Check out Ramya et al.'s work on understanding discourse similarities in LLM-generated text! We see this as an important step in quantifying the "sameyness" of LLM text, which we think will be a step towards fixing it!
21.04.2025 22:10

South by Semantics Workshop
Title: "Not-your-mother's connectionism: LLMs as cognitive models"
Speaker: Ellie Pavlick (Brown University)
Date and time: April 23, 2025, 3:30-5 PM
Location: GDC 6.302
Our final South by Semantics lecture at UT Austin is happening on Wednesday April 23!
21.04.2025 13:39

Check out @juand-r.bsky.social and @wenxuand.bsky.social's work on improving generator-validator gaps in LLMs! I really like the formulation of the G-V gap we present, and I was pleasantly surprised by how well the ranking-based training closed the gap. Looking forward to following up in this area!
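One rough way to operationalize a generator-validator gap, assuming black-box `generate` and `validate` callables; this is a sketch, not the paper's formulation.

```python
def generator_validator_gap(questions, generate, validate) -> float:
    """Fraction of items where the model-as-validator rejects its own generation."""
    disagreements = 0
    for q in questions:
        answer = generate(q)            # "What is the answer to q?"
        accepted = validate(q, answer)  # "Is `answer` correct for q?" -> bool
        if not accepted:
            disagreements += 1
    return disagreements / len(questions)
```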
16.04.2025 18:18

If you're scooping up students off the street for writing op-eds, you're secret police, and should be treated accordingly.
26.03.2025 20:00

I'm excited to announce two papers of ours which will be presented this summer at @naaclmeeting.bsky.social and @iclr-conf.bsky.social!
🧵
Excited about Proofwala, @amitayush.bsky.social's new framework for ML-aided theorem-proving.
* Paper: arxiv.org/abs/2502.04671
* Code: github.com/trishullab/p...
Proofwala allows the collection of proof-step data from multiple proof assistants (Coq and Lean) and multilingual training. (1/3)
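A guess at what a cross-assistant proof-step record might look like; the field names below are hypothetical, not Proofwala's actual schema.

```python
from dataclasses import dataclass

@dataclass
class ProofStep:
    """One supervised example: goal state in, tactic out (hypothetical schema)."""
    assistant: str   # "coq" or "lean"
    theorem: str     # fully qualified theorem name
    goal_state: str  # pretty-printed proof state before the step
    tactic: str      # the tactic that was applied

# Multilingual training here means mixing records from both assistants
# into a single corpus before fine-tuning.
```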
Popular or not, Dems cannot bend on the need for trans people to be treated with basic humanity and respect. If we give up that because the right made trans people unpopular, we give up everything. They'll dice us group by group like a salami. We die on this hill or we die alone in a ditch.
05.02.2025 21:19

Here are just a few of the NSF review panels that were shut down today, Chuck.
This is research that would have made us competitive in computer science that will now be delayed by many months if not lost forever.
AI is fine but right now the top priority is keeping the lights on at NSF and NIH.
kicking off 2025 with our OLMo 2 tech report while paying homage to the sequelest of sequels 🫡
2 OLMo 2 Furious 🔥 is everything we learned since OLMo 1, with deep dives into:
• stable pretrain recipe
• lr anneal 🤝 data curricula 🤝 soups
• tulu post-train recipe
• compute infra setup
👇🧵
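For readers new to "soups": the generic idea is uniform weight averaging across checkpoints. A minimal torch sketch, assuming plain tensor state dicts; this is not OLMo's actual code.

```python
import torch

def uniform_soup(checkpoint_paths: list[str]) -> dict[str, torch.Tensor]:
    """Average parameters across checkpoints ('model soup')."""
    soup: dict[str, torch.Tensor] | None = None
    for path in checkpoint_paths:
        state = torch.load(path, map_location="cpu")
        if soup is None:
            soup = {k: v.float().clone() for k, v in state.items()}
        else:
            for k, v in state.items():
                soup[k] += v.float()
    # Divide the accumulated sums by the number of checkpoints.
    return {k: v / len(checkpoint_paths) for k, v in soup.items()}
```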
Congrats to Prasann and all the other awardees! Full list is here: cra.org/about/awards...
03.01.2025 14:39

Before his post-training work, Prasann did a great project on representing LM outputs with lattices, which remains one of my favorite algorithms-oriented papers from my group in the last few years, with a lot of potential for interesting follow-up work!
03.01.2025 14:39

He then advanced our understanding of online DPO methods: how can we combine the strengths of reward models and DPO? (also at COLM 2024)
03.01.2025 14:39

...established a critical weakness of RLHF with open reward models: spurious correlation with length (COLM 2024)
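A minimal way to eyeball that length confound, assuming paired responses and reward scores; a sketch, not the paper's analysis.

```python
import numpy as np

def length_reward_correlation(responses: list[str], rewards: list[float]) -> float:
    """Pearson correlation between response length and reward model score.

    A strongly positive value hints the reward model is acting partly as a
    length detector rather than a quality detector.
    """
    lengths = np.array([len(r.split()) for r in responses], dtype=float)
    scores = np.array(rewards, dtype=float)
    return float(np.corrcoef(lengths, scores)[0, 1])
```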
03.01.2025 14:39

Huge congrats to @prasannsinghal.bsky.social for being one of the 8 CRA Outstanding Undergraduate Researcher Award winners! It has been an absolute privilege to work with Prasann during his time at UT. (And he's applying for PhD programs this year...hint hint...)
Prasann's work 🧵