
Ellen Vitercik

@ellen-v.bsky.social

Assistant Professor at Stanford. Machine learning, algorithm design, econ-CS. https://vitercik.github.io/

769 Followers  |  115 Following  |  20 Posts  |  Joined: 15.11.2024

Latest posts by ellen-v.bsky.social on Bluesky

The main conceptual contribution is a way to sidestep the Ω(log n) barrier introduced by standard probabilistic metric embeddings. Instead, Yingxi & Mingwei found a clever way to bound our algorithm's cost directly on a deterministic embedding & compare it to OPT, bounded via majorization arguments.

27.01.2026 17:54 — 👍 1 🔁 0 💬 0 📌 0

We:
• Move beyond the standard i.i.d. model: each request comes from its own distribution with a mild smoothness condition.
• Require no distributional knowledge: we use only one sample from each request distribution.
• Achieve an O(1) competitive ratio for d-dimensional Euclidean metrics for d > 2.

27.01.2026 17:54 — 👍 0 🔁 0 💬 1 📌 0

We study a classic online metric matching problem in which n servers (e.g., rideshare drivers) are available in advance and n requests (e.g., riders) arrive one by one. Each request must be immediately matched to an available server, paying the distance between the two in an underlying metric.
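To make the setup concrete, here is a minimal sketch of the standard greedy baseline for this problem (illustrative only, not the paper's algorithm): each arriving request is matched to the nearest still-available server. The function name and example points are hypothetical.

```python
import math

def greedy_online_matching(servers, requests):
    """Match each arriving request to the nearest available server.

    servers: list of points, all known upfront.
    requests: list of points, processed one at a time in arrival order.
    Each match is irrevocable and pays the Euclidean distance.
    Returns (matching, total_cost), where matching[t] is the index of
    the server assigned to the t-th request.
    """
    available = list(range(len(servers)))
    matching = []
    total_cost = 0.0
    for r in requests:
        # Greedy choice: closest server among those not yet used.
        best = min(available, key=lambda i: math.dist(servers[i], r))
        available.remove(best)
        matching.append(best)
        total_cost += math.dist(servers[best], r)
    return matching, total_cost

# Two servers on a line; requests arrive near each server in turn.
m, cost = greedy_online_matching(
    [(0.0, 0.0), (10.0, 0.0)],
    [(9.0, 0.0), (1.0, 0.0)],
)
```

Greedy can be far from the offline optimum in the worst case, which is what makes constant-factor guarantees under the smoothed, single-sample setting interesting.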

27.01.2026 17:54 — 👍 0 🔁 0 💬 1 📌 0
Smoothed Analysis of Online Metric Matching with a Single Sample: Beyond Metric Distortion In the online metric matching problem, $n$ servers and $n$ requests lie in a metric space. Servers are available upfront, and requests arrive sequentially. An arriving request must be matched immediat...

arXiv: arxiv.org/abs/2510.20288

27.01.2026 17:54 — 👍 0 🔁 0 💬 1 📌 0
ITCS 2026 - Smoothed Analysis of Online Metric Matching with a Single Sample
YouTube video by Mingwei Yang

This week at the Innovations in Theoretical Computer Science (ITCS) conference, Mingwei Yang is presenting our paper:
๐—ฆ๐—บ๐—ผ๐—ผ๐˜๐—ต๐—ฒ๐—ฑ ๐—”๐—ป๐—ฎ๐—น๐˜†๐˜€๐—ถ๐˜€ ๐—ผ๐—ณ ๐—ข๐—ป๐—น๐—ถ๐—ป๐—ฒ ๐— ๐—ฒ๐˜๐—ฟ๐—ถ๐—ฐ ๐— ๐—ฎ๐˜๐—ฐ๐—ต๐—ถ๐—ป๐—ด ๐˜„๐—ถ๐˜๐—ต ๐—ฎ ๐—ฆ๐—ถ๐—ป๐—ด๐—น๐—ฒ ๐—ฆ๐—ฎ๐—บ๐—ฝ๐—น๐—ฒ: ๐—•๐—ฒ๐˜†๐—ผ๐—ป๐—ฑ ๐— ๐—ฒ๐˜๐—ฟ๐—ถ๐—ฐ ๐——๐—ถ๐˜€๐˜๐—ผ๐—ฟ๐˜๐—ถ๐—ผ๐—ป
by Yingxi Li, myself, and Mingwei Yang
See Mingwei's talk here: youtu.be/yEBPI9c7OE8?...

27.01.2026 17:54 — 👍 6 🔁 0 💬 1 📌 0
LLMs for Optimization Tutorial

Tutorial page (agenda + reading list): conlaw.github.io/llm_opt_tuto...

Thanks to Léonard Boussioux and Madeleine Udell for helping put the proposal together.

20.01.2026 01:50 — 👍 0 🔁 0 💬 0 📌 0

Optimization is central to planning, scheduling, and decision-making, but deploying solvers requires deep expertise. Our tutorial covers how LLMs can support the end-to-end optimization pipeline (model formulation, solver configuration, and model validation) and highlights open research directions.

20.01.2026 01:50 — 👍 1 🔁 0 💬 1 📌 0

@lawlessopt.bsky.social and I are excited to present our #AAAI2026 tutorial on "LLMs for Optimization: Modeling, Solving, and Validating with Generative AI."

When: Tuesday, Jan 20, 2026, 8:30amโ€“12:30pm SGT
Where: Garnet 216 (Singapore EXPO)

(Connor's intro slides are shown here.)
CC @aaai.org

20.01.2026 01:50 — 👍 8 🔁 1 💬 1 📌 0

Topic 4: Theoretical Guarantees

- Optimizing Solution-Samplers for Combinatorial Problems: The Landscape of Policy-Gradient Methods (Caramanis et al., NeurIPS'23)
- Approximation Algorithms for Combinatorial Optimization with Predictions (Antoniadis et al., ICLR'25)

02.12.2025 21:55 — 👍 1 🔁 0 💬 0 📌 0

Topic 3: Math Optimization

- OptiMUS-0.3: Using LLMs to Model and Solve Optimization Problems at Scale (AhmadiTeshnizi et al., arXiv'25)
- Contrastive Predict-and-Search for Mixed Integer Linear Programs (Huang et al., ICML'24)
- Differentiable Integer Linear Programming (Geng et al., ICLR'25)

02.12.2025 21:55 — 👍 2 🔁 0 💬 1 📌 0

Topic 2: Graph Neural Networks

- One Model, Any CSP: GNNs as Fast Global Search Heuristics for Constraint Satisfaction (Tönshoff et al., IJCAI'23)
- Dual Algorithmic Reasoning (Numeroso et al., ICLR'23)
- DIFUSCO: Graph-based Diffusion Solvers for Combinatorial Optimization (Sun & Yang, NeurIPS'23)

02.12.2025 21:55 — 👍 2 🔁 0 💬 1 📌 0

Topic 1: Transformers & LLMs

- What Learning Algorithm is In-Context Learning? (Akyürek et al., ICLR'23)
- Transformers as Statisticians (Bai et al., NeurIPS'23)
- We Need An Algorithmic Understanding of Generative AI (Eberle et al., ICML'25)
- Evolution of Heuristics (Liu et al., ICML'24)

02.12.2025 21:55 — 👍 3 🔁 0 💬 1 📌 0

I'm excited to share the materials from my Stanford seminar course, "AI for Algorithmic Reasoning and Optimization": vitercik.github.io/ai4algs_25/. It covered formal algorithmic frameworks for analyzing LLM reasoning, GNNs for combinatorial/mathematical optimization, and theoretical guarantees.

02.12.2025 21:55 — 👍 4 🔁 2 💬 1 📌 0

On top of his research, my PhD students and I can attest that he's a thoughtful, generous collaborator and mentor.

Please don't hesitate to reach out if you'd like me to share my very strong recommendation letter.

(Photo credit: @cpaior.bsky.social.)

16.11.2025 18:53 — 👍 2 🔁 0 💬 0 📌 0
OptiMUS-0.3: Using Large Language Models to Model and Solve Optimization Problems at Scale Optimization problems are pervasive in sectors from manufacturing and distribution to healthcare. However, most such problems are still solved heuristically by hand rather than optimally by state-of-t...

Connor has done exciting work on leveraging LLMs to model and solve large-scale optimization problems (arxiv.org/abs/2407.19633, arxiv.org/abs/2412.12038), developing mathematical optimization tools to make ML models more interpretable (arxiv.org/abs/2502.16380), among many other contributions.

16.11.2025 18:53 — 👍 1 🔁 0 💬 1 📌 0

Please keep an eye out for Connor Lawless (@lawlessopt.bsky.social) on the faculty job market! Connor is a Stanford Human-Centered AI Postdoc, co-hosted by myself and Madeleine Udell. His research combines ML, computational optimization, and HCI, with the goal of building human-centered AI systems.

16.11.2025 18:52 — 👍 6 🔁 1 💬 1 📌 1
Understanding Fixed Predictions via Confined Regions Machine learning models can assign fixed predictions that preclude individuals from changing their outcome. Existing approaches to audit fixed predictions do so on a pointwise basis, which requires ac...

Excited to be chatting about our new paper "Understanding Fixed Predictions via Confined Regions" (joint work with @berkustun.bsky.social, Lily Weng, and Madeleine Udell) at #ICML2025!

๐Ÿ• Wed 16 Jul 4:30 p.m. PDT โ€” 7 p.m. PDT
๐Ÿ“East Exhibition Hall A-B #E-1104
๐Ÿ”— arxiv.org/abs/2502.16380

14.07.2025 16:08 — 👍 5 🔁 3 💬 1 📌 0

Our ✨spotlight paper✨ "Primal-Dual Neural Algorithmic Reasoning" is coming to #ICML2025!

We bring Neural Algorithmic Reasoning (NAR) to the NP-hard frontier 💥

🗓 Poster session: Tuesday 11:00–13:30
📍 East Exhibition Hall A-B, # E-3003
🔗 openreview.net/pdf?id=iBpkz...

🧵

13.07.2025 21:34 — 👍 6 🔁 2 💬 1 📌 0

Join us for a Wikipedia edit-a-thon at #ACMEC25!
When: July 8th, 8PM-10PM
Where: Stanford Econ Landau 139
Website: sites.google.com/view/econcs-...

Come hang out, grab snacks, and edit/create Wikipedia pages for EC topics.

Suggest topics/articles that need attention: docs.google.com/spreadsheets...

02.07.2025 20:17 — 👍 12 🔁 3 💬 1 📌 0

Congrats Kira!!

05.04.2025 05:08 — 👍 0 🔁 0 💬 0 📌 0
LLMs for Cold-Start Cutting Plane Separator Configuration Mixed integer linear programming (MILP) solvers ship with a staggering number of parameters that are challenging to select a priori for all but expert optimization users, but can have an outsized impa...

Super excited about this new work with Yingxi Li, Anders Wikun, @ellen-v.bsky.social, and Madeleine Udell forthcoming at CPAIOR2025:

LLMs for Cold-Start Cutting Plane Separator Configuration

🔗: arxiv.org/abs/2412.12038

16.03.2025 17:38 — 👍 11 🔁 5 💬 1 📌 0

Pulled a shoulder muscle trying to stay cool on the golf course in front of my PhD students and postdoc 😅 🏌️‍♀️

12.12.2024 17:19 — 👍 18 🔁 0 💬 0 📌 0

📢 Join us at #NeurIPS2024 for an in-person Learning Theory Alliance mentorship event!
📅 When: Thurs, Dec 12 | 7:30-9:30 PM PST
🔥 What: Fireside chat w/ Misha Belkin (UCSD) on Learning Theory Research in the Era of LLMs, + mentoring tables w/ amazing mentors.
Don't miss it if you're at NeurIPS!

10.12.2024 14:52 — 👍 9 🔁 2 💬 0 📌 0

Hi Emily, could you please add me? Thanks for making it!

19.11.2024 15:05 — 👍 4 🔁 0 💬 1 📌 0

Can you add me? 😀

18.11.2024 04:26 — 👍 0 🔁 0 💬 1 📌 0
