Ethan Epperly

@ethanepperly.bsky.social

PhD candidate in applied math at Caltech, interested in computational linear algebra. he/him

165 Followers  |  98 Following  |  23 Posts  |  Joined: 05.12.2023

Latest posts by ethanepperly.bsky.social on Bluesky

Gaussian integration by parts. Let z be a standard Gaussian random variable. Then E[zf(z)] = E[f'(z)].

New blog post up about the amazingly useful Gaussian integration by parts formula! As an application, we use it to analyze power iteration from a random start www.ethanepperly.com/index.php/20...

05.08.2025 17:29 — 👍 0    🔁 0    💬 0    📌 0
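The identity is easy to sanity-check numerically. A minimal Monte Carlo sketch (my own illustration, not from the post), using f(z) = z³ so that both sides equal E[z⁴] = E[3z²] = 3:

```python
import numpy as np

# Monte Carlo check of Gaussian integration by parts, E[z f(z)] = E[f'(z)],
# with f(z) = z**3, so f'(z) = 3*z**2 and both sides equal E[z^4] = 3.
rng = np.random.default_rng(0)
z = rng.standard_normal(1_000_000)

lhs = np.mean(z * z**3)  # estimates E[z f(z)] = E[z^4] = 3
rhs = np.mean(3 * z**2)  # estimates E[f'(z)] = 3 E[z^2] = 3

print(lhs, rhs)  # both approximately 3
```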
2025 July Prize Spotlight | SIAM Congratulations to the SIAM prize recipients who will be recognized at AN25, ACDA25, CT25, and GD25!

Very excited to share that I've been awarded a SIAM student paper prize! I look forward to seeing any of you who will be at #SIAMAN25 in Montréal. Thanks to the committee for selecting me for this honor www.siam.org/publications...

11.07.2025 18:28 — 👍 5    🔁 1    💬 0    📌 0
Five Years of Blogging – Ethan N. Epperly

Reflections on five years of blogging www.ethanepperly.com/index.php/20...

08.07.2025 21:37 — 👍 3    🔁 0    💬 0    📌 0
Ethan Epperly Email Forms

Also, if you never want to miss a blog post, you can sign up to receive email notifications here! eepurl.com/i5M2P2

16.06.2025 17:25 — 👍 0    🔁 0    💬 0    📌 0

New blog post up about the randomized Kaczmarz algorithm. The classic RK algorithm samples rows according to their squared norms, but what happens if you sample them uniformly? The answer surprised me: uniform sampling is often just as good or even better www.ethanepperly.com/index.php/20...

16.06.2025 17:25 — 👍 2    🔁 0    💬 1    📌 0
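A minimal sketch of the comparison (my own toy experiment, not the one from the post), running RK with both sampling rules on a random consistent system:

```python
import numpy as np

def kaczmarz(A, b, probs, iters, rng):
    """Randomized Kaczmarz: repeatedly project onto a randomly sampled row's hyperplane."""
    x = np.zeros(A.shape[1])
    for i in rng.choice(A.shape[0], size=iters, p=probs):
        a = A[i]
        x += (b[i] - a @ x) / (a @ a) * a
    return x

# Squared-row-norm sampling vs uniform sampling on a consistent system.
rng = np.random.default_rng(1)
m, n = 200, 20
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
b = A @ x_true  # consistent, so RK converges to x_true

sq = np.sum(A**2, axis=1)
x_norm = kaczmarz(A, b, sq / sq.sum(), 5_000, rng)
x_unif = kaczmarz(A, b, np.full(m, 1.0 / m), 5_000, rng)
print(np.linalg.norm(x_norm - x_true), np.linalg.norm(x_unif - x_true))
```

On well-conditioned Gaussian data like this, both rules drive the error to machine precision; the interesting comparisons in the post concern less benign row-norm profiles.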
A Neat Not-Randomized Algorithm: Polar Express – Ethan N. Epperly

New blog post out about the new Polar Express algorithm of Amsel, Persson, Musco, and Gower for computing the matrix sign function with applications to the Muon optimizer www.ethanepperly.com/index.php/20...

07.06.2025 02:03 — 👍 4    🔁 0    💬 0    📌 0
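For context, the simplest classical method for the matrix sign function is the Newton iteration X ← (X + X⁻¹)/2. The sketch below shows that baseline only; it is not the Polar Express algorithm, which replaces the inverse with carefully optimized polynomial updates:

```python
import numpy as np

def sign_newton(A, iters=30):
    """Newton iteration for the matrix sign function: X <- (X + inv(X)) / 2.
    (Classical baseline only; NOT the Polar Express algorithm.)"""
    X = A.copy()
    for _ in range(iters):
        X = 0.5 * (X + np.linalg.inv(X))
    return X

rng = np.random.default_rng(2)
B = rng.standard_normal((5, 5))
A = B + B.T                     # symmetric: real eigenvalues, generically nonzero
S = sign_newton(A)
print(np.linalg.norm(S @ S - np.eye(5)))  # sign(A)^2 = I, so this is tiny
```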
Markov Musings 5: Poincaré Inequalities – Ethan N. Epperly

New blog post out in my series on Markov chains! In this post, I discuss Poincaré inequalities and their connection to mixing of Markov chains www.ethanepperly.com/index.php/20...

24.05.2025 19:17 — 👍 16    🔁 3    💬 1    📌 1
Alt: A lot of red and white Pokeballs falling from the sky

🧩 New week, time for our weaekly quiz! Today, another thing a bit random: Pokémon! Ash and Barry want to catch 'em all: all of them. You know, Pikachu, Jigglypuff, err... Charmander? It's been a while.

So, n Pokémon to catch, and no idea how long it'll take. Gotta help them out! #WeaeklyQuiz

1/

10.03.2025 09:28 — 👍 17    🔁 5    💬 1    📌 3
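The setup reads like the classic coupon-collector problem (an assumption on my part about the quiz's intent), whose answer is n·H_n draws in expectation. A quick simulation against that formula:

```python
import numpy as np

# Coupon-collector reading of the quiz: catching all n Pokemon by uniform
# random draws takes n * H_n draws on average, H_n = 1 + 1/2 + ... + 1/n.
rng = np.random.default_rng(3)
n = 50
expected = n * np.sum(1.0 / np.arange(1, n + 1))  # n * H_n

def one_run():
    seen, draws = set(), 0
    while len(seen) < n:
        seen.add(int(rng.integers(n)))
        draws += 1
    return draws

avg = np.mean([one_run() for _ in range(2_000)])
print(round(expected, 1), round(avg, 1))  # close to each other
```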
We start by computing $x^\top (A\circ M)x$: $$x^\top (A\circ M)x = \sum_{i,j=1}^n x_i (A\circ M)_{ij} x_j = \sum_{i,j=1}^n x_i A_{ij} M_{ij} x_j.$$Now, we may rearrange the sum, use symmetry of $M$, and repackage it as a trace: $$x^\top (A\circ M)x = \sum_{i,j=1}^n x_i A_{ij} x_j M_{ji} = \operatorname{tr}(\operatorname{diag}(x) A \operatorname{diag}(x) M).$$This is the trace formula for quadratic forms in the Schur product.

Ack! Typesetting glitch. It was meant to be diag(x) A diag(x) M

25.02.2025 15:45 — 👍 1    🔁 0    💬 1    📌 0
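The identity x^T(A∘M)x = tr(diag(x) A diag(x) M) is easy to verify numerically:

```python
import numpy as np

# Numerical check of the trace identity
#   x^T (A o M) x = tr(diag(x) A diag(x) M),   M symmetric,
# where o denotes the Schur (entrywise) product.
rng = np.random.default_rng(4)
n = 6
A = rng.standard_normal((n, n))
M = rng.standard_normal((n, n)); M = M + M.T  # symmetry of M is what the proof uses
x = rng.standard_normal(n)

lhs = x @ (A * M) @ x
rhs = np.trace(np.diag(x) @ A @ np.diag(x) @ M)
print(abs(lhs - rhs))  # tiny (rounding error only)
```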
Proof of the Schur product theorem: The Kronecker product $A\otimes M$ of two psd matrices is psd. The entrywise product $A\circ M$ is a principal submatrix of $A\otimes M$: $$A\circ M = ((A\otimes M)_{(i+n(i-1))(i+n(i-1))} : i = 1,\ldots,n).$$All principal submatrices of a psd matrix are psd, so $A\circ M$ is psd.

New blog post with four proofs of the Schur product theorem. Do you know a fifth? www.ethanepperly.com/index.php/20...

25.02.2025 03:23 — 👍 2    🔁 0    💬 1    📌 0
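The principal-submatrix step translates directly to code: the 1-based indices i + n(i−1) from the proof become i·(n+1) for i = 0, …, n−1 in 0-based indexing.

```python
import numpy as np

# A o M sits inside the Kronecker product A (x) M as a principal submatrix,
# at 0-based indices i*(n+1), i = 0, ..., n-1.
rng = np.random.default_rng(5)
n = 5
G1 = rng.standard_normal((n, n)); A = G1 @ G1.T  # random psd matrix
G2 = rng.standard_normal((n, n)); M = G2 @ G2.T  # random psd matrix

idx = np.arange(n) * (n + 1)
sub = np.kron(A, M)[np.ix_(idx, idx)]
print(np.allclose(sub, A * M),                    # the submatrix claim: True
      np.linalg.eigvalsh(A * M).min() >= -1e-10)  # so A o M is psd: True
```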
Note to Self: How Accurate is Sketch and Solve? – Ethan N. Epperly

New blog post up! In it, I look at the question: how accurate is the sketch-and-solve method for least squares? A standard bound suggests the residual is within a 1 + O(η) factor of optimal for an embedding of distortion η. But this isn't the correct answer! www.ethanepperly.com/index.php/20...

14.02.2025 19:22 — 👍 1    🔁 0    💬 0    📌 0
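For reference, sketch-and-solve in a few lines (a toy setup with a Gaussian embedding; the dimensions are arbitrary choices, not taken from the post):

```python
import numpy as np

# Sketch-and-solve for least squares: replace min ||Ax - b|| by the much
# smaller problem min ||(SA)x - (Sb)|| for a random embedding S.
rng = np.random.default_rng(6)
m, n, d = 2_000, 10, 100
A = rng.standard_normal((m, n))
b = A @ rng.standard_normal(n) + 0.1 * rng.standard_normal(m)

x_opt = np.linalg.lstsq(A, b, rcond=None)[0]          # full least squares

S = rng.standard_normal((d, m)) / np.sqrt(d)          # Gaussian embedding
x_sk = np.linalg.lstsq(S @ A, S @ b, rcond=None)[0]   # sketched problem

ratio = np.linalg.norm(A @ x_sk - b) / np.linalg.norm(A @ x_opt - b)
print(ratio)  # modestly above 1
```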

Oooo can you share?

31.12.2024 03:34 — 👍 1    🔁 0    💬 1    📌 0

Delightful little tale by Nick Trefethen: people.maths.ox.ac.uk/trefethen/ba...

16.12.2024 06:25 — 👍 1    🔁 0    💬 0    📌 0
My Favorite Proof of the Cauchy–Schwarz Inequality – Ethan N. Epperly

What is your favorite proof of the Cauchy–Schwarz inequality? I wrote about my favorite proof, which uses matrix theory, in a new blog post. Check it out! Also included: a matrix-theoretic proof of Jensen's inequality for 1/x www.ethanepperly.com/index.php/20...

12.12.2024 15:40 — 👍 3    🔁 2    💬 0    📌 0
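One matrix-theoretic proof (possibly not the one in the post) runs through the 2×2 Gram matrix:

```latex
% The Gram matrix of x and y is positive semidefinite, so its determinant
% is nonnegative; that inequality is exactly Cauchy-Schwarz.
G = \begin{pmatrix}
  \langle x, x \rangle & \langle x, y \rangle \\
  \langle y, x \rangle & \langle y, y \rangle
\end{pmatrix} \succeq 0
\quad \Longrightarrow \quad
\det G = \|x\|^2 \|y\|^2 - |\langle x, y \rangle|^2 \ge 0.
```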
Low-Rank Approximation Toolbox – Ethan N. Epperly

Sorry! This is definitely one of the most "advanced" posts I've written on my blog so far. The earlier posts in my "low-rank approximation toolbox" series might be some help, at least! www.ethanepperly.com/index.php/ca...

09.12.2024 17:58 — 👍 1    🔁 0    💬 0    📌 0
Low-Rank Approximation Toolbox: The Gram Correspondence – Ethan N. Epperly

Did you know that randomized Nyström approximation of A is equivalent to running the randomized SVD on A^(1/2)? This and other surprising facts in this week's blog post on the "Gram correspondence" www.ethanepperly.com/index.php/20...

09.12.2024 17:15 — 👍 18    🔁 6    💬 3    📌 0
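The claimed equivalence can be checked numerically: with B = A^(1/2) and Q an orthonormal basis for range(BΩ), the Nyström approximation AΩ(ΩᵀAΩ)†ΩᵀA equals (QQᵀB)ᵀ(QQᵀB). A sketch (my own verification script, not code from the post):

```python
import numpy as np

# Check: Nystrom approximation of psd A with test matrix Om equals
# Bh^T Bh, where Bh = Q Q^T B is the randomized-SVD approximation of
# B = A^{1/2} with Q = orth(B Om).
rng = np.random.default_rng(7)
n, k = 30, 5
G = rng.standard_normal((n, n)); A = G @ G.T          # random psd matrix

w, V = np.linalg.eigh(A)
B = (V * np.sqrt(np.maximum(w, 0))) @ V.T             # B = A^{1/2}

Om = rng.standard_normal((n, k))
Y = A @ Om
nystrom = Y @ np.linalg.pinv(Om.T @ Y) @ Y.T          # Nystrom approximation

Q = np.linalg.qr(B @ Om)[0]
Bh = Q @ Q.T @ B                                      # randomized SVD applied to A^{1/2}
err = np.linalg.norm(nystrom - Bh.T @ Bh) / np.linalg.norm(A)
print(err)  # tiny: the two approximations coincide
```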

This whole β€œadvent of research” series of posts by David is really excellent, but I love this one in particular

08.12.2024 01:10 — 👍 1    🔁 0    💬 0    📌 0
Randomized Kaczmarz is Asymptotically Unbiased for Least Squares – Ethan N. Epperly

New blog post up! The randomized Kaczmarz algorithm doesn't converge for inconsistent systems of linear equations, but, as an estimator for the least-squares solution, it does have an exponentially decreasing bias www.ethanepperly.com/index.php/20...

05.12.2024 17:22 — 👍 2    🔁 0    💬 0    📌 0
Randomized Kaczmarz with tail averaging The randomized Kaczmarz (RK) method is a well-known approach for solving linear least-squares problems with a large number of rows. RK accesses and processes just one row at a time, leading to exponen...

Check out the paper! arxiv.org/abs/2411.19877

02.12.2024 17:05 — 👍 1    🔁 0    💬 0    📌 0
Error for different randomized Kaczmarz methods applied to a least-squares problem. Tail-averaged randomized Kaczmarz (TARK) outcompetes the existing methods

New paper out with Gil Goldshlager and Rob Webber! In it, we show that *tail averaging* can be used to improve the accuracy of the randomized Kaczmarz method for solving least-squares problems. The resulting method, TARK, outcompetes other row-access methods for least squares

02.12.2024 17:05 — 👍 2    🔁 0    💬 1    📌 0
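A rough sketch of the idea as I understand it (not the paper's code; the burn-in and iteration counts are arbitrary): run RK on an inconsistent system and average the iterates past a burn-in point, which suppresses the noise-floor oscillation of the last iterate.

```python
import numpy as np

# Tail-averaged randomized Kaczmarz (TARK), sketched: RK with squared-row-norm
# sampling, then average iterates after a burn-in phase.
rng = np.random.default_rng(8)
m, n = 400, 10
A = rng.standard_normal((m, n))
b = A @ rng.standard_normal(n) + 0.5 * rng.standard_normal(m)  # inconsistent
x_ls = np.linalg.lstsq(A, b, rcond=None)[0]

p = np.sum(A**2, axis=1); p /= p.sum()   # squared-row-norm sampling
iters, burn_in = 20_000, 10_000
x, tail = np.zeros(n), np.zeros(n)
for t, i in enumerate(rng.choice(m, size=iters, p=p)):
    a = A[i]
    x += (b[i] - a @ x) / (a @ a) * a
    if t >= burn_in:
        tail += x
x_tark = tail / (iters - burn_in)

print(np.linalg.norm(x - x_ls), np.linalg.norm(x_tark - x_ls))
# the tail average lands much closer to the least-squares solution
```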

Cross-posting this - please join us!

Mailing List link: groups.google.com/g/internatio...
YouTube Channel link: www.youtube.com/@MonteCarloS...

30.11.2024 20:51 — 👍 20    🔁 5    💬 2    📌 1
Home About The New York Theory Day is a workshop aimed to bring together the theoretical computer science community in the New York metropolitan area for a day of interaction and discussion. The Theory Da...

A reminder about NY Theory Day in a week! Fri Dec 6th! Talks by Amir Abboud, Sanjeev Khanna, Rotem Oshman, and Ron Rothblum! At NYU Tandon!

sites.google.com/view/nyctheo...

Registration is free, but please register for building access.

See you all there!

30.11.2024 17:04 — 👍 45    🔁 9    💬 1    📌 0

Just created the Starter Pack for Optimization Researchers to help you on your journey into optimization! 🚀

Did I miss anyone? Tag them or let me know what to add!

go.bsky.app/VjpyyRw

23.11.2024 23:59 — 👍 38    🔁 8    💬 14    📌 0

The first d columns of a Haar-random matrix from the orthogonal group O(n), yes

23.11.2024 18:07 — 👍 1    🔁 0    💬 1    📌 0

A uniformly random matrix with orthonormal rows would be another example

23.11.2024 17:31 — 👍 0    🔁 0    💬 1    📌 0
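A standard recipe for generating such a matrix (a sketch: QR of a Gaussian matrix, with the usual sign fix on R's diagonal so the distribution is exactly Haar):

```python
import numpy as np

# First d columns of a Haar-random orthogonal matrix: QR-factorize a Gaussian
# matrix and absorb the signs of R's diagonal into Q.
rng = np.random.default_rng(9)
n, d = 8, 3
G = rng.standard_normal((n, d))
Q, R = np.linalg.qr(G)
Q = Q * np.sign(np.diag(R))  # sign correction; columns are now exactly Haar
print(np.allclose(Q.T @ Q, np.eye(d)))  # orthonormal columns: True
```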

New blog post up presenting some beautiful *exact formulas* for sketched least squares with a Gaussian embedding. These beautiful formulas appear to have only been published as recently as 2020; see post for details! www.ethanepperly.com/index.php/20...

21.11.2024 16:59 — 👍 4    🔁 0    💬 1    📌 0
Five Interpretations of Kernel Quadrature – Ethan N. Epperly

Very excited to be attending #NeurIPS2023 next week where I'll be presenting my work "Kernel quadrature with randomly pivoted Cholesky" with Elvira Moreno. I've written a little blog post to explain what kernel quadrature is and what our approach is to it!

05.12.2023 22:19 — 👍 6    🔁 0    💬 0    📌 0
