
Glenn K. Lockwood

@glennklockwood.mast.hpc.social.ap.brid.gy

I am a supercomputing enthusiast, but I usually don't know what I'm talking about. I post about large-scale infrastructure for #HPC and #AI. 🌉 bridged from https://mast.hpc.social/@glennklockwood on the fediverse by https://fed.brid.gy/

54 Followers  |  0 Following  |  168 Posts  |  Joined: 20.10.2024

Latest posts by glennklockwood.mast.hpc.social.ap.brid.gy on Bluesky

@jannem Not where I spent my afternoon, though 🙂

05.08.2025 05:25 — 👍 0    🔁 0    💬 0    📌 0
Post image

One thing I really enjoy about working at VAST (or perhaps that I enjoy about not working at Microsoft) is that I can go out and talk to people again as part of my job. Here's a view from where I got to spend my afternoon today.

#notHPC #butthatsok

05.08.2025 05:21 — 👍 0    🔁 0    💬 1    📌 0
Original post on mast.hpc.social

I picked a good time to quit, eh? I still track the value of my unvested stock in Yahoo Finance, so I get a daily reminder of how much I walked away from 🙂

Satya's no dummy, but I do worry that the blowback from the heavy layoffs during blue skies was underestimated. Cutting people to drive […]

31.07.2025 03:30 — 👍 0    🔁 0    💬 1    📌 0
Original post on mast.hpc.social

Very cool to see a new neuromorphic system coming online. Excited to see where this architecture goes beyond TrueNorth.

"4320 chips with 152 cores each. The chips are 48 to a board with a single chip consuming between 0.8-2.5W. The whole system fits in a single rack" […]

30.07.2025 13:40 — 👍 0    🔁 0    💬 0    📌 0
Original post on mast.hpc.social

The 13th MVAPICH User Group (MUG) Conference is coming up (Aug 18-20 in Columbus). It's a great event for focused technical presentations around MPI and network performance. Great speaker lineup too. Wish I could make it in person, but virtual attendance is free […]

29.07.2025 21:40 — 👍 0    🔁 1    💬 0    📌 0

I wrote up some notes on how to approach I/O and storage benchmarks in RFPs. I normally don't post here about updates to my digital garden, but I think this page is tidy and useful.

https://www.glennklockwood.com/garden/benchmarking#storage

#HPC

25.07.2025 17:49 — 👍 0    🔁 0    💬 1    📌 0
Original post on mast.hpc.social

TIL that the Linux NFS client recently accepted a "noalignwrite" option that allows you to safely do shared-file writes to non-overlapping, non-4K-aligned extents. With this enabled, you no longer have to use direct I/O to do shared-file writes to NFS.

See […]
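As a rough illustration of what this enables (a hypothetical sketch, not from the original post; the path, offsets, and mount setup below are made up, and it assumes the export is mounted with the noalignwrite option on a kernel that supports it):

    # Two cooperating writers update non-overlapping, non-4K-aligned byte
    # ranges of the same file on an NFS mount using ordinary buffered I/O
    # (no O_DIRECT). Path and extents are illustrative only.
    import os

    def write_extent(path, offset, payload):
        fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o644)
        try:
            os.pwrite(fd, payload, offset)  # positional write at an unaligned offset
        finally:
            os.close(fd)

    # Writer 0 owns bytes [0, 1000); writer 1 owns bytes [1000, 2000).
    # Neither extent is 4 KiB-aligned, and the extents do not overlap.
    write_extent("/mnt/nfs/shared.dat", 0, b"a" * 1000)
    write_extent("/mnt/nfs/shared.dat", 1000, b"b" * 1000)

Presumably the hazard being avoided is page-granular read-modify-write in the client cache clobbering a neighbor's bytes, which is why O_DIRECT was previously required for this pattern.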

22.07.2025 18:42 — 👍 0    🔁 0    💬 0    📌 0
Post image

I am a sucker for photos of cool #HPC infrastructure, and here is a dense GB200 NVL72 cluster going up somewhere in Canada (I think). Impressive to see this many racks in a row; the DC must have facility water, which is still uncommon in hyperscale. Source […]

[Original post on mast.hpc.social]

16.07.2025 16:41 — 👍 2    🔁 1    💬 0    📌 0

Upon hearing I left MSFT, the founder of a big GPU cloud provider reached out with a very strong offer to join them. Funny thing: I applied there back in April and was auto-rejected. Moral of the story: don't just chuck an application over the wall if you have an inside line! Applying online is a crapshoot.

14.07.2025 16:28 — 👍 0    🔁 0    💬 1    📌 0

@jannem The whole HPC world will never go all-cloud, but some of the biggest dominoes could have.

11.07.2025 13:55 — 👍 0    🔁 0    💬 0    📌 0

@kpdooty @glennklockwood Definitely. It's hard to appreciate that contrast until you've been in both worlds.

11.07.2025 12:38 — 👍 0    🔁 0    💬 0    📌 0
Original post on mast.hpc.social

In the few days I have between jobs, I wanted to share an unvarnished perspective on what I've learned after spending three years working on supercomputing in the cloud. It's hastily written and lightly edited, but I hope others find it interesting […]

11.07.2025 05:29 — 👍 2    🔁 5    💬 2    📌 0

Today is my last day at Microsoft. I've learned a lot over the last three years, but I'm ready to try something different.

09.07.2025 12:39 — 👍 2    🔁 1    💬 1    📌 0
Video thumbnail

In the last 18 hours, I've learned way more about adjuvanted vaccines and adverse reactions to them in cats than I ever cared to. And the real kicker is that the choice to re-up the cat's vaccines was an afterthought, because we had to take the other cat in for […]

[Original post on mast.hpc.social]

08.07.2025 17:34 — 👍 0    🔁 0    💬 1    📌 0
Preview
CoreWeave Leads the Way with First NVIDIA GB300 NVL72 Deployment
CoreWeave launches NVIDIA GB300 NVL72, redefining AI infrastructure with breakthrough performance, cloud integration, and next-gen AI model readiness.

This is cool, but the real proof is in the quality of the frontier models that are trained on Blackwell. And by that metric, GB200 NVL72 has yet to deliver anything.

https://www.coreweave.com/blog/coreweave-leads-the-way-with-first-nvidia-gb300-nvl72-deployment

03.07.2025 19:45 — 👍 0    🔁 0    💬 0    📌 0
Original post on mast.hpc.social

NERSC just announced that IBM and VAST have been selected as the storage providers for the upcoming Doudna #HPC system. Strong statement, since NERSC has long invested in Lustre (scratch) and GPFS (community). Very cool to see NERSC not settling for the status quo […]

02.07.2025 17:57 — 👍 0    🔁 0    💬 0    📌 0
Original post on mast.hpc.social

Scott Atchley, who co-keynoted #ISC25, posted a really meaningful response to my ISC25 recap blog post on LinkedIn (https://www.linkedin.com/posts/scottatchley_isc25-olcf-frontier-activity-7345786995765395457-lGoq). He specifically offered additional perspective on the 20 MW exascale milestone […]

02.07.2025 00:28 — 👍 0    🔁 0    💬 0    📌 0

Happy Canada Day, everyone 🇨🇦

01.07.2025 16:08 — 👍 0    🔁 0    💬 0    📌 0
Post image Post image

Photos of a new, big, naked Cerebras cluster in Oklahoma appearing on the socials today. Pretty neat. Wonder if this is another G42 install.

28.06.2025 17:09 — 👍 2    🔁 0    💬 0    📌 0
Original post on mast.hpc.social

This is an amazingly detailed yet accessible description of how tape storage (media, drives, and libraries) works. Even if you don't care about storage, the engineering that goes into making this all work is fascinating.

https://dl.acm.org/doi/10.1145/3708997

(from […]

28.06.2025 16:40 — 👍 2    🔁 1    💬 0    📌 0
Request for Information–Future Generation High Performance Computing Center | HPC @ LLNL
This website enables public access to Request for Information No. HPC-007 (RFI) pertaining to a Future Generation High Performance Computing Center. The RFI points of contact are LLNS Contract Analyst Gary Ward (ward31@llnl.gov) and Distinguished Member of Technical Staff Dr. Todd Gamblin (gamblin2@llnl.gov).

LLNL has an interesting vision of the future of HPC and workflows that aligns with a lot of what I heard at ISC: #HPC is no longer just the supercomputer, but the end-to-end services and ecosystem that enable discovery. The description is in Attachment (1) here: https://hpc.llnl.gov/fg-hpcc-rfi

25.06.2025 23:15 — 👍 0    🔁 0    💬 0    📌 0
Preview
ISC'25 recap
I had the pleasure of attending the 40th annual ISC High Performance conference this month in Hamburg, Germany. It was a delightful way to t...

Here are my notes from ISC'25 in Hamburg: https://blog.glennklockwood.com/2025/06/isc25-recap.html

#HPC

24.06.2025 06:02 — 👍 3    🔁 2    💬 0    📌 0
Post image

65th #Top500 is out!

o The Top 3 #Exascale systems remain unchanged:

#1 El Capitan
#2 Frontier
#3 Aurora

o JUPITER Booster (EuroHPC planned #Exascale system currently being commissioned – hence partial system) at FZJ in Germany at #4 is the only new system in the #Top10

#HPC #AI #ISC25

10.06.2025 09:01 — 👍 17    🔁 10    💬 2    📌 0
Post image

The #ISC25 conference has a strong keynote speaker lineup from around the world this week. Looking forward to hearing all three.

10.06.2025 07:16 — 👍 2    🔁 1    💬 0    📌 0

The Darshan team and ALCF have released a new collection of I/O profiling logs from the Polaris supercomputer's production workloads. It will surely provide excellent insight into how the I/O needs of real #HPC apps have evolved.

https://zenodo.org/records/15353810

06.06.2025 15:56 — 👍 1    🔁 2    💬 1    📌 0
Post image

Here's a photo of an Azure datacenter coming online later this year that "will run a single, large cluster of hundreds of thousands of interconnected NVIDIA GB200 GPUs," with "exabytes of storage" and "millions of CPU compute cores."

Source […]

[Original post on mast.hpc.social]

27.05.2025 20:48 — 👍 0    🔁 0    💬 0    📌 0
Original post on mast.hpc.social

This is wild--"orbital supercomputer" sounded like a misleading headline, but China really is planning to run inference in space on a cluster of satellite nodes. Each node does 744 TOPS, and they use a laser-based interconnect with "up to" 100G.

#HPC #maybe? […]

18.05.2025 17:16 — 👍 1    🔁 0    💬 1    📌 0

Though 27 years old, this article on high-functioning creative teams still holds up. TLDR: You can't pay people to be motivated, but intrinsic motivation is essential to creativity.

https://hbr.org/1998/09/how-to-kill-creativity

12.05.2025 21:04 — 👍 0    🔁 0    💬 0    📌 0
Original post on mast.hpc.social

An MSFT study quantified that cold plates + two-phase cooling can reduce GHGs by 15-21%, energy by 15-20%, and H2O use by 31-52% vs. 100% air cooling. Great to see hyperscale slowly catch up to #HPC; hope this investment feeds back […]

12.05.2025 17:08 — 👍 0    🔁 0    💬 0    📌 0
Original post on mast.hpc.social

Adding cloud-like capabilities to traditional HPC is a hot topic (e.g., at CUG this year: https://www.linkedin.com/posts/bilel-hadri-a898bb3a_hpc-ai-cug2025-activity-7326093163285127168-U_ic). The prevailing method is to DIY a shadow implementation that tries to do cloud, but in its own weird way […]

10.05.2025 15:18 — 👍 0    🔁 0    💬 0    📌 0