Lambda Labs vs Thunder Compute (July 2025): A100 & H100 Pricing, Billing, and Developer Experience
Which GPU cloud is cheaper and easier to use for ML teams—Thunder Compute or Lambda Labs?
Published: Jul 25, 2025
Last updated: Jul 25, 2025

TL;DR:
Thunder Compute: A100 80GB at $0.78/hr and H100 80GB at $1.47/hr, billed per minute, $0.15/GB/mo storage, persistent disk, snapshots, and on-the-fly spec changes.
Lambda Labs: A100 80GB (8‑GPU nodes) at $1.79/hr, A100 40GB at $1.29/hr, H100 80GB at $3.29/hr (single GPU) or $2.99/hr (8‑GPU), billed per minute, storage $0.20/GB/mo. See their pricing page, billing docs, and filesystem pricing.
Pricing: A100 & H100
| Provider | GPU | VRAM | On‑Demand Price (per GPU / hr) | Billing Increment |
|---|---|---|---|---|
| Thunder Compute | A100 80 GB | 80 GB | $0.78/hr | Per minute |
| Thunder Compute | H100 80 GB | 80 GB | $1.47/hr | Per minute |
| Lambda Labs | A100 80 GB (SXM, 8‑GPU nodes) | 80 GB | $1.79/hr | Per minute |
| Lambda Labs | A100 40 GB | 40 GB | $1.29/hr | Per minute |
| Lambda Labs | H100 80 GB (single GPU SXM) | 80 GB | $3.29/hr | Per minute |
| Lambda Labs | H100 80 GB (8‑GPU SXM nodes) | 80 GB | $2.99/hr | Per minute |
Lambda numbers come from the official Lambda Cloud pricing page. Thunder Compute numbers are current as of July 2025.
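To see how the per-GPU-hour rates above translate into a total bill, here is a back-of-the-envelope sketch. The 40-hour, single-GPU run is a hypothetical workload; the rates are the on-demand prices from the table, and storage is excluded.

```python
# Back-of-the-envelope compute cost for a hypothetical 40-hour, single-GPU run,
# using the on-demand rates from the table above (July 2025). Storage excluded.
RATES_PER_GPU_HR = {
    "Thunder Compute A100 80GB": 0.78,
    "Thunder Compute H100 80GB": 1.47,
    "Lambda A100 80GB (8-GPU node)": 1.79,
    "Lambda H100 80GB (single GPU)": 3.29,
    "Lambda H100 80GB (8-GPU node)": 2.99,
}

def run_cost(rate_per_gpu_hr: float, gpus: int, hours: float) -> float:
    """Total on-demand compute cost for a run."""
    return rate_per_gpu_hr * gpus * hours

for name, rate in RATES_PER_GPU_HR.items():
    print(f"{name}: ${run_cost(rate, gpus=1, hours=40):,.2f}")
```

At these rates, the same 40 GPU-hours come to $31.20 on Thunder's A100 versus $71.60 on Lambda's A100 80GB, and $58.80 versus $131.60 for the single-GPU H100s.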
Billing Granularity & Invoices
Thunder Compute: Billed per minute, so you pay only for the minutes an instance actually runs, which matters if you spin instances up and down frequently.
Lambda Labs: Also bills in one-minute increments with weekly invoices; see their on-demand billing docs.
For iterative R&D (short runs, frequent restarts), smaller increments reduce waste.
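Since both providers bill in one-minute increments, the savings here come from not rounding short runs up to a full hour. A minimal sketch of that difference, assuming a hypothetical pattern of 30 twelve-minute runs at the A100 rate:

```python
import math

# Why billing increments matter for short, frequent runs.
# Hypothetical workload: 30 runs of 12 minutes each at $0.78/hr (A100 80GB rate).
rate_per_hr = 0.78
runs, minutes_per_run = 30, 12

per_minute_billing = runs * minutes_per_run * (rate_per_hr / 60)          # bill the actual minutes
per_hour_billing = runs * math.ceil(minutes_per_run / 60) * rate_per_hr   # round each run up to 1 hr

print(f"Per-minute increments: ${per_minute_billing:.2f}")  # $4.68
print(f"Per-hour increments:   ${per_hour_billing:.2f}")    # $23.40
```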
Storage & Data Persistence
Thunder Compute: Persistent storage is attached to each instance by default; snapshots and spec changes (RAM, vCPU, storage) can be made without tearing everything down. Storage is $0.15/GB/month.
Lambda Labs: Filesystems cost $0.20/GB/month and are billed hourly even when not mounted. Details: filesystem pricing and Lambda’s storage expansion announcement.
If you keep large datasets around or iterate across many experiments, Thunder’s lower storage cost and built-in persistence help keep bills predictable.
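For a rough sense of scale, here is the monthly storage bill for a hypothetical 500 GB of persistent data at the per-GB rates quoted above:

```python
# Monthly storage cost for a hypothetical 500 GB of persistent data,
# at the per-GB/month rates quoted above.
dataset_gb = 500
thunder_rate_gb_mo = 0.15  # Thunder Compute, $/GB/month
lambda_rate_gb_mo = 0.20   # Lambda Labs filesystem, $/GB/month

print(f"Thunder Compute: ${dataset_gb * thunder_rate_gb_mo:.2f}/month")  # $75.00
print(f"Lambda Labs:     ${dataset_gb * lambda_rate_gb_mo:.2f}/month")   # $100.00
```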
Developer Experience & Workflow
Thunder Compute: One-click VS Code integration, simple dashboard, easy spec changes—built for fast iteration by ML engineers and indie researchers.
Lambda Labs: Straightforward cloud UI, Ubuntu images, volume management, and larger node options—but no native one-click VS Code setup like Thunder’s.
If your dev loop lives in VS Code and you want zero setup friction, Thunder’s UX advantage is tangible.
Scaling, Flexibility & Multi-GPU Nodes
Lambda Labs offers 8× GPU SXM nodes (A100/H100) with NVLink—great for large-scale training where you need high interconnect bandwidth. See their pricing.
Thunder Compute lets you hot-swap hardware or upgrade instance resources without a rebuild—ideal for teams that start small and scale gradually, or switch GPU types often.
Who Should Choose Which?
Choose Thunder Compute if:
You want lower on-demand prices for A100/H100.
You value per-minute billing and default persistent storage/snapshots.
Your team wants fast VS Code integration and minimal setup overhead.
Choose Lambda Labs if:
You need large multi-GPU SXM nodes immediately (8× H100/A100).
You’re fine with paying higher per-GPU-hour rates and higher storage costs.
You already rely on Lambda’s tooling or need their specific hardware configs.
Final Take
For most ML teams optimizing cost and iteration speed, Thunder Compute’s pricing and feature set (persistent storage, snapshots, spec tweaks) provide a strong edge. Lambda Labs remains a solid choice if you require big SXM boxes out of the gate, but you’ll pay more per GPU-hour and per GB of storage.

Carl Peterson