Lambda Labs Alternatives: Low-price A100 and H100 Options
Fast, developer-friendly GPU clouds you can use today
Published: Aug 14, 2025
Last updated: Aug 14, 2025

TL;DR
If you like Lambda’s managed feel but want lower on-demand prices or simpler dev UX, start with Thunder Compute (A100 80 GB at $0.78 per hour, H100 at $1.47 per hour) and compare against Runpod, Crusoe, Voltage Park, Modal, Paperspace, and marketplace options like Vast.ai.
Quick picks
Cheapest on-demand A100/H100: Thunder Compute (pricing page) – per-second billing, persistent storage, one-click VS Code integration.
Large marketplace with consumer cards: Vast.ai (overview of GPU marketplace) – crowdsourced supply, many RTX 4090 options.
Enterprise-y alternative with public rates: Crusoe (on-demand pricing FAQ).
Low H100 headline price at scale: Voltage Park (pricing) – H100 from $1.99 per hour.
Serverless and workflows: Modal (pricing) – per-second rates translate to ~$2.50/hr for A100, ~$3.95/hr for H100.
Broad ecosystem and notebooks: Paperspace via DigitalOcean (pricing details) – H100 on-demand ~$5.95/hr; A100 also available.
Pricing snapshot (A100 and H100)
Rates are on-demand list prices where published. Some providers sell multi-GPU nodes; figures shown are per-GPU where the provider publishes per-GPU pricing.
| Provider | A100 80 GB ($/hr) | H100 80 GB ($/hr) | Notes |
|---|---|---|---|
| Thunder Compute | 0.78 | 1.47 | Per-second billing, persistent storage at $0.15/GB/mo, snapshots, on-the-fly vCPU/RAM changes, one-click VS Code. [Thunder pricing] |
| Lambda Labs | 1.79 | 2.99 | Published per-GPU pricing on their 8x nodes. [Lambda GPU Cloud pricing] |
| Runpod | 1.64–1.74 | from 1.99 | A100 PCIe 1.64, A100 SXM 1.74; H100 starts at 1.99. [Runpod pricing] |
| Crusoe Cloud | 1.95 (SXM) | 3.90 | Public on-demand table. [Crusoe pricing] |
| Voltage Park | n/a | from 1.99 | H100 headline on-demand price. [Voltage Park pricing] |
| Modal | ~2.50 | ~3.95 | Per-second rates converted to hourly. [Modal pricing] |
| Paperspace | 3.09 (40 GB) or 3.18 (80 GB) | 5.95 | On-demand per official docs. [Paperspace pricing] |
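To see what these rates mean in practice, here is a quick sketch that totals the cost of a hypothetical 100-hour single-GPU H100 run at each provider's on-demand list price from the table above. The 100-hour duration is an illustrative assumption, not a benchmark.

```python
# Hourly on-demand H100 list prices ($/hr) from the comparison table.
h100_rates = {
    "Thunder Compute": 1.47,
    "Runpod": 1.99,
    "Lambda Labs": 2.99,
    "Crusoe Cloud": 3.90,
    "Modal": 3.95,
    "Paperspace": 5.95,
}

hours = 100  # illustrative run length, not a benchmark

# Total cost of the run at each provider, cheapest first.
costs = {provider: round(rate * hours, 2) for provider, rate in h100_rates.items()}
for provider, cost in sorted(costs.items(), key=lambda kv: kv[1]):
    print(f"{provider}: ${cost:,.2f}")
```

Swap in the A100 column (or your own negotiated rates) to compare other configurations the same way.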
Marketplace vs managed clouds (important if you need consumer GPUs)
Marketplaces can deliver the lowest cost, but host consistency varies.
Vast.ai is a decentralized, peer-to-peer marketplace that aggregates GPUs from both individuals and datacenters, including consumer cards like the RTX 4090. Supply varies widely, but prices are often lower.
Runpod Community Cloud also lists consumer GPUs with transparent starting prices and community-provided capacity.
If consistent performance, multi-GPU NVLink, or enterprise networking matters, managed clouds (Thunder Compute, Lambda Labs, Crusoe, Voltage Park) are more predictable.
Why teams pick Thunder Compute
Lowest on-demand A100/H100 rates in this comparison: A100 80 GB for $0.78/hr; H100 for $1.47/hr.
Developer velocity: one-click VS Code, per-second billing, persistent disks, snapshots, and dynamic vCPU/RAM adjustments.
Simple pricing model: storage at $0.15/GB/month.
See the Thunder Compute pricing page for up-to-date details.
How to choose
For multi-GPU training with fast interconnect: opt for managed providers that explicitly publish SXM node specs and interconnect performance.
For fast prototyping or fine-tuning: prioritize per-second billing, quick restart speeds, and persistent storage.
To minimize cash burn: compare hourly A100 vs H100 costs. For prototyping, the A100 often delivers more compute per dollar; an H100 only pays off when its throughput advantage exceeds the price gap.
If you need consumer GPUs: they suit workloads like image generation or lightweight training, but verify VRAM, driver compatibility, and host stability before committing.
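The A100-vs-H100 trade-off above reduces to a simple break-even check: for a fixed amount of work, the H100 is only cheaper when its speedup on your workload exceeds the hourly price ratio. A minimal sketch, using Thunder Compute's list prices from this article (the speedup factor is something you would measure on your own workload):

```python
# Thunder Compute on-demand list prices ($/hr) cited in this article.
a100_rate = 0.78  # A100 80 GB
h100_rate = 1.47  # H100

# The H100 must run your job at least this many times faster
# than the A100 to cost less for the same total work.
break_even_speedup = h100_rate / a100_rate  # ~1.88x

def cheaper_gpu(h100_speedup: float) -> str:
    """Return the cheaper GPU for a fixed workload, given the
    measured H100-over-A100 speedup factor (an assumed input)."""
    return "H100" if h100_speedup > break_even_speedup else "A100"
```

For example, if profiling shows your fine-tuning job runs 2x faster on an H100, `cheaper_gpu(2.0)` picks the H100; at a 1.5x speedup, the A100 still wins on cost.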

Carl Peterson