NVIDIA H100 Pricing (September 2025): Cheapest On-Demand Cloud GPU Rates

The table below compares current on-demand, hourly rental prices for a single NVIDIA H100 80GB GPU across major U.S. cloud providers. Prices are normalized per GPU (even when a provider sells only multi-GPU instances) and reflect standard on-demand rates in U.S. regions; spot, reserved, and non-U.S. pricing are excluded.
*Normalized September 2025 cost per single H100 GPU, even when the provider offers only multi-GPU instances.*
Methodology (why you can trust these numbers)
- On-demand only: No reserved-instance, commitment, or prepaid discounts.
- Same class of silicon: All prices refer to NVIDIA H100 80GB GPUs. Thunder Compute’s A100 80GB rate is also shown to help developers evaluate cost-performance tradeoffs.
- Public price lists: Every figure comes from the provider’s current pricing page (or public documentation) on the date above; where a provider sells only 8-GPU nodes we divide by eight to get a single-GPU equivalent.
- USD in U.S. regions: Rates in other regions can differ by 5–20%.
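The normalization rule above (divide an 8-GPU node's hourly price by eight) can be sketched as a small helper. The node rate below is a placeholder for illustration, not one of the surveyed prices:

```python
def per_gpu_hourly(node_price_per_hour: float, gpus_per_node: int) -> float:
    """Normalize an on-demand node price to a single-GPU hourly equivalent."""
    if gpus_per_node < 1:
        raise ValueError("gpus_per_node must be >= 1")
    return node_price_per_hour / gpus_per_node

# Hypothetical example: a provider that sells only 8x H100 nodes
node_rate = 24.00  # placeholder USD/hour for an 8-GPU node, not a surveyed figure
print(per_gpu_hourly(node_rate, 8))  # -> 3.0 USD per GPU-hour
```

The same helper covers providers that sell 1-, 2-, or 4-GPU instances, so every row in the table ends up on the same per-GPU basis.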
A100 cost-performance benchmark
Thunder Compute’s A100 80GB rate is included for reference. While the A100 is a generation older, it remains highly capable for many workloads and offers dramatically better cost-efficiency for prototyping, fine-tuning, and small-scale training.
Why this matters for developers
Price sources: Thunder Compute pricing page, Lambda Labs “GPU Cloud” grid, RunPod pricing, Vast.ai median market price, and the DataCrunch hyperscaler comparison for AWS, Google Cloud, and Azure.
Result: Two hours on Thunder Compute’s A100 costs less than 15 minutes on an AWS or GCP H100, and the A100 still gives you roughly 15× more runtime per dollar than hyperscaler H100s.
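The runtime-per-dollar comparison behind that result reduces to a couple of lines of arithmetic. The rates below are illustrative placeholders chosen to match a 15× gap, not the actual September 2025 figures:

```python
def runtime_per_dollar(price_per_hour: float) -> float:
    """Hours of GPU time one dollar buys at a given on-demand rate."""
    return 1.0 / price_per_hour

# Placeholder rates (USD/GPU-hour), not the surveyed prices
a100_rate = 0.66   # hypothetical budget-provider A100 rate
h100_rate = 9.90   # hypothetical hyperscaler H100 rate

# Ratio of runtime per dollar: how many A100-hours per H100-hour of spend
ratio = runtime_per_dollar(a100_rate) / runtime_per_dollar(h100_rate)
print(round(ratio, 1))  # -> 15.0

# Sanity check on the headline claim: 2 A100-hours cost less than 15 H100-minutes
print(2 * a100_rate < 0.25 * h100_rate)  # -> True
```

At any pair of rates with a gap this wide, the same two comparisons hold, which is why the claim survives quarter-to-quarter price drift.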
Takeaways
- Thunder Compute’s A100 rate is 4–8× cheaper than H100 pricing at AWS or GCP and roughly 2× cheaper than Azure’s, making it the clear price-performance leader in this comparison.
- Specialized providers like Vast.ai, RunPod, and Lambda have narrowed the gap, but they still charge 2–3× more than Thunder Compute for equivalent runtime.
- Unless your workload truly needs H100 features (Transformer Engine, higher bandwidth, etc.), the A100 often delivers the best ROI for prototyping, fine-tuning, and small-scale training.
- Bookmark this page—we refresh the numbers quarterly so you don’t have to.
- Building a startup? See our analysis of Startup-Friendly GPU Cloud Providers for credit offers, then spin up an A100 or H100 on Thunder Compute.