The table below compares current on-demand hourly rental prices for a single NVIDIA H100 80GB GPU across major U.S. cloud providers. Prices are normalized per GPU (even when a provider only offers multi-GPU instances) and reflect standard on-demand rates in U.S. regions (no spot, reserved, or non-U.S. pricing).
*Normalized January 2026 cost per single H100 GPU, even when only multi-GPU instances are offered by the provider.
Methodology (why you can trust these numbers)
<ul>
<li><strong>On-demand only:</strong> No reserved-instance, commitment, or prepaid discounts.</li>
<li><strong>Same class of silicon:</strong> All prices refer to NVIDIA H100 80GB GPUs. Thunder Compute’s A100 80GB rate is also shown to help developers evaluate cost-performance tradeoffs.</li>
<li><strong>Public price lists:</strong> Every figure comes from the provider’s current pricing page (or public documentation) on the date above; where a provider sells only 8-GPU nodes, we divide by eight to get a single-GPU equivalent (see the sketch after this list).</li>
<li><strong>USD in U.S. regions:</strong> Rates elsewhere can differ by 5–20%.</li>
</ul>
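To make the "divide by eight" normalization concrete, here is a minimal sketch of the calculation. The provider name and the $98.32/hr node price below are placeholders for illustration, not figures from the table above:

```python
# Illustrative only: placeholder figures, not the prices from the table above.
from dataclasses import dataclass

@dataclass
class Listing:
    provider: str
    hourly_price_usd: float   # on-demand list price for the whole instance
    gpus_per_instance: int    # e.g. 8 where a provider only sells 8-GPU H100 nodes

def per_gpu_rate(listing: Listing) -> float:
    """Normalize an instance-level price to a single-GPU hourly equivalent."""
    return listing.hourly_price_usd / listing.gpus_per_instance

# Hypothetical 8-GPU node listed at $98.32/hr -> $12.29 per GPU-hour
node = Listing("hypothetical-hyperscaler", 98.32, 8)
print(f"{node.provider}: ${per_gpu_rate(node):.2f} per GPU-hour")
```

The same division is applied to every provider in the table that only sells multi-GPU nodes, so all rows are comparable on a per-GPU-hour basis.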
Why this matters for developers
Price sources: Thunder Compute pricing page, Lambda Labs “GPU Cloud” grid, RunPod pricing, Vast.ai median market price, and the DataCrunch hyperscaler comparison for AWS, Google Cloud, and Azure.
Result: Two hours on Thunder Compute’s H100s cost less than 15 minutes on AWS or GCP H100s.
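To see the arithmetic behind that claim, here is a minimal sketch. The per-GPU rates below are assumed placeholders, not the figures from the table above; substitute the real rates when you run it:

```python
# Placeholder per-GPU hourly rates, for illustration only; use the table above for real numbers.
ASSUMED_RATES_USD_PER_GPU_HOUR = {
    "low-cost provider (assumed)": 1.47,
    "hyperscaler (assumed)": 12.29,
}

def job_cost(rate_per_gpu_hour: float, hours: float, num_gpus: int = 1) -> float:
    """Cost of running a job for `hours` on `num_gpus` GPUs at the given hourly rate."""
    return rate_per_gpu_hour * hours * num_gpus

# Two hours at the assumed low rate vs. fifteen minutes at the assumed hyperscaler rate.
two_hours_low = job_cost(ASSUMED_RATES_USD_PER_GPU_HOUR["low-cost provider (assumed)"], 2.0)
fifteen_min_high = job_cost(ASSUMED_RATES_USD_PER_GPU_HOUR["hyperscaler (assumed)"], 0.25)
print(f"2 hours @ $1.47/GPU-hr:  ${two_hours_low:.2f}")
print(f"15 min  @ $12.29/GPU-hr: ${fifteen_min_high:.2f}")
# The two-hours-vs-fifteen-minutes claim holds whenever the higher rate exceeds 8x the lower one.
```

The takeaway is the ratio, not the specific dollar amounts: any time one provider charges more than eight times another's per-GPU-hour rate, fifteen minutes on the expensive provider costs more than two hours on the cheap one.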
Takeaways
<ul>
<li>Thunder Compute’s H100 rate is <strong>4–8× cheaper</strong> than AWS or GCP and <strong>≈2× cheaper</strong> than Azure.</li>
<li>Specialized providers like Vast.ai, RunPod, and Lambda have narrowed the gap, but they still charge <strong>2–3× more</strong> than Thunder Compute for equivalent runtime.</li>
<li>Bookmark this page—we refresh the numbers quarterly so you don’t have to.</li>
<li>Building a startup? See our analysis of <strong>Startup-Friendly GPU Cloud Providers</strong> for credit offers, and spin up an H100 on <a href="https://www.thundercompute.com/">Thunder Compute</a>.</li>
</ul>
