While the AMD Instinct™ MI300X was promoted as the "H100 killer" for its 192 GB of HBM3 memory, its actual value depends on your application and the provider.
The breakdown below compares current MI300X on-demand pricing, a market where niche clouds are aggressively undercutting hyperscale giants to win over developers running large-scale LLM inference.
Breakdown
<ul><li><strong>Vultr and TensorWave</strong> have opened the MI300X price war below $2/hr.</li><li><strong>RunPod</strong> remains the easiest MI300X self-service experience under $3/hr.</li><li><strong>Hyperscalers (Azure, Oracle)</strong> still charge 3-4× more than niche clouds.</li><li><strong>Thunder Compute H100s</strong> remain the price-performance leader for most workflows.</li></ul>
Current on-demand MI300X prices (per GPU)
*Normalized to a single MI300X, even when only multi-GPU nodes are offered.*
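The per-GPU normalization is straightforward division of the node price by GPU count. A minimal sketch, where the node price and GPU count are placeholders rather than quotes from any provider:

```python
def per_gpu_price(node_hourly_usd: float, gpus_per_node: int) -> float:
    """Normalize a multi-GPU node's hourly price to a single-GPU price."""
    return node_hourly_usd / gpus_per_node

# Hypothetical example: an 8x MI300X node billed at $16.00/hr
# works out to $2.00 per GPU-hour.
print(per_gpu_price(16.00, 8))
```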
Methodology
Why you can trust these numbers:
<ul><li><strong>On-demand only</strong> - no reservations or prepaid contracts.</li><li><strong>Same silicon</strong> - every figure refers to AMD MI300X (192 GB) SKUs.</li><li><strong>Public price lists</strong> - pulled December 16, 2025 from each provider's published pricing pages.</li><li><strong>U.S. regions, USD</strong> - prices elsewhere vary by 5-20%.</li></ul>
Cost-performance snapshot
Thunder Compute's H100 still delivers more runtime per dollar than most on-demand MI300X options, making it the cost-efficiency pick for prototyping and fine-tuning.
Why this matters for developers
<ul><li><strong>More memory, new math</strong> - MI300X packs 192 GB HBM3 and FP8 support, ideal for 70B+ parameter models.</li><li><strong>Price gap vs. H100</strong> - Even the cheapest MI300X ($1.85/hr) costs ~34% more than the lowest H100 price ($1.38/hr).</li><li><strong>H100 still wins ROI</strong> - Unless your model truly needs 192 GB or FP8, an H100 80 GB at $1.38/hr often delivers better value.</li><li><strong>Bookmark this page</strong> - we refresh the numbers quarterly so you don't have to.</li></ul>
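The price-gap arithmetic above can be sanity-checked in a couple of lines. This sketch uses the two headline rates from this page; the helper name is our own, not a provider API:

```python
# Cheapest on-demand rates cited above, USD per GPU-hour.
MI300X_HOURLY = 1.85
H100_HOURLY = 1.38

def premium_pct(price: float, baseline: float) -> float:
    """Percent premium of `price` over `baseline`."""
    return (price / baseline - 1) * 100

gap = premium_pct(MI300X_HOURLY, H100_HOURLY)
print(f"MI300X costs {gap:.0f}% more per hour than the cheapest H100")
```

The same helper works in the other direction: `premium_pct(H100_HOURLY, MI300X_HOURLY)` shows the H100 discount relative to the MI300X, which is the number to use when budgeting a fine-tuning run around the cheaper card.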
