AMD MI300X Pricing (September 2025): Cheapest High‑Memory GPUs in the Cloud
On‑demand hourly rates for AMD’s 192 GB MI300X across major U.S. providers
Published: Aug 1, 2025
Last updated: Sep 16, 2025

Current on‑demand MI300X prices (per GPU)
| Provider | SKU / Instance | On‑Demand $/GPU‑hr* | Notes |
|---|---|---|---|
| Vultr | MI300X single‑GPU | $1.85 | Chicago region, pay‑as‑you‑go |
| TensorWave | 8 × MI300X bare‑metal | $1.50 | $12/hr node ÷ 8 GPUs |
| RunPod | MI300X 192 GB | $2.99 | Community Cloud on‑demand |
| | MI300X 192 GB | $3.30 | Hourly marketplace rate |
| Vultr | 8 × MI300X cluster | $3.99 | $31.92/hr node ÷ 8 GPUs |
| Oracle Cloud | BM.GPU.MI300X.8 | $6.00 | 8‑GPU bare‑metal ($48/hr) |
| Azure | ND MI300X v5 (8 × MI300X) | $7.86 | $62.85/hr VM ÷ 8 GPUs in West US |
*Normalized to a single MI300X, even when only multi‑GPU nodes are offered.
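If you want to reproduce the normalization yourself, it is just the hourly node price divided by the number of GPUs in the node. Here is a minimal Python sketch using the node prices quoted in the table above; the helper function and dictionary are ours, not any provider's API:

```python
# Per-GPU normalization used in the table above: node price / GPUs per node.
# Node-level prices are taken from the table; names below are illustrative.

def per_gpu_rate(node_price_per_hr: float, gpus_per_node: int) -> float:
    """Divide an hourly node price evenly across its GPUs."""
    return node_price_per_hr / gpus_per_node

nodes = {
    "TensorWave 8x MI300X":   (12.00, 8),  # -> $1.50 per GPU-hr
    "Vultr 8x MI300X":        (31.92, 8),  # -> $3.99 per GPU-hr
    "Oracle BM.GPU.MI300X.8": (48.00, 8),  # -> $6.00 per GPU-hr
    "Azure ND MI300X v5":     (62.85, 8),  # -> ~$7.86 per GPU-hr
}

for name, (price, gpus) in nodes.items():
    print(f"{name}: ${per_gpu_rate(price, gpus):.2f} per GPU-hour")
```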
Methodology (why you can trust these numbers)
On‑demand only – no reservations or prepaid contracts.
Same silicon – every figure refers to AMD MI300X (192 GB) SKUs.
Public price lists – pulled September 16, 2025 from each provider’s published pricing pages.
U.S. regions, USD – prices elsewhere vary by 5‑20 %.
Cost‑performance snapshot
| Provider / GPU | Two‑hour runtime | Effective cost |
|---|---|---|
| Thunder Compute – A100 80 GB | 2 × $0.78 | $1.56 |
| Vultr – MI300X | 2 × $1.85 | $3.70 |
| TensorWave – MI300X | 2 × $1.50 | $3.00 |
| RunPod – MI300X | 2 × $2.99 | $5.98 |
| Azure – MI300X | 2 × $7.86 | $15.72 |
At $0.78/hr, Thunder Compute's A100 still delivers roughly 2‑4× more runtime per dollar than the cheapest on‑demand MI300X options, and roughly 8‑10× more than the hyperscaler rates, making it the cost‑efficiency pick for prototyping and fine‑tuning.
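For readers who want to check the runtime‑per‑dollar math, here is a short sketch using the hourly rates from the tables above. It only compares raw GPU‑hours per dollar and deliberately ignores per‑chip performance differences:

```python
# Rough "runtime per dollar" math behind the comparison above.
# Hourly rates come from the tables in this post.

A100_RATE = 0.78  # Thunder Compute A100 80 GB, $ per GPU-hr

mi300x_rates = {
    "Vultr MI300X": 1.85,
    "TensorWave MI300X": 1.50,
    "RunPod MI300X": 2.99,
    "Oracle MI300X": 6.00,
    "Azure MI300X": 7.86,
}

for name, rate in mi300x_rates.items():
    # Hours per dollar is 1 / hourly rate, so the ratio is rate / A100_RATE.
    ratio = rate / A100_RATE
    print(f"A100 buys {ratio:.1f}x more GPU-hours per dollar than {name}")
```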
Why this matters for developers
More memory, new math – MI300X packs 192 GB HBM3 and FP8 support, ideal for 70 B+ parameter models (see the rough memory estimate after this list).
Price gap vs. H100 – Even the cheapest MI300X ($1.85/hr) costs ~25 % less than the lowest H100 price we tracked in July 2025.
A100 still wins ROI – Unless your model truly needs 192 GB or FP8, an A100 80 GB at $0.78/hr often delivers better value.
Bookmark this page – we refresh the numbers quarterly so you don’t have to.
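As a back‑of‑the‑envelope check on the first bullet, here is a weights‑only memory estimate for a dense 70 B‑parameter model. It is a simplified sketch that ignores KV cache, activations, and framework overhead, so treat the numbers as a lower bound:

```python
# Weights-only memory footprint for a dense 70B-parameter model.
# Simplified: ignores KV cache, activations, optimizer state, and overhead.

PARAMS = 70e9  # 70 billion parameters

bytes_per_param = {
    "FP16/BF16": 2,  # ~140 GB: exceeds an 80 GB A100, fits in 192 GB HBM3
    "FP8": 1,        # ~70 GB: fits with plenty of headroom for KV cache
}

for precision, nbytes in bytes_per_param.items():
    gb = PARAMS * nbytes / 1e9
    print(f"{precision}: ~{gb:.0f} GB of weights")
```

In FP16 the weights alone are about 140 GB, which already overflows an 80 GB A100 but fits on a single 192 GB MI300X; in FP8 they drop to roughly 70 GB, leaving headroom for long‑context KV cache.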
Takeaways
Vultr and TensorWave have opened the MI300X price war below $2/hr.
RunPod remains the easiest MI300X self‑service experience under $3/hr.
Hyperscalers (Azure, Oracle) still charge roughly 3‑5× the rates of the cheapest niche clouds.
Thunder Compute A100s stay the absolute price‑performance leader for most workflows.

Carl Peterson