Vast.ai Alternatives (August 2025): Reliable and low-cost cloud GPUs
Shortlist of reliable, low-cost A100 and H100 providers with quick context on marketplaces vs datacenter GPUs
Published: Aug 14, 2025
Last updated: Aug 14, 2025

Vast.ai is a GPU marketplace. Hosts range from individual hobbyists to large datacenters, which is why you will often see consumer GPUs like the RTX 4090 and 3090 available. Pricing is set in real time and varies by host and configuration. See the Vast.ai docs and RTX 4090 page for how the marketplace works and the types of cards commonly listed.
When marketplaces shine
Great for bursty inference, experiments, or hobby training on consumer cards
Lowest headline prices if you can shop around and tolerate variability
When they struggle
Mixed reliability and performance due to heterogeneous hardware
Price and availability can fluctuate by the hour
Security and compliance needs may require vetted datacenter hosts
Below are simple, on-demand per-GPU prices for A100 80 GB and H100 80 GB from well known providers. These are standard list rates as of August 2025. Always check the linked pricing pages for the latest numbers.
Thunder Compute
A100 80 GB: $0.78/hr
H100 80 GB: $1.47/hr (coming soon)
Per-second billing, persistent instance storage, snapshots, and the ability to change RAM, vCPUs, and storage on the fly. One-click VS Code integration and a simple console. Storage is $0.15/GB/mo.
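To see why per-second billing matters for short jobs, here is a rough sketch using the A100 80 GB list rate above. The job duration is hypothetical, and the hourly-billing comparison assumes usage rounds up to whole hours, which is a common but not universal policy:

```python
import math

A100_HOURLY = 0.78  # $/hr, list rate quoted above

def cost_per_second(seconds: float) -> float:
    """Bill only the seconds actually used."""
    return A100_HOURLY * seconds / 3600

def cost_hourly_rounded(seconds: float) -> float:
    """Round usage up to whole hours, as hourly billing typically would."""
    return A100_HOURLY * math.ceil(seconds / 3600)

# A hypothetical 10-minute smoke test:
job_seconds = 10 * 60
print(f"per-second billing: ${cost_per_second(job_seconds):.2f}")      # $0.13
print(f"hourly billing:     ${cost_hourly_rounded(job_seconds):.2f}")  # $0.78
```

For bursty workloads made of many short runs, that rounding difference compounds quickly.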
Lambda
A100 80 GB SXM: $1.79/hr
H100 80 GB SXM: $2.99/hr
See the official pricing tables. Lambda on-demand pricing.
Crusoe Cloud
A100 80 GB SXM: $1.95/hr
H100 80 GB SXM: $3.90/hr
Crusoe shows per-GPU list pricing and reserved options. Crusoe pricing.
CoreWeave
A100 80 GB NVLink: $2.21/hr
H100 HGX: $4.76/hr
Public price card with storage rates. CoreWeave pricing.
Paperspace (DigitalOcean)
H100 80 GB on-demand: $5.95/hr
Details in the official pricing page. Paperspace pricing.
Genesis Cloud
H100 SXM starting at $1.60/hr per GPU. Note the minimum for HGX systems is an 8-GPU node. Genesis Cloud pricing.
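To put the A100 list rates above side by side for a longer job, here is a quick sketch. The rates are copied from this page, and the 100-hour single-GPU run is a hypothetical workload; always check each provider's pricing page before budgeting:

```python
# On-demand A100 80 GB list rates ($/hr) quoted above, August 2025.
# Genesis Cloud is omitted: its listed price is for H100 SXM with an
# 8-GPU node minimum, so it is not comparable per single A100.
a100_rates = {
    "Thunder Compute": 0.78,
    "Lambda": 1.79,
    "Crusoe Cloud": 1.95,
    "CoreWeave": 2.21,
}

hours = 100  # hypothetical single-GPU training run

for provider, rate in sorted(a100_rates.items(), key=lambda kv: kv[1]):
    print(f"{provider}: ${rate * hours:,.2f}")
# Thunder Compute: $78.00
# Lambda: $179.00
# Crusoe Cloud: $195.00
# CoreWeave: $221.00
```

The spread is wide enough that for sustained training, the rate matters more than any billing-granularity difference.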
Note on consumer cards vs datacenter GPUs
Marketplaces commonly list consumer GPUs because supply is crowdsourced. That is why you will see cards like RTX 4090 and 3090 in abundance. If you need predictable training throughput, a tested datacenter A100 80 GB or H100 80 GB is usually the safer choice. See Vast.ai’s explanation of community vs datacenter servers in their docs. Vast.ai overview.
Why teams switch to Thunder Compute
Lowest A100 80 GB price listed here, billed per second
H100 80 GB at $1.47/hr keeps training affordable
Persistent instance storage, snapshots, and spec changes without rebuilds
One-click VS Code and a minimal interface that is easy to onboard
If you are coming from a marketplace, you can expect fewer surprises in performance and uptime, while still paying less than most datacenter clouds.

Carl Peterson