Best Paperspace Alternatives (August 2025): Real A100/H100 Prices, Contracts, and Who Each Provider Fits
Compare alternatives to Paperspace including Thunder Compute, Vast.ai, Lambda, Runpod, and more. See pricing, availability, and selection.
Published:
Apr 25, 2025
Last updated:
Aug 1, 2025

TL;DR
If you need on‑demand A100s without lock‑in, Thunder Compute is the cheapest mainstream option at $0.66/GPU‑hr. If you want serverless inference and a large community marketplace, Runpod is strong (A100 Serverless ~$2.17/GPU‑hr Active). For large clusters or H100/H200 access, consider Lambda, CoreWeave, or Oracle Cloud (OCI lists H100 at $10/GPU‑hr). Paperspace’s eye‑catching $1.15/hr A100 requires a 3‑year commitment and a Growth plan for many GPUs; on‑demand A100 is $3.09/hr. Links and numbers below are from provider pricing pages or official docs.
Quick comparison (on‑demand or list price in the U.S., where available)
Prices move fast. Always check each provider’s page before you launch.
Provider | Example A100 price | Example H100 price | Contracts / Notes | Best for |
---|---|---|---|---|
Thunder Compute | $0.66/hr (A100 40GB) | $1.47/hr | True pay‑as‑you‑go. Per‑minute billing. Pricing. | Lowest on‑demand cost for experiments, fine‑tuning, and dev. |
Paperspace (DigitalOcean) | $3.09/hr (A100 40GB) | $5.95/hr | $1.15/hr for A100 is 3‑year reserved; many GPUs need Growth plan ($39/mo). Pricing · Docs. | Teams already on Gradient; long reservations. |
Runpod | Community A100 80GB often near $1.19/hr (marketplace); Serverless A100 $2.17/hr Active / $2.72/hr Flex | H100 Serverless ~$3.35/hr Active | Per‑second serverless billing; Secure Cloud priced above Community. Pricing · A100 guide. | Serverless inference, fast scale‑out, big GPU marketplace. |
Lambda | ~$1.29/hr (A100 40GB) | H100 "as low as" $1.85/hr with commitment | Lower with reservations; clusters available. Pricing. | Research teams that want simple, reliable deep‑learning stacks. |
Vast.ai | ~$1.27/hr (A100 SXM4 typical) | varies by host | Spot‑style marketplace; reliability varies. Marketplace. | Lowest possible rates if you can tolerate variability. |
Oracle Cloud (OCI) | A100 80GB in 8‑GPU node | $10.00/GPU‑hr (H100) list price. Price list · GPU pricing. | Multi‑GPU bare‑metal shapes; strong HPC networking. | Enterprises, fixed‑price clusters, hybrid with free egress. |
Google Cloud | — (V100 listed at $2.48/hr for comparison) | H100 on A3 instances; effectively >$6/GPU‑hr | Per‑GPU list prices; VM cost extra. GPU pricing. | Teams on GCP wanting Vertex AI + TPUs. |
AWS | — | p5 (8× H100) $31.464/hr via Capacity Blocks in us‑west‑2 (~$3.93/GPU‑hr). Capacity Blocks pricing. | Buy time windows ahead of use; traditional on‑demand varies by region. | Production scale with AWS integrations. |
Azure | — | NC H100 v5 single‑GPU $6.98/hr (guidance). Azure pricing. | Complex but broad enterprise coverage. | Microsoft‑centric orgs, compliance, global presence. |
DataCrunch | A100 80GB around $1.65/hr | H100 around $2.19–$2.65/hr | Green energy DCs; aggressive public pricing. A100/H100 posts · Pricing roundup. | Budget H100/A100 with responsive support. |
JarvisLabs | A100 40GB ~$0.79–$1.29/hr (tiered) | H100 ~$2.99/hr | Pause/resume; strong community support. Docs. | Kaggle/fast.ai style workflows, pausable labs. |
Nebius AI | — | H100 $2.95/hr; H200 $3.50/hr | Also lists GB200 and B200. Pricing. | European teams; modern NVIDIA lineup. |
Why teams look for a Paperspace alternative
Pricing clarity. Paperspace promotes $1.15/hr A100, but that rate requires a 36‑month commitment. On‑demand A100 is $3.09/hr and most high‑end GPUs require the Growth subscription. If you only need a few dozen or a few hundred GPU‑hours, the effective hourly cost is far higher than the banner suggests. See the official Paperspace pricing page and the detailed DigitalOcean docs.
Regions and availability. Paperspace operates three datacenter regions (NY2, CA1, AMS1). If your users or data are elsewhere, latency and quotas can bite. DigitalOcean confirms the region count and lists which GPUs are in each site. See the regional availability doc.
Modern GPU access. If you need H100/H200 or multi‑GPU NVLink clusters, several providers now publish lower per‑GPU list prices or offer simpler scale‑out paths.
How to choose (fast)
Estimate hours. Training + fine‑tuning + eval + retries.
Map GPU need. A100 is still excellent; H100 is ~2–3× faster on many LLM workloads and may be cheaper on a time‑to‑result basis even at a higher $/hr.
Decide control vs. convenience. VMs give full control. Serverless removes idle cost and handles autoscaling but limits customization.
Check the extras. Storage, egress, snapshot pricing, per‑minute billing, pause/hibernate, NVLink/InfiniBand, support SLAs.
Avoid surprise commitments. Verify whether the price assumes reserved terms or a monthly plan.
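The checklist above boils down to simple arithmetic: total cash outlay is GPU-hours times the hourly rate, plus any subscription fee. Here is a minimal sketch using list prices quoted in this article (Thunder Compute A100 at $0.66/hr and H100 at $1.47/hr, Paperspace on‑demand A100 at $3.09/hr plus the $39/mo Growth plan); the 200‑hour job size and the assumption that an H100 finishes the same job in half the time are illustrative, not measurements.

```python
# Back-of-envelope GPU job cost comparison.
# Rates come from this article's comparison table; always re-check
# each provider's pricing page before committing to a run.

def job_cost(hours, hourly_rate, monthly_fee=0.0, months_needed=1):
    """Total cash outlay: GPU time plus any subscription fee."""
    return hours * hourly_rate + monthly_fee * months_needed

# A hypothetical 200 GPU-hour A100 fine-tuning job.
thunder = job_cost(200, 0.66)                       # pay-as-you-go
paperspace = job_cost(200, 3.09, monthly_fee=39.0)  # on-demand + Growth plan

# Time-to-result: if an H100 runs the same job ~2x faster,
# compare 100 H100-hours against 200 A100-hours.
a100_run = job_cost(200, 0.66)   # $132 on A100
h100_run = job_cost(100, 1.47)   # $147 on H100: more cash, half the wall-clock

print(f"Thunder A100: ${thunder:.2f}, Paperspace A100: ${paperspace:.2f}")
print(f"A100 run: ${a100_run:.2f}, H100 run (2x speedup): ${h100_run:.2f}")
```

Note the trade-off the last two lines surface: under a 2× speedup assumption the H100 costs slightly more in cash but halves the wall-clock time, so which wins depends on how you value researcher time.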
The best Paperspace alternatives (details)
1) Thunder Compute — Lowest on‑demand A100 price, no lock‑in
Price highlights: A100 40GB $0.66/hr, A100 80GB $0.78/hr, H100 $1.47/hr. True pay‑as‑you‑go with per‑minute billing to trim idle time. Thunder Compute pricing.
Best for: Experiments, fine‑tuning, and small/medium inference where cash outlay and predictable on‑demand costs matter.
Why it beats Paperspace for budgets: At list prices, one on‑demand A100 hour on Paperspace ($3.09) buys nearly five hours on Thunder Compute ($0.66/hr), and the gap widens for light users once the $39/mo Growth plan is amortized over a handful of hours.
2) Runpod — Serverless inference + big marketplace
Price highlights: Serverless A100 ~$2.17/hr Active (or ~$2.72/hr Flex), H100 ~$3.35/hr Active. Community Pods frequently list A100 80GB near $1.19/hr, with Secure Cloud priced higher. Runpod pricing · A100 comparison.
Best for: Fast deployments, autoscaling inference, and teams that don’t want to manage VM lifecycles.
Watch‑outs: Marketplace hosts vary; read specs and ratings. Serverless is priced per‑second and by worker type; model throughput, cold‑start, and concurrency determine the true TCO.
3) Lambda — Research‑friendly, simple stack
Price highlights: Public materials show A100 40GB ~$1.29/hr and H100 "as low as" $1.85/hr with commitments; on‑demand H100 clusters are also available. Lambda pricing.
Best for: Teams that want reliable hardware, solid images, and optional large clusters without hyperscaler complexity.
4) Vast.ai — Lowest prices if you can tolerate variability
Price highlights: Marketplace pricing often shows A100 near $1.27/hr and many consumer RTX cards at pennies per minute. Vast.ai.
Best for: Lowest possible cost, preemptible/interruptible workloads, non‑critical runs.
Watch‑outs: Reliability and bandwidth vary by host; plan for checkpoints and migration.
5) Oracle Cloud (OCI) — Straightforward list pricing for H100/H200
Price highlights: Official list price for H100 is $10.00 per GPU‑hour; H200 is also listed at $10.00 per GPU‑hour. Shapes are multi‑GPU bare‑metal with high‑bandwidth RDMA. OCI price list · GPU pricing.
Best for: Enterprises that value predictable list pricing, big clusters, and free egress.
6) Google Cloud — Broad services + TPUs
Price highlights: Transparent per‑GPU pricing (e.g., V100 $2.48/hr, T4 $0.35/hr). H100 available on A3 instances; effective >$6/GPU‑hr after normalizing. VM cost is additional. GCP GPU pricing.
Best for: Teams already invested in Vertex AI/BigQuery; TPU access.
7) AWS — Scale and ecosystem
Price highlights: p5.48xlarge (8× H100) listed at $31.464/hr in us‑west‑2 when purchased as Capacity Blocks for ML (≈ $3.93/GPU‑hr). Traditional on‑demand pricing varies by region and purchasing model; spot can be significantly lower. Capacity Blocks pricing.
Best for: Enterprise production, security/compliance, global reach.
8) Azure — Single‑GPU H100 VMs
Price highlights: Public guidance around $6.98/hr for single‑GPU NC H100 v5 VMs in U.S. regions. Check the regional calculator for exact rates. Azure pricing.
Best for: Microsoft‑centric orgs and Windows workflows.
9) DataCrunch — Aggressive H100/A100 public rates
Price highlights: Blog examples list H100 at $2.65/hr and A100 80GB at $1.65/hr. H100 vs A100 · Pricing comparison.
Best for: Startups that want published, low H100/A100 prices and responsive support.
10) JarvisLabs — Pausable notebooks and VMs
Price highlights: A100 40GB ~$0.79–$1.29/hr, H100 ~$2.99/hr, with pause/resume to cut idle cost. JarvisLabs H100 pricing.
Best for: Individuals, students, and scrappy teams iterating quickly.
11) Nebius AI — Modern NVIDIA lineup, EU focus
Price highlights: H100 $2.95/hr, H200 $3.50/hr, B200 $5.50/hr (per GPU‑hr). Nebius pricing.
Best for: European teams wanting current-gen parts with clear per‑GPU pricing.
12) Paperspace (DigitalOcean) — When it still makes sense
When to pick it: You’re committed to the Gradient ecosystem, want managed notebooks/workflows, or can fully utilize reserved pricing.
Key numbers to know: On‑demand A100 is $3.09/hr and H100 is $5.95/hr. The headline $1.15/hr A100 requires a 36‑month commitment, and many GPUs require the Growth plan ($39/mo). Regions: NY2, CA1, AMS1. See the official Paperspace pricing and DigitalOcean docs plus regional availability.
Methodology & update cadence
Prices were checked against public pricing pages and official documentation on July 28, 2025. Some providers publish only ranges; marketplace rates fluctuate. When providers only sell multi‑GPU nodes, we note total node price and/or normalize to per‑GPU where reasonable.
Always confirm prices in your target region and add storage, data egress, and VM costs where applicable.
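The per-GPU normalization mentioned above is a single division: node price over GPU count. A minimal sketch, using the AWS p5 Capacity Block figure quoted in this article:

```python
# Normalize a multi-GPU node's hourly price to a per-GPU rate,
# as done in the comparison table (figures from this article).

def per_gpu_rate(node_price_hr, gpu_count):
    """Hourly price per GPU for a multi-GPU node."""
    return node_price_hr / gpu_count

aws_p5 = per_gpu_rate(31.464, 8)  # 8x H100 Capacity Block, us-west-2
print(f"AWS p5 per-GPU: ${aws_p5:.3f}/hr")  # ~ $3.93/GPU-hr
```

Remember this normalization ignores the rest of the node (CPU, RAM, local NVMe), so per-GPU rates across providers are directional, not exact.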
Sources
Paperspace/DigitalOcean: Paperspace pricing · GPU price table · Regional availability
Thunder Compute: Pricing · A100 pricing tracker · H100 pricing tracker
Runpod: Pricing · A100 comparison · Serverless vs Pods
Lambda: Pricing
Vast.ai: Marketplace
Oracle Cloud: GPU pricing · OCI price list · Oracle H200 launch post
Google Cloud: GPU pricing
Azure: VM pricing (use calculator for your region) · Helpful: Vantage pages for NC40ads H100 v5 and NC80adis H100 v5 show current list prices.
DataCrunch: H100 vs A100 · Cloud GPU pricing comparison
JarvisLabs: H100 price guide
Nebius AI: Pricing
CoreWeave: Pricing
Market trackers (helpful for cross‑checking): GetDeploying GPU price index · Cloud‑GPUs.com comparison
Thinking about switching?
Spin up an A100 on Thunder Compute in minutes and benchmark your workload. If it finishes in half the cost, keep it. If not, you’ve validated your path with real numbers.

Carl Peterson