Best Paperspace Alternatives (September 2025): Real Prices and Contracts

TL;DR
If you need on‑demand A100s without lock‑in, Thunder Compute is the cheapest mainstream option at $0.66/GPU‑hr. If you want serverless inference and a large community marketplace, Runpod is strong (A100 Serverless ~$2.17/GPU‑hr Active). For large clusters or H100/H200 access, consider Lambda, CoreWeave, or Oracle Cloud (OCI lists H100 at $10/GPU‑hr). Paperspace’s eye‑catching $1.15/hr A100 requires a 3‑year commitment and a Growth plan for many GPUs; on‑demand A100 is $3.09/hr. Links and numbers below are from provider pricing pages or official docs.
Quick comparison (on‑demand or list price in the U.S., where available)
Prices move fast. Always check each provider’s page before you launch.
Why teams look for a Paperspace alternative
- Pricing clarity. Paperspace promotes $1.15/hr A100, but that rate requires a 36‑month commitment. On‑demand A100 is $3.09/hr and most high‑end GPUs require the Growth subscription. If you only need a few dozen or a few hundred GPU‑hours, the effective hourly cost is far higher than the banner suggests. See the official Paperspace pricing page and the detailed DigitalOcean docs.
- Regions and availability. Paperspace operates three datacenter regions (NY2, CA1, AMS1). If your users or data are elsewhere, latency and quotas can bite. DigitalOcean confirms the region count and lists which GPUs are in each site. See the regional availability doc.
- Modern GPU access. If you need H100/H200 or multi‑GPU NVLink clusters, several providers now publish lower per‑GPU list prices or offer simpler scale‑out paths.
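The effective hourly rate point above is easy to quantify. A rough sketch, using the $3.09/hr on‑demand A100 rate and $39/mo Growth fee quoted in this article (the monthly usage levels are illustrative):

```python
# Sketch: effective $/hr on Paperspace once the Growth plan fee is amortized.
# $3.09/hr (on-demand A100) and $39/mo (Growth) are the list prices quoted
# in this article; the usage levels below are made-up examples.
def effective_rate(gpu_hours_per_month, hourly=3.09, plan_fee=39.0):
    """Total monthly spend divided by GPU-hours actually used."""
    return (gpu_hours_per_month * hourly + plan_fee) / gpu_hours_per_month

for hours in (10, 50, 200):
    print(f"{hours:>3} GPU-hrs/mo -> ${effective_rate(hours):.2f}/hr")
```

At light usage the plan fee dominates: 10 GPU‑hours a month works out to nearly $7/hr effective, more than double the banner on‑demand rate.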
How to choose (fast)
- Estimate hours. Training + fine‑tuning + eval + retries.
- Map GPU need. A100 is still excellent; H100 is ~2–3× faster on many LLM workloads and may be cheaper on a time‑to‑result basis even at a higher $/hr.
- Decide control vs. convenience. VMs give full control. Serverless removes idle cost and handles autoscaling but limits customization.
- Check the extras. Storage, egress, snapshot pricing, per‑minute billing, pause/hibernate, NVLink/InfiniBand, support SLAs.
- Avoid surprise commitments. Verify whether the price assumes reserved terms or a monthly plan.
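The hours-and-GPU steps above reduce to simple arithmetic. A minimal sketch of the time‑to‑result comparison, assuming example rates from the Thunder Compute section below and an illustrative 2.5× H100 speedup (your real speedup depends on the workload):

```python
# Sketch: compare time-to-result cost for A100 vs H100.
# Rates are example on-demand $/GPU-hr from this article; the 2.5x speedup
# is an illustrative assumption, not a benchmark result.
def time_to_result_cost(baseline_hours, rate_per_hr, speedup=1.0):
    """Cost of a job that takes baseline_hours on the reference GPU."""
    return (baseline_hours / speedup) * rate_per_hr

a100_rate, h100_rate = 0.78, 1.47   # example rates (A100 80GB, H100)
job_hours_on_a100 = 100             # estimated A100-hours: train + eval + retries

a100_cost = time_to_result_cost(job_hours_on_a100, a100_rate)
h100_cost = time_to_result_cost(job_hours_on_a100, h100_rate, speedup=2.5)
print(f"A100: ${a100_cost:.2f}  H100: ${h100_cost:.2f}")
```

With those assumptions the H100 wins on total cost despite the higher hourly rate, which is the point of step 2: compare $/result, not $/hr.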
The best Paperspace alternatives (details)
1. Thunder Compute — Lowest on‑demand A100 price, no lock‑in
- Price highlights: A100 40GB $0.66/hr, A100 80GB $0.78/hr, H100 $1.47/hr. True pay‑as‑you‑go with per‑minute billing to trim idle time. Thunder Compute pricing.
- Best for: Experiments, fine‑tuning, and small/medium inference where cash outlay and predictable on‑demand costs matter.
- Why it beats Paperspace for budgets: at $0.66/hr, roughly seven hours on Thunder Compute cost about as much as a single hour of on‑demand A100 on Paperspace once you factor in the Growth plan.
2. Runpod — Serverless inference + big marketplace
- Price highlights: Serverless A100 ~$2.17/hr Active (or ~$2.72/hr Flex), H100 ~$3.35/hr Active. Community Pods frequently list A100 80GB near ~$1.19/hr, with Secure Cloud priced higher. Runpod pricing · A100 comparison.
- Best for: Fast deployments, autoscaling inference, and teams that don’t want to manage VM lifecycles.
- Watch‑outs: Marketplace hosts vary; read specs and ratings. Serverless is priced per‑second and by worker type; model throughput, cold‑start, and concurrency determine the true TCO.
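The serverless-vs-pod TCO question in the watch‑out above can be sketched with back‑of‑envelope math. All traffic numbers here are assumptions for illustration; the rates are the A100 figures quoted above. Plug in your own measured request volume and latency:

```python
# Sketch: serverless (billed only while active) vs. an always-on pod,
# for a bursty inference load. Traffic numbers are illustrative assumptions;
# rates are the A100 examples quoted in this section.
def serverless_cost(requests_per_day, secs_per_request, active_rate_per_hr, days=30):
    """Monthly cost if you pay only for active compute seconds."""
    active_hours = requests_per_day * secs_per_request / 3600 * days
    return active_hours * active_rate_per_hr

def pod_cost(rate_per_hr, days=30):
    """Monthly cost of a pod billed around the clock."""
    return rate_per_hr * 24 * days

sls = serverless_cost(requests_per_day=5_000, secs_per_request=2.0,
                      active_rate_per_hr=2.17)   # serverless A100 Active rate
pod = pod_cost(rate_per_hr=1.19)                 # community A100 pod rate
print(f"serverless: ${sls:.0f}/mo  pod: ${pod:.0f}/mo")
```

At this modest load the serverless option wins even at nearly double the hourly rate, because the pod sits idle most of the day; at sustained high utilization the comparison flips. Cold‑start latency is the other axis this math ignores.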
3. Lambda — Research‑friendly, simple stack
- Price highlights: Public materials show A100 40GB ~$1.29/hr and H100 "as low as" $1.85/hr with commitments; on‑demand H100 clusters are also available. Lambda pricing.
- Best for: Teams that want reliable hardware, solid images, and optional large clusters without hyperscaler complexity.
4. Vast.ai — Lowest prices if you can tolerate variability
- Price highlights: Marketplace pricing often shows A100 near $1.27/hr, with many consumer RTX cards available at very low hourly rates. Vast.ai.
- Best for: Lowest possible cost, preemptible/interruptible workloads, non‑critical runs.
- Watch‑outs: Reliability and bandwidth vary by host; plan for checkpoints and migration.
5. Oracle Cloud (OCI) — Straightforward list pricing for H100/H200
- Price highlights: Official list price for H100 is $10.00 per GPU‑hour; H200 is also listed at $10.00 per GPU‑hour. Shapes are multi‑GPU bare‑metal with high‑bandwidth RDMA. OCI price list · GPU pricing.
- Best for: Enterprises that value predictable list pricing, big clusters, and free egress.
6. Google Cloud — Broad services + TPUs
- Price highlights: Transparent per‑GPU pricing (e.g., V100 $2.48/hr, T4 $0.35/hr). H100 is available on A3 instances at an effective rate above $6/GPU‑hr after normalizing; the underlying VM is billed separately. GCP GPU pricing.
- Best for: Teams already invested in Vertex AI/BigQuery; TPU access.
7. AWS — Scale and ecosystem
- Price highlights: p5.48xlarge (8× H100) listed at $31.464/hr in us‑west‑2 when purchased as Capacity Blocks for ML (≈ $3.93/GPU‑hr). Traditional on‑demand pricing varies by region and purchasing model; spot can be significantly lower. Capacity Blocks pricing.
- Best for: Enterprise production, security/compliance, global reach.
8. Azure — Single‑GPU H100 VMs
- Price highlights: Public guidance around $6.98/hr for single‑GPU NC H100 v5 VMs in U.S. regions. Check the regional calculator for exact rates. Azure pricing.
- Best for: Microsoft‑centric orgs and Windows workflows.
9. DataCrunch — Aggressive H100/A100 public rates
- Price highlights: Blog examples list H100 at $2.65/hr and A100 80GB at $1.65/hr. H100 vs A100 · Pricing comparison.
- Best for: Startups that want published, low H100/A100 prices and responsive support.
10. JarvisLabs — Pausable notebooks and VMs
- Price highlights: A100 40GB ~$0.79–$1.29/hr, H100 ~$2.99/hr, with pause/resume to cut idle cost. JarvisLabs H100 pricing.
- Best for: Individuals, students, and scrappy teams iterating quickly.
11. Nebius AI — Modern NVIDIA lineup, EU focus
- Price highlights: H100 $2.95/hr, H200 $3.50/hr, B200 $5.50/hr (per GPU‑hr). Nebius pricing.
- Best for: European teams wanting current-gen parts with clear per‑GPU pricing.
12. Paperspace (DigitalOcean) — When it still makes sense
- When to pick it: You’re committed to the Gradient ecosystem, want managed notebooks/workflows, or can fully utilize reserved pricing.
- Key numbers to know: On‑demand A100 is $3.09/hr and H100 is $5.95/hr. The headline $1.15/hr A100 requires a 36‑month commitment, and many GPUs require the Growth plan ($39/mo). Regions: NY2, CA1, AMS1. See the official Paperspace pricing and DigitalOcean docs plus regional availability.
Methodology & update cadence
- Prices were checked against public pricing pages and official documentation on July 28, 2025. Some providers publish only ranges; marketplace rates fluctuate. When providers only sell multi‑GPU nodes, we note total node price and/or normalize to per‑GPU where reasonable.
- Always confirm prices in your target region and add storage, data egress, and VM costs where applicable.
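The per‑GPU normalization mentioned above is a single division; this is the method applied to the AWS p5.48xlarge figure earlier in the article:

```python
# Sketch: normalize a multi-GPU node price to $/GPU-hr,
# as done throughout this article for providers that only sell whole nodes.
def per_gpu_rate(node_price_per_hr, gpus_per_node):
    return node_price_per_hr / gpus_per_node

# Example from the AWS section: p5.48xlarge (8x H100) via Capacity Blocks.
print(round(per_gpu_rate(31.464, 8), 3))  # -> 3.933
```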
Sources
- Paperspace/DigitalOcean: Paperspace pricing · GPU price table · Regional availability
- Thunder Compute: Pricing · A100 pricing tracker · H100 pricing tracker
- Runpod: Pricing · A100 comparison · Serverless vs Pods
- Lambda: Pricing
- Vast.ai: Marketplace
- Oracle Cloud: GPU pricing · OCI price list · Oracle H200 launch post
- Google Cloud: GPU pricing
- AWS: EC2 on‑demand pricing · Capacity Blocks pricing
- Azure: VM pricing (use calculator for your region) · Helpful: Vantage pages for NC40ads H100 v5 and NC80adis H100 v5 show current list prices.
- DataCrunch: H100 vs A100 · Cloud GPU pricing comparison
- JarvisLabs: H100 price guide
- Nebius AI: Pricing
- CoreWeave: Pricing
- Market trackers (helpful for cross‑checking): GetDeploying GPU price index · Cloud‑GPUs.com comparison
Thinking about switching?
Spin up an A100 on Thunder Compute in minutes and benchmark your workload. If the job finishes at half the cost, keep it; if not, you've validated your current setup with real numbers.