Interactive Mode
Run the create command to launch an interactive menu:

One-Line Creation
Specify all options in a single command:

Configuration Options
| Flag | Description |
|---|---|
| `--mode` | `prototyping` or `production` |
| `--gpu` | `a6000`, `a100`, or `h100` (prototyping); `a100` or `h100` (production) |
| `--num-gpus` | 1, 2, 4, or 8 (production only) |
| `--vcpus` | CPU cores: 4, 8, 16, or 32 (prototyping only). RAM: 8GB per vCPU |
| `--template` | `base`, `comfy-ui`, `comfy-ui-wan`, `ollama`, `webui-forge`, or a snapshot name |
| `--disk-size-gb` | 100-400 GB (prototyping), 100-1000 GB (production) |
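As an illustration of the two invocation styles, assuming a hypothetical CLI binary named `gpucli` (a placeholder; substitute your tool's actual command name):

```shell
# Interactive: launches a menu that prompts for mode, GPU, template, and disk size.
# (`gpucli` is a placeholder name, not the real binary.)
gpucli create

# One-line: every option supplied up front.
gpucli create \
  --mode prototyping \
  --gpu a100 \
  --vcpus 8 \
  --template base \
  --disk-size-gb 200
```

Any flag left out of a one-line command typically falls back to its default (for example, `--mode prototyping`).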
Mode Selection
Choose between optimized development pricing or full compatibility:

- Prototyping (default): Lower cost with CUDA-level optimizations. Best for development.
- Production: Standard VM with full compatibility. Best for long-running jobs and production workloads.
GPU Options
| GPU | VRAM | Availability |
|---|---|---|
| A6000 | 48GB | Prototyping only |
| A100 | 80GB | Both modes |
| H100 | 80GB | Both modes |
CPU and RAM
Prototyping mode: Configure vCPUs with 8GB RAM per vCPU.

- 4 vCPUs = 32GB RAM
- 8 vCPUs = 64GB RAM
- 16 vCPUs = 128GB RAM
- 32 vCPUs = 256GB RAM
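The 8GB-per-vCPU rule above can be verified with plain shell arithmetic:

```shell
# Prototyping mode: RAM is 8 GB per vCPU (per the list above).
for vcpus in 4 8 16 32; do
  echo "${vcpus} vCPUs = $((vcpus * 8))GB RAM"
done
```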
Production mode: vCPUs and RAM scale with the number of GPUs.

- 2 GPUs = 36 vCPUs, 180GB RAM
- 4 GPUs = 72 vCPUs, 360GB RAM
- 8 GPUs = 144 vCPUs, 720GB RAM
By default, 4 vCPUs and 32GB of memory are included with prototyping instances. Additional vCPUs are billed hourly at the rates shown on the pricing page.
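In production mode the vCPU and RAM counts follow the GPU count, so only the GPU type and count need to be chosen. A sketch, assuming a hypothetical binary named `gpucli`:

```shell
# Production mode: vCPUs and RAM scale with --num-gpus, so --vcpus is not used.
# (`gpucli` is a placeholder name, not the real binary.)
gpucli create \
  --mode production \
  --gpu h100 \
  --num-gpus 4 \
  --disk-size-gb 500
```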
Templates
Templates pre-configure your instance for common AI workflows:

| Template | Description |
|---|---|
| `base` | Ubuntu with PyTorch + CUDA |
| `ollama` | Ollama server environment |
| `comfy-ui` | ComfyUI for AI image generation |
| `comfy-ui-wan` | ComfyUI with Wan2.1 pre-installed |
| `webui-forge` | WebUI Forge for Stable Diffusion |
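For example, launching an image-generation instance from the `comfy-ui` template might look like this (with `gpucli` as a placeholder binary name):

```shell
# Start a prototyping instance pre-configured with ComfyUI.
# (`gpucli` is a placeholder name, not the real binary.)
gpucli create --mode prototyping --gpu a6000 --template comfy-ui --disk-size-gb 100
```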