Comprehensive reference for the Thunder Compute CLI. Manage instances (create, start, stop, delete), configure GPUs/CPUs, handle files, and use snapshots.
API tokens live at `~/.thunder/token`. You can store a token file there to authenticate programmatically, or set the `TNR_API_TOKEN` environment variable in your shell.
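For example, either approach works in a typical shell session (the token value is a placeholder):

```bash
# Option 1: write the token to the file the CLI reads
mkdir -p ~/.thunder
echo "YOUR_API_TOKEN" > ~/.thunder/token

# Option 2: export it as an environment variable for this shell
export TNR_API_TOKEN="YOUR_API_TOKEN"
```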
- `t4`: NVIDIA T4 (16GB VRAM) - Best for most ML workloads
- `a100` (default): NVIDIA A100 (40GB VRAM) - For large models and high-performance computing
- `a100xl`: NVIDIA A100 (80GB VRAM) - For even larger models, the biggest and the best

Use the `--num-gpus` flag to specify multiple GPU configurations:
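For example, a two-GPU request might look like this (the `--gpu` flag for selecting the GPU type is an assumption; `--num-gpus` is the documented flag for the count):

```bash
# Create an instance with two A100s
# --gpu selects the GPU type (assumed flag name); --num-gpus sets how many
tnr create --gpu a100 --num-gpus 2
```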
Use the `--template` flag when creating an instance:
- `ollama`: Ollama server environment
- `comfy-ui`: ComfyUI for AI image generation
- `webui-forge`: WebUI Forge for Stable Diffusion

To launch a template's application, run `start-<template_name>` when connected. For example:
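A sketch with the ComfyUI template (the instance ID `0` is illustrative):

```bash
# Create an instance from the ComfyUI template
tnr create --template comfy-ui

# Connect to the instance (opens an SSH session)
tnr connect 0

# Inside that session, launch the template's application
start-comfy-ui
```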
- `prototyping` (default): Development mode optimized for intermittent workloads
- `production`: Premium instance with maximum compatibility, stability, and reliability for production workloads
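A sketch of selecting a mode at creation time, assuming it is set with a `--mode` flag (the flag name is an assumption):

```bash
# Request a production instance (flag name assumed)
tnr create --mode production
```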
Use the `tnr connect` command to access your instance. This wraps SSH, managing keys while automatically setting up everything you need to get started.
You can find your instance ID (e.g., `0`) with `tnr status`.
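For example, to open a session on the first instance:

```bash
# Connect to instance 0 over SSH
tnr connect 0
```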
To forward ports, use the `-t` or `--tunnel` flag:
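For example, assuming a service on the instance listens on port 8000 and the same port number is used locally:

```bash
# Forward the instance's port 8000 while connected
tnr connect 0 -t 8000
```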
You can forward multiple ports by passing multiple `-t/--tunnel` flags: `tnr connect 0 -t 8000 -t 8080` forwards both ports 8000 and 8080.
Transfer files to and from instances with the `scp` command:

- Remote paths use the format `instance_id:path` (e.g., `0:/home/user/data`)
- Local paths work as usual (e.g., `./data` or `/home/user/file.txt`)
- Relative paths and `~/` expansion are handled automatically
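A sketch of a transfer in each direction, assuming the copy is invoked as `tnr scp <source> <destination>` (the exact subcommand spelling is an assumption):

```bash
# Upload a local directory to instance 0 (subcommand name assumed)
tnr scp ./data 0:/home/user/data

# Download a file from instance 0 into the current directory
tnr scp 0:/home/user/file.txt ./
```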
Manage snapshots with the `tnr snapshot` command:

- `tnr snapshot <instance_ID> <snapshot_name>` - create a snapshot of an instance
- `tnr snapshot --list` - list your snapshots
- `tnr snapshot --delete <snapshot_name>` - delete a snapshot
To create a new instance from a snapshot, use the `--template` flag with the snapshot name:

`tnr create --template <snapshot_name>`
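A sketch of the full workflow (the instance ID and snapshot name are illustrative):

```bash
# Snapshot instance 0 under a chosen name
tnr snapshot 0 my-snapshot

# Confirm the snapshot exists
tnr snapshot --list

# Launch a new instance from the snapshot
tnr create --template my-snapshot
```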
- `prototyping` (default): Optimized for cost-effective development
- `production`: Premium instances with maximum compatibility, stability, and reliability

The status output lists each instance's `instance_ID`, `IP Address`, `Disk Size`, `GPU Type`, `GPU Count`, `vCPU Count`, `RAM`, and `Template`:
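A minimal invocation; the columns above describe the table it prints, and the actual values depend on your account:

```bash
# Show all instances and their details
tnr status
```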
Use the `--no-wait` flag to disable automatic monitoring for status updates.
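Assuming the flag belongs to the status command, as the surrounding text suggests:

```bash
# Print status once instead of monitoring for updates (pairing assumed)
tnr status --no-wait
```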