Account Management
Login
Authenticate the CLI with the login command, which provides a link to the console where you can generate an API token. Alternatively, store a token file at ~/.thunder/token to authenticate programmatically, or set the TNR_API_TOKEN environment variable in your shell.
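A minimal sketch of both options. The login subcommand name is assumed from the surrounding text, and the token value is a placeholder:

```shell
# Option 1: interactive login (subcommand name assumed)
tnr login

# Option 2: set the environment variable, e.g. in ~/.bashrc
export TNR_API_TOKEN="<your-token-from-the-console>"
```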
Logout
Log out of the CLI with the logout command.
API Token Management
- Generate/manage tokens in the console
- Tokens never expire but can be revoked
- Use unique tokens per device
Managing Instances
Create an Instance
Create a new Thunder Compute instance with the create command.
Flags
Below are the flags to set if you are not using the default create configuration:
| Flag | Description |
|---|---|
| --mode | prototyping or production |
| --gpu | t4 or a100 (prototyping only); a100 or h100 (production only) |
| --num-gpus | 1, 2, or 4 (production only) |
| --vcpus | vCPU count (prototyping only): 4, 8, 16, or 32, with 8GB RAM per vCPU. Production instances are fixed at 18 vCPUs and 144GB RAM per GPU |
| --template | base, comfy-ui, comfy-ui-wan, ollama, webui-forge |
| --disk-size-gb | 100-400 GB (prototyping), 100-1000 GB (production) |
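Putting these flags together, a hedged example of a non-default prototyping instance (the create subcommand name is assumed, and all values are illustrative):

```shell
tnr create --mode prototyping --gpu t4 --vcpus 8 --disk-size-gb 200 --template base
```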
Instance storage is ephemeral. Back up important data externally before you delete an instance. See Using Ephemeral Storage for recommended workflows.
Mode Configuration
Choose between prototyping and production modes:
- prototyping (default): Development mode optimized for intermittent workloads
- production: Premium instance with maximum compatibility, stability, and reliability for production workloads
GPU Configuration
Specify a GPU type:
- t4: NVIDIA T4 (16GB VRAM) - Best for most ML workloads
- a100 (default): NVIDIA A100 (40GB VRAM) - For large models and high-performance computing
- a100xl: NVIDIA A100 (80GB VRAM) - For even larger models
- h100: NVIDIA H100 (80GB VRAM) - Latest generation GPU for cutting-edge AI and production workloads
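For instance, to request an H100 (production-only, per the flags table above; the create subcommand name is assumed):

```shell
tnr create --mode production --gpu h100
```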
GPU Count
This flag can only be used when creating a production instance. To set the GPU count, add the --num-gpus flag (for example, --num-gpus 2).
CPU Configuration
This flag can only be used when creating a prototyping instance. To configure a custom vCPU count, add the --vcpus flag. By default, 4 vCPUs and 32GB of memory are included with your instance.
Additional vCPUs are billed hourly at the rates shown here.
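As a sketch, requesting 16 vCPUs yields 128GB of RAM at the stated 8GB per vCPU (the create subcommand name is assumed):

```shell
tnr create --vcpus 16
```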
Template Configuration
Templates make it easy to quickly launch common AI tools. Your instance comes preconfigured with everything you need to generate images, run an LLM, and more. To use a template, add the --template flag when creating an instance:
- base: Ubuntu instance with PyTorch + CUDA
- ollama: Ollama server environment
- comfy-ui: ComfyUI for AI image generation
- comfy-ui-wan: ComfyUI with Wan2.1 pre-installed
- webui-forge: WebUI Forge for Stable Diffusion
Once connected, launch a template's tool by running start-<template_name>. For example:
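Following the start-<template_name> pattern, the comfy-ui template would presumably be launched as:

```shell
# Run inside the instance after connecting
start-comfy-ui
```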
Delete an Instance
To open the interactive delete menu, run the delete command with no arguments.
Using instances
Connect to an Instance
Use the connect command to access your instance. It wraps SSH, managing keys and automatically setting up everything you need to get started.
You can launch the interactive connect menu by running connect with no arguments. Run tnr status to see instance IDs (the default is 0).
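For example, assuming connect accepts an optional instance ID:

```shell
tnr connect      # launch the interactive menu
tnr connect 0    # connect directly to instance 0
```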
Port Forwarding
Connect with port forwarding using the -t or --tunnel flag:
- Forward multiple ports by repeating the -t/--tunnel flag
- Example: tnr connect 0 -t 8000 -t 8080 forwards both ports 8000 and 8080
- Enables local access to remote web servers, APIs, and services
Copy Files
Transfer files between your local machine and a remote instance with the scp command:
- Remote: instance_id:path (e.g., 0:/home/user/data)
- Local: standard paths (e.g., ./data or /home/user/file.txt)
- Must specify exactly one remote and one local path
- Paths can be either absolute or relative
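A hedged sketch, assuming the subcommand is tnr scp (paths are illustrative):

```shell
# Upload a local directory to instance 0
tnr scp ./data 0:/home/user/data

# Download a file from instance 0
tnr scp 0:/home/user/results.csv ./results.csv
```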
File transfers have a 60-second connection timeout. SSH key setup, compression, and ~/ expansion are handled automatically.
Instance Lifecycle
Instances cannot be modified after creation. To change configuration (GPU, vCPU, disk size, or mode), create a new instance with the desired settings and migrate your data before deleting the original.
View Instance Status
List all instances and their details, including instance ID, IP address, disk size, GPU type, GPU count, vCPU count, RAM, and template:
Add the --no-wait flag to disable automatic monitoring for status updates.
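For example:

```shell
tnr status            # list instances and monitor for status updates
tnr status --no-wait  # print the current status once and exit
```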