
Account Management

Login

Authenticate the CLI. The command prints a link to the console, where you can generate an API token.
tnr login
Under the hood, this generates an API token and saves it to ~/.thunder/token. To authenticate programmatically, either store a token file at that path or set the TNR_API_TOKEN environment variable in your shell.
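For example, to authenticate non-interactively (in CI or scripts), you can set the environment variable or write the token file directly. The token value below is a placeholder; substitute one generated in the console:

```shell
# Placeholder token value - substitute a real token generated in the console
export TNR_API_TOKEN="your-api-token-here"

# Equivalent file-based approach: store the token where the CLI looks for it
mkdir -p ~/.thunder
printf '%s' "$TNR_API_TOKEN" > ~/.thunder/token
```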

Logout

Log out of the CLI with:
tnr logout
This deletes the stored API token.

API Token Management

  • Generate/manage tokens in the console
  • Tokens never expire but can be revoked
  • Use unique tokens per device

Managing Instances

Create an Instance

Create a new Thunder Compute instance:
tnr create
This will open an interactive menu where you can manually configure your new instance. Alternatively, you can start an instance in one command using:
tnr create --mode prototyping --gpu t4 --vcpus 8 --template base --disk-size-gb 100

Flags

Below are the flags you can set when creating an instance in a single command rather than through the interactive menu:
  • --mode: prototyping or production
  • --gpu: t4 or a100 (prototyping only); a100 or h100 (production only)
  • --num-gpus: 1, 2, or 4 (production only)
  • --vcpus: vCPU count. Prototyping: 4, 8, 16, or 32, with 8GB RAM per vCPU. Production: 18 vCPUs per GPU, with 144GB RAM per GPU
  • --template: base, comfy-ui, comfy-ui-wan, ollama, webui-forge
  • --disk-size-gb: 100-400 GB (prototyping), 100-1000 GB (production)
Instance storage is ephemeral. Back up important data externally before you delete an instance. See Using Ephemeral Storage for recommended workflows.
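For comparison with the prototyping example above, a production instance can be created the same way. The values below are one valid combination chosen from the flag ranges in this section, not a recommendation:

```shell
tnr create --mode production --gpu h100 --num-gpus 2 --template base --disk-size-gb 500
```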

Mode Configuration

Choose between prototyping and production modes:
tnr create --mode <mode> ...
Available modes:
  • prototyping (default): Development mode optimized for intermittent workloads
  • production: Premium instance with maximum compatibility, stability, and reliability for production workloads

GPU Configuration

Specify a GPU type:
tnr create ... --gpu <gpu_type> ...
Available GPU types:
  • t4: NVIDIA T4 (16GB VRAM) - Best for most ML workloads
  • a100 (default): NVIDIA A100 (40GB VRAM) - For large models and high-performance computing
  • a100xl: NVIDIA A100 (80GB VRAM) - For even larger models that need the most memory
  • h100: NVIDIA H100 (80GB VRAM) - Latest-generation GPU for cutting-edge AI and production workloads

GPU Count

This flag can only be used when creating a Production instance. To set the GPU count, simply add:
tnr create ... --num-gpus <gpu_count> ...
vCPU and RAM counts scale with the number of GPUs, as detailed under CPU Configuration below.

CPU Configuration

This flag can only be used when creating a Prototyping instance. To configure custom vCPU count:
tnr create ... --vcpus <vcpu_count> ...
Each vCPU comes with 8GB of RAM. For example, a 4-core instance has 32GB of RAM, and an 8-core instance has 64GB. Production instances are fixed at 18 vCPUs and 144GB of RAM per GPU for both A100s and H100s, and this scales with GPU count: 2 GPUs equates to 36 vCPUs and 288GB of RAM.
By default, Prototyping instances include 4 vCPUs and 32GB of memory. Additional vCPUs are billed hourly at the rates shown here.
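The scaling rules above can be sketched with a quick calculation (plain shell arithmetic; the figures come from this section):

```shell
# Prototyping: 8GB of RAM per vCPU
vcpus=8
echo "prototyping: ${vcpus} vCPUs -> $((vcpus * 8))GB RAM"
# prints: prototyping: 8 vCPUs -> 64GB RAM

# Production: 18 vCPUs and 144GB of RAM per GPU
gpus=2
echo "production: ${gpus} GPUs -> $((gpus * 18)) vCPUs, $((gpus * 144))GB RAM"
# prints: production: 2 GPUs -> 36 vCPUs, 288GB RAM
```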

Template Configuration

Templates make it easy to quickly launch common AI tools. Your instance comes preconfigured with everything you need to generate images, run an LLM, and more. To use a template, add the --template flag when creating an instance:
tnr create ... --template <template_name> ...
Available templates:
  • base: Ubuntu instance with PyTorch + CUDA
  • ollama: Ollama server environment
  • comfy-ui: ComfyUI for AI image generation
  • comfy-ui-wan: ComfyUI with Wan2.1 pre-installed
  • webui-forge: WebUI Forge for Stable Diffusion
After instance creation, start the server using start-<template_name> when connected. For example:
start-ollama

Delete an Instance

To open the interactive delete menu, run:
tnr delete
This will display all your current instances (if any exist) and ask you to confirm before deleting a specific instance. Alternatively, the command below deletes an instance immediately, provided you supply a valid instance_ID (most likely 0):
tnr delete <instance_ID>
This action permanently removes an instance and all associated data. This guide on Weights & Biases shows how to set up external checkpoints.

Using Instances

Connect to an Instance

Use the connect command to access your instance. This wraps SSH, managing keys while automatically setting up everything you need to get started. You can run the following command to launch the interactive connect menu:
tnr connect
Alternatively, you can run the command below:
tnr connect <instance_ID>
Instances cannot be restarted once deleted, so always back up important data before destroying them. Use tnr status to see instance IDs (default 0).

Port Forwarding

Connect with port forwarding using the -t or --tunnel flag:
tnr connect <instance_ID> -t <port_1> -t <port_2>
Features:
  • Forward multiple ports using repeated -t/--tunnel flags
  • Example: tnr connect 0 -t 8000 -t 8080 forwards both ports 8000 and 8080
  • Enables local access to remote web servers, APIs, and services
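As a concrete sketch: the port number and the use of Python's built-in http.server module below are assumptions for illustration, not part of the CLI itself:

```shell
# On the instance: serve the current directory on port 8000
python3 -m http.server 8000

# Locally, in another terminal: connect with the port forwarded...
tnr connect 0 -t 8000
# ...then the instance's server is reachable on your machine
curl http://localhost:8000/
```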

Copy Files

Transfer files between local and remote instance with the scp command:
tnr scp <source_path> <destination_path>
You can transfer files in either direction: from your local machine to an instance, or from an instance to your local machine. The direction is indicated by the path format:
  • Remote: instance_id:path (e.g., 0:/home/user/data)
  • Local: Standard paths (e.g., ./data or /home/user/file.txt)
  • Must specify exactly one remote and one local path
  • Paths can be either absolute or relative.
Examples:
# Upload to instance
tnr scp ./local_file.txt 0:/remote/path/

# Download from instance
tnr scp 0:/remote/file.txt ./local_path/
File transfers have a 60-second connection timeout. SSH key setup, compression, and ~/ expansion are handled automatically.

Instance Lifecycle

Instances cannot be modified after creation. To change configuration (GPU, vCPU, disk size, or mode), create a new instance with the desired settings and migrate your data before deleting the original.
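A migration might look like the following sketch, using only the commands documented on this page. The instance IDs (0 for the original, 1 for the replacement) and the /home/ubuntu/project path are assumptions:

```shell
# Create the replacement instance with the new configuration
tnr create --mode prototyping --gpu a100 --vcpus 16 --template base --disk-size-gb 200

# Copy data from the old instance (ID 0) to your machine, then onto the new one (ID 1)
tnr scp 0:/home/ubuntu/project ./project-backup
tnr scp ./project-backup 1:/home/ubuntu/

# Delete the original only after verifying the data arrived
tnr delete 0
```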

View Instance Status

List all instances and details including instance_ID, IP Address, Disk Size, GPU Type, GPU Count, vCPU Count, RAM, and Template:
tnr status
Use the --no-wait flag to disable automatic monitoring for status updates.