Thunder Compute gives indie developers, researchers, and data scientists instant access to affordable cloud GPUs. Our pre-configured instance templates set up popular AI stacks automatically, so you can run LLMs or generate AI images within minutes.

AI Templates on Cheap Cloud GPUs

We currently offer:
  • Ollama – launches an Ollama server for open-source large language models
  • ComfyUI – installs ComfyUI for fast AI-image generation workflows

Deploy a Template

  1. Create an instance
# Launch an Ollama instance
tnr create --template ollama

# Launch ComfyUI
tnr create --template comfy-ui
  2. Connect to the instance
tnr connect 0   # replace 0 with your instance ID
Port forwarding is handled automatically when you connect. The -t flag is unnecessary.
  3. Start the service
# Ollama
start-ollama

# ComfyUI
start-comfyui
Required ports forward to your local machine automatically.
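If you want to confirm a forwarded port is actually reachable before pointing a client at it, a quick TCP probe from your local machine works. This is a generic sketch (the helper name `port_is_open` and the default timeout are our own, not part of the Thunder Compute CLI):

```python
import socket

def port_is_open(port: int, host: str = "localhost", timeout: float = 1.0) -> bool:
    """Return True if a TCP service is accepting connections on host:port."""
    try:
        # create_connection attempts a full TCP handshake and raises on failure
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: check the default Ollama (11434) and ComfyUI (8188) forwards
for port in (11434, 8188):
    print(port, "open" if port_is_open(port) else "closed")
```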

Template Details

Ollama Template

  • Forwards port 11434
  • Access the API at http://localhost:11434
  • Ready for popular Ollama models
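Once the forward is up, you can talk to Ollama's HTTP API from your local machine with nothing beyond the standard library. A minimal sketch using the `/api/generate` endpoint (the model name `llama3` is an example; pull whichever model you plan to use first):

```python
import json
import urllib.request

# Ollama listens on the forwarded port, so localhost works from your machine
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_generate_request(model: str, prompt: str) -> bytes:
    """Serialize a non-streaming generate request body for Ollama's API."""
    payload = {"model": model, "prompt": prompt, "stream": False}
    return json.dumps(payload).encode("utf-8")

def generate(model: str, prompt: str) -> str:
    """POST the prompt to the local Ollama server and return the response text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_generate_request(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

For example, `generate("llama3", "Why is the sky blue?")` returns the model's full completion as a string.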

ComfyUI Template

  • Forwards port 8188
  • Mounts the ComfyUI directory to your Mac or Linux host
  • UI at http://localhost:8188
  • Includes common nodes and extensions

Need Help?

Run into a problem or have a question? Reach out to our support team any time.