We Cut Support Tickets by Rewriting Error Messages for LLMs
If you can't beat 'em, join 'em
Published:
Aug 20, 2025
Last updated:
Aug 20, 2025

The first thing many people do when they hit a bug is ask ChatGPT. For common issues, it’s great: a quick answer without having to dig through StackOverflow.
Where this breaks down is niche problems with little public context. As a startup, we don’t have years of StackOverflow threads and GitHub issues, so the model often guesses while sounding confident. That sends users down rabbit holes.
We’ve compensated with hands-on support. Our team lives in Discord and usually replies within minutes. We run a GPU cloud platform and have seen most Linux/PyTorch/AI tooling weirdness, so we can point people to the right steps fast.
That said, it’s expensive. Time spent diagnosing why an image-generation model is running out of memory is time not spent on our product. Worse, well-meaning ChatGPT commands can push a VM into bizarre states and convince users that our platform is the issue.
One memorable case: ChatGPT convinced a user their filesystem was corrupted. The real issue was a bad package name in requirements.txt. The user kept pasting our replies back into ChatGPT, which doubled down on the corruption theory. After an hour, we tried something different:
We replied with a carefully worded response designed to serve as a prompt that would lead the model to the correct conclusion.
They pasted it, ran the right commands, and immediately confirmed the instance was fine.
That led us to a broader idea: error messages can be written with enough context for an LLM to recommend the correct fix.
For example, instead of showing:
“You are currently using 99% of your instance’s CPU memory.”
we now show:
“You are currently using 99% of your instance’s CPU memory. Expand its memory by adding more vCPUs with tnr modify <instance_id> --vcpus <new_vcpu_count>.”
This small change eliminated support requests for OOM errors. Users also get a better experience: the error message guides them straight to the solution, so they skip the debugging loop entirely.
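The pattern is simple enough to sketch. Here is a minimal, hypothetical illustration of composing an error message that pairs the symptom with the exact remediation command, so both a human and an LLM reading the pasted error land on the same fix. The `tnr modify` command comes from the example above; the helper function and instance ID are invented for illustration.

```python
def cpu_memory_error(instance_id: str, used_pct: int) -> str:
    """Build an OOM warning that states the problem *and* the exact fix.

    An LLM that sees this message pasted into a chat has everything it
    needs to recommend the correct command instead of guessing.
    """
    return (
        f"You are currently using {used_pct}% of your instance's CPU memory. "
        f"Expand its memory by adding more vCPUs with "
        f"tnr modify {instance_id} --vcpus <new_vcpu_count>."
    )

# Example: the message a user at 99% memory would see (instance ID is made up).
print(cpu_memory_error("i-1234", 99))
```

The key design choice is that the message carries the fix inline rather than pointing to docs: a link gets summarized away or ignored, but a literal command survives a copy-paste into a chat window.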
In a way, well-designed error messages have turned ChatGPT into our front line of support, letting us get back to building.

Carl Peterson