Hardware

NVIDIA L40 vs L40S

Last update:
May 6, 2026

Understanding the difference between the NVIDIA L40 and L40S can be confusing. Both GPUs share the same Ada Lovelace architecture, memory configuration, and form factor, making them look nearly identical on paper. However, the differences become clear when you dig into AI performance.

This guide breaks down the NVIDIA L40 vs L40S across specs, performance, and real-world AI workloads.

Takeaways

<ul><li><strong>The L40S doubles FP16 and FP8 tensor throughput</strong>, making it substantially faster for AI workloads.</li><li><strong>The L40S delivers faster results</strong>, speeding up LLM training and high-throughput inference.</li><li><strong>The L40 is cheaper to rent</strong>, making it more cost-effective for mixed or lighter workloads.</li></ul>

[Image: NVIDIA L40S data center GPU]

Quick Comparison

This table highlights key differences between the NVIDIA L40 and L40S.

| Specification | NVIDIA L40 | NVIDIA L40S |
| --- | --- | --- |
| Architecture | Ada Lovelace | Ada Lovelace |
| CUDA Cores | 18,176 | 18,176 |
| GPU Memory | 48 GB GDDR6 ECC | 48 GB GDDR6 ECC |
| Memory Bandwidth | 864 GB/s | 864 GB/s |
| FP16 Tensor Performance | 181 TFLOPS | 362 TFLOPS |
| FP8 Tensor Performance | 362 TFLOPS | 733 TFLOPS |
| AI TOPS | 362 | 733 |
| Power (TBP) | 300 W | 350 W |
| Launch Date | Oct 2022 | Aug 2023 |
| Thunder Compute pricing | $0.89/hr | $0.99/hr |

<ul><li><a href="https://www.nvidia.com/en-us/data-center/l40s/#specsmodal" target="_blank" rel="noopener">NVIDIA L40S Specs</a></li><li><a href="https://images.nvidia.com/content/Solutions/data-center/vgpu-L40-datasheet.pdf" target="_blank" rel="noopener">NVIDIA L40 Datasheet</a></li></ul>
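The table numbers can be turned into a rough price-performance figure. This is a back-of-envelope sketch using the dense FP16 TFLOPS and Thunder Compute hourly rates from the table above; other providers' rates are different inputs to the same calculation.

```python
# Cost per FP16 TFLOP-hour, using spec-sheet TFLOPS and the
# hourly rental prices from the comparison table above.

def dollars_per_tflop_hour(price_per_hour: float, tflops: float) -> float:
    """Hourly rental price divided by dense FP16 tensor TFLOPS."""
    return price_per_hour / tflops

l40 = dollars_per_tflop_hour(0.89, 181)   # L40: 181 TFLOPS at $0.89/hr
l40s = dollars_per_tflop_hour(0.99, 362)  # L40S: 362 TFLOPS at $0.99/hr

print(f"L40:  ${l40:.4f} per FP16 TFLOP-hour")
print(f"L40S: ${l40s:.4f} per FP16 TFLOP-hour")
```

Even at the higher hourly rate, the L40S works out to roughly half the cost per unit of tensor compute, which is why tensor-bound workloads tend to favor it.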

L40 vs L40S Differences

Tensor performance is the main difference in the L40 vs L40S comparison. The L40S roughly doubles dense FP16 and FP8 tensor throughput, which is critical for modern AI training and inference.

Both GPUs share the same CUDA core count, memory capacity, and memory bandwidth, but the L40S is tuned for AI-heavy workloads. This makes it a stronger choice for scaling deep learning pipelines.

Power Consumption

The L40 can consume up to 300 W, while the L40S raises this to 350 W. The extra power headroom helps the L40S sustain higher clocks under heavy tensor loads, which contributes to its higher throughput.
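The higher power draw does not mean worse efficiency. A quick performance-per-watt sketch using the AI TOPS and TBP figures from the comparison table above:

```python
# AI TOPS per watt, from the quick-comparison table.
l40_tops, l40_watts = 362, 300
l40s_tops, l40s_watts = 733, 350

print(f"L40:  {l40_tops / l40_watts:.2f} TOPS/W")
print(f"L40S: {l40s_tops / l40s_watts:.2f} TOPS/W")
```

Despite the 50 W increase, the L40S delivers noticeably more AI compute per watt, so the added power is not wasted headroom.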

Large Language Model (LLM) Training

For LLM training, tensor performance is the primary constraint. The L40S delivers roughly double the FP16 and FP8 throughput.

Benchmarks consistently show faster training for transformer-based models on the L40S. This often leads to lower overall infrastructure costs despite the slightly higher hourly price.
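To see why a faster GPU can be cheaper overall, compare total job cost rather than hourly rate. This is a hypothetical example: the 100-hour baseline and the 1.8x speedup are assumptions for illustration (the 2x tensor throughput is an upper bound; real-world training speedups are usually somewhat lower), while the hourly prices come from the table above.

```python
# Total cost of a training job = runtime in hours * hourly price.

def job_cost(hours: float, price_per_hour: float) -> float:
    return hours * price_per_hour

baseline_hours = 100          # assumed L40 runtime for some training run
assumed_speedup = 1.8         # assumed real-world L40S speedup (not a benchmark)

l40_cost = job_cost(baseline_hours, 0.89)
l40s_cost = job_cost(baseline_hours / assumed_speedup, 0.99)

print(f"L40:  ${l40_cost:.2f}")
print(f"L40S: ${l40s_cost:.2f}")
```

Under these assumptions the L40S finishes the same job for markedly less total spend, even though it costs more per hour.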

Inference Throughput

Inference workloads also benefit from higher tensor throughput. The L40S can handle more concurrent requests and generate tokens faster.

This makes it ideal for production environments where latency and throughput directly impact user experience. Still, the L40 remains viable for lower-scale deployments.

Use Cases

When to Use the L40

The L40 is well-suited for mixed workloads that include rendering, simulation, and moderate AI usage. It provides strong performance without pushing power consumption higher.

It is also a good option for teams entering AI workflows or running smaller models. Cost efficiency remains its main advantage.

When to Use the L40S

The L40S is optimized for AI-first environments. It excels in large-scale model training, high-throughput inference, and generative AI applications.

If your workload depends heavily on tensor operations, the L40S delivers significantly better performance. This makes it the preferred choice for scaling AI infrastructure.

Final Thoughts on NVIDIA L40 vs L40S

In the NVIDIA L40 vs L40S comparison, the distinction is performance versus cost. The L40 offers balanced capabilities at a lower hourly price, while the L40S unlocks substantially higher AI throughput.

For most AI-driven teams, the L40S provides better long-term value. Faster training and inference can outweigh the modest increase in power and cost.

Compare pricing for the L40 and L40S across providers.
