Top 5 Kubernetes Load-Testing Tools and How They Compare

Kubernetes is a popular choice for running cloud workloads – and for good reason. It's a powerful tool for orchestrating your applications.

It might be tempting to think that Kubernetes can handle it all. In many cases it can, but it’s always smart to know your application’s limits.

Load testing in Kubernetes provides several benefits:

  • Cohesive insight into your application’s performance
  • Verification of the load capacity of your application
  • Verification of your infrastructure
  • A reduced risk of server crashes

After reading this article, you’ll be equipped to determine which tools would best serve you for load testing your application.

K8S Performance Testing Essentials

Maybe you’re a webshop, and Black Friday is around the corner, so you want to be sure you can handle a big increase in traffic. Maybe you’re a SaaS product that integrates with customers’ websites, so you want to have the best performance possible. Maybe you just want to make sure you know when your application breaks.

Once you’ve figured out your why, you can start load testing. For this, you need specialized tools that can simulate large amounts of traffic.

When load testing APIs, you can generally use any general-purpose load-testing tool. However, when your applications run in Kubernetes, you can get much more specialized.

In Kubernetes, it’s much easier to spin up new pods, so you don’t have to run the load test against your existing infrastructure. It’s also easier to integrate your load tests into your CI/CD pipelines with the right tools.

Five K8S Load Testing Tools

Choosing a tool that not only accomplishes the work you need but is also easy to use and easy to debug is important. We will approach each tool with the mindset of a developer tasked with setting up a load test in a modern infrastructure.

The following five tools will be judged on how easy they are to get started with, how well they integrate with CI/CD systems, and how easy their documentation is to use.

Effortless Traffic Replay with Speedscale

Image of Speedscale UI

From the moment you enter the Speedscale UI, it’s clear that ease of use is a priority. It’s also clearly built for use with Kubernetes.

Speedscale has developed its own “speedctl” CLI tool, which you can use to configure your Kubernetes cluster. From then on, everything is configured using annotations on your deployments.

It’s not necessary to have a ton of Kubernetes knowledge to get going, since the documentation is well written and the tool is fairly straightforward.

Speedscale’s convenient features include:

  • Automatic traffic replay via a sidecar proxy
  • Automatically generated mocks for your backend and/or third-party APIs
  • Easy-to-understand reports

In practice, these features combine into a product that can build a mock-server container, which then becomes part of the replay itself. That’s quite an advantage over traditional load testing.

For example, you don’t need to set up huge clusters to execute a load test, or production-grade pods to execute the replay. This makes simulating high inbound/backend traffic much cheaper.

Plus, mocked containers can generate chaos, like variable latency, 404s, and unresponsiveness. This way you’re not just testing your service in optimal conditions. You can also see how it responds to unstable dependencies.

To get results after running the test, take a look in the web UI, which shows high-level statistics like the 95th and 99th percentile response times. It also shows each request in its entirety, so you can dive as deep as you want into any test report.

Speedscale offers documentation on integrating with a number of CI/CD providers, so it should be easy to get the tool into your own pipeline. If your provider isn’t on the list, they also offer instructions for running the tool as a script in any CI/CD provider.

Something special about Speedscale is how well it integrates with Kubernetes. You create a deployment with the right annotations, and the operator you install via speedctl takes care of the rest. It takes very little work to set up, and close to no work in between tests other than a kubectl apply. You can easily automate even that with something like Helm.

Write in JavaScript with K6

K6 report overview

If you are used to writing JavaScript, then K6 is right up your alley. The tests are written in plain JavaScript, using a library you can import.

Even if you don’t know JavaScript, the syntax is fairly simple. Once you’ve written the test, you run it with the k6 CLI tool, which prints the results to the terminal. It’s even possible to use other tools to generate k6 tests.
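To give a sense of what that looks like, here's a minimal sketch of a k6 test. The target URL, virtual-user count, and duration are placeholders you'd swap for your own values:

    // Minimal k6 test sketch: 10 virtual users hitting one endpoint for 30 seconds.
    import http from 'k6/http';
    import { check, sleep } from 'k6';

    export const options = {
      vus: 10,          // number of concurrent virtual users
      duration: '30s',  // how long to keep the load running
    };

    export default function () {
      // Replace with an endpoint of your own application.
      const res = http.get('https://test.k6.io/');
      check(res, { 'status is 200': (r) => r.status === 200 });
      sleep(1); // think time between iterations
    }

Running it is a single command: k6 run script.js.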

Since the results land right in your terminal, you get feedback on your test incredibly fast. The view is a bit limited, though, since a terminal can't show graphs.

As a workaround, K6 offers a cloud solution that not only shows the results in the terminal but also uploads them to K6’s servers, where you can view the results in the web UI. This allows you to see the data more neatly organized – with graphs, for example.

If you’re a fan of tooling that doesn’t require a ton of setup, then K6 may be a great fit for you. It has an easy-to-use CLI, and you don’t have to install much in your environment. However, be aware that the tool is limited when it comes to running complex tests or a variety of scenarios. If you’re interested in knowing more about the exact performance of k6, check out this write-up.

K6 has a Docker image available, so you don’t have to install anything. Just write the test file and run it. If you want to incorporate it into your CI/CD pipelines, K6 has documentation for setting this up.
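If you gate a pipeline on the result, k6's built-in thresholds option is the usual mechanism: when a threshold fails, k6 exits with a non-zero code, which fails the CI job. Here's a minimal sketch, with placeholder limits you'd tune to your own service:

    import http from 'k6/http';

    export const options = {
      vus: 20,
      duration: '1m',
      // If any threshold is breached, k6 exits non-zero and the CI job fails.
      thresholds: {
        http_req_failed: ['rate<0.01'],    // less than 1% of requests may fail
        http_req_duration: ['p(95)<500'],  // 95th percentile must stay under 500 ms
      },
    };

    export default function () {
      http.get('https://test.k6.io/'); // placeholder endpoint
    }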

If you decide to use the local version of K6, there’s no cost. For their cloud offering, there’s a developer plan for $59 USD per month and a team plan for $399 USD per month.

A Classic Feel with JMeter
JMeter GUI

JMeter may be the oldest entry on this list, and it feels like it. I’ll get into performance in a minute, but the developer experience is not the greatest.

It’s a Java application that only runs locally, which can make it challenging to get running in the first place. Besides that, the documentation is comprehensive but can be confusing, leading to a difficult experience setting up the tool.

Because the tool runs locally, you’ll get your results fairly quickly. They print out nicely on screen whether you use the CLI or the GUI version. Note that JMeter recommends running any heavy load tests with the CLI version.

If you’re looking to integrate load tests into your CI/CD pipelines, another tool might be better. It’s possible to use JMeter in an automated system, but it’s clear the tool wasn’t built for that purpose, and you won’t find any official setup instructions.

It does have the advantage of being completely free! You can also check out a more direct comparison of Speedscale vs JMeter.

Gatling for the Classic Enterprise
Gatling HTML Report

Getting started with Gatling can be a task in itself. Their documentation is confusing, with no clear path toward getting a load test set up. The fact that they offer entire courses and an academy built around the product says something. This is one of the biggest differences when comparing Speedscale vs Gatling.

Once you do get it set up, you’ll have to write the load test scripts in Gatling’s own domain-specific language (DSL). That may mean a bigger feature set, but it also makes setting up Gatling a more involved task.

There don’t seem to be any direct integrations with CI/CD systems, nor any guides on setting one up manually. So, for an automated solution, another tool might be a better choice. As for pricing, you can get a starter license for €400 per month or pay $3 USD per hour on either AWS or Azure.

UI Directory Structure with ReadyAPI
ReadyAPI GUI

ReadyAPI won’t please everyone. It’s an excellent application, but it’s not modern. If you like a classic UI where you download a desktop application and navigate with a UI directory structure, then ReadyAPI is great!

If you like a more modern design, maybe even a CLI tool, then ReadyAPI is not for you. The tool does what you want it to, so it’s definitely not a bad option. However, for a “modern” developer, the experience is a bit lacklustre.

Even though the design may feel rather classic, ReadyAPI integrates directly with many CI/CD systems. However, integrating with these systems isn’t exactly the easiest of tasks, and personally, I’m not a fan.

If you want to use ReadyAPI, there are different plans: a license for the basic API test module costs €679 per year, while the API performance module costs €5,726 per year.

Best Practices for Load Testing and Performance Testing

Getting started is easy, and with any of these tools you can run a load test within minutes. However, when doing load testing or performance testing:

  • Be wary of cohesion and coupling
  • Consider your infrastructure
  • Consider implementing mocks
  • Keep track of your tests

Be wary of cohesion and coupling. If your services are too tightly coupled, you can soon end up testing parts you didn’t initially intend to test.

Consider your infrastructure. This relates to the previous point: you need to consider how things are tied together. Running load tests without proper preparation may cause unintended load spikes elsewhere in your infrastructure.

Consider implementing mocks if you don’t intend to load test your entire platform, but instead just want to test a single part, like when doing pull request checks.
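As an illustration, a mock of a downstream dependency can be as small as a few lines. Here's a hand-rolled Node.js sketch (the route and payload are invented for the example); in practice, a tool like Speedscale can generate these mocks for you:

    // Minimal stand-in for a downstream dependency, used only during the load test.
    // Point the service under test at this server's address instead of the real dependency.
    const http = require('http');

    const server = http.createServer((req, res) => {
      if (req.method === 'GET' && req.url === '/v1/price') { // hypothetical route
        res.writeHead(200, { 'Content-Type': 'application/json' });
        res.end(JSON.stringify({ price: 42, currency: 'EUR' })); // canned response
      } else {
        res.writeHead(404);
        res.end();
      }
    });

    server.listen(8080, () => console.log('mock dependency listening on :8080'));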

Keeping track of your tests isn’t strictly necessary to run a performance test, but it’s crucial for understanding how your platform evolves over time.

You can read a more comprehensive list of considerations to make when running a load test here.

What is the Best Kubernetes Load-Testing Tool?

Keeping in mind that the goal is to load test Kubernetes specifically, the two clear winners are Speedscale and K6. They each have advantages. If you want a quick and simple load test setup, K6 is very easy to get started with.

That’s not to say that Speedscale is tough to set up, but it does require installing an operator in your Kubernetes cluster. That makes it the better tool for those who want deep integration with their cluster.

For a here-and-now test, consider K6. If you’re looking to integrate load tests directly into your Kubernetes workflows and get the added advantages of that, you might want to go with Speedscale. And since Speedscale can create k6 tests, it’s worth a look even if you’re already using k6.

The main advantage is that you’ll get your tests created much faster, with some users reporting creation times reduced from three days to three hours.

If you’re still not sure, take a look at our direct comparison of Speedscale vs K6.
