
In this article, you'll be introduced to two load testing tools that both work with Kubernetes: Speedscale and K6. Throughout this post you'll get a comparative view of how each tool performs in five categories: ease of setup, developer experience, working with the CLI, creating tests, and integration into CI/CD pipelines.

Companies have been deploying workloads to Kubernetes for quite some time now, and many are running those workloads effectively, even taking advantage of the new capabilities Kubernetes provides. But many are still using the same tools they used when their services ran on regular virtual machines. One of the areas many teams don't think to innovate on when they move their workloads to Kubernetes is load testing. You can still use your regular tools, but have you considered the advantages you might get from load testing tools that integrate directly with Kubernetes?

First of all, you still get the same benefit you get from a regular load testing tool: making sure your application can handle the load you expect it to handle. Second, you get the added benefit of being able to test directly from within your cluster, removing the need to open up your service to the outside world, or even to other parts of your network. Most importantly, with a tool that integrates into Kubernetes, you can manage more of your infrastructure in the same way, rather than hosting your applications and your tools separately.

Ease of Setup: Speedscale vs. K6

When you first get started with Speedscale you are guided through a quick start guide, which will show you the two different ways you can install Speedscale. You can either use Helm, in which case you will be shown the exact commands you need to run, including adding the Speedscale repo to Helm, as well as defining all the appropriate values in the helm install command. You’re also shown how you can enable Data Loss Prevention (DLP) via Helm, so Speedscale doesn’t receive any sensitive data.
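As a rough sketch, the Helm path looks like the following. The repo URL, chart name, and value names here are assumptions based on a typical Speedscale setup; copy the exact commands from the quick start guide itself.

```shell
# Assumed repo URL and chart/value names -- the quick start guide shows the exact ones.
helm repo add speedscale https://speedscale.github.io/operator-helm/
helm repo update

# The API key and cluster name are placeholders.
helm install speedscale-operator speedscale/speedscale-operator \
  --namespace speedscale --create-namespace \
  --set apiKey="$SPEEDSCALE_API_KEY" \
  --set clusterName=my-cluster
```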

If you prefer not to work with Helm, you can instead choose to use the speedctl CLI tool offered by Speedscale. The quick start guide will first show you how to install speedctl, either by using brew or by using an install script. Once speedctl is installed, Speedscale is installed into your Kubernetes cluster by running speedctl install, which of course is also stated in the quick start guide.
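Sketched out, the speedctl path is even shorter. The brew tap name below is an assumption; the quick start guide shows the exact tap or install-script command to use.

```shell
# Assumed brew tap name -- the quick start guide shows the exact command.
brew install speedscale/tap/speedctl

# Installs the Speedscale operator into the cluster your kubeconfig points at.
speedctl install
```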

Once you’ve followed either of these options, you are ready to use Speedscale in your cluster. Thankfully, you’re not left on your own, as the quick start guide gives you a quick overview of how to observe a service using Speedscale, followed by an overview of how to replay traffic.

Getting K6 installed is also fairly straightforward. You can find installation instructions by going to the official K6 documentation, where you’ll be provided with a few different installation options. Most likely you will be using brew install k6 to install K6, however there’s also a winget or choco option for Windows. With K6 installed, you can click further through the documentation to get instructions on how to run K6 and execute tests.
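The installation commands from the K6 documentation boil down to a single package-manager call per platform:

```shell
# macOS
brew install k6

# Windows
winget install k6 --source winget
# or
choco install k6

# Verify the installation
k6 version
```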

The biggest difference between Speedscale and K6 in terms of setup is that Speedscale installs into a Kubernetes cluster by default, with the option of installing a CLI tool. K6, on the other hand, is purely a local CLI tool by default, with the option of installing an operator into Kubernetes. This is a testament to the focus of each tool: Speedscale wants to be integrated into your workflow and continuously provide value, whereas K6 focuses on helping you run load tests when needed.

Common to both tools is that installation is incredibly simple, with clear instructions for getting each tool configured. Neither tool stops at installation instructions; both provide a path to getting started. Getting started with Speedscale does require some prior knowledge of Kubernetes, as well as a Kubernetes cluster to install the tool into. K6 doesn't require any prior knowledge at all, at least to install. More on that in the next section.

Upon installation, Speedscale doesn't expose many configuration options, as most configuration is done through test configs once the tool is running. K6 has no installation-time options at all, since all configuration is done as part of writing and running tests.

Developer Experience: Speedscale vs. K6

When you first get started with Speedscale everything works out of the box, and you can almost immediately start running tests. You just need to instrument your services with the Speedscale sidecar, which is easily done by adding a few annotations. More details on that to follow. Should you want to use any custom test configurations, those are easy to create as well.
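As a sketch, instrumenting a workload comes down to setting the sidecar annotation on the pod template. The deployment name below is hypothetical; the annotation key is the one shown later in this article.

```shell
# Hypothetical deployment name "checkout"; the sidecar.speedscale.com/inject
# annotation on the pod template is what triggers sidecar injection.
kubectl patch deployment checkout --type merge -p \
  '{"spec":{"template":{"metadata":{"annotations":{"sidecar.speedscale.com/inject":"true"}}}}}'
```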

The biggest advantage of Speedscale is in the inherent way it works with API traffic and Kubernetes. By examining traffic going into and out of your APIs, the platform automatically identifies the backend endpoints you need. Backends are automatically mocked by modeling the traffic, which makes environments for running the tests a non-issue. Tests are run by adding specific annotations to your Kubernetes deployment. Since you can assume Kubernetes knowledge when using a Kubernetes load testing tool, this makes Speedscale an incredibly easy tool to use for load testing: you don't need to learn any new language. It is possible to define test configurations through the web UI or as JSON, but there's a high likelihood you won't ever need to do so.

Speedscale is in many ways a set-and-forget solution to load testing. The way it manages necessary backends for you is a huge accelerator. Once it’s configured and integrated into your workflow, it’s unlikely you will have to think about it again unless you want to make changes to your configuration.

With K6 there's one major thing to note, which is that tests are written in JavaScript. If you already know JavaScript, that's great, and you will have an easy time working with the tool. However, if you don't know JavaScript, you will have to account for it being part of your learning journey as well. The big advantage of K6 using JavaScript to configure tests is that in many cases you've got the whole JavaScript community to help you. For example, if you need to figure out how to create a loop, that's not K6 specific, that's just regular JavaScript.

Once you've gotten the hang of K6 and the way it uses JavaScript, it's fairly easy to use. Tests are executed using the k6 command line, which runs them locally, or you can deploy a CRD to Kubernetes if you have the operator installed. Alternatively, you can use the K6 cloud offering to run the tests.
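As a sketch, running a test through the operator means applying a custom resource. The resource kind and fields below follow the k6-operator documentation, with the test script assumed to live in a ConfigMap named my-test.

```shell
# Assumes the k6 operator is installed and the script was loaded with e.g.:
#   kubectl create configmap my-test --from-file=script.js
kubectl apply -f - <<'EOF'
apiVersion: k6.io/v1alpha1
kind: TestRun
metadata:
  name: load-test
spec:
  parallelism: 1
  script:
    configMap:
      name: my-test
      file: script.js
EOF
```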

While Speedscale focuses on providing not only the tests but also the environments and proper data to be an integrated part of your infrastructure, K6 is more focused on providing load tests when you need them. This heavily influences how the developer experience is perceived. In essence, both tools are easy to use. Speedscale has a bit of a steeper learning curve, but in the end supports a much more advanced use case than just running a simple load test. You also get access to service mocking, a service map, and contract testing.

K6 focuses on providing a strong CLI tool for performing load tests, whereas Speedscale also offers a WebUI dashboard from which you can configure and deploy tests. If you're the type of engineer who heavily prefers working in the terminal, K6 is a great option. If you also want a graphical interface, Speedscale is the clear winner.

Working with the CLI

Speedscale has the option of downloading the speedctl CLI tool, from which you can accomplish all the tasks you would need when using Speedscale. The help menu in speedctl is incredibly verbose, with a clear overview of what commands are available, what flags are available, and how they're used. The CLI is a great alternative to the WebUI if you're a fan of using the terminal, and you'll find all the same features in the CLI as in the WebUI. speedctl lets you use both shorthand flags, like -m, and their full versions, like --capturemode. This approach has very much become a de facto industry standard, and is incredibly helpful: you can use the shorthand flag when typing out and executing a command in the terminal, and then use the full version in scripts to make them more understandable to others.

K6 has no WebUI, so in the CLI you will find all the functionality provided by K6. You can create your tests, inspect a script, pause a test, etc. As covered in the section on developer experience, the K6 CLI is fairly straightforward to use. It’s clear there’s been a focus on ease of use when developing the CLI. Just like speedctl, it’s possible to use both shorthand and full flags when configuring a command, bringing with it the same benefits.

Ultimately, neither CLI has a clear advantage over the other. Both tools bring the functionality you'd expect from any quality CLI, like a verbose help menu and accurate descriptions of flags.

Creating Tests

One of the most important aspects of a load testing tool is how easy it is to create tests. Here is a high-level overview of how a simple test is configured in each tool.


You can't create a test the moment Speedscale is installed. Speedscale focuses on working with your actual data, and as such you can only create tests based on actual traffic from your cluster. To illustrate, imagine you have a service that exposes a single API endpoint, which interacts with five third-party APIs. When you hit that API endpoint, the Speedscale Operator will record the traffic and save it, provided you've instrumented your API with the Speedscale sidecar.

From here, you can create a snapshot of that traffic, which is then the basis for your tests. A snapshot represents traffic recorded in a given period. When you’ve created a snapshot, you run a test by kicking off the Replay Wizard in the web UI, or manually by adding the following annotations to your service:

    replay.speedscale.com/snapshot-id: "1e4c2995-9acb-4fa7-af1b-9c757e17ed55"
    replay.speedscale.com/testconfig-id: "standard"
    replay.speedscale.com/cleanup: "inventory"
    sidecar.speedscale.com/inject: "true"

The snapshot-id refers to the snapshot you've just created, so the Speedscale Operator knows what requests need to be sent to your Service-under-Test (SUT). The testconfig-id refers to the test configuration you're using. The "standard" test config performs a one-to-one replay of the recorded traffic. Another default test config is "performance_100replicas", which replays the recorded traffic 100 times over in rapid-fire fashion. cleanup defines what Speedscale should do once the test is done running, with "inventory" instructing Speedscale to return the cluster to its original state. Lastly, inject specifies that after cleanup the SUT should continue to be orchestrated by Speedscale.

It should be mentioned that as part of a test config you also define a set of assertions. So, as part of your load test, you can verify not just that your service can handle the load you're applying, but also that the data it returns matches what you expect. This is a big advantage in a CI/CD scenario.


The example below is taken from the official K6 documentation.

The first step to creating a test in K6 is to create a JavaScript file. In this file, you write out your test in plain JavaScript. A simple test could look like this:

import http from 'k6/http';
import { sleep } from 'k6';

export default function () {
  http.get('https://test.k6.io');
  sleep(1);
}
This simple test sends a request to https://test.k6.io and waits a second. You can then run the test by executing k6 run script.js, with script.js being the name of the file containing your test. From here, you can make the test more intensive, for example by putting more load on your service with k6 run script.js --vus 10 --duration 30s. A VU is a Virtual User, so this command runs the test with 10 concurrent users for 30 seconds. It's also possible to specify these options directly in your test file:

import http from 'k6/http';
import { sleep } from 'k6';

export const options = {
  vus: 10,
  duration: '30s',
};

export default function () {
  http.get('https://test.k6.io');
  sleep(1);
}

Creating tests with K6 is fairly easy, but you do have to learn JavaScript in the process.

An interesting capability of Speedscale is that it can leverage recorded traffic to autogenerate K6 scripts for you. If you have an established K6 testing practice, you may be able to use this record-and-playback capability to expedite test creation, and also utilize Speedscale's mocks.

How Do They Compare?

Speedscale requires more setup before you can create a test, whereas K6 tests can be created without any prior setup. Which tool has the advantage depends heavily on your use case. K6 can test your services quickly, but you will inevitably end up testing the dependencies of your service as well, unless you've managed to mock them with some other tool.

Speedscale has the major advantage that it’s able to mock calls to dependencies by default, as your services are instrumented with a Speedscale sidecar. Because of this, you can be sure that you are only testing the single service you’re interested in, and not all of the dependencies. This is extremely useful when you have third-party APIs that can be rate-limited.

So, while tests in Speedscale take a bit more effort to create, you should also consider the advantages you're getting, and whether they're something you need. Also, with Speedscale it's only the initial setup that takes time. Once your services are instrumented, creating tests is a quick affair.

Integrating into CI/CD

Speedscale has a big focus on being able to integrate seamlessly into CI/CD scenarios with their load testing capabilities. The ability to run tests by deploying a simple Kubernetes manifest file makes it easy to run tests from your CI/CD pipelines. The gist of using Speedscale in CI/CD goes as follows:

Start by deploying your tests, patching your service to include the specific annotations covered previously. Once the test is done running, check the report using speedctl, which returns the status as an ENUM. Because the status is an ENUM, you can easily create a gated check, making sure your pipeline only proceeds when the test is successful.
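The gated check itself can be sketched as a small shell function. The "Passed" status value and the way the status is fetched are assumptions; in a real pipeline the argument would come from a speedctl report command.

```shell
# Gate a pipeline stage on the report status ENUM. In a real pipeline the
# status string would be fetched with speedctl; here it is passed in directly.
gate_on_status() {
  case "$1" in
    Passed) echo "load test passed"; return 0 ;;
    *)      echo "load test failed with status: $1" >&2; return 1 ;;
  esac
}

gate_on_status "Passed"
```

Because the function exits nonzero for any status other than "Passed", the surrounding CI stage fails automatically when the load test misses its goals.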

Because integration into any CI/CD pipeline is done by implementing a script, Speedscale should work with any CI/CD provider, as you can see in their documentation.

K6 is fairly easy to integrate into almost any CI/CD provider, as it's generally run as a script, either locally or through a Docker container. This makes it incredibly flexible; however, you won't find instructions for specific systems on the documentation page. Instead, you'll have to check whether there's a relevant blog post for your CI/CD provider, like there is for GitLab CI/CD.


Conclusion

If you're looking for a tool that integrates deeply with Kubernetes by default, and utilizes specific Kubernetes features like sidecars, Speedscale is likely the tool you want. It can quickly become a huge advantage, as it integrates directly into your workflow and makes load testing a direct part of your deployment procedure.

On the other hand, K6 can also be integrated into your workflow; however, it's more focused on providing tests that are run once to validate a service, rather than every time a service is deployed.

Both services are great options, and which one you choose will depend on what you need. Do you want a tool that can become a big part of your infrastructure, or do you need one that can help with validation whenever you need it?

If you are interested in comparing more load testing tools for Kubernetes, check out our roundup of the 5 best load testing tools for Kubernetes.
