Overview


A Kubernetes preview environment is an isolated environment that allows developers to test their code at any time without worrying about how others may be affected. While implementations and use cases may vary, simulating a production environment as closely as possible is the main goal.

Imagine you’re part of a team developing a complex API, and you’ve been tasked with adding a new endpoint that relies on features within the codebase currently being optimized by one of your team members. Although your team has a development environment with seeded databases and dev versions of dependencies, you run into issues when team members want to test their optimizations at the same time as you.

To get work done, one of you must wait for the other, which is far from efficient. Not to mention the inefficiencies of spinning up, maintaining, and paying for a continuously running replica of your production environment.

Instead, it’s more efficient to spin up a local cluster using minikube, configure mock servers to remove any reliance on dependencies, and use Skaffold to update the application as you code. Because you’re running in a real Kubernetes cluster, you can even use traffic replay to continuously test the application. Setting up this scenario—along with two other examples—is exactly what this post covers.

If you’re still unsure what a Kubernetes preview environment involves, or why preview environments are useful, I recommend first checking out the high-level overview on the topic. The most important lesson from that post is that preview environments based on traffic replay are much more likely to satisfy your needs, as traffic replay brings realistic transactions and sanitized test data over from actual environments. Half the battle of setting up preview environments is achieving realism and seeding proper data.

Creating Kubernetes Preview Environments

Below you’ll find three examples of how to implement preview environments:

  • Local preview environments
  • Preview environments in CI/CD
  • Automatic preview environments for product teams

These examples showcase the variety of scenarios where preview environments are useful and how to implement the concept in different ways.

The examples use Speedscale’s traffic replay and automatic mocking features. While the sections are purposefully written to make the concepts understandable to anyone, you’ll need some Speedscale experience to follow the steps outlined in each example.

Learn how Speedscale works and how to implement it in the complete traffic replay tutorial, which also includes instructions for setting up Google’s microservices demo, used throughout the rest of this post. In short, all the following examples assume that you’ve deployed a minikube cluster running the microservices demo and created a snapshot in the Speedscale WebUI.

For those reading purely to understand the concepts: a snapshot in Speedscale is a file containing traffic recorded from an application in Kubernetes, i.e., its incoming and outgoing requests. In a “traffic replay,” the requests contained within a snapshot are the ones being replayed.

This post is written based on an Ubuntu 22.04 LTS system, so the commands are for Linux. Resources will be provided in case you’re using another operating system.

Local Preview Environments with Skaffold and Minikube

As detailed in the introduction, having a shared development environment is usually better than testing locally, but issues arise when multiple people want to use it at once. None of those issues are present when you create a local preview environment with Skaffold and minikube.

If you’re not familiar with Skaffold, it’s a development tool built specifically for containers and Kubernetes. With it, you can run your application in a real Kubernetes cluster during development. It solves the same kind of issue containers originally solved by letting developers work locally on the same OS the app runs on in production.

Apart from the ability to run your app in a cluster during development, the most useful part of Skaffold is that it syncs files between your local machine and the cluster when you make changes, without you having to redeploy your app. It’s similar to nodemon if you’re a JavaScript developer.

Now, the question is, how does all of this factor into a Kubernetes preview environment? Let’s get started.

First, you need to install Skaffold:

$ curl -Lo skaffold https://storage.googleapis.com/skaffold/releases/latest/skaffold-linux-amd64 && \
sudo install skaffold /usr/local/bin/
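
To verify that the installation succeeded, print the installed version:

$ skaffold version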

From the root of the microservices demo, change directory into the folder containing the source code for the frontend service, as that’s the service you’ll focus on throughout this post.

$ cd src/frontend

Because the demo app is normally deployed as one cohesive unit, you first need to add deployment files that are unique to the frontend service. Skaffold works well with Kustomize, which is why you’ll add kustomization.yaml files. Add the first one to the root of the frontend service by running:

$ cat <<EOF > kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - base
EOF

This file tells Kustomize that when kubectl apply -k runs, it should look for resources within the ./base folder. So, let’s create those files. Thankfully, the demo app also uses Kustomize, so you can copy the frontend definition:

$ mkdir base && cp ../../kustomize/base/frontend.yaml ./base/frontend.yaml

Now, add another kustomization.yaml file to define the resources that should be deployed, which is just the frontend here.

$ cat <<EOF > base/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - frontend.yaml
EOF
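
Before wiring up Skaffold, you can preview the manifests Kustomize will render (assuming a kubectl version with built-in Kustomize support, available since v1.14):

$ kubectl kustomize .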

At this point, you could deploy the frontend service by running kubectl apply -k. But to use Skaffold instead, generate the Skaffold config file by running:

$ skaffold init

When prompted, choose a builder for the frontend image (either Docker, which uses the service’s existing Dockerfile, or Buildpacks), and Skaffold will write the configuration to skaffold.yaml. Now that you’re ready to use Skaffold, test it out by running:

$ skaffold dev

This makes Skaffold build the image, deploy it, and watch the folder for changes. By default, Skaffold uses the manifest you copied from the demo app, which doesn’t include the Speedscale annotations, so the sidecar has been removed.

To add it back, open a new terminal window—closing the skaffold dev command will stop it from watching file changes—and add the sidecar annotation to the base/frontend.yaml file:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
  annotations:
    sidecar.speedscale.com/inject: "true"

As long as you still have the skaffold dev command running, you should see that Skaffold is now rebuilding your project, adding the annotation to the Deployment and, by extension, adding the sidecar back.

Although Skaffold now injects the Speedscale sidecar, this still isn’t a Kubernetes preview environment. For that to happen, you need to execute a replay against the service. If you haven’t yet followed the instructions for creating a snapshot in the traffic replay tutorial, do so now.

Once you have a snapshot ready, note the ID and use it when adding the following annotations to the frontend Deployment:

replay.speedscale.com/snapshot-id: "<snapshot-id>"
replay.speedscale.com/testconfig-id: "standard"
replay.speedscale.com/cleanup: "none"

These annotations tell Speedscale to execute a replay using the “standard” test configuration. The notable part is cleanup: "none". Normally, Speedscale removes the responder (mock server) after a replay and returns the service to its original state. With this annotation, the responder stays alive after the replay is done, which is what you want in this case.
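
For reference, combined with the sidecar annotation from earlier, the frontend Deployment’s metadata should now look like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
  annotations:
    sidecar.speedscale.com/inject: "true"
    replay.speedscale.com/snapshot-id: "<snapshot-id>"
    replay.speedscale.com/testconfig-id: "standard"
    replay.speedscale.com/cleanup: "none"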

If you log in to the Speedscale WebUI, you’ll see that all services except the frontend have stopped receiving requests, even though the demo app’s prebuilt load generator is still generating them. This is because all of the frontend’s outgoing requests (i.e., requests that match previously recorded traffic) are now sent to the mock server.

Screenshot of Speedscale WebUI

Start experimenting with snapshots, like creating ones that contain only a single request. Or create new test configs in responder-only mode if you don’t want to replay any traffic after rebuilds.

Hopefully, it’s clear how much this can increase development efficiency. You no longer have to execute the same curl request on every reload or open your browser to generate a specifically formatted request; it happens automatically on every reload.

Integrating Preview Environments in CI/CD

Tests are an established part of CI/CD pipelines, but non-functional tests like performance tests aren’t as common yet. Many people are aware of performance testing, see its benefits, and would like to implement it, but they don’t because of its complexity.

This example shows that it’s easy to incorporate load testing into a CI/CD pipeline using Speedscale.

Implementing it does assume that you know how to configure CI/CD pipelines in general, so this section focuses on the script you should include in your pipeline rather than on creating the pipeline itself.

Start by creating the script file:

$ touch speedscale.sh

Because this script will run in a CI/CD pipeline, verify that the API key exists in the environment:

if [ -z "$SPEEDSCALE_API_KEY" ]; then
  echo "SPEEDSCALE_API_KEY is required"
  exit 1
fi
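
If you want to test the script locally rather than in the pipeline, export the key in your shell first (the value shown is a placeholder):

$ export SPEEDSCALE_API_KEY="<your-api-key>"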

Ensure the speedctl CLI is installed because you need it to initiate the traffic replays:

echo "installing speedctl"
sh -c "$(curl -Lfs https://downloads.speedscale.com/speedctl/install)"
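
Assuming the installer placed speedctl on your PATH, you can quickly confirm that before moving on:

$ command -v speedctl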

With speedctl installed and linked to your account, initiating a replay is as simple as running:

echo "creating replay for service $SERVICE in cluster $CLUSTER in namespace $NAMESPACE"
REPORT_ID=$(speedctl infra replay "$SERVICE" \
  --cluster "$CLUSTER" \
  --namespace "$NAMESPACE" \
  --test-config-id "$TEST_CONFIG" \
  --snapshot-id "$SNAPSHOT_ID" \
  --build-tag "$BUILD_TAG")

Once the script runs this command, Speedscale informs the Operator in the given cluster that it needs to replay traffic against the given service. It’s assumed here that an earlier part of the script has built the Docker image with a unique tag, which you can then pass to the --build-tag flag.

Because this is part of a CI/CD pipeline, you need to verify the results of the replay, which you can do by waiting for the report:

echo "waiting for replay with report ID $REPORT_ID to complete"
speedctl wait report "$REPORT_ID" \
  --timeout "$TIMEOUT"

This waits for the report to finish—or time out according to the --timeout flag—and exits with code 0 for success or code 1 for missed goals. To ensure that this exit code propagates to the pipeline properly, add the last line:

exit $?
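
Putting all the pieces together, the assembled speedscale.sh looks like this (SERVICE, CLUSTER, NAMESPACE, TEST_CONFIG, SNAPSHOT_ID, BUILD_TAG, and TIMEOUT are assumed to be provided by your pipeline’s environment):

#!/bin/bash
# Fail early if the Speedscale API key is missing.
if [ -z "$SPEEDSCALE_API_KEY" ]; then
  echo "SPEEDSCALE_API_KEY is required"
  exit 1
fi

# Install the speedctl CLI.
echo "installing speedctl"
sh -c "$(curl -Lfs https://downloads.speedscale.com/speedctl/install)"

# Kick off the traffic replay and capture the report ID.
echo "creating replay for service $SERVICE in cluster $CLUSTER in namespace $NAMESPACE"
REPORT_ID=$(speedctl infra replay "$SERVICE" \
  --cluster "$CLUSTER" \
  --namespace "$NAMESPACE" \
  --test-config-id "$TEST_CONFIG" \
  --snapshot-id "$SNAPSHOT_ID" \
  --build-tag "$BUILD_TAG")

# Wait for the report and propagate its exit code to the pipeline.
echo "waiting for replay with report ID $REPORT_ID to complete"
speedctl wait report "$REPORT_ID" \
  --timeout "$TIMEOUT"

exit $?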

And that’s it. Integrate this script into your CI/CD pipeline, and you’re running performance tests with the power of a Kubernetes preview environment. If you’re not sure how to integrate scripts with a given CI/CD provider, view the detailed instructions here.

Automatic Preview Environments for Product Teams

When new features are developed, the product team often wants to verify that they align with its vision. As a result, developers demo their new features in front of the product team, which is inefficient: it requires coordinating multiple people’s schedules, and the product team can’t test things independently.

For this reason, some organizations have begun spinning up environments that product teams can access directly, and sometimes this access is extended to other groups, like major clients. The concept is great but has room for improvement: in most organizations, developers still have to spin up these environments manually. Although that saves the hassle of scheduling demos, it still eats into engineers’ time.

In this example, you’ll see how to combine the previous two examples to automatically generate preview environments.

Making the application accessible through a URL is the first step to implementing preview environments. To do this, add an Ingress resource to the frontend service:

$ cat <<'EOF' > ./base/ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: frontend-ingress # name of Ingress resource
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
    - host: frontend.info # hostname to use for frontend connections
      http:
        paths:
          - path: / # route all url paths to this Ingress resource
            pathType: Prefix
            backend:
              service:
                name: frontend # service to forward requests to
                port:
                  number: 80 # port to use
EOF

(The heredoc delimiter is quoted so the shell doesn’t expand $1 in the rewrite-target annotation.)

Note that you need to add - ingress.yaml to the resources in your ./base/kustomization.yaml file, just like frontend.yaml is already listed.
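
For example, you can regenerate the file with both resources:

$ cat <<EOF > base/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - frontend.yaml
  - ingress.yaml
EOF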

Like the previous example, this section shows what the script looks like rather than how it’s wired into a given CI/CD provider. Begin the script by defining the needed variables:

#!/bin/bash
TEST_CONFIG=standard
SERVICE=frontend
GIT_HASH="$(git rev-parse --short HEAD)"
SUBDOMAIN=$GIT_HASH
REPLAY_NAME="${SERVICE}-${TEST_CONFIG}-${GIT_HASH}"
SNAPSHOT_ID=<snapshot-id>

When you implement this in your real environment, you may decide to make these values dynamic rather than hardcoded, but keep them hardcoded for now.
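
If you do make them dynamic later, shell parameter defaults are one option (a sketch; the fallback values are simply the hardcoded ones from above):

TEST_CONFIG="${TEST_CONFIG:-standard}"
SERVICE="${SERVICE:-frontend}"

With the variables set, you need to modify the Ingress resource to create a new, unique subdomain: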

echo "Replacing the subdomain"
sed -i "s/host: frontend.info/host: $SUBDOMAIN.frontend.info/" base/ingress.yaml
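
You can confirm that the substitution took effect before deploying:

$ grep "host:" base/ingress.yaml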

If you look at the variables set at the beginning, you’ll notice this creates a subdomain matching the short hash of the git commit, which should always be unique. With that slight modification done, the script can go ahead and deploy the frontend service and the Ingress:

echo "Deploying with kustomize"
kubectl apply -k .

With the resources deployed inside your cluster, it’s time to initiate the replay. The previous section used the speedctl tool to initiate replays; in this case, you’ll use the TrafficReplay CRD instead.

cat << EOF > replay.yaml
apiVersion: speedscale.com/v1
kind: TrafficReplay
metadata:
  name: "$REPLAY_NAME"
spec:
  snapshotID: "$SNAPSHOT_ID"
  testConfigID: "$TEST_CONFIG"
  workloadRef:
    kind: Deployment
    name: "$SERVICE"
  cleanup: none
  mode: responder-only
EOF

echo "created traffic replay CR yaml"
cat replay.yaml

echo "applying traffic replay CR to the cluster"
kubectl apply -f replay.yaml

The main difference between the two approaches is that the TrafficReplay resource gives you a few more options. For example, mode: responder-only is set here, which lets you skip replaying traffic while still using the standard config. With speedctl, you’d have to create a clone of the standard config and set the clone to responder-only mode.

To finish off the script, inform people where they can access the application:

echo "Application can be accessed at http://${SUBDOMAIN}.frontend.info/"

To implement this in your pipeline, you also need another workflow that deletes the application when pull requests are closed or merged, but that’s outside the scope of this post.
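
For a rough idea, though, such a cleanup workflow could boil down to deleting the resources created above (a sketch, assuming it runs against the same checked-out revision):

kubectl delete -f replay.yaml
kubectl delete -k .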

Hopefully, you can see how the automatic mocks created from recorded traffic make it easy to create what appear to be entire environments. One major benefit of the approach is that your Kubernetes preview environment doesn’t need to be publicly accessible.

Because the environment is running within your cluster, you can easily configure your network to allow only connections from a corporate VPN. This also means you can avoid uploading configurations to the provider and instead test the script locally—a major relief for anyone who’s worked with pipelines before.

Remove the Complexity from Preview Environments

You may have noticed that the last two examples focused on implementing Speedscale in CI/CD, which is deliberate. Preview environments as a concept are growing in popularity, but many still view them as complex to set up, although they don’t have to be.

The intention of this post, especially the last two examples, is to showcase how certain technologies, like the combination of traffic replay and automatic mocks, significantly reduce the complexity of a Kubernetes preview environment.

However, this is far from the only use case where traffic replay proves useful. If you’re curious about what else you can use it for, look into how traffic replay fits into production traffic replication as a whole.
