Overview

No matter what application you’re building or who your target customers are, everyone can agree that it’s critical to avoid broken deployments. Many tools and concepts have been invented to aid in this goal, with Kubernetes preview environments being one of them.

In this post, you’ll get a deeper understanding of how preview environments work, how organizations are using them, and how you can get started yourself. To put it simply: preview environments allow teams to deploy a version of their application during the development process and interact with it as if it were deployed in production.

What Are Kubernetes Preview Environments?

Being able to interact with an application during development—as if it were deployed to production—sounds good in theory, but how does it actually work?

The life cycle of a preview environment will typically look like this:

[Diagram: the preview environment life cycle]

Notice how it’s a continuous process. There are examples of organizations creating long-lived environments, but the most common approach—and the most advantageous—is to create many short-lived environments. More on this in the section on use cases.
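To make that loop concrete, here’s a minimal sketch of the create-and-destroy cycle using the official Kubernetes Python client. The namespace naming convention and labels are assumptions for illustration; in practice, your CI system would trigger these calls when a pull request is opened and closed.

```python
# Minimal sketch of the preview environment life cycle, assuming cluster
# access via kubeconfig. The naming convention and labels are hypothetical.
from kubernetes import client, config


def create_preview_namespace(pr_number: int) -> str:
    """Create an isolated, labeled namespace for a single pull request."""
    config.load_kube_config()  # use load_incluster_config() inside CI runners
    name = f"preview-pr-{pr_number}"
    namespace = client.V1Namespace(
        metadata=client.V1ObjectMeta(
            name=name,
            labels={"purpose": "preview", "pr": str(pr_number)},
        )
    )
    client.CoreV1Api().create_namespace(namespace)
    return name


def destroy_preview_namespace(pr_number: int) -> None:
    """Tear the environment down again when the pull request is closed."""
    config.load_kube_config()
    client.CoreV1Api().delete_namespace(name=f"preview-pr-{pr_number}")
```

Deploying the service and its test data into that namespace is where the rest of this post comes in.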

The point of a preview environment is to preview some feature. Imagine you’re working on an e-commerce site and you need to add front-end validation to the checkout page.

One option is to develop the feature, write tests, then deploy it and hope for the best. Or, you can spin up the service in a development cluster, interact with it directly, and validate the change yourself.

That’s the basic principle of a preview environment: being able to quickly spin up your service, allowing you or others to interact with it directly. However, there are still plenty of features that differentiate a basic preview environment from a good one.

Some key features of a good preview environment are:

  • Good test data
    • Data is key to good testing, as you’ll only be able to replicate real use cases with realistic data.
  • Realistic use of your service
    • As with data, it’s important that your preview environment can realistically replicate how users normally interact with your service.
  • Proper back-end dependencies
    • No matter what service you’re building, there’s a good chance it relies on at least one other service (database, API, microservice, etc.). In a preview environment, it’s important that these dependencies are replicated realistically as well. A popular choice for this is to use mocks (see the sketch after this list).
  • The latest version of your code
    • It may seem obvious that you need the version of the code you’re trying to validate, but it’s important to choose a tool that makes it easy to load your latest code.
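To make the dependency point concrete, here’s a minimal sketch of a mocked back-end dependency built with nothing but the Python standard library. The /inventory endpoint and its payload are made up for this example; dedicated mocking or traffic replay tools can generate such responses for you automatically.

```python
# Minimal sketch of a mocked back-end dependency for a preview environment.
# The /inventory endpoint and its payload are illustrative assumptions.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


class MockInventoryService(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path.startswith("/inventory"):
            body = json.dumps({"sku": "ABC-123", "in_stock": 42}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()


if __name__ == "__main__":
    # The service under test points at this address instead of the real API.
    HTTPServer(("0.0.0.0", 8080), MockInventoryService).serve_forever()
```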

You may notice that there’s a common thread between all those points: realism.

Think of it like this: You’re invited to a wedding, and you want to make a speech. You can write the speech and practice it multiple times in front of the mirror, but all that practice can never truly prepare you for the feeling of speaking to a crowd.

To truly prepare, you’d have to get in front of a real gathering of people and practice your speech. It’s the same in software engineering. You can develop a feature and have everyone on your team test it, but it’ll never truly be the same as when your users start interacting with it.

You may not want to practice a wedding speech in front of a crowd, but there are many good reasons to ensure realism when performing software tests. Preview environments are a good step in this direction, and combining them with concepts such as traffic replay can get you incredibly close to testing in production without interfering with production resources.

The Rising Popularity of Kubernetes Preview Environments

Now that you have a general understanding of what preview environments are and how they work, it’s time to understand why they’re particularly talked about in relation to Kubernetes.

The simple reason is that the scalable and flexible nature of the Kubernetes ecosystem aligns especially well with how most organizations want preview environments to be implemented.

At this point, it’s important to mention that preview environments are still a fairly new concept in the world of software engineering: most engineers know about them but have yet to fully implement them.

This distinction is important, as it means the definition of preview environments is still fluid, and you’re certain to find differing opinions as you bring up the topic. However, a general consensus is starting to form around certain key characteristics of preview environments:

  • Ephemeral
  • Easy to create
  • Easy to destroy
  • Able to integrate with common software processes (CI/CD, commits, infrastructure as code, etc.)
  • Able to be personal or organization-wide, depending on the circumstances
  • Short-lived
  • Quick to provision

While preview environments can be created on almost any kind of infrastructure—like virtual machines—engineers often gravitate toward containers. If you take another look at the list above, you’ll find that containers (e.g., Docker containers) match all of these requirements.

This point is only reinforced by the fact that more and more organizations are moving to the cloud, which in many cases leads to the adoption of containers. This creates a much bigger incentive to also move development environments to the cloud, and into containers.

The Need for Separation & Similarity

If you’ve worked with Kubernetes, you’ll know that it has two key advantages here: separation between workloads, and uniformity in how services are configured and deployed regardless of environment.

Using either namespaces or clusters (or third-party solutions like vCluster), you can achieve strong separation inside Kubernetes, ensuring that teams cannot affect each other’s services.

This separation is crucial when you start implementing preview environments, as you don’t want anything happening in your environment to affect somebody else’s.

Because preview environments are meant to be used for testing, it’s important that they are reproducible and self-contained. Self-contained in the sense that nothing outside the environment can affect what happens inside it.

On the other hand, Kubernetes also standardizes how services are described and deployed; think Deployments. Given that one of the most important factors of a good preview environment is realism, the ability to deploy preview environments with the exact same procedure as in production—commonly via Helm charts—makes a compelling case for Kubernetes.

This highlights how infrastructure as code is a critical ingredient in getting preview environments working as efficiently as possible: having manifest files allows you to automate large parts of the process—again, something that’s already being done in many Kubernetes clusters.
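As a sketch of what that automation can look like, the snippet below shells out to Helm to install the same chart used in production into a per-branch namespace. The chart path, release name, and image tag are assumptions made for illustration.

```python
# Sketch: deploy the production Helm chart into an isolated preview namespace.
# Chart path, release name, and image tag are illustrative assumptions.
import subprocess


def deploy_preview(namespace: str, image_tag: str) -> None:
    subprocess.run(
        [
            "helm", "upgrade", "--install",
            f"checkout-{namespace}",      # hypothetical release name
            "./charts/checkout",          # the same chart production uses
            "--namespace", namespace,
            "--create-namespace",
            "--set", f"image.tag={image_tag}",  # the code under review
            "--wait",
        ],
        check=True,
    )


deploy_preview("preview-pr-123", "sha-4f2a91c")
```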

Without separation, it becomes hard to tell whether your changes are working properly, which leads to flakiness. In organizations with shared pre-prod or staging environments, where many teams are uploading alpha versions of their code, running simultaneous tests, and breaking things, you can never be truly sure whether your application is breaking because of your actions or someone else’s.

This has a huge effect on the developer experience.

Without some amount of consistency in how production and development are deployed, it’s almost impossible for even skilled infrastructure engineers to produce a proper replica of the production environment.

All in all, these two key features of Kubernetes come together to form an ecosystem where you can have separate environments for everyone, while doing it efficiently through standardized processes.

Utilizing Preview Environments in Kubernetes

At this point, you likely have quite a few ideas for how you can start using preview environments, but it never hurts to get inspiration. Here are a few examples of how others have decided to use preview environments.

Automatic Deployment and Verification

As the name suggests, preview environments are used to preview changes before they’re deployed into production. This is not a new concept, and has been implemented by organizations for many years. It’s the entire reason integration and end-to-end testing are as popular as they are.

The difference is in how complex the verification steps can become.

A good preview environment should be able to use a combination of traffic replay and automatic mocks to create an environment for your service that accurately reflects real-world conditions.

This way, preview environments can allow you to test all parts of your service without having to manually write out test cases.

Preventing Resource Limitations of a Local Environment

Running tests locally is something every developer will recognize. Especially while developing a feature, it’s not uncommon to run some tests locally to verify what you’re doing.

Some organizations have moved past this in certain aspects, such as making sure that tests during CI/CD are running in the cloud.

However, with preview environments, you can consider moving the local testing into the cloud as well. Tools like Skaffold make it possible for developers to run their service in a real Kubernetes cluster while developing locally.

By combining tools like Skaffold with automatically created preview environments, complete with replicated data, you’ll create a very streamlined process for your developers.

Perform Long-Running or Resource-Intensive Tests

While it’s true that most organizations have moved their automated tests into the cloud, that typically only applies to scheduled tests.

As a developer, you may want to execute an ad hoc test just to verify the progress you’ve made so far. When it’s a simple API request, that’s not a problem, but long-running or resource-intensive tests can quickly run into limits.

Imagine that you want to run a load test. The hardware you’re using as a developer will inherently limit the amount of load you can generate. If you’re able to automatically provision preview environments, you can run the load test in Kubernetes as well.

Not only does it remove the load from your local machine, it also allows you to “fire and forget,” in a sense. Of course, you can’t simply forget about the test—you’ll have to look at the results. But you can start the test a minute before closing down your PC and going home for the day without any worries.
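A load generator for this kind of ad hoc test doesn’t have to be complicated. The rough sketch below fires concurrent requests at a preview environment using only the Python standard library; the URL, worker count, and request count are placeholder assumptions, and in practice you’d run it as a Job inside the cluster or reach for a dedicated load-testing tool.

```python
# Rough sketch of an ad hoc load test against a preview environment.
# The URL, worker count, and request count are placeholder assumptions.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

TARGET = "http://checkout.preview-pr-123.svc.cluster.local/health"  # hypothetical
REQUESTS = 5_000
WORKERS = 50


def hit(_: int) -> float:
    """Send one request and return its latency in seconds."""
    start = time.perf_counter()
    with urllib.request.urlopen(TARGET, timeout=5) as resp:
        resp.read()
    return time.perf_counter() - start


with ThreadPoolExecutor(max_workers=WORKERS) as pool:
    latencies = sorted(pool.map(hit, range(REQUESTS)))

print(f"p50={latencies[len(latencies) // 2]:.3f}s "
      f"p95={latencies[int(len(latencies) * 0.95)]:.3f}s")
```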

Realistic Scenarios

A common thread throughout this post has been the realism that you get from a good preview environment. But it’s important enough that it deserves its own section.

You can sit through meetings upon meetings and draw out exactly how your application is intended to be used, but ultimately, it’s unlikely you’ll cover 100% of scenarios. Humans are unpredictable, and there are bound to be edge cases you haven’t considered.

There’s a reason why “testing in production” must be done with extreme care—namely, the possible consequences of errors—but there are certainly also reasons why it’s even considered in the first place.

By testing in production, you can be certain that any mistakes, or even successes, aren’t caused by disparities between your production and development environments. But alas, many organizations still deem it too risky, which is understandable.

But by implementing traffic capture and replay, you can send real user traffic to your development environment, which gets you as close to production as possible without being in production. Additionally, by using a traffic replay tool that supports PII redaction, you can avoid getting into trouble with data regulations like GDPR.

Getting Feedback from Others

Most of the talking points in this post have been either technical or related to technical teams. But preview environments can help you even when interacting with nontechnical teams.

Depending on the size of your organization, you may have a dedicated product team. The process usually goes something along the lines of:

  • The product team develops a list of requirements
  • Developers implement the feature
  • Developers showcase the feature to the product team
  • Revisions are made, or the feature is deployed

The main bottleneck in this process is when developers have to showcase the feature. That step can be partially or even fully removed by implementing preview environments.

Imagine deploying your new feature to a preview environment—complete with test data and everything—then generating a link to share with the product team. Now they can test the feature on their own, at their own pace.

Even if this doesn’t remove the meeting between developers and the product team completely, it helps ensure that the meeting is spent constructively rather than on showing off the feature.

This is just one example. The same principle applies to other teams, and perhaps even to high-profile clients asking for custom features.

Manage Development Cost

As companies move to the cloud, cost management becomes increasingly important. And in any case, most companies are interested in cutting costs.

There are two main ways in which preview environments can help you lower cost: optimizing resource usage and reducing engineering hours.

With preview environments, you can move away from long-lived environments that use up resources even when no one’s working. Instead, you can spin up environments when needed and shut them down as soon as you’re done with them.
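One simple way to enforce this is a scheduled cleanup job that deletes preview namespaces once they pass a time limit. The sketch below assumes preview namespaces carry a purpose=preview label (as in the earlier life cycle sketch) and uses a four-hour TTL picked purely as an example.

```python
# Sketch: delete preview namespaces older than a TTL to avoid idle spend.
# Assumes preview namespaces are labeled purpose=preview; the TTL is arbitrary.
from datetime import datetime, timedelta, timezone

from kubernetes import client, config

TTL = timedelta(hours=4)

config.load_kube_config()  # use load_incluster_config() when run as a CronJob
v1 = client.CoreV1Api()

for ns in v1.list_namespace(label_selector="purpose=preview").items:
    age = datetime.now(timezone.utc) - ns.metadata.creation_timestamp
    if age > TTL:
        print(f"Deleting {ns.metadata.name} (age {age})")
        v1.delete_namespace(name=ns.metadata.name)
```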

Whether preview environments will reduce engineering hours depends on how you implement them in your organization and how well they’re adopted. But it’s not unreasonable to think that this concept can streamline the workflow for developers, letting them develop new features faster.

Additionally, preview environments should reduce the number of bugs that make it into production, ultimately leading to fewer hours spent debugging.

Getting Started with Kubernetes Preview Environments

Setting up fully automated preview environments is too big a topic to cover in a single section. However, as stated multiple times throughout this post, traffic capture and replay is the basis for any good preview environment.

As such, setting up traffic capture should be the first step in your process. For fully detailed instructions, you can check out this blog post, but here are the general steps you’ll have to follow:

  • Configure your Kubernetes cluster to capture traffic from services
  • Configure PII redaction to ensure compliance with data regulations
  • Save the captured traffic in a data store
  • Make the data source shareable between production and development without compromising security
  • Configure a way to replay the captured traffic against specific services (a vendor-agnostic sketch of this step follows below)

This may sound like a lot of work, but it can all be accomplished by installing an Operator into your cluster.
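The exact capture and replay mechanics depend on the tool you choose, so here is a deliberately vendor-agnostic sketch of the replay step alone. It assumes captured requests are stored as JSON lines containing a method, a path, and the originally observed status code, and it replays them against a preview environment while reporting any mismatches. The file format and base URL are assumptions made for this example.

```python
# Vendor-agnostic sketch of replaying captured traffic against a preview
# environment. The JSONL format and base URL are illustrative assumptions.
import json
import urllib.error
import urllib.request

BASE_URL = "http://checkout.preview-pr-123.svc.cluster.local"  # hypothetical
CAPTURE_FILE = "captured-traffic.jsonl"                        # hypothetical

mismatches = 0
with open(CAPTURE_FILE) as capture:
    for line in capture:
        # Example record: {"method": "GET", "path": "/cart", "status": 200}
        recorded = json.loads(line)
        request = urllib.request.Request(BASE_URL + recorded["path"],
                                         method=recorded["method"])
        try:
            with urllib.request.urlopen(request, timeout=5) as resp:
                status = resp.status
        except urllib.error.HTTPError as err:
            status = err.code
        if status != recorded["status"]:
            mismatches += 1
            print(f"{recorded['method']} {recorded['path']}: "
                  f"expected {recorded['status']}, got {status}")

print(f"Replay finished with {mismatches} mismatching responses")
```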

Should You Use a Preview Environment?

As with anything in software, your mileage may vary. If you’re a small startup with only one or two engineers, there’s a good chance you have more important things to focus on. But if you have an established development team and multiple departments, there’s a high likelihood that preview environments can be a big advantage for you.

In summary, preview environments can help you not just avoid costly bugs in production, but streamline processes and perhaps even reduce cost.
