Overview

In the CNCF ecosystem, Envoy, an open source service proxy originally developed at Lyft, is a very common choice in service mesh networking. In a previous post we discussed that both Consul and Istio leverage Envoy. But were you aware that you can extend Envoy's capabilities with WebAssembly?

What is WebAssembly? WebAssembly, or Wasm as it is often abbreviated, is not so much a programming language as a specification for a binary instruction format that runs in sandboxed virtual machines. This approach allows you to extend Envoy's capabilities in a secure, isolated environment without interfering with service operation. Wasm support in Envoy is a little different from vanilla WebAssembly, however: Envoy provides a Wasm environment that adheres to an ABI specification called proxy-wasm.

This presents a challenge for Go developers. Even though Go natively supports a Wasm architecture target via the build flag GOARCH=wasm, the result is incompatible with Envoy; you also need a WASI OS target (GOOS=wasi), which the standard Go toolchain does not provide. So the only way to build your extension in Go is with TinyGo, an alternative Go implementation aimed at programs running on embedded systems and microcontrollers.

At Speedscale, we use a Wasm extension to provide our integration and support for Istio installations. And while we prefer to develop our applications in Go, TinyGo was ill-suited for our needs. If you intend to use TinyGo for Wasm extensions, be aware that certain caveats and concessions need to be made; see Go language features for more details. There are of course other language SDKs, with C++ and Rust appearing to be the favorites. Either is a fine choice, since the SDKs adhere to the same proxy-wasm ABI. For the purpose of demonstration in this post, I will focus on the Rust implementation.
Outside of the proxy-wasm spec itself, there is little available documentation about how to actually develop these extensions in practice. The ABI spec is built around callback functions that Envoy invokes when certain events occur, the majority of which happen in what's referred to as a Context. These Contexts can be thought of as a logical pairing between a request and a response, something very similar to what Speedscale works with. Event callbacks expose points during a context's lifecycle when certain information is available, such as HTTP request headers, HTTP response trailers, and so on. As we will see later, these events need to be correlated using a context id value.

Let's dive in! We'll work through a simple example of an extension that adds an additional header to an HTTP request. To get started, let's initialize our project and fill in a few basic details in a Cargo.toml file so we can build our extension. Ensure that the wasm32 build target is installed and initialize the Cargo.toml manifest:

$ rustup target add wasm32-wasi
$ cargo init --lib
You can edit the Cargo.toml however you choose, for example:

[package]
name = "my-envoy-filter"
version = "0.0.1"
authors = ["Jane Developer <jane@example.com>"]
edition = "2018"

[lib]
path = "src/filter.rs"
crate-type = ["cdylib"]

[dependencies]
proxy-wasm = "0.1.3"

The Wasm runtime requires you to initialize the module and specify how new contexts should be created as they occur. In our case, we instruct the module to use our HttpContext implementation, meaning that we intend to process only HTTP traffic and not generic TCP traffic. Note that we do need to implement both the Context and HttpContext traits even if we do not intend to use every callback. It isn't required that you add empty function definitions for each callback, though; they are included below simply for demonstration purposes.
#![cfg(target_arch = "wasm32")]

use proxy_wasm as wasm;
use proxy_wasm::traits::{Context, HttpContext};
use proxy_wasm::types::{Action, LogLevel};

#[no_mangle]
pub fn _start() {
    wasm::set_log_level(LogLevel::Trace);

    // Note: there are also RootContext and StreamContext that provide different callbacks
    wasm::set_http_context(
        |context_id, root_context_id| -> Box<dyn HttpContext> {
            Box::new(MyContext {
                context_id,
                root_context_id,
            })
        }
    );
}

struct MyContext {
    context_id: u32,
    root_context_id: u32,
}

impl MyContext {
}

impl Context for MyContext {
    fn on_done(&mut self) -> bool {
        true
    }
}

impl HttpContext for MyContext {
    fn on_http_request_headers(&mut self, _num_headers: usize) -> Action {
        Action::Continue
    }

    fn on_http_request_trailers(&mut self, _num_trailers: usize) -> Action {
        Action::Continue
    }

    fn on_http_request_body(&mut self, _body_size: usize, _stream_end: bool) -> Action {
        Action::Continue
    }

    fn on_http_response_headers(&mut self, _num_headers: usize) -> Action {
        Action::Continue
    }

    fn on_http_response_trailers(&mut self, _num_trailers: usize) -> Action {
        Action::Continue
    }

    fn on_http_response_body(&mut self, _body_size: usize, _stream_end: bool) -> Action {
        Action::Continue
    }
}

A point to note about the example above: because on_done belongs to the more generic Context trait, some of the SDK's HTTP-specific function calls are not available from it. For this reason, I chose to create a single top-level struct, MyContext, to hold any state tracking and other function calls, and then satisfy both the Context and HttpContext traits on it.

Also notice that the MyContext struct stores the context IDs when it is created. Each event callback fires piecemeal rather than with a complete HTTP request or response, so, as I mentioned earlier, knowing the context ID is helpful if you intend to track or act on complete HTTP requests/responses once the context is done.
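As a sketch of how that state tracking might look, the fragment below captures the request headers as they arrive and reports them when the context completes. This is illustrative only: it assumes a hypothetical `request_headers: Vec<(String, String)>` field has been added to MyContext, and it reuses the `wasm` alias and imports from the earlier listing.

```rust
// Sketch only: capture request headers as they arrive, then use the stored
// context id when the context completes. Assumes MyContext has gained a
// hypothetical `request_headers: Vec<(String, String)>` field.
fn on_http_request_headers(&mut self, _num_headers: usize) -> Action {
    // get_http_request_headers() returns every header pair for this context
    self.request_headers = self.get_http_request_headers();
    Action::Continue
}

fn on_done(&mut self) -> bool {
    // The context id ties this completion event back to the headers we
    // captured earlier for the same request/response pairing.
    wasm::hostcalls::log(
        LogLevel::Info,
        &format!(
            "context {} finished with {} request headers",
            self.context_id,
            self.request_headers.len()
        ),
    )
    .ok();
    true
}
```

Note that on_http_request_headers belongs in the HttpContext impl and on_done in the Context impl, matching the trait split discussed above.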

Whenever you want to build this extension, you can use cargo to do so:

$ cargo build --target wasm32-wasi --release
When this completes, you should have your build artifact located at target/wasm32-wasi/release/your_extension.wasm.

Before we get ahead of ourselves, let's make a small modification to our mostly empty Rust implementation so that we can inject our own header into HTTP requests before the service behind Envoy sees them:

fn on_http_request_headers(&mut self, _num_headers: usize) -> Action {
    self.add_http_request_header("X-my-custom-header", "hello world");
    Action::Continue
}
The Rust SDK also provides some very handy utilities depending on your use case, most notably the ability to make external HTTP or gRPC calls. As I mentioned, documentation is a bit sparse, but I definitely encourage reading the source code for each of the context traits.
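To give a flavor of the external-call support, here is a rough sketch of pausing a request while calling out to an upstream service. Everything specific here is hypothetical: the cluster name auth_service, the authority, and the path are made up, and the cluster would have to already exist in Envoy's configuration. dispatch_http_call and on_http_call_response live on the Context trait, while on_http_request_headers and resume_http_request are on HttpContext, and the fragment assumes `use std::time::Duration;` is added.

```rust
// Hypothetical sketch: pause the incoming request, call an external service,
// and resume once the call completes. "auth_service" is an assumed upstream
// cluster defined in Envoy's configuration.
fn on_http_request_headers(&mut self, _num_headers: usize) -> Action {
    self.dispatch_http_call(
        "auth_service",                      // upstream cluster name
        vec![
            (":method", "GET"),
            (":path", "/verify"),
            (":authority", "auth.internal"),
        ],
        None,                                // no request body
        vec![],                              // no trailers
        Duration::from_secs(1),
    )
    .ok();
    // Hold the request until on_http_call_response fires.
    Action::Pause
}

fn on_http_call_response(
    &mut self,
    _token_id: u32,
    _num_headers: usize,
    _body_size: usize,
    _num_trailers: usize,
) {
    // Inspect the call result here, then let the original request through.
    self.resume_http_request();
}
```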

But wait! There’s more!

Now that we have our Wasm extension, what's next? We have to instruct Envoy to use it somehow. Envoy has configuration for applying filters directly, but if you are using Istio, you can create an EnvoyFilter custom resource to specify the configuration needed to load and run your extension. For example:

apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: my-envoy-filter
  namespace: my-namespace
spec:
  workloadSelector:
    labels:
      app: my-app-selector
  configPatches:
  - applyTo: HTTP_FILTER
    match:
      context: SIDECAR_INBOUND
      listener:
        filterChain:
          filter:
            name: envoy.filters.network.http_connection_manager
            subFilter:
              name: envoy.filters.http.router
    patch:
      operation: INSERT_BEFORE
      value:
        name: myapp-filter-in
        config_discovery:
          config_source:
            ads: {}
            initial_fetch_timeout: 0s
          type_urls: [ "type.googleapis.com/envoy.extensions.filters.http.wasm.v3.Wasm"]
  - applyTo: EXTENSION_CONFIG
    match:
      context: SIDECAR_INBOUND
    patch:
      operation: INSERT_BEFORE
      value:
        name: myapp-filter-in
        typed_config:
          '@type': type.googleapis.com/udpa.type.v1.TypedStruct
          type_url: type.googleapis.com/envoy.extensions.filters.http.wasm.v3.Wasm
          value:
            config:
              root_id: myapp-root-in
              vm_config:
                vm_id: myapp-vm-in
                runtime: envoy.wasm.runtime.v8
                configuration:
                  "@type": type.googleapis.com/google.protobuf.StringValue
                  value: |
                    { "direction": "in" }
                code:
                  remote:
                    http_uri:
uri: https://example.com/path/to/your/extension.wasm
To explain the above, we are doing a couple of things. First, we instruct Envoy's inbound HTTP filter chain to include an additional extension (Wasm in this case) that applies only to workloads matching the label app: my-app-selector. We also indicate that the extension's configuration can be found under the name myapp-filter-in.

Next, we declare the extension's actual configuration: specifically, that we are only interested in observing inbound traffic and that our Wasm binary can be fetched from a specific URI. If you don't want to host your Wasm binary at a remote location, you can also specify a local file on disk:

code:
  local:
    filename: /path/to/your/extension.wasm

Note that this pattern can be repeated for observing outbound traffic from the pod as well, by changing the direction to out and specifying SIDECAR_OUTBOUND.
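For illustration, the outbound variant of the first patch might look like the following sketch. The name myapp-filter-out is made up here, and the matching EXTENSION_CONFIG patch would be duplicated the same way, with its configuration value changed to { "direction": "out" }.

```yaml
- applyTo: HTTP_FILTER
  match:
    context: SIDECAR_OUTBOUND
    listener:
      filterChain:
        filter:
          name: envoy.filters.network.http_connection_manager
          subFilter:
            name: envoy.filters.http.router
  patch:
    operation: INSERT_BEFORE
    value:
      name: myapp-filter-out
      config_discovery:
        config_source:
          ads: {}
          initial_fetch_timeout: 0s
        type_urls: [ "type.googleapis.com/envoy.extensions.filters.http.wasm.v3.Wasm" ]
```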

More information on configuring Envoy to support your extension can be found in the Envoy documentation, and if you are using Istio, the Istio documentation for EnvoyFilter.

This gives you a glimpse of what can be done by extending Envoy's capabilities with WebAssembly plugins. The flexibility of this mechanism is how we expanded Speedscale to support native Istio integrations out of the box, without requiring additional changes to your existing Istio configuration and installation.

