Overview


All but the simplest applications borrow code. You could write everything yourself from core language features alone, but who has time for that? Instead you take on dependencies: pieces of code written by others that usually give you 80% or more of what you need with 20% of the effort. Sometimes these dependencies exist to interact with a specific technology like a database; sometimes they are just libraries providing a feature that would be onerous to write yourself. The differences are outside the scope of this article. What I would like to concentrate on here is how to use imported code and dependency wrapping in Go, while maintaining clean abstractions so that the code base can change over time with your needs.

Dependency wrapping is a design pattern that lets you execute additional functionality before or after a call into a dependency. A wrapper can modify or extend the behavior of the original code without changing its core logic, enhancing calls with logging and other features.

What is Dependency Wrapping?


Dependency wrapping is a nifty technique in Go programming that involves wrapping a dependency, such as a function or a struct, with another function or struct that provides additional functionality or behavior. Think of it as putting a gift in a box and then wrapping it with beautiful paper. The gift remains the same, but the wrapping adds an extra layer of appeal.

In Go, this technique is particularly useful for decoupling dependencies and making your code more modular, reusable, and maintainable. By wrapping a dependency, you can introduce new functionality or behavior without touching the original implementation. This means you can enhance or modify the behavior of existing code without the risk of breaking it.

For example, imagine you have a function that interacts with a third-party API. By wrapping this function, you can add logging, error handling, or even caching without altering the core logic. This keeps your code clean and focused on its primary task while still benefiting from additional features.

Benefits and Use Cases

Dependency wrapping offers several compelling benefits and use cases:


  • Decoupling: By wrapping dependencies, you can easily change or replace one dependency without affecting others. This makes your codebase more flexible and adaptable to change.

  • Modularity: Wrapping dependencies promotes modularity, allowing you to break down complex code into smaller, more manageable pieces. This makes your code easier to understand and maintain.

  • Reusability: Wrapped dependencies can be reused in multiple contexts, reducing code duplication and improving maintainability. This means you can write once and use it everywhere.

  • Testability: Wrapped dependencies can be easily tested in isolation, making it easier to write unit tests and ensure code quality. This leads to more robust and reliable software.

  • Flexibility: Dependency wrapping allows you to add new functionality or behavior to existing dependencies without modifying their underlying implementation. This gives you the freedom to innovate without constraints.

Common use cases for dependency wrapping include:

  • Logging: Wrapping a dependency with logging functionality to track usage or errors. This helps in monitoring and debugging.

  • Caching: Wrapping a dependency with caching functionality to improve performance. This can significantly speed up your application.

  • Authentication: Wrapping a dependency with authentication functionality to secure access. This ensures that only authorized users can access certain features.

  • Error handling: Wrapping a dependency with error handling functionality to provide more robust error handling. This makes your application more resilient to failures.

The Approach

If you need to interact with GitHub, for example, you have a few choices. You could "shell out" and call the git CLI, but that's probably slower than you want, and it requires the git client to be installed in the runtime environment. You could use the HTTP API, but that will require a lot of boilerplate if you're calling more than one or two endpoints. The choice for most developers would be to import a library that communicates with GitHub and call it a day. The public library's quality is likely somewhere between "perfect for my use case" and "works well enough." It may be well tested, and if not, it at least has more users than anything you would write from scratch today. But we still have a problem. While I wouldn't fault you for betting on git (still) being the de facto version control system in ten years, you may switch hosting providers, and most technologies don't come with the same assurances.

As modern developers, we switch dependencies all the time. The only constant in software is change. Our applications rely on external dependencies like databases, third-party APIs, caches, and queues. They serve our needs today, but tomorrow we may need a faster option, one that doesn't cost as much, or a version not tied to a cloud provider. If we want to make these changes without too much pain, or, worse, rewriting our core business logic, the code that handles a dependency must be isolated. If you are familiar with the hexagonal architecture, often called "ports and adapters," this pattern may look familiar.

The Queue, as an Example

This feels like a good one because it’s something your application may want to replace in time. Our sample application is a distributed note taking service. For when you need your notes available on eight continents and resilient against global disasters. Our notes application starts with SQS, a queue service provided by AWS, used to notify other services when a note is saved. Error handling removed for brevity.

```go
type Note struct {
    ID      string
    Text    string
    Created time.Time
}

func sendNote(q *sqs.SQS, queueURL string, n Note) *sqs.SendMessageOutput {
    body, _ := json.Marshal(n)
    in := sqs.SendMessageInput{
        QueueUrl:    aws.String(queueURL),
        MessageBody: aws.String(string(body)),
    }

    out, _ := q.SendMessage(&in)
    return out
}
```
This code is fairly simple, and we can imagine using several such functions within our business logic.  We create a note, send it to the queue. Modify it somewhere else, send it to the queue.  Now we have other services, maybe many services, that listen for changes and react.  But three continents into our rollout we realize we need a feature that SQS doesn’t have.  We need RabbitMQ, or maybe Kafka.  Or maybe we need to support more than one.  We need to move to a new technology, a new library, and potentially a new model.  The code examples for the new queues don’t look anything like what we’ve been doing with SQS and by now we have SQS logic sprinkled everywhere. This could take a while…  Unfortunately I’ve been in this situation too many times.  I’ve felt the pain of the code migration, and the manual validation that comes afterwards to make sure nothing has broken (see the Speedscale home page for help with the validation part). There is a better option though. Have you guessed that it includes wrapping your dependencies?

Introduce a Thin Wrapper

What we want is to get the benefit of someone else’s code without tying ourselves to it. What if we start by providing a thin wrapper around the code we import?
```go
type wrapper struct {
    queue    *sqs.SQS
    queueUrl *string
}

func (w *wrapper) send(msgBody string) string {
    in := sqs.SendMessageInput{
        QueueUrl:    w.queueUrl,
        MessageBody: aws.String(msgBody),
    }

    out, _ := w.queue.SendMessage(&in)
    return *out.MessageId
}
```
The `wrapper` type provides the behavior we need from SQS and no more. The wrapper does not accept or return any SQS-specific types, which is intentional. We want to make our lives easier with the SQS library, but we don't want it to pollute our code. This code doesn't handle all of the same logic, though. We want to work with the `Note` in our business logic so we can keep high-level code with high-level code. We can write our internal queue logic around this type without worrying too much about the SQS implementation.
```go
type NoteQueue struct {
    queue *wrapper
}

func (nq *NoteQueue) Send(n Note) string {
    body, _ := json.Marshal(n)
    return nq.queue.send(string(body))
}
```
The goal is to create a boundary for the SQS code. Anywhere we use an SQS library type in our business logic we are leaking details that we will have to replace later. But perhaps more importantly, if we are using SQS types in our business logic then we are also thinking in terms of SQS, as opposed to thinking of something that meets our specific needs. This could shape our core logic so that a migration is even more difficult.  The sooner we can move from the external representation of a concept to one that is opinionated towards the problem we are trying to solve, the better.

Now we have two layers here, the `wrapper` type and the `NoteQueue` type, but that isn’t strictly necessary.  We could use SQS directly in the `NoteQueue` and still have a clean boundary so long as the SQS details don’t leak into code that uses the `NoteQueue`, though we gain something else in exchange for the bit of extra code.  Instead of using the `wrapper` directly we can represent its behavior with an interface.
```go
type NoteQueue struct {
    queue interface { // optionally represent wrapper with an interface
        send(msgBody string) string
    }
}
```
This is a drop-in replacement for the `wrapper`, but now we can replace the SQS implementation as needed. Referencing back to the hexagonal architecture, the `queue` interface here can be considered a "port," something to be plugged into. The `wrapper` is an "adapter," something to be plugged in. And just like the outlets at your house support televisions and vacuums, your code can support a mock or an in-memory queue, which makes most of this code unit-testable. Integration tests are always an option, but they are usually slow to run, painful to write and orchestrate with real systems, and flaky. Again, see the Speedscale home page… this is what we do.

Common Pitfalls to Avoid

While dependency wrapping is a powerful technique, there are several common pitfalls to avoid:

  • Over-wrapping: Avoid wrapping dependencies too deeply, as this can lead to complexity and performance issues. Keep your wrappers simple and focused.

  • Tight coupling: Avoid tightly coupling wrapped dependencies, as this can defeat the purpose of decoupling. Ensure that your wrappers are loosely coupled and can be easily replaced.

  • Performance overhead: Be aware of the performance overhead of wrapping dependencies, and optimize accordingly. Measure the impact of your wrappers and make adjustments as needed.

  • Debugging challenges: Be prepared for debugging challenges when working with wrapped dependencies, and use tools like logging and testing to help identify issues. Clear and comprehensive logs can be a lifesaver.

By understanding the benefits and use cases of dependency wrapping, and avoiding common pitfalls, you can effectively use this technique to improve the modularity, reusability, and maintainability of your Go code.

The Result

It should be said that while wrapping external code like this can provide benefits like isolation, testability, long-term scalp health… there are no silver bullets. Adding layers to your application means more code, which means opportunities for more bugs. Also, using interfaces in place of concrete types often makes code more difficult to reason about and debug. That said, the proper abstractions can keep your business logic clean of external influences, saving headaches and issues down the line.

Speedscale helps developers release with confidence by automatically generating integration tests and environments. This is done by collecting API calls to understand the environment an application encounters in production and replaying the calls in non-prod. If you would like more information, drop us a note at hello@speedscale.com .
