
All but the simplest applications borrow code. You could write everything yourself from core language features alone, but who has time for that? Instead you take on dependencies: pieces of code written by others that usually give you 80% or more of what you need with 20% of the effort. Sometimes these dependencies exist to interact with a specific technology like a database; sometimes they are just libraries providing features that would be onerous to write yourself. The distinction is outside the scope of this article. What I would like to concentrate on here is how to use imported code and dependency wrapping in Go while maintaining clean abstractions, so that the code base can change over time with your needs.

The Approach

If you need to interact with GitHub, for example, you have a few choices. You could “shell out” and call the git command-line client, but that is probably slower than you want and requires the git client to be installed in the runtime environment. You could use the HTTP API directly, but that requires a lot of boilerplate if you’re calling more than one or two endpoints. The choice for most developers is to import a library that communicates with GitHub and call it a day. The public library’s quality is likely somewhere between “perfect for my use case” and “works well enough.” It may be well tested, and even if not, it at least has more users than anything you would write from scratch today. But we still have a problem. While I wouldn’t fault you for betting on git still being the de facto version control system in ten years, you may switch hosting providers, and most technologies don’t come with the same assurances.

As modern developers, we switch dependencies all the time. The only constant in software is change. Our applications rely on external dependencies like databases, third-party APIs, caches, and queues. They serve our needs today, but tomorrow we may need a faster option, one that doesn’t cost as much, or a version not tied to a cloud provider. If we want to make these changes without too much pain, or worse, rewriting our core business logic, the code that handles a dependency must be isolated. If you are familiar with hexagonal architecture, often called “ports and adapters,” this pattern will look familiar.

The Queue, as an Example

The queue feels like a good example because it’s something your application may want to replace in time. Our sample application is a distributed note-taking service, for when you need your notes available on eight continents and resilient against global disasters. Our notes application starts with SQS, a queue service provided by AWS, used to notify other services when a note is saved. Error handling is removed for brevity.

type Note struct {
    ID      string
    Text    string
    Created time.Time
}

func sendNote(q *sqs.SQS, queueURL *string, n Note) *sqs.SendMessageOutput {
    body, _ := json.Marshal(n)
    in := sqs.SendMessageInput{
        QueueUrl:    queueURL,
        MessageBody: aws.String(string(body)),
    }
    out, _ := q.SendMessage(&in)
    return out
}
This code is fairly simple, and we can imagine using several such functions within our business logic. We create a note, send it to the queue. Modify it somewhere else, send it to the queue. Now we have other services, maybe many services, that listen for changes and react. But three continents into our rollout we realize we need a feature that SQS doesn’t have. We need RabbitMQ, or maybe Kafka. Or maybe we need to support more than one. We need to move to a new technology, a new library, and potentially a new model. The code examples for the new queues don’t look anything like what we’ve been doing with SQS, and by now we have SQS logic sprinkled everywhere. This could take a while…

Unfortunately I’ve been in this situation too many times. I’ve felt the pain of the code migration, and the manual validation that comes afterwards to make sure nothing has broken (see the Speedscale home page for help with the validation part). There is a better option though. Have you guessed that it includes wrapping your dependencies?

Introduce a Thin Wrapper

What we want is to get the benefit of someone else’s code without tying ourselves to it. What if we start by providing a thin wrapper around the code we import?
type wrapper struct {
    queue    *sqs.SQS
    queueUrl *string
}

func (w *wrapper) send(msgBody string) string {
    in := sqs.SendMessageInput{
        QueueUrl:    w.queueUrl,
        MessageBody: aws.String(msgBody),
    }
    out, _ := w.queue.SendMessage(&in)
    return *out.MessageId
}
The `wrapper` type provides the behavior we need from SQS and no more. The wrapper does not accept or return any SQS-specific types, which is intentional. We want the convenience of the SQS library, but we don’t want it to pollute our code. This code doesn’t handle all of the same logic, though. We want to work with the `Note` in our business logic so we can keep high-level code with high-level code. We can write our internal queue logic around this type without worrying too much about the SQS implementation.
type NoteQueue struct {
    queue *wrapper
}

func (nq *NoteQueue) Send(n Note) string {
    body, _ := json.Marshal(n)
    return nq.queue.send(string(body))
}
The goal is to create a boundary for the SQS code. Anywhere we use an SQS library type in our business logic we are leaking details that we will have to replace later. But perhaps more importantly, if we are using SQS types in our business logic then we are also thinking in terms of SQS, as opposed to thinking of something that meets our specific needs. This could shape our core logic so that a migration is even more difficult. The sooner we can move from the external representation of a concept to one that is opinionated towards the problem we are trying to solve, the better.

Now we have two layers, the `wrapper` type and the `NoteQueue` type, but that isn’t strictly necessary. We could use SQS directly in the `NoteQueue` and still have a clean boundary, so long as the SQS details don’t leak into code that uses the `NoteQueue`. But the bit of extra code buys us something else: instead of using the `wrapper` directly, we can represent its behavior with an interface.
type NoteQueue struct {
    queue interface { // optionally represent wrapper with an interface
        send(msgBody string) string
    }
}
This is a drop-in replacement for the `wrapper`, but now we can replace the SQS implementation as needed. Referring back to hexagonal architecture, the `queue` interface here can be considered a “port,” something to be plugged into. The `wrapper` is an “adapter,” something to be plugged in. And just like the outlets at your house support televisions and vacuums, your code can support a mock or an in-memory queue, which makes most of this code unit-testable. Integration tests are always an option, but they are usually slow to run, painful to write and orchestrate against real systems, and flaky. Again, see the Speedscale home page… this is what we do.
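To make the “plugged in” idea concrete, here is a minimal sketch of an in-memory adapter satisfying the same port. The `memQueue` type and its `msg-N` message IDs are our own illustration, not from the SQS library; the point is only that `NoteQueue` never knows which adapter it holds.

```go
package main

import (
	"encoding/json"
	"fmt"
	"time"
)

type Note struct {
	ID      string
	Text    string
	Created time.Time
}

// NoteQueue depends only on the small send interface (the "port"),
// not on any concrete queue technology.
type NoteQueue struct {
	queue interface {
		send(msgBody string) string
	}
}

func (nq *NoteQueue) Send(n Note) string {
	body, _ := json.Marshal(n)
	return nq.queue.send(string(body))
}

// memQueue is a hypothetical in-memory adapter: it records messages
// in a slice so unit tests can inspect them without touching AWS.
type memQueue struct {
	messages []string
}

func (m *memQueue) send(msgBody string) string {
	m.messages = append(m.messages, msgBody)
	return fmt.Sprintf("msg-%d", len(m.messages))
}

func main() {
	mem := &memQueue{}
	nq := &NoteQueue{queue: mem}
	id := nq.Send(Note{ID: "1", Text: "hello"})
	fmt.Println(id)                // msg-1
	fmt.Println(len(mem.messages)) // 1
}
```

A test can now assert on `mem.messages` directly, and swapping in a RabbitMQ or Kafka adapter later means writing one more type with a `send` method, with no changes to the business logic.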

The Result

It should be said that while wrapping external code like this can provide benefits like isolation, testability, and long-term code health, there are no silver bullets. Adding layers to your application adds more code, which means more opportunities for bugs. And using interfaces in place of concrete types can make code more difficult to reason about and debug. That said, the proper abstractions can keep your business logic clean of external influences, saving headaches and issues down the line.

Speedscale helps developers release with confidence by automatically generating integration tests and environments. This is done by collecting API calls to understand the environment an application encounters in production and replaying the calls in non-prod. If you would like more information, drop us a note at hello@speedscale.com.