Nowadays, APIs are the de facto method of communication between services. With all of these connection points, setting up environments to build and test efficiently can be difficult. In lower environments, third-party APIs can also be unreliable, incorrect, or non-existent. Common problems we see with third-party sandboxes are:
- They don’t perform like they do in production. e.g. I need to test at 100 requests per second!
- They only return a few types of responses, when in reality the actual set of possible responses is vast
- You can’t make them misbehave (return error messages, 404s, or not respond at all) to test your error-handling code
- They don’t contain the right test data, or it needs to be reset frequently
- You don’t have enough instances to go around in the lower environments (they cost $$$)
- The functionality you need to test with only exists in production
This is why service mocks are a critical piece of the overall Speedscale framework. We set out to design a testing framework that allows efficient, effective testing of Kubernetes APIs, with the goal of automatically orchestrating the Triple Threat for the user: tests, data, and dependencies. The service mocks stand in for the dependencies, and they contain the data. When you focus on anything less than all three components of the Triple Threat, you get flaky test results.
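A useful mock should be able to produce all of the sandbox failure modes listed above on demand. As a minimal illustration only (a hypothetical sketch, not Speedscale's implementation), a mock handler can inject errors, 404s, and silence based on a configurable fault profile:

```python
import random

class FaultInjectingMock:
    """Hypothetical mock that can misbehave on demand (illustrative only)."""

    def __init__(self, error_rate=0.0, not_found_rate=0.0, drop_rate=0.0, seed=None):
        self.error_rate = error_rate          # fraction of responses that are 500s
        self.not_found_rate = not_found_rate  # fraction that are 404s
        self.drop_rate = drop_rate            # fraction that never respond
        self.rng = random.Random(seed)        # seeded for reproducible tests

    def respond(self, path, canned_body='{"ok": true}'):
        roll = self.rng.random()
        if roll < self.drop_rate:
            return None  # simulate a hung, non-responsive dependency
        if roll < self.drop_rate + self.error_rate:
            return (500, '{"error": "internal"}')
        if roll < self.drop_rate + self.error_rate + self.not_found_rate:
            return (404, '{"error": "not found"}')
        return (200, canned_body)
```

With all rates at zero this behaves like a happy-path sandbox; turning a rate up to 1.0 guarantees that failure mode, which is exactly the knob most third-party sandboxes never give you.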
Speedscale is a game changing capability that enables large architectural upgrades with quality. Traffic replay is high coverage and fast.
-David Ting, VP, Engineering, Nylas
How Speedscale Builds Mocks
One of the first questions I get when showcasing the ability to collect and then replay API calls is: “Well, that won’t work because my backends won’t have the same data or records as when you collected the traffic. Upon replay, none of the calls you make will go through.” Yes, even for non-idempotent operations, service mocks are a great solution. We don’t actually create, update, or delete records; we respond as if we did. You don’t need us to make real changes to the backends, because those aren’t the systems under test. We replay the responses you need for the given timespan of traffic, which is what you’re really after.
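The core record-and-replay idea can be sketched in a few lines: key each captured request by its method and path, and serve back the recorded response without ever touching a real backend. (A hypothetical illustration; Speedscale's actual request matching is far more sophisticated.)

```python
class RecordedMock:
    """Replays captured responses; never mutates a real backend."""

    def __init__(self):
        self.recordings = {}

    def record(self, method, path, status, body):
        # Captured during a real traffic session.
        self.recordings[(method, path)] = (status, body)

    def handle(self, method, path):
        # A recorded POST /orders "succeeds" without creating anything,
        # which is all the system under test needs to see.
        return self.recordings.get((method, path), (404, '{"error": "no recording"}'))
```

The non-idempotent write "happens" from the caller's point of view, but no state changes anywhere, so the same traffic can be replayed as many times as you like.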
Mocks are useful, but typically difficult to build. The more you want them to do, the longer they take. Speedscale uses traffic to model the behavior of the mock, thereby reducing the subject matter expertise and time it takes to build them. By examining traffic, Speedscale understands what endpoints you talk to, and how you talk to them. This last point is important: as the number of APIs and containers in your application grows, it can be hard to keep track of a service’s dependencies. Tracking downstream APIs can be a feat in itself.
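Conceptually, enumerating a service's dependencies from captured traffic is just grouping outbound calls by destination. A toy sketch (the data shape here is hypothetical, purely for illustration):

```python
from collections import Counter

def summarize_dependencies(captured_calls):
    """Count outbound calls per downstream host from captured traffic."""
    return Counter(call["host"] for call in captured_calls)

# Hypothetical captured outbound traffic for one service:
calls = [
    {"host": "payments.example.com", "path": "/charge"},
    {"host": "payments.example.com", "path": "/refund"},
    {"host": "geo.example.com", "path": "/lookup"},
]
# summarize_dependencies(calls) reveals two downstream dependencies
```

Doing this from real traffic means the dependency list stays accurate as the architecture evolves, with no one maintaining it by hand.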
Now, More Than Ever
Provisioning end-to-end environments can be difficult, yet end-to-end testing remains the go-to final gate before releasing to production. Running UI tests through an end-to-end environment is commonplace in most engineering orgs. However, no amount of UI testing will uncover how your systems react to latent responses from third-party APIs, error messages from third parties, or outright non-responsiveness. All of these things commonly happen in production, but how will your application handle them?
One of the biggest capabilities our customers get with mocking is performance testing. Typically, performance testing is left to the end of the sprint, in an end-to-end environment. But what if you could isolate JUST your API, with all the necessary backends mocked, and replay the same sort of inbound requests you’d see in prod? What if you could multiply the traffic to a desired TPS? A component-level load test, if you will.
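Stripped to its essence, a component-level load test is just replaying recorded inbound requests at a controlled rate. A bare-bones sketch, sequential for clarity (real traffic replay is concurrent; `send` here is a hypothetical stand-in for issuing one request):

```python
import time

def replay_at_tps(requests, send, target_tps):
    """Replay recorded requests, pacing sends to approximate target_tps."""
    interval = 1.0 / target_tps
    results = []
    for req in requests:
        start = time.monotonic()
        results.append(send(req))
        elapsed = time.monotonic() - start
        if elapsed < interval:
            time.sleep(interval - elapsed)  # pace to the target rate
    return results
```

Because the backends are mocks, the rate ceiling is set by the system under test rather than by a shared sandbox, so you can crank `target_tps` well past what a third-party environment would tolerate.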
Case study: Using performant 3rd party mocks, Nylas was able to improve performance by 20x over several releases.
Time for a Tune Up
Speedscale’s service mocks can be modeled from real traffic, even if it’s just from staging. Once we’ve collected the traffic, a quick change of a config file can replay traffic in multiple different modes. Mocks can be:
- sped up (apps can actually crash when third parties respond faster than anticipated and you can’t keep up)
- slowed down a certain percent of the time (e.g. 25% of responses have a latency between 80-200ms)
- dropped a certain percent of the time, to simulate non-responsiveness
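The replay modes above can be modeled as a latency profile applied to each response. A minimal sketch, seeded for determinism (hypothetical parameter names; this is not Speedscale's config format):

```python
import random

def shape_response(body, rng, slow_rate=0.25, drop_rate=0.05,
                   latency_range_ms=(80, 200)):
    """Return (delay_ms, body), or (None, None) when the response is dropped."""
    roll = rng.random()
    if roll < drop_rate:
        return None, None  # simulate non-responsiveness
    if roll < drop_rate + slow_rate:
        delay = rng.uniform(*latency_range_ms)  # e.g. 25% of responses: 80-200ms
        return delay, body
    return 0.0, body  # fast path; also covers "sped up" replay
```

The same recorded body is served in every mode; only the timing envelope changes, which is what makes before-and-after comparisons meaningful.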
Now you can see how your application handles problems induced by load or by third-party external factors. Make improvements. Replay the traffic again. Did you fix it? No? Rinse and repeat.
In a typical, complex end-to-end test cycle, there are too many variables to understand whether your code changes actually fixed the issue. More importantly, you may not uncover a myriad of lurking issues. But combine API traffic replay with service mocks, and you now have a stable, predictable environment to scientifically isolate and test your improvements. What’s the first thing you’d test with your mocks?
Speedscale helps developers release with confidence by automatically generating integration tests and environments. This is done by collecting API calls to understand the environment an application encounters in production and replaying the calls in non-prod. If you would like more information, schedule a demo today!