Overview


Modern application development demands speed, reliability, and reproducibility. But between third-party APIs, unstable backend services, and inconsistent test data, teams often struggle to build dependable environments for API testing and iteration.

Fortunately, the Speedscale CLI makes it easy to mock API responses, simulate realistic backend behavior, and even inject chaos where needed – all without manually spinning up a full mock server or wrangling messy generated data.

Today, we’re going to explore how to mock complex API workflows using the Speedscale CLI, enabling better control over response bodies, error handling, response headers, and more, all within your local development workflow.

Why Mock APIs?

Before diving into Speedscale, let’s set the stage as to why mock APIs are so helpful. Mock APIs are critical elements of the API and software development space, offering substantial benefits throughout the development lifecycle.

Firstly, mock APIs serve frontend developers who need dummy endpoints that mimic specific responses and simulate backend errors. This decouples frontend work from production systems, letting teams iterate without risking live services or deploying broken, half-finished code.

Secondly, these systems serve integration testers who want to run consistent tests without relying on flaky external services with unreliable connections or limited integration options. Testers can focus on the behavior under test without worrying that external client integrations or third-party APIs will undermine the validity of the results.

API designers also benefit significantly from being able to iterate on endpoints, documentation, and system flows without needing to implement real backend APIs or systems. In essence, this enables rapid iteration and A/B testing away from the prying eyes of the end user, allowing you to control the narrative, expand your systems readily, and deliver the best possible UX.

Finally, DevOps engineers need to deploy mocks that respond to standard HTTP methods with minimal configuration. API programs depend on stability in the face of changes that may be outside your control, and good adoption depends on that stability being paired with clarity. Adopting a mock API lets you reap both: a dependable stand-in for the real service and clear communication about how it behaves.

With realistic mock APIs, you can simulate GET, POST, DELETE, and other HTTP methods, define exact response headers, model random failures, and support advanced OpenAPI specifications – all without touching production systems (or incurring the risks of using production-tied implementations).

What is Speedscale CLI?

Speedscale is a novel solution that allows you to capture real traffic and then replay it, either in its captured form or mutated into different test cases and scenarios.

The Speedscale CLI brings this capability to the command line, enabling users to:

  • Record traffic using proxy mode, filtering what is captured, and ensuring that private data is obfuscated or removed
  • Generate mock API traffic replays that either repeat past traffic patterns or produce theoretical patterns based on observed flows
  • Build integration test snapshots for post-hoc A/B testing and regression testing
  • Manage your mocks directly from the command line, allowing for rapid iteration and scaling

Getting Started with the CLI in API Development

Speedscale’s CLI tooling is ideal for creating mock servers with realistic traffic and for enabling AI-powered response generation from recorded data. It’s also incredibly easy to get started. For instance, you can install the Speedscale CLI by simply issuing the following command:

brew install speedscale/tap/speedctl

You can also install the CLI through the install script:

sh -c "$(curl -Lfs https://downloads.speedscale.com/speedctl/install)"

There are several methods for setting up the CLI, as outlined in the Speedscale CLI install docs.
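
Once installed, it's worth confirming the binary is on your PATH before going further. The exact subcommands vary by version, so treat this as a quick sanity check rather than canonical usage:

# Confirm the speedctl binary is installed and reachable on your PATH
which speedctl

# Print the built-in help to see which subcommands your version supports
speedctl --help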

Use Case: Mocking a Complex API Workflow

With the CLI ready to go, let’s look at what a complex API workflow might look like in a mock API implementation.

Let’s say your app depends on a multi-endpoint backend that processes user login, retrieves user data, and fetches personalized settings. This example system might leverage multiple REST APIs, requiring complex multipoint flows. Instead of setting up a staging backend or coding dummy APIs by hand, you can use Speedscale to record, configure, and replay those interactions.
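
To make this concrete, here is a rough sketch of the kind of call sequence such a workflow involves. The endpoint paths, payloads, and the jq dependency are hypothetical, chosen only to illustrate the login → user data → settings chain:

# 1. Log in and capture the session token (hypothetical endpoint and payload)
TOKEN=$(curl -s -X POST http://localhost:3000/api/login \
  -H "Content-Type: application/json" \
  -d '{"username": "demo", "password": "demo"}' | jq -r '.token')

# 2. Retrieve the user profile with that token
curl -s http://localhost:3000/api/user -H "Authorization: Bearer $TOKEN"

# 3. Fetch the personalized settings for the same user
curl -s http://localhost:3000/api/settings -H "Authorization: Bearer $TOKEN"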

Step 1: Record Real Traffic with Proxy Mode

Firstly, run the target app or service through Speedscale’s proxy to capture real HTTP traffic. That might look something like this:

speedscale record \
  --name login-flow \
  --proxy-port 9000 \
  --app-port 3000

Now route your frontend to use http://localhost:9000 – Speedscale will intercept and log each request, method, response body, and header.
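
If you would rather generate traffic by hand than click through the frontend, you can point ad-hoc requests at the proxy port directly. The request body here is hypothetical; the point is that anything sent to port 9000 is captured on its way to the app on port 3000:

# Exercise the login endpoint through the Speedscale proxy so the call is recorded
curl -s -X POST http://localhost:9000/api/login \
  -H "Content-Type: application/json" \
  -d '{"username": "demo", "password": "demo"}'

# Subsequent calls go through the same proxy so the whole chain is captured
curl -s http://localhost:9000/api/user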

Step 2: Review Captured Traffic

With the traffic captured, you can now use the Speedscale CLI and Dashboard to inspect the recorded API calls. This lets you see:

  • Endpoint paths (e.g. /api/login, /api/user)
  • Request payloads and headers
  • Mockable response bodies and status codes
  • Dependency chain (e.g., upstream auth APIs or microservices)

Each call can be tagged, trimmed, or filtered – perfect for testing purposes that require specific responses or data types.
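
Conceptually, each recorded call carries the fields listed above. The snippet below is a simplified, hypothetical view of that information rather than Speedscale's actual storage format:

# Illustrative only – not Speedscale's real schema
method: POST
path: /api/login
requestHeaders:
  Content-Type: application/json
requestBody: '{"username": "demo", "password": "demo"}'
responseStatus: 200
responseBody: '{"token": "abc123"}'
upstream:
  - auth-service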

Step 3: Generate and Replay a Mock

With traffic captured, you can now convert the recording into a mock workload:

speedscale generate mock \
  --name login-flow \
  --output ./mocks/login-mock.yaml

And then deploy the mock:

speedscale mock serve \
  --config ./mocks/login-mock.yaml \
  --port 8000

Your API endpoints (e.g., /api/login) will now respond with real, previously recorded data.
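
A quick way to confirm the mock is serving recorded data is to call it directly. Assuming the login call above was part of the recording, a request like this should come back with the previously captured response:

# Hit the mocked login endpoint on port 8000 and inspect the replayed response
curl -i -X POST http://localhost:8000/api/login \
  -H "Content-Type: application/json" \
  -d '{"username": "demo", "password": "demo"}'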

Step 4: Inject Custom Responses and Failures

With this in place, you can now configure a few custom responses and failure modes:

  • Static responses: Always return a fixed body
  • Specific requirements: Only respond when a matching request is received
  • Error handling: Add 500-series responses or timeouts to test fallbacks
  • Headers: Customize CORS with Access-Control-Allow-Origin, or simulate auth tokens

For example, you can use YAML to specify your speedscale mock config and control your message flow:

request:
  method: POST
  path: /api/login
response:
  statusCode: 200
  headers:
    Content-Type: application/json
  body: '{"token": "abc123"}'

This allows you to mock responses for development, support frontend workflows, and build faster without waiting on real services. You can also artificially throttle these mocks or route them in new ways to test degraded service paths, A/B test new implementations entirely, or simulate environmental issues.
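
As a sketch of the failure-injection side, the same config shape can describe an error case. The fields below mirror the structure of the example above rather than Speedscale's exact schema, so treat them as illustrative:

request:
  method: GET
  path: /api/user
response:
  statusCode: 503
  headers:
    Content-Type: application/json
    Retry-After: '5'
  body: '{"error": "service unavailable"}'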

Step 5: Test and Integrate in CI/CD

Finally, you can wire these mocks into your CI/CD pipeline.

speedscale mock serve --config ./mocks/login-mock.yaml &
npm run test

Here, you can add assertions against response bodies, mock dependency failures, or simulate specific timing behaviors (e.g., 200ms latency). Then view logs, inspect test data, and rerun easily – with no extra dependencies needed.
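
In a pipeline, you typically want to start the mock, wait for it to accept connections, run the suite, and clean up afterward. Here is a minimal shell sketch, assuming the mock listens on port 8000 as configured above:

#!/usr/bin/env bash
set -euo pipefail

# Start the mock in the background and remember its PID for cleanup
speedscale mock serve --config ./mocks/login-mock.yaml --port 8000 &
MOCK_PID=$!
trap 'kill "$MOCK_PID"' EXIT

# Wait until the mock accepts connections (any HTTP status counts as up)
for _ in $(seq 1 30); do
  if curl -s -o /dev/null http://localhost:8000/; then break; fi
  sleep 1
done

# Run the test suite against the mocked backend
npm run test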

Advanced CLI Options

Speedscale also offers a few powerful CLI options to interact with your systems more directly, including:

  • speedscale config – Set project IDs, auth, and CLI preferences
  • speedscale dashboard – Sync and manage mocks visually
  • speedscale snapshot – Generate deterministic workloads for regression tests
  • speedscale test – Execute mocks in test mode, validate response code & structure

And that’s just the beginning – there are more ways to interact with this system, and you can explore all the related CLI commands here: https://docs.speedscale.com/setup/install/cli/

Why Choose Speedscale Over Traditional Mock Servers – and Why Dummy APIs Fail

Speedscale unlocks a lot of advanced functionality that just can’t be matched with traditional mock servers.

To begin with, the Speedscale CLI replays real recorded traffic. Traditional mock servers usually rely on synthetic data, and they pale in comparison to the accuracy and specificity of mocks built from actual observed behavior.

There’s also the fact that traditional mock systems treat integration as a secondary concern. In contrast, the Speedscale CLI is built for direct integration into the CI/CD pipeline via a clean command-line tool.

Finally, the Speedscale CLI also offers dynamic response filtering, proxy-mode traffic injection, and other advanced capabilities that aren’t typically part and parcel of traditional mock servers. This all adds up to a much more effective system, and one that can be more readily adapted to different test approaches, use cases, and needs.

This is ultimately a decision about how much chaos you allow into the system. A traditional mock server tries to wrangle chaos, yet by relying on synthetic or generated data it often introduces substantial chaos into the overall testing environment. A mock API built from real traffic reduces the chaos in the system, creating more order and, ultimately, better code and better-integrated systems.

Conclusion: Mock Smarter, Not Harder

Mocking doesn’t have to mean writing brittle static responses or manually wiring up endpoints – and it certainly doesn’t mean having to use synthetic data systems that are far less accurate or useful. With Speedscale CLI, you can:

  • Capture real backend APIs and services
  • Replay and manipulate mock responses for different users or data types
  • Inject random failures and test error handling
  • Support frontend and integration tests with confidence

Whether you’re building APIs, validating contracts, or stress-testing external services, Speedscale empowers you to move from chaos to control – all with a few lines in your terminal. Get started by exploring the documentation, or dive right in with a 30-day free trial!
