Mitmproxy vs Proxymock: Replaying Traffic for Realistic API Testing


Replaying traffic is a core tool in your toolbox when you need to reproduce a tricky bug or validate how your app behaves, and it is especially valuable for applications built on APIs and microservices, where integration and functionality must be validated end to end. Two popular approaches are mitmproxy (a general-purpose, interactive intercepting proxy with powerful replay features) and proxymock (Speedscale’s tool that records traffic and generates layer-7-aware mocks and tests). Both let you replay traffic, but they solve different problems and shine in different workflows. Below I’ll compare the two and help you decide when to use each. If you’re interested in building a complete traffic replay system yourself, you can learn more here.

TL;DR

  • mitmproxy: excellent low-level intercepting, editing, and replaying of recorded HTTP/TLS flows. Best when you want to manually inspect, trim, or surgically replay traffic and when you need a flexible, scriptable proxy.
  • proxymock: built for developer productivity. It records full request/response payloads with L7 context, auto-generates mocks of backend dependencies, and can replay traffic in a way that turns outbound requests into simulated services, which is great for rapid isolation testing and CI.

What “replay” even means here

  • mitmproxy supports client replay (resending recorded requests to the real backend) and server replay (serving saved responses when incoming requests match previously recorded traffic). Replay is protocol-aware: requests are re-sent over the original protocol, such as HTTP or HTTP over TLS, with headers and message formats preserved. It’s a tool for re-exercising recorded flows or for playback-based troubleshooting, and you can isolate and replay a single request for targeted debugging or validation.
  • proxymock replays with simulation: it not only replays recorded traffic locally, it can also stand in for backend services by returning mocks derived from real outbound calls. In other words, when your app makes an outbound call during replay, proxymock can serve a recorded response instead of contacting the real service. That makes isolation testing and reproducible debugging much easier. A minimal sketch of both replay styles follows this list.
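
To make the distinction concrete, here is a minimal plain-Python sketch of the two replay styles. The recorded flow, URL, and payload are hypothetical stand-ins rather than output from either tool.

```python
import urllib.request

# Hypothetical "recorded flow": in practice this would come from a capture file.
recorded = {
    "method": "GET",
    "url": "http://api.example.com/v1/orders/42",
    "response_body": b'{"id": 42, "status": "shipped"}',
}

def client_replay() -> bytes:
    # Client replay: re-send the recorded request to the real backend
    # and see how it responds today.
    req = urllib.request.Request(recorded["url"], method=recorded["method"])
    with urllib.request.urlopen(req) as resp:
        return resp.read()

def server_replay(method: str, url: str) -> bytes | None:
    # Server replay: answer an incoming request from the recording
    # when it matches, instead of contacting the real service.
    if (method, url) == (recorded["method"], recorded["url"]):
        return recorded["response_body"]
    return None  # no match: fall through to a real call or an error
```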

Practical differences (developer-first view)

1) Observability & Layer-7 understanding

Both mitmproxy and proxymock expose raw flows (HTTP request/response bodies, headers, TLS), and have powerful filtering, interactive editing, and scripting hooks. Both can be great for security testing and ad-hoc debugging.

Both tools can record remotely in Kubernetes clusters, but mitmproxy requires you to manage your own storage. proxymock can be connected to a paid SaaS service that takes care of storage for you.

2) Mocking & isolation

  • mitmproxy can replay saved responses (server-side replay), but it’s not primarily a mock generator: you replay flows and you can script behavior, but building a suite of mocks for many backend services is manual work. In other words, it can be a static tennis partner that always returns the ball exactly as served (a hand-rolled example of that manual work is sketched after this list).
  • proxymock automatically converts recorded outbound calls into mock servers you can run locally (or in CI). That means you can run the entire app with backend dependencies replaced by recorded, realistic mocks, which is superb for deterministic dev/debug and CI tests. This approach is a cost-effective way to simulate backend services without the overhead of maintaining real environments. proxymock also provides sophisticated data transformation systems to make mocks more realistic and programmable, which keeps them repeatable, extensible, and far less brittle.
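
For a sense of what that manual work looks like on the mitmproxy side, here is an illustrative, hand-rolled mock built from a single recorded response. The endpoint, port, and payload are assumptions for the example, and a real dependency would need many more routes kept up to date by hand.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# A single "recorded" response, copied by hand from a captured flow.
RECORDED_BODY = json.dumps({"id": 42, "status": "shipped"}).encode()

class RecordedResponse(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/api/v1/orders/42":
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(RECORDED_BODY)
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    # Point the app's orders-service URL at this stand-in while testing.
    HTTPServer(("127.0.0.1", 8081), RecordedResponse).serve_forever()
```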

3) Integration into CI / tests

  • mitmproxy can be scripted into CI (mitmdump, addons, saved flows) and is commonly used for regression or environment replay scenarios. It’s flexible, but you’ll write the glue code and manage flow files yourself.
  • proxymock is explicitly designed to generate mocks and tests from recorded runs and to be used in CI — you can record once and then use the generated mocks to run deterministic tests without external dependencies. Running tests with these generated mocks ensures consistent and reliable validation across different environments. That accelerates test creation and reduces flakiness.

mitmproxy works well if you’re willing to build and maintain the full pipeline and environments your tests need. proxymock (along with the optional SaaS service) provides a portal where replays can be managed, observed, and tuned. Both are capable, but one is a DIY toolkit while the other is closer to a one-click experience.
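
As a rough sketch of that mitmproxy-style glue code, the pytest fixture below starts mitmdump in server-replay mode for a test session and points the app under test at it. The flow-file path, port, and environment-variable convention are assumptions for illustration.

```python
import subprocess
import time

import pytest

@pytest.fixture(scope="session")
def replay_proxy():
    # Serve saved responses for any request that matches the recorded flows.
    proc = subprocess.Popen(
        ["mitmdump", "--listen-port", "8080", "--server-replay", "ci/flows.mitm"]
    )
    time.sleep(2)  # crude startup wait; a real setup would poll the port
    yield "http://127.0.0.1:8080"
    proc.terminate()
    proc.wait()

def test_checkout(replay_proxy, monkeypatch):
    # The service under test reads HTTP_PROXY, so its outbound calls are
    # routed through mitmdump and answered from the recording.
    monkeypatch.setenv("HTTP_PROXY", replay_proxy)
    ...  # exercise the service and assert on its behavior
```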

Example workflows

When to reach for mitmproxy

You’re investigating a TLS-weirdness or want to surgically replay a handful of requests:

  1. Start mitmproxy/mitmweb and capture a session.
  2. Edit/delete flows interactively (remove noise like static asset GETs); a scripted version of this cleanup is sketched after the list.
  3. Use client-replay to resend selected requests or server-replay to serve saved responses for matching requests.
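
If you’d rather script step 2 than click through the UI, the sketch below trims a capture with mitmproxy’s Python API, assuming the session was saved with mitmdump’s -w option; the noise heuristics are illustrative.

```python
# trim_flows.py: drop static-asset GETs from a recorded session
# (written with: mitmdump -w capture.mitm).
from mitmproxy import http, io

NOISE_SUFFIXES = (".css", ".js", ".png", ".jpg", ".svg", ".ico", ".woff2")

with open("capture.mitm", "rb") as src, open("trimmed.mitm", "wb") as dst:
    writer = io.FlowWriter(dst)
    for flow in io.FlowReader(src).stream():
        if (
            isinstance(flow, http.HTTPFlow)
            and flow.request.method == "GET"
            and flow.request.path.split("?")[0].lower().endswith(NOISE_SUFFIXES)
        ):
            continue  # skip the noise
        writer.add(flow)

# Step 3 then replays the trimmed file, for example:
#   mitmdump --client-replay trimmed.mitm   # resend requests to the real backend
#   mitmdump --server-replay trimmed.mitm   # serve saved responses to matching requests
```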

Great when you need an interactive, low-level probe or you’re building security-oriented tests. mitmproxy is also valuable for penetration testing and identifying security vulnerabilities during traffic replay.

When to choose proxymock

You want repeatable local dev environments and CI tests without spinning up prod services:

  1. Run proxymock while exercising your app (or capture from a remote cluster). proxymock records L7 payloads and outbound interactions.
  2. proxymock auto-generates mock servers and (optionally) tests from the recording.
  3. Run your app locally with proxymock serving mocked backend responses. Your app behaves as if the real dependencies existed — but you get deterministic, replayable behavior.

This workflow lets you exercise your app’s critical interactions against realistic, recorded mock data.

Perfect for creating isolation tests and rapidly iterating on features that touch databases or 3rd-party APIs.
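
An isolation test in this workflow can stay very small. The sketch below assumes proxymock is already running locally and answering the app’s outbound calls from a recording (see the proxymock docs for the exact commands); the app URL, endpoint, and expected payload are hypothetical.

```python
import json
import urllib.request

APP_URL = "http://127.0.0.1:3000"  # your service, configured to hit the mocks

def test_order_lookup_is_deterministic():
    with urllib.request.urlopen(f"{APP_URL}/orders/42") as resp:
        body = json.load(resp)
    # Downstream dependencies are replayed from the recording, so this
    # assertion holds on every run, on every machine, with no real backends.
    assert body["status"] == "shipped"
```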

Common challenges and solutions

API testing is an essential part of the software development process, but it’s not without its hurdles. Ensuring the reliability, security, and performance of APIs often means navigating a range of challenges that can slow down releases or introduce risk. Here are some of the most common issues teams face during API testing—and practical ways to address them:

1. Ensuring reliability across changing APIs

APIs evolve rapidly, and even small changes can break existing integrations or test cases. This makes it difficult to maintain reliable tests that consistently validate expected results.

*Solution:* Adopt traffic capture and replay tools that can record real production traffic and automatically update test suites as your API changes. By replaying real traffic, you can quickly identify compatibility issues and ensure your API performs as expected, even as new versions are deployed.

2. Securing sensitive data during testing

Testing with real data can expose sensitive information, leading to potential security vulnerabilities or compliance risks.

*Solution:* Use tools that support automatic redaction of sensitive data during traffic capture and replay. This ensures that your testing process doesn’t inadvertently leak PII or other confidential information, keeping your API tests both safe and compliant.
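
If you are assembling this yourself with mitmproxy, a small addon can scrub obvious secrets at capture time. This is only a sketch, run via mitmdump’s -s flag, and the header list is illustrative; body fields containing PII would need similar treatment.

```python
# redact.py: blank out sensitive headers before flows are inspected or saved.
# Run with: mitmdump -s redact.py -w capture.mitm
from mitmproxy import http

SENSITIVE_HEADERS = ("authorization", "cookie", "set-cookie", "x-api-key")

def response(flow: http.HTTPFlow) -> None:
    for message in (flow.request, flow.response):
        if message is None:
            continue
        for name in SENSITIVE_HEADERS:
            if name in message.headers:
                message.headers[name] = "REDACTED"
```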

3. Validating performance under real-world conditions

APIs may behave differently under load or in production-like environments, making it hard to predict slow response times or bottlenecks before deployment.

*Solution:* Leverage traffic replay tools to simulate real-world load and usage patterns. By replaying production traffic, you can perform load testing and identify performance issues early, allowing you to fix issues before they impact users.
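
In the simplest case, a recording can be turned into a crude load probe with a few lines of Python, as sketched below; the target host, paths, and concurrency are assumptions, and purpose-built replay tools add pacing, think time, and realistic traffic mixes on top of this.

```python
from concurrent.futures import ThreadPoolExecutor
import urllib.error
import urllib.request

TARGET = "http://staging.internal:8080"  # hypothetical staging host
RECORDED_PATHS = ["/api/v1/orders/42", "/api/v1/orders/43", "/healthz"]

def hit(path: str) -> int:
    # Return the HTTP status, treating 4xx/5xx responses as data, not failures.
    try:
        with urllib.request.urlopen(TARGET + path) as resp:
            return resp.status
    except urllib.error.HTTPError as err:
        return err.code

# Replay the recorded GETs concurrently and count server-side errors.
with ThreadPoolExecutor(max_workers=20) as pool:
    statuses = list(pool.map(hit, RECORDED_PATHS * 50))

print(sum(1 for s in statuses if s >= 500), "server errors out of", len(statuses))
```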

4. Detecting security vulnerabilities

APIs are a common target for attacks such as SQL injection or cross-site scripting. Manual testing often misses these edge cases.

*Solution:* Integrate security testing into your API testing process by using tools that allow you to replay and modify captured traffic. This makes it easier to test for incorrect handling of malicious requests and validate that your API is protected against common threats.
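
With mitmproxy, “replay and modify” can be as simple as an addon that tampers with recorded requests as they are re-sent during client replay. The probe string below is illustrative, and this is a sketch, not a substitute for a proper security scanner.

```python
# fuzz_replay.py: append a classic injection probe to every query parameter
# during client replay, then watch how the backend responds (500s, stack
# traces, or suspicious 200s are all worth a closer look).
# Run with: mitmdump -s fuzz_replay.py --client-replay trimmed.mitm
from mitmproxy import http

PROBE = "' OR '1'='1"

def request(flow: http.HTTPFlow) -> None:
    for key in list(flow.request.query.keys()):
        flow.request.query[key] = flow.request.query[key] + PROBE
```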

5. Managing test environments and dependencies

Setting up and maintaining test environments that accurately reflect production can be time-consuming and error-prone, especially when dealing with multiple backend services.

*Solution:* Automate environment setup by using mock services generated from real traffic. This approach allows you to run functional testing and integration testing in isolation, reducing flakiness and making your tests more reliable.

By proactively addressing these challenges with the right tools and processes, you can make your API testing more robust, secure, and efficient—ultimately improving the reliability and performance of your APIs in production.

Pros & cons quick summary

Both tools offer unique benefits depending on the testing scenario.

mitmproxy

  • Pros: extremely flexible, TLS-capable, granular control, rich scripting/addon ecosystem, excellent for security and forensic replay.
  • Cons: selection is flow-oriented and manual; mocking/back-end simulation is DIY.

proxymock

  • Pros: L7-aware recording, automatic mock generation, tests-from-traffic, and a smooth local/CI workflow for backend simulation. It is designed as a traffic replay tool that streamlines backend simulation and test automation, with faster test cycles and simpler configuration as the payoff.
  • Cons: higher-level automation makes it opinionated — if you need raw packet/TCP-level fiddling or highly custom proxy scripting, mitmproxy still wins.

Recommendation — which to use?

  • If your job is investigative, security-focused, or you need deep, manual control over flows → mitmproxy.
  • If your goal is developer productivity, deterministic local reproduction, auto-generated mocks, and CI-friendly replay → proxymock.

When choosing a tool, also consider how each solution manages test configurations and replay scenarios; ease of managing configurations can be just as critical as traffic generation capabilities.

Closing notes (and a tiny bit of engineering therapy)

Replaying traffic is a potent technique, but it becomes truly useful when paired with context. There is no such thing as a typical user, so testing should account for a wide range of user behaviors, and replaying real traffic surfaces bugs that traditional, scripted testing often misses. APIs usually need to communicate with other systems, and replay is a natural way to prove those integrations are reliable; unlike GUI testing, which focuses on the user interface and end-user experience, it validates backend interactions and system-level functionality, ideally across the operating systems you support. Common use cases include regression testing, debugging, and performance validation, and the key test types you can drive from recorded traffic are regression, load, configuration, and security tests. Replay is also a good way to validate a new version of an API by comparing its behavior to the previous one, catching regressions before they ship. Throughout, account data and other sensitive information must be handled securely, with access tightly controlled to prevent unauthorized exposure. Tools like mitmproxy and proxymock exist to make this kind of realistic API testing practical: checking that APIs return the correct responses and, ultimately, that they meet expectations for functionality, security, and performance.

Knowing which L7 calls matter, and being able to replace noisy or flaky dependencies with deterministic mocks, is key to good traffic replay-based testing. mitmproxy is your scalpel; proxymock is your lab assistant who records everything, builds the mocks, and files neatly labeled samples in the fridge. Both are tools any modern backend engineer should know how to use.

Get started for free

ProxyMock desktop or Speedscale Cloud — choose your path.