This event has passed. Read the presentation summary below or view upcoming events.

TestingMind Test Automation Summit

Matthew LeRay, CTO/Co-Founder of Speedscale, presented 'Smarter Testing for AI Generated Code' at the TestingMind Test Automation Summit in Atlanta on October 22, 2025.

Event Details

October 22, 2025 • Atlanta, GA

Smarter Testing for AI Generated Code

New Strategies for an Agentic AI World

Matthew LeRay
CTO/Co-Founder, Speedscale
Download Full Presentation (PDF)

Presentation Overview

As AI coding assistants become ubiquitous in software development, quality assurance teams face unprecedented challenges. This presentation explored how QA professionals can adapt their strategies to handle the unique characteristics of AI-generated code.

Key Topics Covered

Adapting QA to AI Generated Code

Understanding the fundamental differences between traditional code and AI-generated code, and how QA processes must evolve.

Challenges with AI Coding Assistants

84% of developers use AI tools daily, yet confidence lags behind adoption: 46% actively distrust the accuracy of AI tool output.

Case Studies & How to Improve

Real-world examples of QA teams successfully adapting to AI-generated code with practical strategies and tools.

On the Horizon

Future trends including agentic AI for QA, AI-powered pair programming, and closed-loop ticket resolution.

Common Challenges Identified

Three Critical Issues
  • AI is Stochastic, Testing is Deterministic: AI coding agents produce variable, non-repeatable results, while QA pipelines depend on deterministic, repeatable outcomes.
  • QA Test Environments are not Microservices-Enabled: Traditional test environments struggle to reproduce inter-service interactions and to keep test data synchronized across services.
  • AI Produces Lots of Broken Code: AI assistants emit large volumes of code that may contain defects, yet testing happens too far "right" in the development cycle, catching issues late.

Solutions: Process, Tools, Expectations

Treat Tests and Environments as Cattle, not Pets

Use LLM tools to generate, manage, and analyze test flows. Implement fuzzing and edge-case testing. Replicate production conditions using simulation engines and ephemeral environments rather than static mocks.
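To make the fuzzing idea concrete, here is a minimal property-based test sketch in Python using the hypothesis library. The checkout endpoint, payload shape, and local service URL are hypothetical stand-ins, not details from the talk.

```python
# Fuzz a hypothetical /v1/checkout endpoint with property-based testing.
# The service URL and payload shape are illustrative, not from the talk.
import requests
from hypothesis import given, settings, strategies as st

BASE_URL = "http://localhost:8080"  # e.g. an ephemeral environment spun up per CI run

@settings(max_examples=200, deadline=None)  # network calls, so disable the per-example deadline
@given(
    quantity=st.integers(min_value=-(2**31), max_value=2**31 - 1),
    sku=st.text(min_size=0, max_size=64),
)
def test_checkout_never_500s(quantity, sku):
    """Whatever the input, the service should fail gracefully, never crash."""
    resp = requests.post(f"{BASE_URL}/v1/checkout", json={"sku": sku, "quantity": quantity})
    assert resp.status_code < 500, f"server error for sku={sku!r}, quantity={quantity}"
```

Pointing a test like this at a fresh ephemeral environment on every CI run keeps the "cattle" model honest: any instance can be destroyed and recreated without hand-tuning.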

Add LLM-Powered Code Review CI Step

Microsoft research shows its Code Researcher agent achieved a 58% crash-resolution rate on Linux kernel crashes, compared to 37.5% for traditional methods.
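As a sketch of what an LLM review gate in CI could look like (not Code Researcher itself, whose pipeline is far more involved), the script below sends the pull-request diff to a model and fails the step unless the review passes. The model name, prompt, and pass/fail convention are illustrative assumptions.

```python
# Minimal LLM review gate for CI: send the branch diff to a model and fail
# the step if the review flags likely defects. Prompt, model name, and the
# PASS convention are illustrative; the talk's tooling may differ.
import subprocess
import sys
from openai import OpenAI  # assumes OPENAI_API_KEY is set in CI secrets

diff = subprocess.run(
    ["git", "diff", "origin/main...HEAD"],
    capture_output=True, text=True, check=True,
).stdout

client = OpenAI()
review = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {"role": "system", "content": (
            "You are a strict code reviewer. Reply with the single word PASS "
            "if the diff looks safe, otherwise list likely defects."
        )},
        {"role": "user", "content": diff},
    ],
).choices[0].message.content

print(review)
sys.exit(0 if review.strip() == "PASS" else 1)  # non-zero exit blocks the merge
```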

Replace Some Integration Tests with Isolation Tests

Use OpenAPI CI checks, traffic replay, and simulation environments to test services in isolation while maintaining realistic behavior.
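A minimal contract check along these lines might look like the sketch below, assuming a service that publishes its OpenAPI document at /openapi.json and an illustrative orders endpoint; it also assumes the response schema is inlined rather than hidden behind $ref indirection.

```python
# Isolation-style contract check: validate a live (or replayed) response
# against the JSON schema for one endpoint, pulled from the service's
# OpenAPI document. URL and endpoint path are hypothetical stand-ins.
import requests
from jsonschema import validate, ValidationError

spec = requests.get("http://localhost:8080/openapi.json").json()
# Schema for a successful GET /v1/orders/{id} response (path is illustrative;
# assumes the schema is inlined, with no $ref indirection to resolve).
schema = (
    spec["paths"]["/v1/orders/{id}"]["get"]["responses"]["200"]
    ["content"]["application/json"]["schema"]
)

resp = requests.get("http://localhost:8080/v1/orders/42")
try:
    validate(instance=resp.json(), schema=schema)
    print("response matches the OpenAPI contract")
except ValidationError as err:
    raise SystemExit(f"contract drift detected: {err.message}")
```

Run against a simulation environment or replayed traffic instead of live dependencies, the same check tests each service in isolation while still exercising realistic responses.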

Test for FinOps, not Stress or Performance

As one contractor wisely noted, "Anything is Possible (Given Enough Time and Money)." Rather than chasing raw stress or performance numbers, focus testing on efficiency and cost optimization.
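One way to encode that mindset is a budget assertion that fails the build when estimated spend drifts. All of the rates and measurements below are made-up placeholders; real values would come from your cloud bill and load-test metrics.

```python
# FinOps-flavored test: fail the build when the estimated cost per 1,000
# requests exceeds an agreed budget. Every number here is a hypothetical
# placeholder, not a figure from the presentation.
CPU_SECONDS_PER_REQUEST = 0.012   # from a load-test run (hypothetical)
GB_SECONDS_PER_REQUEST = 0.004    # memory use per request (hypothetical)
PRICE_PER_CPU_SECOND = 0.000024   # $/vCPU-second (illustrative cloud rate)
PRICE_PER_GB_SECOND = 0.0000025   # $/GB-second (illustrative cloud rate)
BUDGET_PER_1K_REQUESTS = 0.50     # dollars; the team's agreed ceiling

def test_cost_per_1k_requests_within_budget():
    cost = 1000 * (
        CPU_SECONDS_PER_REQUEST * PRICE_PER_CPU_SECOND
        + GB_SECONDS_PER_REQUEST * PRICE_PER_GB_SECOND
    )
    assert cost <= BUDGET_PER_1K_REQUESTS, (
        f"estimated ${cost:.4f} per 1k requests exceeds "
        f"${BUDGET_PER_1K_REQUESTS:.2f} budget"
    )
```

Run under pytest, this turns cost regressions into ordinary test failures instead of end-of-month billing surprises.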

Tactical Tips and Tricks

  • All artifacts in Markdown: Standardize on Markdown for better AI processing and human readability.
  • Provide centralized testing services via MCP: Use the Model Context Protocol to give AI agents access to testing capabilities (see the sketch after this list).
  • Advocate for QA as part of the AI "inner loop": Move testing earlier in the development cycle.
  • Automate everything: Build agents for summarization, test case generation, DevOps, and project management tasks.
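For the MCP tip above, a toy server built with the official MCP Python SDK shows the shape of the idea; the server name and the choice to wrap pytest are illustrative, not from the presentation.

```python
# A toy MCP server exposing one testing capability to AI agents. Built with
# the official MCP Python SDK; the server name and the pytest wrapper are
# illustrative assumptions, not details from the talk.
import subprocess
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("qa-testing-services")

@mcp.tool()
def run_test_suite(path: str = "tests/") -> str:
    """Run the pytest suite at the given path and return the summary output."""
    result = subprocess.run(
        ["pytest", path, "--tb=short", "-q"],
        capture_output=True, text=True,
    )
    return result.stdout[-4000:]  # truncate so the agent gets a readable tail

if __name__ == "__main__":
    mcp.run()  # serves over stdio by default, ready for an agent to attach
```

Centralizing tools like this means every agent in the organization calls the same vetted testing services instead of improvising its own.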

On the Horizon

The future of QA includes:

  • Agentic AI for QA: AI agents capable of testing new builds unattended, backed by MCP servers that answer deeper questions about codebases and specs.
  • Pair Programmer Evolution: AI working alongside QA professionals to enhance capabilities.
  • Closed-loop Ticket Resolution: AIs finding bugs, opening tickets, and resolving them without human interaction.

Learn More

For more information about the TestingMind Test Automation Summit and other events, visit their website.

Ready to test smarter?

See how Speedscale helps teams adapt to AI-generated code with traffic replay, service virtualization, and realistic test environments.