Distributed Load Testing Using Locust
Article Summary
Glance's engineering team scaled their load testing from basic scripts to a full Kubernetes-powered distributed system. Here's how they did it with Locust.
Ricky Mondal, Senior Software Engineer at Atlassian (formerly at Glance), shares a comprehensive guide to building scalable load testing infrastructure. The article walks through everything from basic Locust setup to deploying distributed tests across Kubernetes clusters.
Key Takeaways
- Locust's Python-based test scripts and event-driven architecture give it an edge over thread-based tools like JMeter
- A structured codebase lets any number of microservices be load-tested independently without conflicts
- Custom Docker images plus master-worker K8s architecture enable massive scale
- FastHttpUser mode and auto-rebalancing workers maximize RPS throughput
- A Horizontal Pod Autoscaler (HPA) on worker pods auto-scales based on CPU thresholds
You can build production-grade distributed load testing with Locust, Docker, and Kubernetes that scales to hundreds of thousands of concurrent users.
About This Article
Ricky Mondal's team needed to test system performance across multiple microservices before going live. They wanted to simulate hundreds of concurrent users and measure CPU, memory, network bandwidth, and response time bottlenecks.
They set up Locust with a master-worker architecture on Kubernetes. Custom Docker images packaged the test code so each microservice could be tested independently. They used HPA to automatically scale worker pods based on CPU thresholds.
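As a hedged sketch of how such a master-worker setup can be driven (the deployment name `locust-worker`, the hostname `locust-master`, and the CPU threshold below are illustrative assumptions, not details from the article):

```shell
# Start the master process (coordinates workers, serves the web UI on :8089)
locust -f locustfile.py --master

# Start a worker that connects to the master (one per worker pod)
locust -f locustfile.py --worker --master-host=locust-master

# On Kubernetes, auto-scale the worker pods on CPU utilization,
# assuming a Deployment named "locust-worker" (hypothetical name)
kubectl autoscale deployment locust-worker --cpu-percent=70 --min=2 --max=20
```

In this layout the master distributes the simulated users across whatever workers are connected, so adding worker pods directly raises achievable load.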
The distributed setup let them run load tests at scale with auto-rebalancing workers and FastHttpUser mode. Teams could determine exactly how much resource capacity each service needed and which autoscaling policies would absorb traffic spikes across their microservices.