A Saga of Improvement in Android App Performance (Part 2)
Article Summary
Tokopedia built an automated performance testing pipeline that catches regressions before they hit production. Here's how they measure every build, every night.
In Part 2 of their performance series, Tokopedia's engineering team reveals their pre-production testing infrastructure. They run nightly UI automation tests on Firebase Test Lab across multiple devices, parsing performance logs and blocking PRs that degrade metrics.
Key Takeaways
- Nightly Jenkins jobs automatically test the dev and release branches on Firebase Test Lab
- Custom PR checker triggers via GitHub comment and blocks merges if performance degrades
- Firebase debug logs are parsed and the metrics stored in MySQL for dashboard visualization
- Color-coded dashboard shows metric health against targets across different devices
- Shell scripts extract TraceMetric data from device logs for trend analysis (see the parsing sketch after this list)
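
The article credits shell scripts with pulling TraceMetric entries out of the device logs; the Kotlin sketch below illustrates the same parsing idea. The log line format, trace names, and the `logcat.txt` file name are illustrative assumptions, not Tokopedia's actual format.

```kotlin
import java.io.File

// Hypothetical log line format; the real Firebase Performance debug output may differ.
// Example line: "I/FirebasePerformance: Logging TraceMetric - home_page_load 842000us"
private val TRACE_METRIC_LINE = Regex("""Logging TraceMetric - (\S+) (\d+)us""")

// Returns the average duration in milliseconds for each trace name found in the log.
fun parseTraceMetrics(logFile: File): Map<String, Double> {
    val samples = mutableMapOf<String, MutableList<Long>>()
    logFile.forEachLine { line ->
        TRACE_METRIC_LINE.find(line)?.let { match ->
            val (name, micros) = match.destructured
            samples.getOrPut(name) { mutableListOf() }.add(micros.toLong())
        }
    }
    return samples.mapValues { (_, values) -> values.average() / 1000.0 }
}

fun main() {
    // logcat.txt stands in for a log pulled from a Firebase Test Lab device.
    val averagesMs = parseTraceMetrics(File("logcat.txt"))
    averagesMs.forEach { (trace, ms) -> println("$trace: ${"%.1f".format(ms)} ms") }
}
```

Averages like these are what end up in MySQL and feed both the trend dashboard and the PR comparison described below.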
Tokopedia automated their entire pre-production performance validation pipeline, from nightly builds to PR blocking, using Firebase Test Lab, Jenkins, and custom log parsing.
About This Article
Tokopedia needed to validate performance changes before merging code to production. They required a way to compare metrics across multiple device configurations and detect regressions automatically.
Vishal Gupta's team built a PR checker Jenkins job that triggers from GitHub comments. It runs Firebase Test Lab tests, compares results against baseline daily builds using MySQL data, and calculates percentage changes for each metric.
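
As a rough illustration of the comparison step, here is a Kotlin sketch that fetches the latest daily-build baseline from MySQL over JDBC. The table name, column names, and branch value are hypothetical since the article does not describe the schema, and the MySQL Connector/J driver is assumed to be on the classpath.

```kotlin
import java.sql.DriverManager

// Hypothetical schema: build_metrics(branch, build_date, metric_name, value_ms).
fun fetchBaselineMetrics(jdbcUrl: String, user: String, password: String): Map<String, Double> {
    val sql = """
        SELECT metric_name, AVG(value_ms) AS avg_value
        FROM build_metrics
        WHERE branch = 'develop'
          AND build_date = (SELECT MAX(build_date) FROM build_metrics WHERE branch = 'develop')
        GROUP BY metric_name
    """.trimIndent()

    val baseline = mutableMapOf<String, Double>()
    DriverManager.getConnection(jdbcUrl, user, password).use { conn ->
        conn.createStatement().use { stmt ->
            stmt.executeQuery(sql).use { rs ->
                while (rs.next()) {
                    baseline[rs.getString("metric_name")] = rs.getDouble("avg_value")
                }
            }
        }
    }
    return baseline
}
```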
The PR checker blocks merges when performance degrades beyond defined thresholds, letting developers catch and fix regressions across different device models and OS versions before they reach production.
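
And a minimal sketch of the gating step: compute the percentage change per metric against the baseline and fail the build when any metric regresses past a threshold. The 10% threshold, metric names, and sample values are made up for illustration; exiting with a non-zero code is one common way to make the Jenkins PR job fail and block the merge.

```kotlin
import kotlin.system.exitProcess

// Hypothetical rule: fail the PR check if any metric is more than 10% slower than baseline.
const val REGRESSION_THRESHOLD_PERCENT = 10.0

fun percentageChange(baseline: Double, current: Double): Double =
    (current - baseline) / baseline * 100.0

// Prints a per-metric report and reports whether any metric crossed the threshold.
fun hasRegression(baseline: Map<String, Double>, current: Map<String, Double>): Boolean {
    var regressed = false
    for ((metric, currentValue) in current) {
        val baselineValue = baseline[metric] ?: continue // no baseline yet for this metric
        val change = percentageChange(baselineValue, currentValue)
        val status = if (change > REGRESSION_THRESHOLD_PERCENT) "REGRESSED" else "OK"
        println("%-30s baseline=%.1fms current=%.1fms change=%+.1f%% %s"
            .format(metric, baselineValue, currentValue, change, status))
        if (change > REGRESSION_THRESHOLD_PERCENT) regressed = true
    }
    return regressed
}

fun main() {
    // Illustrative numbers only; real values come from the Test Lab run and the MySQL baseline.
    val baseline = mapOf("home_page_load" to 820.0, "search_results_render" to 450.0)
    val current = mapOf("home_page_load" to 905.0, "search_results_render" to 440.0)
    // A non-zero exit code is what lets the Jenkins PR job mark the check as failed.
    if (hasRegression(baseline, current)) exitProcess(1)
}
```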