Scaling Mobile UI Testing with AI
Article Summary
Trendyol scaled from 4,869 to 10,400 UI tests in under a year — while keeping execution under one hour. Here's how AI became their test generation engine.
Trendyol's mobile team built a complete AI-powered testing workflow that generates smoke tests, regression tests, and analytics validation across five regions. They integrated AI with Maestro (Android) and XCUITest (iOS), using structured prompts and MCP (Model Context Protocol) servers to make test generation reliable and maintainable.
Key Takeaways
- 114% test growth in one year, maintaining 96.6% stability across all releases
- AI generates tests from UI hierarchy captures using structured scripts and rule documents
- Maestro MCP enables AI to debug tests on live devices and fix failures automatically
- Every test runs in 5 regions (TR, SA, UAE, RO, AZ) with automated localization
- AI scores Jira tasks and generates event tests for analytics validation
Trendyol doubled their UI test coverage in under a year by combining consistent project structure, custom tooling, and AI workflows that generate and maintain tests across platforms and regions.
About This Article
Trendyol's Android test suite used Espresso, but Espresso lacked the visual debugging tools and execution speed the team needed. As testing expanded across five regions with different languages and right-to-left layouts, those limitations became harder to ignore.
The team switched to Maestro and built on top of it: they added snapshot testing and HTML failure reports, and created an Android Studio plugin to support YAML tests. They also went through the codebase to add consistent element IDs and localized strings, which made reliable AI test generation possible.
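Maestro flows are declarative YAML, which is part of what makes them tractable for AI generation. A minimal sketch of a smoke-test flow built on stable element IDs (the `appId` and IDs here are hypothetical, not from Trendyol's codebase):

```yaml
# Hypothetical Maestro smoke flow; appId and element IDs are illustrative.
appId: com.example.shop
---
- launchApp
- assertVisible:
    id: "home_search_bar"   # stable ID added during the codebase cleanup pass
- tapOn:
    id: "home_search_bar"
- inputText: "sneakers"
- assertVisible:
    id: "search_results_list"
```

Consistent IDs like these are what let generated selectors survive UI copy changes and redesigns.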
AI now generates smoke tests, page objects, and localized test variants without manual effort. The team maintains 96.6% stability across daily builds and merge requests in Turkey, Saudi Arabia, the UAE, Romania, and Azerbaijan.
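One way a single flow can serve all five regions is Maestro's environment-variable substitution, so only the localized inputs vary per run. A sketch under that assumption (the flow file, `appId`, IDs, and search terms are hypothetical):

```yaml
# Hypothetical localized flow: the search term is injected per region,
# so one flow file can cover TR, SA, UAE, RO, and AZ.
appId: com.example.shop
---
- launchApp
- tapOn:
    id: "home_search_bar"
- inputText: ${SEARCH_TERM}
- assertVisible:
    id: "search_results_list"
```

Each regional run would then pass its own value, e.g. `maestro test -e SEARCH_TERM="spor ayakkabı" flows/search.yaml` for Turkey.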