Measuring Kotlin Build Performance
Article Summary
Uber ran 129 experiments across 354 projects to answer one question: What's the real cost of adopting Kotlin at scale?
Uber's Mobile Engineering team partnered with JetBrains to measure Kotlin build performance across their massive Android monorepo with 2,000+ modules. They generated 1.4 million lines of code in 13 different configurations to understand the tradeoffs of different project structures and tooling choices.
Key Takeaways
- Kotlin reduced source code by 40% compared to functionally equivalent Java
- Kapt added 95% overhead versus pure Kotlin without annotation processing
- Error Prone static analysis contributed 70% overhead on Java builds
- Mixed Kotlin/Java in same module showed measurable performance impact
- 95% of engineers surveyed were willing to accept slower builds in order to use Kotlin
Compilation time grows linearly with project size, but annotation processing (Kapt) and mixed source sets create the biggest performance bottlenecks in Kotlin builds.
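Kapt's cost comes from how it works: it generates Java stubs for Kotlin sources and then runs annotation processors over them in a separate javac pass. Uber measured this under Buck, but the same tradeoff appears in an ordinary Gradle module; a minimal sketch, where the plugin versions and the Dagger dependency are illustrative assumptions rather than details from the article:

```kotlin
// build.gradle.kts -- hypothetical module; versions and the Dagger
// dependency are illustrative, not taken from the article.
plugins {
    kotlin("jvm") version "1.9.24"
    kotlin("kapt") version "1.9.24" // annotation processing via generated Java stubs
}

dependencies {
    implementation("com.google.dagger:dagger:2.51")
    // Each kapt processor adds a stub-generation pass plus a javac
    // annotation-processing pass on top of the normal Kotlin compile,
    // which is the source of the ~95% overhead the study measured.
    kapt("com.google.dagger:dagger-compiler:2.51")
}
```

Removing the `kapt` block (or avoiding annotation processors in hot modules) is the configuration the study compared against as "pure Kotlin."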
About This Article
Uber's Android monorepo contained over 20 applications and 2,000 modules. The team needed to determine whether Kotlin would help or hurt developer productivity, interoperate cleanly with existing Java code, and preserve the user experience. They lacked reliable performance data at scale to make that decision.
Uber partnered with JetBrains to build a project generation workflow based on Apache Thrift specifications. They created 354 functionally equivalent projects spanning a matrix of 13 build configurations. Then they ran 129 controlled experiments on CI machines using the Buck build system, collecting metrics from the Chrome trace files Buck emits.
The experiments showed that Kotlin's type inference adds roughly 8% compilation overhead, that compilation time scales linearly with project size, and that mixed Kotlin/Java modules need careful planning in high-throughput repositories. This data let Uber make an informed decision about adopting Kotlin.
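The type-inference overhead refers to the constraint solving kotlinc performs for every type the code leaves implicit. A small illustrative snippet (my example, not from the study) of what the compiler has to work out:

```kotlin
fun main() {
    // Every inferred declaration is extra type-solving work for kotlinc.
    // Here the compiler derives Map<Boolean, List<Int>> from listOf's
    // element type and the lambda's return type.
    val grouped = listOf(1, 2, 3, 4).groupBy { it % 2 == 0 }

    // An explicit annotation documents the type, but the compiler still
    // infers the right-hand side before checking it against the annotation:
    val explicit: Map<Boolean, List<Int>> = listOf(1, 2, 3, 4).groupBy { it % 2 == 0 }

    println(grouped == explicit) // prints "true"
}
```

At Uber's scale, that per-declaration solving summed to the measured single-digit-percent overhead; for a typical module it is negligible next to the Kapt and mixed-source costs above.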