Uber Apr 2, 2019

Measuring Kotlin Build Performance

Article Summary

Uber ran 129 experiments across 354 projects to answer one question: What's the real cost of adopting Kotlin at scale?

Uber's Mobile Engineering team partnered with JetBrains to measure Kotlin build performance across their massive Android monorepo with 2,000+ modules. They generated 1.4 million lines of code in 13 different configurations to understand the tradeoffs of different project structures and tooling choices.

Key Takeaways

Critical Insight

Compilation time grows linearly with project size, but annotation processing (Kapt) and mixed source sets create the biggest performance bottlenecks in Kotlin builds.
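The Kapt bottleneck comes from an extra step in the pipeline: Kapt first generates Java stubs for Kotlin sources before annotation processors can run. As a hedged illustration (Uber's experiments ran on the Buck build system, not Gradle, and the dependency coordinates below are hypothetical), this is how the two processing paths differ in a Gradle Kotlin DSL build file:

```kotlin
// build.gradle.kts -- illustrative module config, not Uber's actual setup.
plugins {
    kotlin("jvm") version "1.9.24"
    kotlin("kapt")
}

dependencies {
    // A Java-only module runs annotation processors directly (Apt):
    // annotationProcessor("com.google.dagger:dagger-compiler:2.51")

    // A Kotlin (or mixed) module routes processors through Kapt, which
    // must first generate Java stubs for every Kotlin source file --
    // that stub-generation pass is a major source of Kapt's overhead.
    kapt("com.google.dagger:dagger-compiler:2.51")
    implementation("com.google.dagger:dagger:2.51")
}
```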

The team discovered surprising performance differences between Apt and Kapt, along with type inference costs that Swift developers will find familiar.
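To make the type inference cost concrete, here is a minimal Kotlin sketch (the function and data names are illustrative, not from the article): when a function's return type is omitted, the compiler must solve for it from the body; declaring the type up front removes that inference work at the API boundary.

```kotlin
// Illustrative only: the shape of code where inference work accumulates.

// Return type inferred -- the compiler derives Set<String> from the body.
fun riderIdsInferred(trips: List<Pair<String, Int>>) =
    trips.map { it.first }.toSet()

// Return type declared explicitly -- no inference needed at the boundary.
fun riderIdsExplicit(trips: List<Pair<String, Int>>): Set<String> =
    trips.map { it.first }.toSet()

fun main() {
    val trips = listOf("a" to 1, "b" to 2, "a" to 3)
    check(riderIdsInferred(trips) == riderIdsExplicit(trips))
    println(riderIdsInferred(trips))    // prints [a, b]
}
```

Both functions behave identically; the difference is purely in how much work the compiler does to type-check the declaration, which is why explicit types on public APIs are a common build-speed recommendation.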

About This Article

Problem

Uber's Android monorepo contained over 20 applications and 2,000 modules. The team needed to know whether Kotlin would help or hurt developer productivity, interoperate well with existing Java code, and preserve the user experience. They lacked reliable performance data at scale to make that decision.

Solution

Uber partnered with JetBrains to build a project generation workflow based on Apache Thrift specifications. They created 354 functionally equivalent projects spanning a matrix of 13 configurations, then ran 129 controlled experiments on CI machines with the Buck build system, collecting metrics from Chrome trace files.
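One way to picture the experiment design: each generated project is a point in a cross product of configuration dimensions such as source mix, annotation processing mode, and module count. A minimal sketch with hypothetical dimension names (the article does not publish the actual Thrift schema):

```kotlin
// Hypothetical configuration dimensions -- illustrative, not Uber's schema.
enum class SourceMix { PURE_JAVA, PURE_KOTLIN, MIXED }
enum class Processing { NONE, APT, KAPT }

data class ProjectConfig(
    val mix: SourceMix,
    val processing: Processing,
    val moduleCount: Int,
)

// Cross product of all dimensions: every combination becomes one
// functionally equivalent generated project to build and measure.
fun buildMatrix(moduleCounts: List<Int>): List<ProjectConfig> =
    SourceMix.values().flatMap { mix ->
        Processing.values().flatMap { proc ->
            moduleCounts.map { n -> ProjectConfig(mix, proc, n) }
        }
    }

fun main() {
    val matrix = buildMatrix(listOf(1, 10, 100))
    println(matrix.size)    // 3 mixes x 3 processing modes x 3 sizes = 27
}
```

Generating functionally equivalent projects this way is what lets each dimension's cost be isolated: any build-time difference between two points in the matrix is attributable to the configuration, not the code.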

Impact

The experiments showed that Kotlin's type inference adds roughly 8% compilation overhead and that compilation time scales linearly with project size. Mixed Kotlin and Java modules need careful planning in high-throughput repositories. This data let Uber make an informed decision about adopting Kotlin.