Lyft · Don Yu · Oct 27, 2020

Decomposing network calls on the Lyft mobile apps

Article Summary

Lyft's mobile apps used to poll a single endpoint every 5 seconds for everything. That "Universal Object" became their biggest reliability nightmare.
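The old model can be sketched as a single timer that refetches one giant payload on a fixed cadence. This is a minimal illustration, not Lyft's actual code; the endpoint URL and type names are made up:

```typescript
// Hypothetical "Universal Object": one payload carrying all app state.
type UniversalObject = Record<string, unknown>;

async function fetchUniversalObject(): Promise<UniversalObject> {
  // Illustrative single endpoint serving everything at once.
  const res = await fetch("https://api.example.com/v1/universal");
  return res.json();
}

// Poll the one endpoint on a fixed interval (5s in the article).
function startPolling(
  fetchFn: () => Promise<UniversalObject>,
  onUpdate: (obj: UniversalObject) => void,
  intervalMs = 5000,
): () => void {
  const timer = setInterval(async () => {
    try {
      onUpdate(await fetchFn());
    } catch {
      // A failure here stalls ALL app state until the next tick --
      // the reliability problem the article describes.
    }
  }, intervalMs);
  return () => clearInterval(timer); // handle to stop polling
}
```

Because every feature reads from the same fetch, one failed or malformed response delays everything until the next tick.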

Don Yu and the Lyft engineering team share how they decomposed one monolithic API into 40+ microservice endpoints. This year-long migration involved 13+ engineers and fundamentally changed how their mobile apps fetch data.

Key Takeaways

Critical Insight

Lyft eliminated their single point of failure by splitting one universal endpoint into isolated APIs, cutting latency by 40% and improving driver notification speed by over 20%.

The article reveals why decomposing network calls isn't always the right choice, and when a monolithic polling loop might actually make sense for your app.

About This Article

Problem

Lyft's Universal Object endpoint was monolithic, so a bug in any one of the microservice-backed fields could corrupt the entire response payload. When that happened, the whole response failed to parse, blocking the user experience across their mobile apps.
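The failure mode can be shown with a tiny parsing sketch (illustrative types, not Lyft's schema): a monolithic parse aborts on the first bad field, while decomposed responses let each section fail independently:

```typescript
interface RidePanel { eta: number }
interface Payments { method: string }

function parseRide(raw: unknown): RidePanel {
  const r = raw as { eta?: unknown };
  if (typeof r?.eta !== "number") throw new Error("bad ride payload");
  return { eta: r.eta };
}

function parsePayments(raw: unknown): Payments {
  const p = raw as { method?: unknown };
  if (typeof p?.method !== "string") throw new Error("bad payments payload");
  return { method: p.method };
}

// Monolithic: any one bad section aborts the entire response.
function parseUniversal(raw: { ride: unknown; payments: unknown }) {
  return { ride: parseRide(raw.ride), payments: parsePayments(raw.payments) };
}

// Decomposed: each section parses -- and fails -- on its own.
function parseIsolated(raw: { ride: unknown; payments: unknown }) {
  const out: { ride?: RidePanel; payments?: Payments } = {};
  try { out.ride = parseRide(raw.ride); } catch { /* only ride UI degrades */ }
  try { out.payments = parsePayments(raw.payments); } catch { /* only payments UI degrades */ }
  return out;
}
```

With isolation, a broken payments field no longer takes down the ride screen.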

Solution

Don Yu's team built client-side abstractions that mapped the Universal Object to simpler data models. They split the endpoints and used feature flags to run A/B tests, which let them safely move from the old API to the new one.
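The migration pattern above can be sketched as follows. The flag name, data model, and fetchers here are hypothetical; the point is that the app reads one stable abstraction while a feature flag decides whether it is filled from the legacy Universal Object or a new dedicated endpoint:

```typescript
// Simplified data model the client-side abstraction exposes to the UI.
interface DriverEta { minutes: number }

type Fetcher = () => Promise<DriverEta>;

// Route between old and new sources behind a flag, so an A/B test can
// compare them and a rollback is a flag flip, not a release.
function makeEtaSource(
  isFlagEnabled: (flag: string) => boolean,
  legacy: Fetcher,     // extracts the field from the Universal Object
  decomposed: Fetcher, // calls the new isolated endpoint
): Fetcher {
  return () =>
    isFlagEnabled("use_decomposed_eta") ? decomposed() : legacy();
}
```

Because the UI depends only on `DriverEta`, neither code path leaks its transport details upward, which is what made the endpoint swap safe to run incrementally.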

Impact

Debugging got faster. Engineers went from investigating dozens of microservices down to 1-2 services per bug. Lyft also cut bandwidth costs by letting different resource types use variable polling rates and push streams.
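The bandwidth win from variable polling rates can be sketched with some back-of-the-envelope arithmetic (the resource names and intervals below are made up, not Lyft's actual values): once each resource has its own endpoint, slow-changing data no longer has to refresh on the fastest cadence.

```typescript
// Per-resource polling cadence instead of one 5s interval for everything.
const pollIntervalsMs: Record<string, number> = {
  driverLocation: 2_000,  // fast-changing: poll often
  rideState: 5_000,
  paymentMethods: 60_000, // rarely changes: poll rarely
};

// Total request volume per minute across all resources.
function requestsPerMinute(intervals: Record<string, number>): number {
  return Object.values(intervals).reduce(
    (total, ms) => total + 60_000 / ms,
    0,
  );
}
```

With these illustrative numbers, the mixed cadences cost 43 requests per minute, versus 90 if all three resources polled every 2 seconds; push streams reduce the steady-state cost further for event-driven data.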