Decomposing network calls on the Lyft mobile apps
Article Summary
Lyft's mobile apps used to poll a single endpoint every 5 seconds for everything. That "Universal Object" became their biggest reliability nightmare.
Don Yu and the Lyft engineering team share how they decomposed one monolithic API into 40+ microservice endpoints. This year-long migration involved 13+ engineers and fundamentally changed how their mobile apps fetch data.
Key Takeaways
- Single endpoint failure could break the entire app experience for users
- New architecture reduced p50 latency from 200ms to under 120ms
- Driver match notifications now arrive 20% faster after decomposition
- Team ran 28+ A/B experiments to safely migrate without breaking flows
- Shadowed 1% of production traffic to catch payload mismatches early
Lyft eliminated their single point of failure by splitting one universal endpoint into isolated APIs, cutting latency by 40% and improving driver notification speed by over 20%.
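The traffic-shadowing takeaway can be sketched roughly like this. The article doesn't show Lyft's implementation; the function and callback names below are hypothetical, but the shape — serve from the legacy endpoint, mirror a small sample to the new one, and report payload diffs without affecting the user — matches the 1% shadow approach described:

```python
import random

def fetch_with_shadow(request, call_legacy, call_new, report_mismatch,
                      shadow_rate=0.01):
    """Serve the request from the legacy endpoint; for a small sample,
    also call the new endpoint and diff the two payloads."""
    primary = call_legacy(request)
    if random.random() < shadow_rate:
        candidate = call_new(request)
        # Field-by-field diff; the caller always receives `primary`,
        # so shadowing can never change what users see.
        diff = {
            key: (primary.get(key), candidate.get(key))
            for key in primary.keys() | candidate.keys()
            if primary.get(key) != candidate.get(key)
        }
        if diff:
            report_mismatch(diff)
    return primary
```

Because the shadow call happens off the response path, mismatches surface in monitoring long before any user is switched to the new endpoint.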
About This Article
Lyft's Universal Object endpoint was monolithic: a bug in any one of the backing microservices could corrupt a single field, cause the entire response payload to fail parsing, and break the user experience across their mobile apps.
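A minimal sketch of that failure mode, with hypothetical field names (the source shows no code): a strict monolithic decode loses everything when one section is bad, while per-section decoding degrades only the broken feature.

```python
def parse_strict(payload: dict) -> dict:
    """Monolithic decode: one bad field anywhere raises, and the
    good sections are discarded along with it."""
    return {
        "eta_minutes": int(payload["ride"]["eta"]),
        "driver_name": payload["driver"]["name"],
    }

def parse_isolated(payload: dict):
    """Decode each section independently, so a broken section
    degrades only its own feature."""
    decoders = {
        "eta_minutes": lambda p: int(p["ride"]["eta"]),
        "driver_name": lambda p: p["driver"]["name"],
    }
    result, errors = {}, []
    for key, decode in decoders.items():
        try:
            result[key] = decode(payload)
        except (KeyError, TypeError, ValueError) as exc:
            errors.append((key, repr(exc)))
    return result, errors
```

Splitting the endpoint gives each resource its own payload, which buys this isolation at the API level rather than inside one giant decoder.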
Don Yu's team built client-side abstractions that mapped the Universal Object onto simpler data models. They then split the endpoint apart and used feature flags to A/B test each migration, letting them move safely from the old API to the new ones.
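The migration pattern can be illustrated as follows. All names here (`RideState`, the flag key, the fetch helpers) are hypothetical: the point is that both the legacy Universal Object path and the new endpoint feed the same small client model, so a feature flag can flip the data source without touching the screens that consume it:

```python
from dataclasses import dataclass

@dataclass
class RideState:
    """Hypothetical simplified client model for one screen."""
    status: str
    eta_minutes: int

def ride_state_from_universal(payload: dict) -> RideState:
    # Pull only the slice of the monolithic payload this screen needs.
    ride = payload["ride"]
    return RideState(status=ride["status"], eta_minutes=ride["eta"])

def fetch_ride_state(user_id, flags, call_legacy, call_new) -> RideState:
    """A feature flag picks the endpoint; both paths produce the same
    model, so screens never notice which backend served them."""
    if flags.is_enabled("decomposed_ride_endpoint", user_id):
        return RideState(**call_new(user_id))  # new isolated endpoint
    return ride_state_from_universal(call_legacy(user_id))
```

Running the flag as an A/B experiment also yields the latency and reliability comparisons the team used to validate each of the 28+ migrations.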
Debugging got faster: engineers went from investigating dozens of microservices down to 1-2 services per bug. Lyft also cut bandwidth costs, since each resource type could now poll at its own rate or switch to a push stream instead of riding the single 5-second poll.
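The variable-rate idea can be sketched as a per-resource schedule. The resource names and intervals below are invented for illustration; the source only says that different resource types got different polling rates, with some moving to push streams entirely:

```python
# Hypothetical per-resource refresh policies replacing one app-wide
# 5-second poll. `None` means the resource is delivered over a push
# stream and is never polled.
POLL_INTERVALS_S = {
    "driver_location": 2,   # fast-moving: refresh often
    "ride_receipt": 60,     # slow-moving: refresh rarely
    "driver_match": None,   # pushed, not polled
}

def due_resources(elapsed_s: int) -> list:
    """Resources whose interval divides the elapsed time, i.e. are
    due for a fetch on this tick of a 1-second scheduler loop."""
    return sorted(
        name
        for name, interval in POLL_INTERVALS_S.items()
        if interval is not None and elapsed_s % interval == 0
    )
```

Compared with polling everything every 5 seconds, slow-moving resources generate far fewer requests, which is where the bandwidth savings come from.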