Sentry Jan 17, 2024

How We Improved Performance Score Accuracy

Article Summary

Edward Gou from Sentry explains why their performance scores were misleading developers. A single slow pageload could tank an app's entire score, even when 99% of users had fast experiences.

Sentry's Performance Score condenses multiple Web Vitals (LCP, FCP, FID, TTFB, CLS) into a single 0-100 rating based on real user data. But the original calculation method had a fatal flaw: it aggregated the raw metrics first, then scored the aggregate. This meant a handful of outliers could completely misrepresent the actual user experience.
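The combination step can be pictured as a weighted sum of per-vital scores. A minimal sketch, assuming illustrative weights (the exact distribution Sentry uses is discussed in the original post, and the values below are placeholders, not Sentry's real calibration):

```python
# Hypothetical weights per Web Vital; placeholders, not Sentry's actual values.
WEIGHTS = {"lcp": 0.30, "fcp": 0.15, "fid": 0.30, "cls": 0.15, "ttfb": 0.10}

def performance_score(vital_scores):
    """Combine per-vital scores (each in 0..1) into a single 0-100 rating."""
    total = sum(WEIGHTS[k] * vital_scores[k] for k in WEIGHTS)
    return round(total * 100)

# A pageload where every vital scored perfectly yields 100.
print(performance_score({"lcp": 1.0, "fcp": 1.0, "fid": 1.0, "cls": 1.0, "ttfb": 1.0}))
```

Because the weights sum to 1.0, the result stays within the 0-100 range regardless of the individual vital scores.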

Key Takeaways

Critical Insight

By scoring each individual pageload first and then averaging those scores, rather than averaging the raw metrics and scoring the aggregate, Sentry stopped outliers from unfairly tanking performance scores that should have reflected mostly positive user experiences.
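The difference between the two orderings can be demonstrated with a toy example. This sketch uses a hypothetical linear LCP scoring curve (the thresholds are illustrative, not Sentry's actual calibration) to show how one extreme outlier distorts the aggregate-first result but barely moves the score-first one:

```python
import statistics

def score_lcp(lcp_ms):
    """Hypothetical scoring curve: maps an LCP value (ms) to a 0..1 score.

    Thresholds are illustrative only; Sentry's real curve is different.
    """
    if lcp_ms <= 2500:
        return 1.0
    if lcp_ms >= 8000:
        return 0.0
    return 1.0 - (lcp_ms - 2500) / (8000 - 2500)

# 99 fast pageloads and one pathological outlier (a five-minute LCP).
lcps = [1200] * 99 + [300_000]

# Old approach: aggregate the metric first, then score the average.
aggregate_first = score_lcp(statistics.mean(lcps))

# New approach: score each pageload individually, then average the scores.
score_first = statistics.mean(score_lcp(v) for v in lcps)

print(round(aggregate_first, 2))  # dragged down by the single outlier
print(round(score_first, 2))      # reflects the mostly fast experience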

The mathematical function they use (Complementary Log-Normal CDF) and the specific weight distribution across Web Vitals reveal interesting priorities about what matters most for perceived performance.
