Meta Jul 28, 2025

Accelerating on-device ML on Meta’s apps with ExecuTorch

Article Summary

Meta has shared how it migrated the on-device ML behind apps used by billions of people daily to its new framework, reporting substantial performance gains.

Meta's PyTorch Edge team rolled out ExecuTorch across Instagram, WhatsApp, Messenger, and Facebook over the past year. This open-source framework replaces their previous mobile ML stack and runs AI models directly on users' devices instead of servers.

Key Takeaways

Critical Insight

ExecuTorch delivered faster inference, lower latency, and better privacy across Meta's apps, and enabled ML features in end-to-end encrypted (E2EE) contexts that server-side inference couldn't support.

The article also explains how moving ML inference on-device freed up server capacity and let Meta scale these features globally.
