The Intelligent OS: Making AI agents more helpful for Android apps
Article Summary
Matthew McCullough from Android explains how Google is fundamentally changing app interaction: instead of opening apps, users will simply ask an AI agent to handle tasks for them.
Android is introducing developer capabilities that let AI agents like Gemini interact directly with apps. Two approaches are launching in beta: AppFunctions for structured integrations and UI automation for zero-code agentic reach. Both are rolling out first on Galaxy S26 and Pixel 10 devices.
Key Takeaways
- AppFunctions library lets apps expose functions to AI via natural language
- Samsung Gallery integration: ask Gemini for cat photos without opening the app
- UI automation requires zero developer code, works on food delivery and rideshare apps
- Users control automation via live view and can switch to manual anytime
- Android 17 will expand capabilities to more developers and manufacturers
Android is betting on AI agents replacing traditional app launches, giving developers two paths: structured AppFunctions integrations or zero-code UI automation.
About This Article
Android developers have struggled to integrate their apps with AI agents and assistants: with no standard framework, each integration required substantial custom engineering, limiting agentic interactions across the ecosystem.
Matthew McCullough's Android team built AppFunctions, a Jetpack library backed by MCP servers. Apps can now expose self-describing functions that AI agents discover and execute, with the natural-language processing happening on-device.
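The article doesn't show the actual AppFunctions API, but the underlying pattern of self-describing functions that an agent discovers and invokes can be sketched in plain Java. Everything below (AppFunctionSpec, FunctionRegistry, the keyword matcher) is illustrative and hypothetical, not the real Jetpack library's API; a real agent would use an on-device model rather than keyword overlap to pick a function.

```java
import java.util.*;
import java.util.function.Function;

// Hypothetical: a self-describing function pairs a natural-language description
// (what the agent matches a user request against) with an executable handler.
record AppFunctionSpec(String name, String description,
                       Function<Map<String, String>, String> handler) {}

// Hypothetical registry standing in for the framework's discovery mechanism.
class FunctionRegistry {
    private final List<AppFunctionSpec> functions = new ArrayList<>();

    void register(AppFunctionSpec spec) { functions.add(spec); }

    // Keyword overlap stands in for the on-device model that would actually
    // map a user request to the best-matching exposed function.
    Optional<String> dispatch(String request, Map<String, String> args) {
        String lower = request.toLowerCase();
        return functions.stream()
            .filter(spec -> Arrays.stream(spec.description().toLowerCase().split(" "))
                                  .anyMatch(lower::contains))
            .findFirst()
            .map(spec -> spec.handler().apply(args));
    }
}

public class AppFunctionsSketch {
    public static void main(String[] args) {
        FunctionRegistry registry = new FunctionRegistry();
        // An app registers one function; the description is what the agent "sees".
        registry.register(new AppFunctionSpec(
            "searchPhotos",
            "search photos by subject",
            params -> "Found photos of " + params.get("subject")));

        // The agent routes a natural-language request without the app's UI opening.
        System.out.println(registry.dispatch(
            "show me my cat photos", Map.of("subject", "cats")).orElse("no match"));
        // prints: Found photos of cats
    }
}
```

This mirrors the Samsung Gallery example from the article: the gallery app exposes a photo-search capability, and Gemini fulfills "show me my cat photos" by calling it directly rather than launching the app.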
From day one, Gemini can automate tasks in Calendar, Notes, and Tasks on devices from multiple manufacturers. UI automation also works with food delivery, grocery, and rideshare apps on Galaxy S26 and Pixel 10 devices in the US and Korea.