On-Device AI/ML in React Native
Article Summary
Przemyslaw Weglik from Software Mansion shows how to run real-time AI models directly on mobile devices using React Native. No server calls and no network latency: just pure on-device inference.
This hands-on tutorial demonstrates building a background blur feature using on-device machine learning in React Native. The implementation combines react-native-vision-camera with TensorFlow Lite to process video frames in real-time, running a segmentation model entirely on the phone.
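The core of the per-frame flow is turning the segmentation model's output into an alpha mask. A minimal sketch in plain TypeScript (the `toBinaryMask` helper name and the 0.5 threshold are illustrative assumptions, not from the article):

```typescript
// The segmentation model outputs one person-probability score per pixel
// for a 256x256 frame. Thresholding turns that into a binary alpha mask:
// 255 where a person was detected, 0 for background.
// (Illustrative helper; the 0.5 threshold is an assumption.)
function toBinaryMask(scores: Float32Array, threshold = 0.5): Uint8Array {
  const mask = new Uint8Array(scores.length);
  for (let i = 0; i < scores.length; i++) {
    mask[i] = scores[i] >= threshold ? 255 : 0;
  }
  return mask;
}

// Example: two "person" pixels followed by one background pixel.
const mask = toBinaryMask(new Float32Array([0.9, 0.7, 0.1]));
// mask is [255, 255, 0]
```

In the real pipeline this runs inside a Vision Camera frame processor worklet, so it executes per frame off the React thread.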
Key Takeaways
- Uses a TFLite model to segment humans at 256x256 resolution in real time
- Combines Vision Camera, Skia rendering, and Worklets for frame processing
- Applies blur filters and alpha blending to separate foreground from background
- Runs entirely on device with no network calls or cloud dependencies
- Requires erosion and smoothing filters to fix pixelated mask boundaries
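The alpha-blending step listed above is standard per-pixel compositing: `out = fg * a + bg * (1 - a)`, where `a` comes from the segmentation mask. A minimal single-channel sketch (helper name is mine; the article does this with Skia layers rather than CPU loops):

```typescript
// Blend a sharp foreground over a blurred background using a per-pixel
// alpha mask with values in [0, 1]: out = fg * a + bg * (1 - a).
function alphaBlend(
  fg: Uint8Array,
  bg: Uint8Array,
  alpha: Float32Array,
): Uint8Array {
  const out = new Uint8Array(fg.length);
  for (let i = 0; i < fg.length; i++) {
    out[i] = Math.round(fg[i] * alpha[i] + bg[i] * (1 - alpha[i]));
  }
  return out;
}

// Example: alpha 0.5 mixes evenly, 1 keeps the foreground, 0 keeps the background.
const blended = alphaBlend(
  new Uint8Array([200, 200, 200]),
  new Uint8Array([100, 100, 100]),
  new Float32Array([0.5, 1, 0]),
);
// blended is [150, 200, 100]
```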
Modern mobile phones can run real-time computer vision models locally, enabling features like background blur without sending data to servers.
About This Article
Because the segmentation model outputs masks at only 256x256 resolution, scaling them up to the full camera frame left the boundaries between the sharp foreground and blurred background looking pixelated and rough. This required extra post-processing work to fix.
Software Mansion used Skia ImageFilter techniques to smooth out the mask edges. They applied erosion with a 7x7 kernel, followed by a 5x5 blur kernel, before blending the layers together.
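What those two passes do can be sketched in plain TypeScript (this is not the Skia API, which applies equivalent ImageFilters on the GPU). Erosion is a windowed minimum that pulls the mask edge inward; the blur then averages the hard edge into a soft ramp. The 7x7 and 5x5 kernel sizes come from the article; the helper names and the clamp-to-edge border handling are my assumptions:

```typescript
// Erosion: each output pixel is the minimum of its kxk neighborhood,
// which shrinks the mask inward and removes ragged single-pixel fringes.
function erode(mask: Float32Array, w: number, h: number, k = 7): Float32Array {
  const r = Math.floor(k / 2);
  const out = new Float32Array(mask.length);
  for (let y = 0; y < h; y++) {
    for (let x = 0; x < w; x++) {
      let min = Infinity;
      for (let dy = -r; dy <= r; dy++) {
        for (let dx = -r; dx <= r; dx++) {
          const yy = Math.min(h - 1, Math.max(0, y + dy));
          const xx = Math.min(w - 1, Math.max(0, x + dx));
          min = Math.min(min, mask[yy * w + xx]);
        }
      }
      out[y * w + x] = min;
    }
  }
  return out;
}

// Box blur: each output pixel is the average of its kxk neighborhood,
// turning the binary mask edge into a smooth alpha ramp.
function boxBlur(mask: Float32Array, w: number, h: number, k = 5): Float32Array {
  const r = Math.floor(k / 2);
  const out = new Float32Array(mask.length);
  for (let y = 0; y < h; y++) {
    for (let x = 0; x < w; x++) {
      let sum = 0;
      for (let dy = -r; dy <= r; dy++) {
        for (let dx = -r; dx <= r; dx++) {
          const yy = Math.min(h - 1, Math.max(0, y + dy));
          const xx = Math.min(w - 1, Math.max(0, x + dx));
          sum += mask[yy * w + xx];
        }
      }
      out[y * w + x] = sum / (k * k);
    }
  }
  return out;
}

// Apply in the article's order: erode first, then blur.
function refineMask(mask: Float32Array, w: number, h: number): Float32Array {
  return boxBlur(erode(mask, w, h), w, h);
}
```

Ordering matters: eroding first removes noisy boundary pixels so the blur softens a clean edge instead of smearing the artifacts.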
The erosion and blur filters created smooth, natural transitions between the foreground portrait and the blurred background. The pixelated look that appeared in earlier versions was gone.