Callstack · Mar 6, 2026

Announcing React Native Evals

Article Summary

Mike Grabowski and the Callstack team just dropped React Native Evals, an open-source benchmark that turns the debate over AI coding models from anecdotal opinion into reproducible evidence.

Callstack built React Native Evals to solve a problem every developer faces: which AI coding model actually writes the best React Native code? The benchmark includes 43 real-world implementation tasks across animation, async state, and navigation libraries, with a formal methodology for scoring and reproducibility.

Key Takeaways

Critical Insight

React Native Evals provides the first reproducible, evidence-based benchmark for measuring how well AI coding models handle React Native development tasks.

The preliminary results reveal surprising performance gaps between models that most developers assume are equivalent.

About This Article

Problem

React Native developers had no easy way to compare AI coding models: judgments relied on personal experience with individual tasks such as navigation or animations, and there was no standard way to evaluate which model actually performed better.

Solution

Callstack built a two-phase pipeline using TypeScript and Bun. A solver model generates code from prompts and scaffolds; a separate LLM then evaluates the output against structured requirements defined in each eval's requirements.yaml file.
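
To make the two-phase flow concrete, here is a minimal TypeScript/Bun sketch of what a solve-then-grade loop can look like. The file layout (prompt.md, scaffold.tsx), the Requirement shape, and the callLLM parameter are assumptions for illustration, not the actual React Native Evals API; only requirements.yaml is named in the article.

```typescript
import { parse } from "yaml";

// Hypothetical names throughout; only requirements.yaml comes from the article.
type ModelCall = (model: string, prompt: string) => Promise<string>;

interface Requirement {
  id: string;
  description: string;
}

interface EvalCase {
  prompt: string;       // the implementation task given to the solver model
  scaffold: string;     // starter React Native code the solver builds on
  requirements: Requirement[];
}

// Assumed per-eval file layout: prompt.md, scaffold.tsx, requirements.yaml.
async function loadEvalCase(dir: string): Promise<EvalCase> {
  const prompt = await Bun.file(`${dir}/prompt.md`).text();
  const scaffold = await Bun.file(`${dir}/scaffold.tsx`).text();
  const requirements = parse(
    await Bun.file(`${dir}/requirements.yaml`).text(),
  ) as Requirement[];
  return { prompt, scaffold, requirements };
}

async function runEval(
  dir: string,
  solverModel: string,
  judgeModel: string,
  callLLM: ModelCall, // wraps whichever model provider is in use
) {
  const evalCase = await loadEvalCase(dir);

  // Phase 1: the solver model produces an implementation from prompt + scaffold.
  const solution = await callLLM(
    solverModel,
    `${evalCase.prompt}\n\nStarter code:\n${evalCase.scaffold}`,
  );

  // Phase 2: a separate judge model checks the output against each requirement.
  const results = [];
  for (const req of evalCase.requirements) {
    const verdict = await callLLM(
      judgeModel,
      `Does this code satisfy the requirement "${req.description}"?\n` +
        `Answer PASS or FAIL.\n\n${solution}`,
    );
    results.push({ id: req.id, passed: verdict.trim().startsWith("PASS") });
  }
  return results;
}
```

Keeping the solver and the judge as separate model calls keeps scoring independent of generation, which is what makes results comparable across solver models.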

Impact

The benchmark now provides measurable performance baselines across multiple coding models. Preliminary results are available on an interactive website at rn-evals.vercel.app, which helps React Native developers choose models based on actual data instead of guesswork.
