Building AI Agents In Kotlin Part 3: Under Observation
Article Summary
Denis Domanskii from JetBrains tackles a problem every AI agent developer faces: your agent works, but you can't see what it's doing. When debugging takes hours and costs are invisible, you're flying blind.
This is Part 3 of JetBrains' series on building AI coding agents in Kotlin using the Koog framework. The article addresses the observability gap that widens as agents become more capable: failures get harder to debug, costs harder to track, and agent behavior harder to understand during development.
Key Takeaways
- Four lines of code integrate Langfuse observability into Koog agents via OpenTelemetry
- Tracing revealed a hidden bug: a tool failed when asked to read 400 lines from a 394-line file
- A 50-task SWE-bench evaluation cost $66 and ran in 30 minutes with full visibility
- Per-run cost tracking beats provider dashboards for multi-agent development workflows
- Verbose mode exposes full prompts and responses during development, not just metadata
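The "four lines of code" presumably refers to installing Koog's OpenTelemetry feature with a Langfuse exporter. As a rough sketch (the feature and exporter names are recalled from Koog's documentation and may differ in detail; the surrounding agent setup is illustrative, and Langfuse credentials are typically supplied via `LANGFUSE_HOST`, `LANGFUSE_PUBLIC_KEY`, and `LANGFUSE_SECRET_KEY` environment variables):

```kotlin
// Sketch of wiring Langfuse tracing into a Koog agent via OpenTelemetry.
// Assumed API shape, not copied from the article; names may differ slightly.
val agent = AIAgent(
    promptExecutor = executor,   // your configured LLM executor
    llmModel = model,            // your chosen model
) {
    install(OpenTelemetry) {
        addLangfuseExporter()    // the "four lines": install block + exporter
    }
}
```

After this, every agent run should emit traces (LLM calls, tool calls, token usage) to the configured Langfuse instance without further changes to agent logic.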
Critical Insight
Adding observability transforms AI agents from black boxes into inspectable systems where you can see exactly what happened, why it failed, and what it cost per run.
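To make the "what it cost per run" point concrete, here is a minimal, self-contained Kotlin sketch of per-run cost accounting. This is a hand-rolled illustration, not Koog's or Langfuse's API, and the per-token prices are hypothetical placeholders:

```kotlin
// Minimal per-run cost tracker: attributes token usage and dollar cost
// to a single agent run, the way an observability backend would.
data class Usage(val inputTokens: Long, val outputTokens: Long)

class RunCostTracker(
    private val inputPricePerMTok: Double,   // hypothetical $ per 1M input tokens
    private val outputPricePerMTok: Double,  // hypothetical $ per 1M output tokens
) {
    private var inputTotal = 0L
    private var outputTotal = 0L

    // Call once per LLM call in the run, with the usage the provider reports.
    fun record(usage: Usage) {
        inputTotal += usage.inputTokens
        outputTotal += usage.outputTokens
    }

    // Total cost of the run so far, in dollars.
    fun totalUsd(): Double =
        inputTotal / 1_000_000.0 * inputPricePerMTok +
            outputTotal / 1_000_000.0 * outputPricePerMTok
}

fun main() {
    val tracker = RunCostTracker(inputPricePerMTok = 3.0, outputPricePerMTok = 15.0)
    tracker.record(Usage(inputTokens = 200_000, outputTokens = 10_000)) // first LLM call
    tracker.record(Usage(inputTokens = 150_000, outputTokens = 5_000))  // second LLM call
    println(tracker.totalUsd()) // accumulated cost for this run, roughly 1.275
}
```

Aggregating usage per run rather than per provider account is what makes multi-agent development workflows legible: each evaluation run gets its own bill.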