Runtime containment kernel for LLM agents. Enforces budget, step, retry, and circuit-breaker limits before each model call.
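A minimal sketch of what such a pre-call guard could look like. All names (`ContainmentKernel`, `check`, `record`) and the specific limits are illustrative assumptions, not the project's actual API:

```python
import time


class LimitExceeded(Exception):
    """Raised when a containment limit would be violated by the next call."""


class ContainmentKernel:
    """Hypothetical guard evaluated before every model call (sketch only)."""

    def __init__(self, max_cost_usd=1.0, max_steps=20, max_retries=3,
                 breaker_threshold=5, breaker_cooldown_s=60.0):
        self.max_cost_usd = max_cost_usd
        self.max_steps = max_steps
        self.max_retries = max_retries
        self.breaker_threshold = breaker_threshold
        self.breaker_cooldown_s = breaker_cooldown_s
        self.spent_usd = 0.0
        self.steps = 0
        self.consecutive_failures = 0
        self.breaker_open_until = 0.0

    def check(self, estimated_cost_usd, retry_count=0):
        """Raise LimitExceeded if any limit would be breached by this call."""
        if time.monotonic() < self.breaker_open_until:
            raise LimitExceeded("circuit breaker open")
        if self.spent_usd + estimated_cost_usd > self.max_cost_usd:
            raise LimitExceeded("budget limit")
        if self.steps + 1 > self.max_steps:
            raise LimitExceeded("step limit")
        if retry_count > self.max_retries:
            raise LimitExceeded("retry limit")

    def record(self, cost_usd, ok):
        """Account for a completed call and update circuit-breaker state."""
        self.spent_usd += cost_usd
        self.steps += 1
        if ok:
            self.consecutive_failures = 0
        else:
            self.consecutive_failures += 1
            if self.consecutive_failures >= self.breaker_threshold:
                self.breaker_open_until = (
                    time.monotonic() + self.breaker_cooldown_s)
```

The point of the design is that every limit is checked *before* the call is made, so a runaway agent is stopped at the boundary rather than after spending the budget.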
Optimal context window selection for LLM coding tools. Treats context selection as a constrained optimization problem rather than a retrieval problem. Beats RAG, grep, and LLM-triage baselines on real GitHub issues.
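One way to read "constrained optimization, not retrieval": selecting snippets to fill the window is a knapsack problem, maximizing total relevance subject to a token budget, rather than ranking by similarity alone. The greedy density heuristic and the `score`/`tokens` fields below are illustrative assumptions, not the project's actual formulation:

```python
def select_context(candidates, token_budget):
    """Greedy knapsack sketch: pick snippets with the best relevance-per-token
    ratio until the token budget is exhausted.

    candidates: list of dicts with "id", "score" (relevance), "tokens" (cost).
    """
    chosen, used = [], 0
    # Sort by relevance density (score per token), highest first.
    for snippet in sorted(candidates,
                          key=lambda s: s["score"] / max(s["tokens"], 1),
                          reverse=True):
        if used + snippet["tokens"] <= token_budget:
            chosen.append(snippet)
            used += snippet["tokens"]
    return chosen
```

Unlike pure retrieval, this framing lets a small highly relevant file beat a large moderately relevant one once the budget constraint binds; an exact solver (e.g. dynamic programming or ILP) could replace the greedy pass.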