One free AI endpoint, every free tier behind it. A local OpenAI-compatible gateway routing across OpenRouter, Groq, NVIDIA NIM, Cloudflare Workers AI, and HuggingFace, with automatic failover.
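The failover behind the gateway can be sketched roughly like this: try each provider in order and fall through to the next on error. The provider names and `call` functions here are illustrative stand-ins, not the gateway's actual API.

```python
# Minimal failover sketch: try providers in order until one succeeds.
# Provider names and `call` functions are illustrative, not a real client.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Provider:
    name: str
    call: Callable[[str], str]  # prompt -> completion text

def complete_with_failover(providers: list[Provider], prompt: str):
    errors = {}
    for p in providers:
        try:
            return p.name, p.call(prompt)
        except Exception as e:
            errors[p.name] = e  # record the failure, move to the next provider
    raise RuntimeError(f"all providers failed: {errors}")
```

Because the gateway speaks the OpenAI wire format, clients only ever see one endpoint; which free tier actually served the request is an internal detail.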
Your coding agent works alone. SmartSplit gives it a team. Helper LLMs read files, search the web, and prepare context while your main model focuses on thinking. If any provider goes down, another steps in. No tokens wasted, no downtime.
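The split-work idea can be sketched as a fan-out step: helper tasks run concurrently, and only the assembled context reaches the main model. The task labels, `gather_context`, and `build_prompt` below are hypothetical names for illustration, not SmartSplit's real interface.

```python
# Hypothetical sketch: fan helper work out in parallel, then hand the
# merged context to the main model as a single prompt.
from concurrent.futures import ThreadPoolExecutor

def gather_context(tasks):
    """tasks: mapping of label -> zero-arg callable (a helper-model job)."""
    with ThreadPoolExecutor() as pool:
        futures = {label: pool.submit(fn) for label, fn in tasks.items()}
        return {label: f.result() for label, f in futures.items()}

def build_prompt(question, context):
    """Assemble helper outputs plus the user question into one prompt."""
    parts = [f"### {label}\n{text}" for label, text in sorted(context.items())]
    return "\n\n".join(parts + [f"### Question\n{question}"])
```

The design point is that the expensive main model never burns tokens on file reading or web search; it only sees the distilled result.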
LLM pipeline crash recovery for Python. Resumes from the last successful node instead of restarting from scratch. No database required, just plain JSON files.
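The checkpoint-and-resume pattern can be sketched as follows: each node's output is written to its own JSON file, and on rerun any node with an existing file is loaded instead of re-executed. `run_pipeline` and the `(name, fn)` node shape are illustrative assumptions, not the library's actual API.

```python
# Minimal sketch of JSON-file checkpointing for a linear pipeline.
# `run_pipeline` and the (name, fn) node shape are illustrative only.
import json
from pathlib import Path

def run_pipeline(nodes, state_dir="checkpoints"):
    """nodes: ordered list of (name, fn), where fn(prev_result) -> result."""
    Path(state_dir).mkdir(parents=True, exist_ok=True)
    result = None
    for name, fn in nodes:
        ckpt = Path(state_dir) / f"{name}.json"
        if ckpt.exists():
            result = json.loads(ckpt.read_text())  # resume: skip finished node
            continue
        result = fn(result)  # a crash here leaves earlier checkpoints intact
        ckpt.write_text(json.dumps(result))
    return result
```

Since each checkpoint is an ordinary JSON file, a crashed run resumes by simply calling the pipeline again; only nodes without a checkpoint execute.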