Open-source orchestration for local AI inference. Run open-source LLMs across a fleet of machines - no cloud, no API keys.
Build tool-calling AI agents that run entirely locally.