LLM Context Compression and Retrieval Engine
State-aware engine for knowledge compression, document and code ingestion, and hybrid retrieval. Zero dependencies. Sub-100ms queries.