PyPI Stats

Search Packages

Find Python packages by name, description, GitHub topic, or filter by metrics
  • mnemosyne-engine (castnettech): LLM Context Compression and Retrieval Engine -- zero dependencies, sub-100ms queries, document + code ingestion. [1K downloads, 53 stars, 9 forks]
  • compactbench (compactbench): Open benchmark for LLM context compaction methods — measures what survives when you replace conversation history with a compacted artifact. Multi-cycle drift, hidden ranked set. [997 downloads, 1 star, 0 forks]
  • mnemosyne-ollama (castnettech): State-aware knowledge compression, ingestion, and hybrid retrieval engine. Zero dependencies. Sub-100ms queries. [721 downloads, 53 stars, 9 forks]
  • claw-compactor (open-compress): 14-stage Fusion Pipeline for LLM token compression — 15-82% reduction depending on content, zero LLM inference cost, reversible compression, AST-aware code analysis. [657 downloads, 2K stars, 204 forks]
  • squeez (KRLabsOrg): Squeeze verbose LLM agent tool output down to only the relevant lines. [542 downloads, 13 stars, 0 forks]
  • mnemosyne-mcp (castnettech): State-aware knowledge compression, ingestion, and hybrid retrieval engine. Zero dependencies. Sub-100ms queries. [534 downloads, 53 stars, 9 forks]
  • llama-index-postprocessor-vecr (h2cker): LlamaIndex postprocessor for vecr-compress: deterministic, retention-guaranteed node compression. [392 downloads, 0 stars, 0 forks]
    • Data from PyPI, GitHub, ClickHouse, and BigQuery