Semantic cache for LLMs. Fully integrated with LangChain and llama_index.
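The core idea of a semantic cache is to key cached LLM responses by embedding similarity rather than exact string match, so a rephrased question can still hit the cache. The sketch below is a minimal illustration of that idea, not the library's actual API: the `embed` function is a toy bag-of-words stand-in for a real sentence-embedding model, and `SemanticCache`, its `threshold`, and the method names are all hypothetical.

```python
import math
from collections import Counter

def embed(text):
    # Toy embedding: bag-of-words token counts. A real semantic cache
    # would use a sentence-embedding model here instead.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class SemanticCache:
    """Cache LLM responses keyed by embedding similarity, not exact text."""

    def __init__(self, threshold=0.8):
        self.threshold = threshold      # minimum similarity for a hit
        self.entries = []               # list of (embedding, response)

    def get(self, query):
        qv = embed(query)
        best = max(self.entries, key=lambda e: cosine(qv, e[0]), default=None)
        if best is not None and cosine(qv, best[0]) >= self.threshold:
            return best[1]              # cache hit: similar enough question
        return None                     # cache miss: caller asks the LLM

    def put(self, query, response):
        self.entries.append((embed(query), response))

cache = SemanticCache()
cache.put("What is the capital of France?", "Paris")
print(cache.get("what is the capital of France"))  # similar phrasing: hit
print(cache.get("How do I bake bread?"))           # unrelated: miss (None)
```

On a miss, the caller would query the LLM and `put` the new response; the similarity threshold trades hit rate against the risk of serving an answer to a subtly different question.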
🦙 Integrating LLMs into structured NLP pipelines