30 dependents
| Description | Downloads/month |
|---|---|
| Training Sparse Autoencoders on Language Models | 92K |
| Mechanistic interpretability + EU AI Act Annex IV compliance. 21/21 frameworks: ... | 3K |
| Open-source SAE visualizer, based on Anthropic's published visualizer. | 2K |
| Open-source SAE visualizer, based on Anthropic's published visualizer. Forked / ... | 2K |
| For OpenMOSS Mechanistic Interpretability Team's Sparse Autoencoder (SAE) resear... | 1K |
| Sparse probing benchmark for Sparse Autoencoders derived from the paper "Are Spa... | 1K |
| A framework for evaluating sparse autoencoders | 1K |
| Make any model compatible with transformer_lens | 760 |
| | 713 |
| Open-source mech interp circuit tracing library | 451 |
| Sorbonne University Master MIND - Large Language Models course plugin | 445 |
| Sparse Autoencoder Training Library | 322 |
| Library for easily using interpretability techniques in transformer models | 273 |
| Sparse AutoEncoder to decode Mistral LLM | 262 |
| Visualization of LLM attention patterns and things computed about them | 248 |
| Sparse and discrete interpretability tool for neural networks | 193 |
| Transformer token flow visualizer | 189 |
| Open-source EU AI Act Annex IV compliance toolkit. Mechanistic interpretability ... | 179 |
| A template for Python projects in PDM | 169 |
| A package for mechanistic interpretability in Neural IR | 152 |
| New version of Taker (Transformer Activation taKER) | 148 |
| Original Implementation for Isolating Path Effect for Latent Circuit Identificat... | 138 |
| Toolkit for analyzing unstructured datasets with sparse autoencoders | 134 |
| In-depth visualizations for SAE features | 129 |
| | 104 |
| Cross-attention-based cell-cell interaction inference from ST data | 99 |
| Investigating belief state representations of transformers trained on Hidden Mar... | 93 |
| Minimal implementation of SAEs | 74 |
| AI-powered assistant for spike sorting and neural data analysis | 66 |
| Make any model compatible with transformer_lens | 1 |