36 dependents
| Description | Downloads/month |
|---|---|
| A Python library that interfaces with the MediaWiki API. This is a mirror from g... | 132K |
| Wrapper for mwclient with improvements for 3rd party wikis | 5K |
| Data processing for and with foundation models! 🍎 🍋 🌽 ➡️ ➡️🍸 🍹 🍷 | 4K |
| Open Source search based on OpenStreetMap data | 4K |
| Extracts data from German Wiktionary dump files. | 3K |
| A framework for making Wikidata bots. | 3K |
| A tool for learning vector representations of words and entities from Wikipedia | 2K |
| Edit diffs and type detection for Wikipedia | 1K |
| The universal integrated corpus-building environment. | 1K |
| mwclient wrapper | 1K |
| Python script that generates a SQLite database from TibiaWiki articles | 1K |
| A Python library that interfaces with the MediaWiki API. This is a mirror from g... | 959 |
| Semantic Hypergraph Tools | 821 |
| A foundational library for Semantic Hypergraphs | 735 |
| MCP server exposing the Team Fortress Wiki to LLM clients over stdio | 637 |
| MCP server exposing the wikilite R package as ~50 tools for Claude — Wikipedia h... | 636 |
| Recommendation engine framework based on Wikipedia data | 625 |
| EarwigBot is a bot that edits Wikipedia and interacts over IRC | 587 |
| Wikidata and Wiktionary language data extraction | 546 |
| Expansion engine for MediaWiki wiki pages based on mwparserfromhell | 519 |
| Add taxa stubs to Wikipedia | 428 |
| A set of utilities for processing MediaWiki text. | 376 |
| Structure of the global air transportation networks (pax & cargo) from Wikipedia | 370 |
| Python-based Liquipedia esports data scraper | 358 |
| | 337 |
| Extract information from Wikimedia dumps | 250 |
| Semantification of genealogy | 207 |
| An OldSchool RuneScape API wrapper for Python. | 201 |
| Scalable data preprocessing tool for training large language models | 185 |
| Python-based Liquipedia esports data scraper | 156 |
| A tool for learning vector representations of words and entities from Wikipedia | 142 |
| Utility that converts Wikipedia pages into GitHub-flavored Markdown. | 133 |
| An efficient Wikipedia XML dump extractor which converts each page to JSON. | 113 |
| Scalable data preprocessing and curation toolkit for LLMs | 71 |
| Wiki2Video: turn Wikipedia articles into short documentary-style videos | 59 |
| Scalable data preprocessing tool for training large language models | 1 |