A diachronic word embedding model based on Word2vec skip-gram with Chebyshev approximation.
Automatic topic modelling using minimal external input and computational resources
A semantic quality benchmark for word embeddings (i.e., natural language models) in Python, abbreviated `SeaQuBe` or `seaqube`.
RiverText is a framework that standardizes the incremental word embedding methods proposed in the state of the art. Please feel welcome to open an issue if you have any questions, or a pull request if you want to contribute to the project!