Welcome to TextPruner’s documentation

TextPruner is a PyTorch toolkit for pruning pre-trained transformer-based language models. It offers structured, training-free pruning methods and a user-friendly interface.

The main features of TextPruner include:

  • Compatibility: TextPruner works with a range of pre-trained NLU models. You can use it to prune your own models for various NLP tasks, as long as they are built on the standard pre-trained architectures.

  • Usability: TextPruner can be used as a Python package or as a CLI tool, and both are easy to use (see the sketch after this list and the one in the Installation section).

  • Efficiency: TextPruner reduces model size in a simple and fast way. Because its pruning methods are structured and training-free, it is much faster than distillation and other pruning approaches that involve training.
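
As a concrete illustration of the package interface, below is a minimal sketch of pruning attention heads and FFN neurons, following the TransformerPruner and TransformerPruningConfig interface from the project README. The model is a standard transformers checkpoint; dataloader is assumed to be a PyTorch DataLoader over your labelled task data, used only for forward and backward passes that estimate importance scores, never for weight updates.

    from transformers import BertForSequenceClassification
    from textpruner import TransformerPruner, TransformerPruningConfig

    # Any supported pre-trained model works; BERT serves as the example here.
    model = BertForSequenceClassification.from_pretrained('bert-base-uncased')

    # Assumption: a torch.utils.data.DataLoader over your labelled task data.
    # It is only used to compute importance scores; no parameters are trained.
    dataloader = ...

    # Shrink every FFN layer to 2048 neurons and keep 8 heads per layer,
    # removing the least important units over 4 iterative passes.
    config = TransformerPruningConfig(
        target_ffn_size=2048,
        target_num_of_heads=8,
        pruning_method='iterative',
        n_iters=4)

    pruner = TransformerPruner(model, transformer_pruning_config=config)
    pruner.prune(dataloader=dataloader, save_model=True)

Because the pruning is structured, the saved model has genuinely smaller weight matrices rather than masked-out weights, and can be loaded and used like any other transformers model.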

TextPruner currently supports the following pre-trained models in transformers:

  • BERT

  • ALBERT

  • ELECTRA

  • RoBERTa

  • XLM-RoBERTa
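
Vocabulary pruning pairs naturally with the last of these: in multilingual models such as XLM-RoBERTa, the token-embedding matrix accounts for a large share of the parameters, and tokens your task never uses can be dropped outright. The sketch below follows the VocabularyPruner interface from the project README; the two sample strings stand in for a corpus representative of your task.

    from transformers import XLMRobertaForSequenceClassification, XLMRobertaTokenizer
    from textpruner import VocabularyPruner

    model = XLMRobertaForSequenceClassification.from_pretrained('xlm-roberta-base')
    tokenizer = XLMRobertaTokenizer.from_pretrained('xlm-roberta-base')

    # Tokens that never occur in these texts are removed from the tokenizer
    # and the embedding matrix; use a corpus that covers your task's language.
    texts = ['TextPruner is a toolkit for pruning pre-trained language models.',
             'Only the tokens that appear in your data are kept.']

    pruner = VocabularyPruner(model, tokenizer)
    pruner.prune(dataiter=texts, save_model=True)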

Installation

pip install textpruner
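
The same functionality is exposed through the textpruner-cli command installed with the package. Below is a sketch of vocabulary pruning from the shell, with flags as given in the project README; gc.json and vc.json denote the general and vocabulary-pruning configuration files, and the paths are placeholders to replace with your own. Run textpruner-cli --help to confirm the options available in your installed version.

    textpruner-cli \
      --pruning_mode vocabulary \
      --configurations gc.json vc.json \
      --model_class XLMRobertaForSequenceClassification \
      --tokenizer_class XLMRobertaTokenizer \
      --model_path /path/to/model/and/config/directory \
      --vocabulary /path/to/a/text/file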

Note

This document is under development.