|
--- |
|
license: |
|
- apache-2.0 |
|
- bsd-3-clause |
|
tags: |
|
- summarization |
|
- summary |
|
- booksum |
|
- long-document |
|
- long-form |
|
- tglobal-xl |
|
- XL |
|
- 8bit |
|
- quantized |
|
datasets: |
|
- kmfoda/booksum |
|
metrics: |
|
- rouge |
|
inference: false |
|
pipeline_tag: summarization |
|
--- |
|
|
|
|
|
# long-t5-tglobal-xl-16384-book-summary: 8-bit quantized version |
|
|
|
<a href="https://colab.research.google.com/gist/pszemraj/c19e32baf876deb866c31cd46c86e893/long-t5-xl-accelerate-test.ipynb"> |
|
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/> |
|
</a> |
|
|
|
This is an 8-bit quantized version of the `pszemraj/long-t5-tglobal-xl-16384-book-summary` model. It was compressed using `bitsandbytes` and can be loaded with low memory usage. |
|
|
|
Refer to the [original model](https://huggingface.co./pszemraj/long-t5-tglobal-xl-16384-book-summary) for all details about the model architecture and training process. For more information on loading 8-bit models, refer to the `4.28.0` [release information](https://github.com/huggingface/transformers/releases/tag/v4.28.0) and the [example repository](https://huggingface.co./ybelkada/bloom-1b7-8bit). |
|
|
|
- The total size of the model is only ~3.5 GB (vs. ~12 GB for the original checkpoint) |
|
- Enables low-RAM loading, making it easier to use in memory-limited environments like Colab |
|
- Requires `bitsandbytes`; at the time of writing, 8-bit inference works only on GPU |
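
The reduced footprint can be verified after loading the model (see Basic Usage below). A minimal sketch using `get_memory_footprint()`, a standard `transformers` model method (the printed value is approximate and depends on your environment):

```python
from transformers import AutoModelForSeq2SeqLM

# Load the pre-quantized checkpoint; requires a GPU and `bitsandbytes`
model = AutoModelForSeq2SeqLM.from_pretrained(
    "pszemraj/long-t5-tglobal-xl-16384-book-summary-8bit",
    load_in_8bit=True,
    device_map="auto",
)

# Report the in-memory size of the weights in GB
print(f"{model.get_memory_footprint() / 1e9:.1f} GB")
```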
|
|
|
|
|
## Basic Usage |
|
|
|
To use the model, install or upgrade `transformers`, `accelerate`, and `bitsandbytes`. Make sure to have `transformers>=4.28.0` and `bitsandbytes>0.37.2`. |
|
|
|
```bash |
|
pip install -U -q transformers bitsandbytes accelerate |
|
``` |
|
|
|
Load the model with `AutoTokenizer` and `AutoModelForSeq2SeqLM`: |
|
|
|
```python |
|
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM |
|
|
|
model_name = "pszemraj/long-t5-tglobal-xl-16384-book-summary-8bit" |
|
tokenizer = AutoTokenizer.from_pretrained(model_name) |
|
|
|
# load in 8-bit and let accelerate place the weights on the available GPU(s) |
 |
model = AutoModelForSeq2SeqLM.from_pretrained(model_name, load_in_8bit=True, device_map="auto") |
|
``` |
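
With the model and tokenizer loaded, summarization follows the usual seq2seq generation pattern. A hedged sketch — the generation parameters below (`num_beams`, `no_repeat_ngram_size`, `max_new_tokens`) are illustrative defaults, not the original author's recommended settings:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "pszemraj/long-t5-tglobal-xl-16384-book-summary-8bit"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name, load_in_8bit=True, device_map="auto")


def summarize(text: str, max_new_tokens: int = 512) -> str:
    """Encode up to the model's 16384-token window and generate a summary."""
    inputs = tokenizer(
        text, return_tensors="pt", truncation=True, max_length=16384
    ).to(model.device)
    with torch.no_grad():
        ids = model.generate(
            **inputs,
            max_new_tokens=max_new_tokens,
            num_beams=4,
            no_repeat_ngram_size=3,
        )
    return tokenizer.decode(ids[0], skip_special_tokens=True)


long_document = "Chapter 1. It was the best of times, it was the worst of times..."
print(summarize(long_document))
```

For book-length inputs, pass the full text directly; anything beyond 16384 tokens is truncated by the tokenizer call above.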
|
|
|
## More information about long-t5-tglobal-xl-16384-book-summary |
|
|
|
- This is an 8-bit quantized version of `pszemraj/long-t5-tglobal-xl-16384-book-summary`. |
|
- It generalizes reasonably well to academic and narrative text. |
|
- The XL checkpoint typically generates summaries that are considerably better than those of the smaller checkpoints, from a human-evaluation perspective. |