Introduction
AllenAI's Longformer Encoder-Decoder (LED) is a Transformer encoder-decoder model for long documents.
As described in "Longformer: The Long-Document Transformer" by Iz Beltagy, Matthew E. Peters, and Arman Cohan, led-large-16384 was initialized from bart-large, since the two models share the same architecture. To process inputs of up to 16K tokens, bart-large's 1024-position embedding matrix was simply copied 16 times (16 × 1024 = 16384 positions).
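The copying step can be sketched as follows. This is an illustrative reconstruction using random arrays as stand-ins for the real embedding matrices, not the actual conversion script:

```python
import numpy as np

# Stand-in for bart-large's learned position embeddings:
# 1024 positions x 1024 hidden dimensions (values here are random).
bart_pos_emb = np.random.randn(1024, 1024)

# Tile the matrix 16 times along the position axis, so the original
# 1024 rows repeat every 1024 positions: 16 * 1024 = 16384.
led_pos_emb = np.tile(bart_pos_emb, (16, 1))

assert led_pos_emb.shape == (16384, 1024)
# Position 1024 reuses the embedding of position 0, and so on.
assert np.array_equal(led_pos_emb[1024], bart_pos_emb[0])
```

Because the copied embeddings repeat, positions beyond 1024 start from sensible values rather than random initialization, which is why the model can be fine-tuned on long inputs without pre-training from scratch.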
This model is especially interesting for long-range summarization and question answering.
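As an example, summarizing a long document with the transformers library might look like the sketch below. The generation parameters (`num_beams`, `max_length`) are illustrative choices, not recommendations from the paper, and since led-large-16384 is the pretrained checkpoint, you would fine-tune it first (see the fine-tuning section below) to get good summaries:

```python
import torch
from transformers import AutoTokenizer, LEDForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("allenai/led-large-16384")
model = LEDForConditionalGeneration.from_pretrained("allenai/led-large-16384")

long_text = "..."  # up to 16K tokens of input text
inputs = tokenizer(long_text, return_tensors="pt",
                   truncation=True, max_length=16384)

# LED uses local windowed attention plus a sparse global attention;
# putting global attention on the first token is the usual choice
# for summarization.
global_attention_mask = torch.zeros_like(inputs.input_ids)
global_attention_mask[:, 0] = 1

summary_ids = model.generate(
    inputs.input_ids,
    global_attention_mask=global_attention_mask,
    num_beams=4,
    max_length=256,
)
summary = tokenizer.decode(summary_ids[0], skip_special_tokens=True)
print(summary)
```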
Fine-tuning for downstream tasks
This notebook shows how led-large-16384 can effectively be fine-tuned on a downstream task.