shenyutong committed
Commit · f4731fb · Parent(s): d649385

update

Browse files:
- README.md +173 -0
- wmt14_en2de_cased_bin.zip +3 -0
README.md
ADDED
@@ -0,0 +1,173 @@
<p align="center">
  <img src="fairseq_logo.png" width="150">
  <br />
  <br />
  <a href="https://github.com/pytorch/fairseq/blob/master/LICENSE"><img alt="MIT License" src="https://img.shields.io/badge/license-MIT-blue.svg" /></a>
  <a href="https://github.com/pytorch/fairseq/releases"><img alt="Latest Release" src="https://img.shields.io/github/release/pytorch/fairseq.svg" /></a>
  <a href="https://github.com/pytorch/fairseq/actions?query=workflow:build"><img alt="Build Status" src="https://github.com/pytorch/fairseq/workflows/build/badge.svg" /></a>
  <a href="https://fairseq.readthedocs.io/en/latest/?badge=latest"><img alt="Documentation Status" src="https://readthedocs.org/projects/fairseq/badge/?version=latest" /></a>
</p>

--------------------------------------------------------------------------------

Fairseq(-py) is a sequence modeling toolkit that allows researchers and
developers to train custom models for translation, summarization, language
modeling and other text generation tasks.

### What's New:

- April 2020: [Initial model parallel support and 11B parameters unidirectional LM released](examples/megatron_11b/README.md)
- March 2020: [Byte-level BPE code released](examples/byte_level_bpe/README.md)
- February 2020: [mBART model and code released](examples/mbart/README.md)
- February 2020: [Added tutorial for back-translation](https://github.com/pytorch/fairseq/tree/master/examples/backtranslation#training-your-own-model-wmt18-english-german)
- December 2019: [fairseq 0.9.0 released](https://github.com/pytorch/fairseq/releases/tag/v0.9.0)
- November 2019: [VizSeq released (a visual analysis toolkit for evaluating fairseq models)](https://facebookresearch.github.io/vizseq/docs/getting_started/fairseq_example)
- November 2019: [CamemBERT model and code released](examples/camembert/README.md)
- November 2019: [BART model and code released](examples/bart/README.md)
- November 2019: [XLM-R models and code released](examples/xlmr/README.md)
- September 2019: [Nonautoregressive translation code released](examples/nonautoregressive_translation/README.md)
- August 2019: [WMT'19 models released](examples/wmt19/README.md)
- July 2019: fairseq relicensed under MIT license
- July 2019: [RoBERTa models and code released](examples/roberta/README.md)
- June 2019: [wav2vec models and code released](examples/wav2vec/README.md)

### Features:

Fairseq provides reference implementations of various sequence-to-sequence models, including:
- **Convolutional Neural Networks (CNN)**
  - [Language Modeling with Gated Convolutional Networks (Dauphin et al., 2017)](examples/language_model/conv_lm/README.md)
  - [Convolutional Sequence to Sequence Learning (Gehring et al., 2017)](examples/conv_seq2seq/README.md)
  - [Classical Structured Prediction Losses for Sequence to Sequence Learning (Edunov et al., 2018)](https://github.com/pytorch/fairseq/tree/classic_seqlevel)
  - [Hierarchical Neural Story Generation (Fan et al., 2018)](examples/stories/README.md)
  - [wav2vec: Unsupervised Pre-training for Speech Recognition (Schneider et al., 2019)](examples/wav2vec/README.md)
- **LightConv and DynamicConv models**
  - [Pay Less Attention with Lightweight and Dynamic Convolutions (Wu et al., 2019)](examples/pay_less_attention_paper/README.md)
- **Long Short-Term Memory (LSTM) networks**
  - Effective Approaches to Attention-based Neural Machine Translation (Luong et al., 2015)
- **Transformer (self-attention) networks**
  - Attention Is All You Need (Vaswani et al., 2017)
  - [Scaling Neural Machine Translation (Ott et al., 2018)](examples/scaling_nmt/README.md)
  - [Understanding Back-Translation at Scale (Edunov et al., 2018)](examples/backtranslation/README.md)
  - [Adaptive Input Representations for Neural Language Modeling (Baevski and Auli, 2018)](examples/language_model/transformer_lm/README.md)
  - [Mixture Models for Diverse Machine Translation: Tricks of the Trade (Shen et al., 2019)](examples/translation_moe/README.md)
  - [RoBERTa: A Robustly Optimized BERT Pretraining Approach (Liu et al., 2019)](examples/roberta/README.md)
  - [Facebook FAIR's WMT19 News Translation Task Submission (Ng et al., 2019)](examples/wmt19/README.md)
  - [Jointly Learning to Align and Translate with Transformer Models (Garg et al., 2019)](examples/joint_alignment_translation/README.md)
  - [Multilingual Denoising Pre-training for Neural Machine Translation (Liu et al., 2020)](examples/mbart/README.md)
  - [Neural Machine Translation with Byte-Level Subwords (Wang et al., 2020)](examples/byte_level_bpe/README.md)
- **Non-autoregressive Transformers**
  - Non-Autoregressive Neural Machine Translation (Gu et al., 2017)
  - Deterministic Non-Autoregressive Neural Sequence Modeling by Iterative Refinement (Lee et al., 2018)
  - Insertion Transformer: Flexible Sequence Generation via Insertion Operations (Stern et al., 2019)
  - Mask-Predict: Parallel Decoding of Conditional Masked Language Models (Ghazvininejad et al., 2019)
  - [Levenshtein Transformer (Gu et al., 2019)](examples/nonautoregressive_translation/README.md)

**Additionally:**
- multi-GPU (distributed) training on one machine or across multiple machines
- fast generation on both CPU and GPU with multiple search algorithms implemented:
  - beam search
  - Diverse Beam Search ([Vijayakumar et al., 2016](https://arxiv.org/abs/1610.02424))
  - sampling (unconstrained, top-k and top-p/nucleus)
- large mini-batch training even on a single GPU via delayed updates
- mixed precision training (trains faster with less GPU memory on [NVIDIA tensor cores](https://developer.nvidia.com/tensor-cores))
- extensible: easily register new models, criterions, tasks, optimizers and learning rate schedulers

We also provide [pre-trained models for translation and language modeling](#pre-trained-models-and-examples)
with a convenient `torch.hub` interface:
```python
import torch

en2de = torch.hub.load('pytorch/fairseq', 'transformer.wmt19.en-de.single_model')
en2de.translate('Hello world', beam=5)
# 'Hallo Welt'
```
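The decoding strategies listed under **Additionally** (beam search, diverse beam search, sampling) are reachable through this same hub interface. A minimal sketch, not part of the original README, assuming the `en2de` model loaded above and that extra generation arguments such as `sampling` and `sampling_topk` are forwarded to the underlying generator:
```python
# Hedged sketch: top-k sampling through the same torch.hub model.
# Assumes `en2de` from the snippet above; forwarding of `sampling`/`sampling_topk`
# to the generator is an assumption here, not something this README states.
en2de.translate('Hello world', sampling=True, sampling_topk=10)
# output varies between calls because sampling is stochastic
```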
See the PyTorch Hub tutorials for [translation](https://pytorch.org/hub/pytorch_fairseq_translation/)
and [RoBERTa](https://pytorch.org/hub/pytorch_fairseq_roberta/) for more examples.

![Model](fairseq.gif)

# Requirements and Installation

* [PyTorch](http://pytorch.org/) version >= 1.4.0
* Python version >= 3.6
* For training new models, you'll also need an NVIDIA GPU and [NCCL](https://github.com/NVIDIA/nccl)
* **For faster training** install NVIDIA's [apex](https://github.com/NVIDIA/apex) library:
```bash
git clone https://github.com/NVIDIA/apex
cd apex
pip install -v --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" --global-option="--deprecated_fused_adam" --global-option="--xentropy" --global-option="--fast_multihead_attn" ./
```

To install fairseq:
```bash
pip install fairseq
```

On MacOS:
```bash
CFLAGS="-stdlib=libc++" pip install fairseq
```

If you use Docker, make sure to increase the shared memory size either with
`--ipc=host` or `--shm-size` as command line options to `nvidia-docker run`.

**Installing from source**

To install fairseq from source and develop locally:
```bash
git clone https://github.com/pytorch/fairseq
cd fairseq
pip install --editable .
```
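A quick sanity check of either installation path (a sketch added here, not part of the original instructions; it assumes the installed release exposes `fairseq.__version__`, as recent releases do):
```python
# Hedged sketch: verify that fairseq imports and that a GPU is visible for training.
import torch
import fairseq

print(fairseq.__version__)        # assumption: the installed release exposes __version__
print(torch.cuda.is_available())  # should be True if you plan to train models on GPU
```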

# Getting Started

The [full documentation](https://fairseq.readthedocs.io/) contains instructions
for getting started, training new models and extending fairseq with new model
types and tasks.
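Extending fairseq with new models, criterions, tasks, optimizers and schedulers follows a decorator-based plugin pattern. Below is a minimal, hedged sketch of registering a Transformer variant; the names `my_transformer` and `MyTransformer` are invented for illustration, and the exact decorator APIs should be checked against your installed fairseq version:
```python
# Hedged sketch (not from the README): registering a custom model variant.
# `my_transformer` / `MyTransformer` are hypothetical names used for illustration.
from fairseq.models import register_model, register_model_architecture
from fairseq.models.transformer import TransformerModel, base_architecture


@register_model('my_transformer')
class MyTransformer(TransformerModel):
    """A Transformer variant; override build_model/forward to change behaviour."""
    pass


@register_model_architecture('my_transformer', 'my_transformer_base')
def my_transformer_base(args):
    # Fall back to the standard Transformer defaults for any unset hyper-parameters.
    base_architecture(args)
```
Once fairseq can import such a module (for example via `--user-dir`), the new architecture name would typically be selectable with `--arch my_transformer_base` in the command-line tools.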
# Pre-trained models and examples

We provide pre-trained models and pre-processed, binarized test sets for several tasks listed below,
as well as example training and evaluation commands.

- [Translation](examples/translation/README.md): convolutional and transformer models are available
- [Language Modeling](examples/language_model/README.md): convolutional and transformer models are available
- [wav2vec](examples/wav2vec/README.md): wav2vec large model is available
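The `wmt14_en2de_cased_bin.zip` added in this commit appears to be exactly this kind of pre-processed, binarized data directory (cased WMT14 English→German). As a hedged sketch, such a directory is typically consumed together with a trained checkpoint roughly as follows; the checkpoint location, BPE scheme and codes file below are assumptions for illustration, not files shipped with this repository:
```python
# Hedged sketch (not from the README): translating against a fairseq-binarized
# data directory such as one unpacked from wmt14_en2de_cased_bin.zip.
# 'checkpoints/model.pt', the BPE scheme and the codes path are hypothetical.
from fairseq.models.transformer import TransformerModel

en2de = TransformerModel.from_pretrained(
    'checkpoints',                               # hypothetical directory holding the checkpoint
    checkpoint_file='model.pt',                  # hypothetical checkpoint name
    data_name_or_path='wmt14_en2de_cased_bin',   # unpacked binarized data (dictionaries live here)
    bpe='subword_nmt',                           # assumed BPE scheme for this dataset
    bpe_codes='wmt14_en2de_cased_bin/code',      # assumed location of the BPE codes file
    tokenizer='moses',
)
print(en2de.translate('Hello world', beam=5))
```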
We also have more detailed READMEs to reproduce results from specific papers:
- [Neural Machine Translation with Byte-Level Subwords (Wang et al., 2020)](examples/byte_level_bpe/README.md)
- [Jointly Learning to Align and Translate with Transformer Models (Garg et al., 2019)](examples/joint_alignment_translation/README.md)
- [Levenshtein Transformer (Gu et al., 2019)](examples/nonautoregressive_translation/README.md)
- [Facebook FAIR's WMT19 News Translation Task Submission (Ng et al., 2019)](examples/wmt19/README.md)
- [RoBERTa: A Robustly Optimized BERT Pretraining Approach (Liu et al., 2019)](examples/roberta/README.md)
- [wav2vec: Unsupervised Pre-training for Speech Recognition (Schneider et al., 2019)](examples/wav2vec/README.md)
- [Mixture Models for Diverse Machine Translation: Tricks of the Trade (Shen et al., 2019)](examples/translation_moe/README.md)
- [Pay Less Attention with Lightweight and Dynamic Convolutions (Wu et al., 2019)](examples/pay_less_attention_paper/README.md)
- [Understanding Back-Translation at Scale (Edunov et al., 2018)](examples/backtranslation/README.md)
- [Classical Structured Prediction Losses for Sequence to Sequence Learning (Edunov et al., 2018)](https://github.com/pytorch/fairseq/tree/classic_seqlevel)
- [Hierarchical Neural Story Generation (Fan et al., 2018)](examples/stories/README.md)
- [Scaling Neural Machine Translation (Ott et al., 2018)](examples/scaling_nmt/README.md)
- [Convolutional Sequence to Sequence Learning (Gehring et al., 2017)](examples/conv_seq2seq/README.md)
- [Language Modeling with Gated Convolutional Networks (Dauphin et al., 2017)](examples/language_model/conv_lm/README.md)

# Join the fairseq community

* Facebook page: https://www.facebook.com/groups/fairseq.users
* Google group: https://groups.google.com/forum/#!forum/fairseq-users

# License
fairseq(-py) is MIT-licensed.
The license applies to the pre-trained models as well.

# Citation

Please cite as:

```bibtex
@inproceedings{ott2019fairseq,
  title = {fairseq: A Fast, Extensible Toolkit for Sequence Modeling},
  author = {Myle Ott and Sergey Edunov and Alexei Baevski and Angela Fan and Sam Gross and Nathan Ng and David Grangier and Michael Auli},
  booktitle = {Proceedings of NAACL-HLT 2019: Demonstrations},
  year = {2019},
}
```
wmt14_en2de_cased_bin.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d87efa981eddd1fbc5fc255e5d86740f64fcb5972f001b1ea04eb2a4c0bfc018
size 707756912