---
license: mit
model-index:
- name: ALMA-13B-R
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 55.55
      name: normalized accuracy
    source:
      url: https://huggingface.co./spaces/HuggingFaceH4/open_llm_leaderboard?query=haoranxu/ALMA-13B-R
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 79.45
      name: normalized accuracy
    source:
      url: https://huggingface.co./spaces/HuggingFaceH4/open_llm_leaderboard?query=haoranxu/ALMA-13B-R
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 49.52
      name: accuracy
    source:
      url: https://huggingface.co./spaces/HuggingFaceH4/open_llm_leaderboard?query=haoranxu/ALMA-13B-R
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 36.09
    source:
      url: https://huggingface.co./spaces/HuggingFaceH4/open_llm_leaderboard?query=haoranxu/ALMA-13B-R
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 75.3
      name: accuracy
    source:
      url: https://huggingface.co./spaces/HuggingFaceH4/open_llm_leaderboard?query=haoranxu/ALMA-13B-R
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 0.0
      name: accuracy
    source:
      url: https://huggingface.co./spaces/HuggingFaceH4/open_llm_leaderboard?query=haoranxu/ALMA-13B-R
      name: Open LLM Leaderboard
---
**[ALMA-R](https://arxiv.org/abs/2401.08417)** builds upon [ALMA models](https://arxiv.org/abs/2309.11674) with further LoRA fine-tuning using our proposed **Contrastive Preference Optimization (CPO)**, as opposed to the Supervised Fine-tuning used in ALMA. CPO fine-tuning requires our [triplet preference data](https://huggingface.co./datasets/haoranxu/ALMA-R-Preference) for preference learning. ALMA-R can now match or even exceed GPT-4 and the WMT winners!
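
For intuition, here is a rough, illustrative sketch of a CPO-style objective on a preference triplet (the function name, `beta`, and the equal weighting of the two terms are assumptions for illustration; see the paper for the exact formulation):
```
import torch
import torch.nn.functional as F

def cpo_style_loss(logp_preferred, logp_dispreferred, beta=0.1):
    # logp_*: summed log-probabilities of the preferred / dis-preferred
    # translations under the current policy, one value per example.
    # Preference term: rank the preferred translation above the dis-preferred one.
    prefer_loss = -F.logsigmoid(beta * (logp_preferred - logp_dispreferred)).mean()
    # NLL term on the preferred translation keeps the policy anchored to
    # high-quality outputs.
    nll_loss = -logp_preferred.mean()
    return prefer_loss + nll_loss
```
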
```
@misc{xu2024contrastive,
      title={Contrastive Preference Optimization: Pushing the Boundaries of LLM Performance in Machine Translation}, 
      author={Haoran Xu and Amr Sharaf and Yunmo Chen and Weiting Tan and Lingfeng Shen and Benjamin Van Durme and Kenton Murray and Young Jin Kim},
      year={2024},
      eprint={2401.08417},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```
```
@misc{xu2023paradigm,
      title={A Paradigm Shift in Machine Translation: Boosting Translation Performance of Large Language Models}, 
      author={Haoran Xu and Young Jin Kim and Amr Sharaf and Hany Hassan Awadalla},
      year={2023},
      eprint={2309.11674},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```
# Download ALMA(-R) Models and Dataset 🚀

We release six translation models presented in the paper:
- ALMA-7B
- ALMA-7B-LoRA
- **ALMA-7B-R (NEW!)**: Further LoRA fine-tuning upon ALMA-7B-LoRA with contrastive preference optimization.
- ALMA-13B
- ALMA-13B-LoRA
- **ALMA-13B-R (NEW!)**: Further LoRA fine-tuning upon ALMA-13B-LoRA with contrastive preference optimization (BEST MODEL!). 
  
Model checkpoints are released on Hugging Face:
|     Models    | Base Model Link | LoRA Link |
|:-------------:|:---------------:|:---------:|
|    ALMA-7B    |        [haoranxu/ALMA-7B](https://huggingface.co./haoranxu/ALMA-7B)        |     -     |
|  ALMA-7B-LoRA |        [haoranxu/ALMA-7B-Pretrain](https://huggingface.co./haoranxu/ALMA-7B-Pretrain)        |     [haoranxu/ALMA-7B-Pretrain-LoRA](https://huggingface.co./haoranxu/ALMA-7B-Pretrain-LoRA)     |
|  **ALMA-7B-R (NEW!)** |        [haoranxu/ALMA-7B-R (LoRA merged)](https://huggingface.co./haoranxu/ALMA-7B-R)        |     -    |
|    ALMA-13B   |        [haoranxu/ALMA-13B](https://huggingface.co./haoranxu/ALMA-13B)        |     -     |
| ALMA-13B-LoRA |        [haoranxu/ALMA-13B-Pretrain](https://huggingface.co./haoranxu/ALMA-13B-Pretrain)        |     [haoranxu/ALMA-13B-Pretrain-LoRA](https://huggingface.co./haoranxu/ALMA-13B-Pretrain-LoRA)     |
| **ALMA-13B-R (NEW!)** |        [haoranxu/ALMA-13B-R (LoRA merged)](https://huggingface.co./haoranxu/ALMA-13B-R)        |    -   |

**Note that `ALMA-7B-Pretrain` and `ALMA-13B-Pretrain` are NOT translation models. They only undergo stage 1 monolingual fine-tuning (20B tokens for the 7B model and 12B tokens for the 13B model) and must be used in conjunction with their LoRA models, as sketched below.**
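
A minimal sketch of combining a Pretrain base model with its LoRA adapter (this assumes the `peft` library is installed; the dtype and `device_map` choices are illustrative):
```
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the stage-1 pretrained base model (not a translation model on its own)
base_model = AutoModelForCausalLM.from_pretrained(
    "haoranxu/ALMA-13B-Pretrain", torch_dtype=torch.float16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("haoranxu/ALMA-13B-Pretrain", padding_side="left")

# Attach the LoRA adapter, which turns the base model into ALMA-13B-LoRA
model = PeftModel.from_pretrained(base_model, "haoranxu/ALMA-13B-Pretrain-LoRA")
model.eval()
```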

Datasets used by ALMA and ALMA-R are now also released on Hugging Face (NEW!):
|     Datasets    | Train / Validation| Test |
|:-------------:|:---------------:|:---------:|
|    Human-Written Parallel Data (ALMA)    |        [train and validation](https://huggingface.co./datasets/haoranxu/ALMA-Human-Parallel)        |     [WMT'22](https://huggingface.co./datasets/haoranxu/WMT22-Test)    |
|  Triplet Preference Data |        [train](https://huggingface.co./datasets/haoranxu/ALMA-R-Preference)        |   [WMT'22](https://huggingface.co./datasets/haoranxu/WMT22-Test) and [WMT'23](https://huggingface.co./datasets/haoranxu/WMT23-Test)   |
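
These datasets can be loaded with the `datasets` library. A minimal sketch (the `"zh-en"` configuration and split names are assumptions; check each dataset card for the exact language-pair configurations):
```
from datasets import load_dataset

# Hypothetical "zh-en" config; see the dataset cards for available language pairs.
parallel_data = load_dataset("haoranxu/ALMA-Human-Parallel", "zh-en")
preference_data = load_dataset("haoranxu/ALMA-R-Preference", "zh-en")
wmt22_test = load_dataset("haoranxu/WMT22-Test", "zh-en")

print(parallel_data)
print(preference_data)
```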


A quick start for using our best system (ALMA-13B-R) for translation. An example of translating "我爱机器翻译。" ("I love machine translation.") into English:
```
import torch
from transformers import AutoModelForCausalLM
from transformers import AutoTokenizer

# Load the merged ALMA-13B-R model (LoRA weights are already merged) and its tokenizer
model = AutoModelForCausalLM.from_pretrained("haoranxu/ALMA-13B-R", torch_dtype=torch.float16, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("haoranxu/ALMA-13B-R", padding_side='left')

# Add the source sentence into the prompt template
prompt="Translate this from Chinese to English:\nChinese: 我爱机器翻译。\nEnglish:"
input_ids = tokenizer(prompt, return_tensors="pt", padding=True, max_length=40, truncation=True).input_ids.cuda()

# Translation
with torch.no_grad():
    generated_ids = model.generate(input_ids=input_ids, num_beams=5, max_new_tokens=20, do_sample=True, temperature=0.6, top_p=0.9)
outputs = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
print(outputs)
```
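
Note that `batch_decode` returns the full sequences, i.e. the prompt followed by the translation. Continuing from the snippet above, one way to keep only the newly generated translation (a small sketch, not part of the original example) is to slice off the prompt tokens before decoding:
```
# Decode only the tokens generated after the prompt
new_tokens = generated_ids[:, input_ids.shape[1]:]
translation = tokenizer.batch_decode(new_tokens, skip_special_tokens=True)
print(translation)
```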

Please find more details in our [GitHub repository](https://github.com/fe1ixxu/ALMA).
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co./spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co./datasets/open-llm-leaderboard/details_haoranxu__ALMA-13B-R).

|             Metric              |Value|
|---------------------------------|----:|
|Avg.                             |49.32|
|AI2 Reasoning Challenge (25-Shot)|55.55|
|HellaSwag (10-Shot)              |79.45|
|MMLU (5-Shot)                    |49.52|
|TruthfulQA (0-shot)              |36.09|
|Winogrande (5-shot)              |75.30|
|GSM8k (5-shot)                   | 0.00|