---
license: llama3
base_model: catallama/CataLlama-v0.2-Base
tags:
- llama
- llama-3
- Catalan
model-index:
- name: CataLlama-v0.2-Instruct-SFT
  results: []
datasets:
- catallama/Catalan-Instruct-V2
language:
- ca
- en
pipeline_tag: text-generation
library_name: transformers
---

![](https://huggingface.co./catallama/CataLlama-v0.2-Instruct-SFT/resolve/main/CataLlama-v0.2.png)

# CataLlama-v0.2-Instruct-SFT

**CataLlama-v0.2-Instruct-SFT** is an instruct fine-tune of [catallama/CataLlama-v0.2-Base](https://huggingface.co./catallama/CataLlama-v0.2-Base) on the [catallama/Catalan-Instruct-V2](https://huggingface.co./datasets/catallama/Catalan-Instruct-V2) dataset.

CataLlama-v0.2 was trained on roughly **620 million new tokens**, almost 40% more than CataLlama-v0.1.

The new (V2) SFT dataset was built mostly from scratch, retaining only parts of the V1 dataset.

On top of the existing instructions in Catalan, **250k additional instructions were translated for this model.**

All the English instructions from the V1 dataset were discarded and replaced with high-quality instructions scored with the [RLHFlow/ArmoRM-Llama3-8B-v0.1](https://huggingface.co./RLHFlow/ArmoRM-Llama3-8B-v0.1) reward model.

The model shows improved proficiency with the Catalan language while performing **significantly better than CataLlama-v0.1 on all tasks.**

**This instruction fine-tuned model is proficient at the following tasks in Catalan:**

- *Information extraction (suitable for RAG)*
- *Named Entity Recognition (NER)*
- *Translation from English to Catalan and Catalan to English*
- *Summarization - both short form and long form*
- *Sentiment analysis*
- *Chat*
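
Each of these tasks is invoked through plain chat messages; a small sketch of illustrative prompts (the wording below is hypothetical, not taken from the training set):

```python
# Illustrative chat prompts for two of the tasks above.
# The instruction wording is a hypothetical example, not from the dataset.
translation_prompt = [
    {"role": "user", "content": "Tradueix al català: The weather is nice today."},
]

ner_prompt = [
    {"role": "user", "content": (
        "Extreu les entitats (persones, llocs, organitzacions) del text següent: "
        "La Maria viu a Barcelona i treballa a la Generalitat."
    )},
]
```

These message lists can be passed to `tokenizer.apply_chat_template` exactly as in the usage snippet below.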

**Model developers** [Laurentiu Petrea](https://www.linkedin.com/in/laurentiupetrea/) based on Llama-3 from Meta.

**Model Architecture** CataLlama is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and direct preference optimization (DPO) to align with human preferences for helpfulness and safety.

**License** The model uses the llama-3 license available at: [https://llama.meta.com/llama3/license](https://llama.meta.com/llama3/license)

## Benchmarks

| Model              | CataLlama-v0.1-Instruct-SFT | CataLlama-v0.2-Instruct-SFT     |
| ------------------ | --------------------------- | ------------------------------- |
| MMLU 5 shot        | 55.28                       | **59.35**                       |
| GSM8K cot 8 shot   | 51.63                       | **76.04**                       |

### Use with transformers

See the snippet below for usage with Transformers:

**The model follows the same prompt template as Llama-3 Instruct**

```python
import transformers
import torch

model_id = "catallama/CataLlama-v0.2-Instruct-SFT"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

messages = [
    {"role": "user", "content": "Ei com estàs avui?"},
]

prompt = pipeline.tokenizer.apply_chat_template(
    messages, 
    tokenize=False, 
    add_generation_prompt=True
)

outputs = pipeline(
    prompt,
    max_new_tokens=1024,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)

print(outputs[0]["generated_text"][len(prompt):])
```
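
`apply_chat_template` handles the formatting, but for reference, the Llama-3 Instruct prompt format the model expects looks roughly like the hand-built sketch below (the authoritative template lives in the tokenizer config; always prefer `apply_chat_template` in practice):

```python
# Hand-built sketch of the Llama-3 Instruct prompt format.
# In practice, always use tokenizer.apply_chat_template instead.
def build_llama3_prompt(messages, add_generation_prompt=True):
    prompt = "<|begin_of_text|>"
    for m in messages:
        prompt += (
            f"<|start_header_id|>{m['role']}<|end_header_id|>\n\n"
            f"{m['content']}<|eot_id|>"
        )
    if add_generation_prompt:
        # Leave the prompt open at the assistant turn so the model completes it.
        prompt += "<|start_header_id|>assistant<|end_header_id|>\n\n"
    return prompt

prompt = build_llama3_prompt([{"role": "user", "content": "Ei com estàs avui?"}])
```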

## Training procedure

The model was trained **with the same prompt template as Llama-3 Instruct**.

The model was trained for two epochs on **8x A100 80GB GPUs** using **DeepSpeed ZeRO Stage 3** without CPU offloading.

Training lasted approximately 8 hours, for a total GPU cost of 150€.

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- distributed_type: multi-GPU
- num_devices: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 2
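
The cosine schedule with warmup listed above can be sketched in plain Python (mirroring the shape of `transformers.get_cosine_schedule_with_warmup`; `total_steps=1000` is an illustrative value, the real step count depends on dataset size and batch size):

```python
import math

def lr_at_step(step, base_lr=2e-5, warmup_steps=100, total_steps=1000):
    """Linear warmup to base_lr over warmup_steps, then cosine decay to 0."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * progress))

# The peak learning rate (2e-5) is reached right after the 100 warmup steps,
# then decays smoothly to ~0 by the final step.
```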


## Intended Use

**Note:** This model is not intended to beat benchmarks, but to demonstrate techniques for augmenting LLMs with new languages and for preserving rare languages as part of our world heritage.

**Intended Use Cases** Llama 3 is intended for commercial and research use in English. Instruction tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.

**Out-of-scope** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and the Llama 3 Community License. Use in languages other than English.

**Note:** Developers may fine-tune Llama 3 models for languages beyond English provided they comply with the Llama 3 Community License and the Acceptable Use Policy.