---
library_name: transformers
tags:
- Chat Model
- SFT
- RLHF
license: llama3
---

# Llama3-PBM-Nova-70B

## Introduction 

Llama3-PBM-Nova-70B is a chat model developed by PKU-Baichuan-MLSysLab and built on Llama3-70B. To make better use of open-source data, we performed deduplication, quality filtering, and data synthesis on it, and then significantly enhanced the base model's performance through Supervised Fine-Tuning (SFT) and Reinforcement Learning from Human Feedback (RLHF).

- **Developed by:** [PKU-Baichuan-MLSysLab](https://github.com/PKU-Baichuan-MLSystemLab)
- **Base Model:** [Llama-3-70B](https://huggingface.co./meta-llama/Meta-Llama-3-70B)
- **Model Type:** Chat Model
- **Training Method:** SFT + RLHF
- **Release Date:** August 2024

## Evaluation

| Model                  | Arena-Hard | MixEval-Hard | Alpaca-Eval 2.0 |
|------------------------|------------|--------------|-----------------|
| GPT-4Turbo(04/09)      | 82.6%      | 62.6         | 55.0%           |
| GPT-4o(05/13)          | 79.2%      | 64.7         | 57.5%           |
| Gemini 1.5 Pro         | 72.0%      | 58.3         | -               |
| Llama3-PBM-Nova-70B    | 74.5%      | 58.1         | 61.23%          |
| Llama-3.1-70B-Instruct | 55.7%      | -            | 38.1%           |
| Llama-3-70B-Instruct   | 46.6%      | 55.9         | 34.4%           |


## Usage

Below is an example of how to use this model with the Transformers library.

```python
import transformers
import torch

model_id = "PKU-Baichuan-MLSystemLab/Llama3-PBM-Nova-70B"

# Build a text-generation pipeline; bfloat16 weights are placed across available devices.
pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

messages = [
    {"role": "user", "content": "Who are you?"},
]

# Stop on either the tokenizer's EOS token or the Llama 3 end-of-turn token.
terminators = [
    pipeline.tokenizer.eos_token_id,
    pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>")
]

outputs = pipeline(
    messages,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)
# The last entry of the returned conversation is the assistant's reply.
print(outputs[0]["generated_text"][-1])
```
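
For finer-grained control over tokenization and decoding, the model can also be loaded directly with `AutoModelForCausalLM` and the tokenizer's chat template. The sketch below is not part of the original card; it assumes the checkpoint ships the standard Llama 3 chat template and simply reuses the generation settings from the pipeline example above.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "PKU-Baichuan-MLSystemLab/Llama3-PBM-Nova-70B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "user", "content": "Who are you?"},
]

# Render the conversation with the chat template and append the assistant generation prompt.
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

# Generation settings mirror the pipeline example; adjust as needed.
output_ids = model.generate(
    input_ids,
    max_new_tokens=256,
    eos_token_id=[tokenizer.eos_token_id, tokenizer.convert_tokens_to_ids("<|eot_id|>")],
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)

# Decode only the newly generated tokens (the assistant's reply).
response = tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True)
print(response)
```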

## License

- [LLAMA3 License](https://huggingface.co./meta-llama/Meta-Llama-3-70B/blob/main/LICENSE)