|
--- |
|
language: |
|
- en |
|
license: other |
|
tags: |
|
- chat |
|
license_name: tongyi-qianwen |
|
license_link: https://huggingface.co./Qwen/Qwen2-72B-Instruct/blob/main/LICENSE |
|
pipeline_tag: text-generation |
|
--- |
|
|
|
# Dracarys-72B-Instruct |
|
|
|
# Introduction |
|
|
|
We introduce the latest in the Smaug series: the Dracarys family of finetunes, which target improved coding performance across a variety of base models.
|
|
|
This variant is a finetune of [Qwen2-72B-Instruct](https://huggingface.co./Qwen/Qwen2-72B-Instruct).
|
|
|
Compared to Qwen2-72B-Instruct, Dracarys has better LiveCodeBench scores (see evaluation results below). |
|
|
|
### Model Description |
|
|
|
- **Developed by:** [Abacus.AI](https://abacus.ai) |
|
- **License:** [tongyi-qianwen](https://huggingface.co./Qwen/Qwen2-72B-Instruct/blob/main/LICENSE)
|
- **Finetuned from model:** [Qwen2-72B-Instruct](https://huggingface.co./Qwen/Qwen2-72B-Instruct). |
|
|
|
## How to use |
|
|
|
The prompt format is unchanged from Qwen2-72B-Instruct (see the LiveCodeBench evaluation below for details of the prompts used).
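
For reference, Qwen2-Instruct models use a ChatML-style chat template, so a rendered prompt looks roughly like the sketch below (illustrative only; the exact string, including the default system prompt, comes from the tokenizer's bundled chat template):

```python
# Illustrative sketch of what apply_chat_template(..., add_generation_prompt=True)
# renders for Qwen2-Instruct models (ChatML-style special tokens).
prompt = (
    "<|im_start|>system\n"
    "You are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\n"
    "Write a function that reverses a string.<|im_end|>\n"
    "<|im_start|>assistant\n"
)
```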
|
|
|
### Use with transformers |
|
|
|
See the snippet below for usage with Transformers: |
|
|
|
```python
import transformers
import torch

model_id = "abacusai/Dracarys-72B-Instruct"

# Load the model through the text-generation pipeline in bfloat16,
# sharding it across the available GPUs.
pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a data science coding assistant that generates Python code using Pandas and Numpy."},
    {"role": "user", "content": "Write code to select rows from the dataframe `df` having the maximum `temp` for each `city`"},
]

# Render the conversation with the model's chat template and append
# the assistant turn header so the model starts its reply.
prompt = pipeline.tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

# Stop generation at the tokenizer's EOS token or at Qwen2's
# end-of-turn token <|im_end|>.
terminators = [
    pipeline.tokenizer.eos_token_id,
    pipeline.tokenizer.convert_tokens_to_ids("<|im_end|>")
]

outputs = pipeline(
    prompt,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)
# Print only the newly generated portion of the output.
print(outputs[0]["generated_text"][len(prompt):])
```
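
For the example request in the snippet above, a correct completion would look something like the following (one of several equivalent pandas solutions, shown only to illustrate the intended behaviour; the toy dataframe is hypothetical):

```python
import pandas as pd

# Toy data: one row per (city, reading); we want the hottest reading per city.
df = pd.DataFrame({
    "city": ["Paris", "Paris", "Tokyo", "Tokyo"],
    "temp": [21.5, 27.0, 30.1, 28.4],
})

# idxmax gives the index of the max-temp row within each city group;
# .loc then selects those rows from the original dataframe.
hottest = df.loc[df.groupby("city")["temp"].idxmax()]
print(hottest)
```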
|
|
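The same model can also be driven without the pipeline helper. A minimal sketch using `AutoModelForCausalLM` directly (the prompt and generation parameters below are illustrative, not prescribed settings):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "abacusai/Dracarys-72B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "user", "content": "Write a Python function that checks whether a string is a palindrome."},
]

# Tokenize the chat-formatted prompt and move it to the model's device.
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

output_ids = model.generate(
    input_ids,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)

# Decode only the newly generated tokens, skipping special tokens.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```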
|
# Evaluation Results |
|
|
|
|
|
## LiveCodeBench |
|
|
|
| Model | Code Generation | Code Execution | Test Output Prediction |
|
|---------------------------|-----------------|----------------|-----------------------| |
|
| **Dracarys-72B-Instruct** | 33.86 | 54.30 | 53.26 | |
|
| Qwen2-72B-Instruct | 30.10 | TBD | TBD | |
|
|