|
---
language:
- en
pipeline_tag: text-generation
base_model:
- abacusai/Smaug-34B-v0.1
library_name: transformers
tags:
- mergekit
- merge
- qwen2
license: other
---
|
|
|
### Models Merged |
|
|
|
The following models were included in the merge: |
|
* [abacusai/Smaug-34B-v0.1](https://huggingface.co./abacusai/Smaug-34B-v0.1) |
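
This card does not include the exact mergekit configuration that produced the 52B model. For orientation only, an upscale built from a single 34B base is typically done with a passthrough self-merge; the sketch below is illustrative (the layer ranges are placeholders, not the published recipe).

```yaml
# Illustrative passthrough self-merge sketch; NOT the actual SM_Smaug_52B recipe.
# The real merge method and layer ranges have not been published in this card.
slices:
  - sources:
      - model: abacusai/Smaug-34B-v0.1
        layer_range: [0, 40]
  - sources:
      - model: abacusai/Smaug-34B-v0.1
        layer_range: [20, 60]
merge_method: passthrough
dtype: bfloat16
```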
|
|
|
### Usage |
|
|
|
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "Eurdem/SM_Smaug_52B"

# Load the tokenizer and the model; 4-bit loading keeps the 52B model within a single-GPU memory budget
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    load_in_4bit=True,
)

# Build a chat-formatted prompt
messages = [
    {"role": "system", "content": "You are a helpful chatbot who always responds in a friendly way."},
    {"role": "user", "content": "Where is the capital of Turkey?"},
]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to("cuda")

# Generate a response with sampling
outputs = model.generate(
    input_ids,
    max_new_tokens=1024,
    do_sample=True,
    temperature=0.7,
    top_p=0.7,
    top_k=500,
)

# Decode only the newly generated tokens (skip the prompt)
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```
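
Note that `load_in_4bit=True` requires the `bitsandbytes` package and `device_map="auto"` requires `accelerate`, in addition to `transformers`. On recent `transformers` releases, 4-bit loading is configured through a `BitsAndBytesConfig` passed as `quantization_config`; the flag above is the older shorthand.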
|
|