---
base_model:
- irlab-udc/Llama-3.1-8B-Instruct-Galician
license: llama3.1
language:
- gl
pipeline_tag: text-generation
library_name: transformers
---
This is a 4-bit GPTQ-quantized version of [irlab-udc/Llama-3.1-8B-Instruct-Galician](https://huggingface.co./irlab-udc/Llama-3.1-8B-Instruct-Galician).
## How to Use
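Loading a GPTQ checkpoint through 🤗 Transformers requires a GPTQ backend in addition to `transformers` and `torch`; depending on your `transformers` version this is `optimum` together with `auto-gptq`, or the newer `gptqmodel`. This is an assumption about your environment, so check the integration notes for your installed `transformers` release.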
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "irlab-udc/Llama-3.1-8B-Instruct-Galician-GPTQ-Int4"

# Load the tokenizer and the 4-bit GPTQ checkpoint;
# device_map="auto" places the weights on the available GPU(s).
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    low_cpu_mem_usage=True,
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a conversational AI that responds in Galician."},
    # "What is the main advantage of Scrum?"
    {"role": "user", "content": "Cal é a principal vantaxe de Scrum?"},
]

# Build the prompt with the model's chat template and move it to the GPU.
inputs = tokenizer.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=True,
    return_tensors="pt",
    return_dict=True,
).to("cuda")

outputs = model.generate(**inputs, do_sample=True, max_new_tokens=512)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```
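Because the example decodes the full sequence, the printed text includes the prompt as well as the answer. A minimal sketch for printing only the model's reply, reusing `inputs` and `outputs` from the snippet above:

```python
# Slice off the prompt tokens so only the newly generated reply is decoded
# (assumes `inputs` and `outputs` from the example above).
reply_ids = outputs[0][inputs["input_ids"].shape[-1]:]
print(tokenizer.decode(reply_ids, skip_special_tokens=True))
```

Note that `do_sample=True` makes generation stochastic; pass `do_sample=False` to `generate` for deterministic (greedy) output.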