---
language:
- ja
license: other
tags:
- text-generation-inference
- transformers
- unsloth
- trl
- gemma
datasets:
- kunishou/amenokaku-code-instruct
license_name: gemma
base_model: unsloth/gemma-2b-it-bnb-4bit
---

# Uploaded model

- **Developed by:** taoki
- **License:** gemma
- **Finetuned from model:** unsloth/gemma-2b-it-bnb-4bit

# Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

tokenizer = AutoTokenizer.from_pretrained(
    "taoki/gemma-2b-it-qlora-amenokaku-code"
)
model = AutoModelForCausalLM.from_pretrained(
    "taoki/gemma-2b-it-qlora-amenokaku-code"
)

# Move the model to GPU when one is available
if torch.cuda.is_available():
    model = model.to("cuda")

# Gemma chat-format prompt. The user turn asks, in Japanese:
# "Output the writing styles of Murasaki Shikibu and Sei Shonagon as JSON."
prompt = """<start_of_turn>user
紫式部と清少納言の作風をjsonで出力してください。
<end_of_turn>
<start_of_turn>model
"""

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=512,
    do_sample=True,
    top_p=0.95,
    temperature=0.1,
    repetition_penalty=1.0,
)
print(tokenizer.decode(outputs[0]))
```
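
The prompt string above hand-writes Gemma's chat format. As a minimal alternative sketch, the tokenizer's built-in chat template can render the same turns, assuming this checkpoint ships Gemma's chat template (worth verifying before relying on it):

```python
# Alternative sketch: let the tokenizer render Gemma's chat format.
# Assumes the checkpoint includes Gemma's chat template.
messages = [
    {"role": "user", "content": "紫式部と清少納言の作風をjsonで出力してください。"}
]
prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,  # appends the "<start_of_turn>model" turn opener
)
```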

# Output

````
<bos><start_of_turn>user
紫式部と清少納言の作風をjsonで出力してください。<end_of_turn>
<start_of_turn>model
```json
{
  "紫式部": {
    "style": "紫式部",
    "name": "紫式部",
    "description": "紫式部の作風"
  },
  "清少納言": {
    "style": "清少納言",
    "name": "清少納言",
    "description": "清少納言の作風"
  }
}
```<eos>
````
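
Since this model was finetuned from a 4-bit base (`unsloth/gemma-2b-it-bnb-4bit`), it can also be loaded in 4-bit to reduce GPU memory. A minimal sketch, assuming `bitsandbytes` is installed and a CUDA GPU is available:

```python
# Optional 4-bit loading to cut memory use (requires the bitsandbytes package).
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,  # assumption: a bf16-capable GPU
)
model = AutoModelForCausalLM.from_pretrained(
    "taoki/gemma-2b-it-qlora-amenokaku-code",
    quantization_config=bnb_config,
    device_map="auto",
)
```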

This Gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)