---
base_model: llm-jp/llm-jp-3-13b
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: cc-by-nc-sa-4.0
language:
- ja
datasets:
- ichikara-instruction
---


# Uploaded model

- **Developed by:** taka-too
- **License:** CC-BY-NC-SA-4.0
- **Finetuned from model:** llm-jp/llm-jp-3-13b
- **Training Dataset:** Ichikara Instruction (LLM-jp)

This LLaMA-architecture model was fine-tuned for improved instruction-following on the Ichikara Instruction dataset provided by LLM-jp.
Training used [Unsloth](https://github.com/unslothai/unsloth) together with Hugging Face's TRL library, which made it roughly 2x faster.

関根聡, 安藤まや, 後藤美知子, 鈴木久美, 河原大輔, 井之上直也, 乾健太郎. ichikara-instruction: Construction of Japanese Instruction Data for LLMs. Proceedings of the 30th Annual Meeting of the Association for Natural Language Processing (2024).
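
For reference, the snippet below is a minimal sketch of how an Unsloth + TRL supervised fine-tune of this kind is typically set up. The hyperparameters, LoRA configuration, dataset path, and prompt formatting are illustrative assumptions rather than the actual training recipe, and some `SFTTrainer` argument names differ between `trl` versions.

```python
# Minimal sketch of an Unsloth + TRL supervised fine-tune (NOT the actual
# recipe used for this model). Hyperparameters, the LoRA config, the dataset
# path, and the prompt formatting are illustrative assumptions.
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

max_seq_length = 2048

# Load the base model in 4-bit and attach LoRA adapters.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="llm-jp/llm-jp-3-13b",
    max_seq_length=max_seq_length,
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Hypothetical JSONL file with a "text" column holding instruction/response
# pairs already rendered into single prompt strings.
dataset = load_dataset("json", data_files="ichikara_instruction.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",   # argument names vary across trl versions
    max_seq_length=max_seq_length,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        learning_rate=2e-4,
        logging_steps=10,
        output_dir="outputs",
    ),
)
trainer.train()
```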

# How to Use the Model

You can load the model via the Hugging Face transformers library:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Download the fine-tuned tokenizer and weights from the Hugging Face Hub.
tokenizer = AutoTokenizer.from_pretrained("taka-too/llm-jp-3-13b-it")
# device_map="auto" requires `accelerate`; drop it to load on CPU.
model = AutoModelForCausalLM.from_pretrained(
    "taka-too/llm-jp-3-13b-it", torch_dtype="auto", device_map="auto"
)
```
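
After loading, generation works through the standard `generate` API. The prompt below is a plain Japanese instruction string chosen only for illustration; the exact prompt template the model expects is an assumption, so adjust it to your use case.

```python
import torch

# Illustrative prompt (Japanese: "Answer the following question. Question:
# What is the capital of Japan? Answer:"); the format is an assumption.
prompt = "以下の質問に答えてください。\n質問: 日本の首都はどこですか?\n回答:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_new_tokens=128,
        do_sample=True,
        temperature=0.7,
        top_p=0.9,
    )

# Decode only the newly generated tokens after the prompt.
print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:],
                       skip_special_tokens=True))
```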


[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)