---
license: cc-by-nc-4.0
---

# weblab-10b-instruction-sft

# Overview

This repository provides a Japanese-centric multilingual GPT-NeoX model of 10 billion parameters.

* **Library**

The model was trained using code based on [EleutherAI/gpt-neox](https://github.com/EleutherAI/gpt-neox).

* **Model architecture**

A 36-layer, 4864-hidden-size transformer-based language model (see the configuration-inspection sketch at the end of this section).

* **Pre-training**

The model was trained on around **600B** tokens from a mixture of the following corpora:

- [Japanese C4](https://huggingface.co./datasets/mc4)
- [The Pile](https://huggingface.co./datasets/EleutherAI/pile)

* **Instruction-supervised fine-tuning**

The model was fine-tuned on a subset of records drawn from a mixture of the following datasets:

- [Alpaca (English)](https://github.com/gururise/AlpacaDataCleaned/blob/main/alpaca_data_cleaned.json)
- [Alpaca (Japanese translation)](https://github.com/shi3z/alpaca_ja/blob/main/alpaca_cleaned_ja.json)
- [Flan 2021 (English)](https://huggingface.co./datasets/conceptofmind/flan2021_submix_original)
- [Flan CoT (English)](https://huggingface.co./datasets/conceptofmind/cot_submix_original)
- [Flan Dialog (English)](https://huggingface.co./datasets/conceptofmind/dialog_submix_original)

* **Model Series**

| Variant | Link |
| :-- | :-- |
| weblab-10b-instruction-sft | https://huggingface.co./matsuo-lab/weblab-10b-instruction-sft |
| weblab-10b | https://huggingface.co./matsuo-lab/weblab-10b |

* **Authors**

Takeshi Kojima
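
The layer count and hidden size given under *Model architecture* can be read back from the model's published configuration. The following is a minimal sketch, assuming the repository ships a standard GPT-NeoX configuration with the usual field names:

~~~~python
from transformers import AutoConfig

# Load only the configuration (no model weights are downloaded for this check).
config = AutoConfig.from_pretrained("matsuo-lab/weblab-10b-instruction-sft")

print(config.model_type)         # expected: "gpt_neox"
print(config.num_hidden_layers)  # expected: 36 layers
print(config.hidden_size)        # expected: 4864 hidden size
~~~~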

---

# Benchmarking

* **Japanese benchmark**

- *We used the [Stability-AI/lm-evaluation-harness](https://github.com/Stability-AI/lm-evaluation-harness/tree/2f1583c0735eacdfdfa5b7d656074b69577b6774) library for evaluation.*
- *The 4-task average accuracy is based on the results of JCommonsenseQA-1.1, JNLI-1.1, MARC-ja-1.1, and JSQuAD-1.1 (see the sketch after the table below).*
- *Model loading is performed in float16, and evaluation is performed with prompt template version 0.3 using few-shot in-context learning.*
- *The number of few-shot examples is 3, 3, 3, and 2 for the four tasks, respectively.*

| Model | Average | JCommonsenseQA | JNLI | MARC-ja | JSQuAD |
| :-- | :-- | :-- | :-- | :-- | :-- |
| weblab-10b-instruction-sft | 78.78 | 74.35 | 65.65 | 96.06 | 79.04 |
| weblab-10b | 66.38 | 65.86 | 54.19 | 84.49 | 60.98 |
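
The Average column is the unweighted mean of the four per-task accuracies. A quick sketch that reproduces it from the table values (up to rounding in the last digit):

~~~~python
# Per-task accuracies in the order JCommonsenseQA, JNLI, MARC-ja, JSQuAD.
scores = {
    "weblab-10b-instruction-sft": [74.35, 65.65, 96.06, 79.04],
    "weblab-10b": [65.86, 54.19, 84.49, 60.98],
}

for name, accs in scores.items():
    # Unweighted mean over the four tasks; compare with the Average column.
    print(f"{name}: {sum(accs) / len(accs):.2f}")
~~~~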

---

# How to use the model

~~~~python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("matsuo-lab/weblab-10b-instruction-sft")
model = AutoModelForCausalLM.from_pretrained("matsuo-lab/weblab-10b-instruction-sft")

# Move the model to GPU if one is available.
if torch.cuda.is_available():
    model = model.to("cuda")

# Instruction: "Please explain large language models."
text = "大規模言語モデルについて説明してください。"
# Wrap the instruction in the Alpaca-style template used during fine-tuning:
# "Below is an instruction that describes a task. Write a response that appropriately satisfies the request."
text = f'以下は、タスクを説明する指示です。要求を適切に満たす応答を書きなさい。\n\n### 指示:\n{text}\n\n### 応答:'
token_ids = tokenizer.encode(text, add_special_tokens=False, return_tensors="pt")

with torch.no_grad():
    output_ids = model.generate(
        token_ids.to(model.device),
        max_new_tokens=100,
        do_sample=True,
        temperature=0.7,
        top_p=0.95
    )

output = tokenizer.decode(output_ids.tolist()[0])
print(output)
~~~~
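
If you plan to call the model repeatedly, it can help to wrap the prompt template and generation call from the snippet above in a small helper. This is a minimal sketch that reuses the `tokenizer` and `model` objects loaded above; the helper name `generate_response` and the "return only the new tokens" slicing are illustrative additions, not part of the official example:

~~~~python
def generate_response(instruction: str, max_new_tokens: int = 100) -> str:
    # Same Alpaca-style template as in the example above.
    prompt = (
        "以下は、タスクを説明する指示です。要求を適切に満たす応答を書きなさい。\n\n"
        f"### 指示:\n{instruction}\n\n### 応答:"
    )
    token_ids = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
    with torch.no_grad():
        output_ids = model.generate(
            token_ids.to(model.device),
            max_new_tokens=max_new_tokens,
            do_sample=True,
            temperature=0.7,
            top_p=0.95,
        )
    # Decode only the newly generated tokens, skipping the echoed prompt.
    return tokenizer.decode(output_ids[0][token_ids.shape[-1]:], skip_special_tokens=True)

# Example: "Please briefly explain the characteristics of the Japanese language."
print(generate_response("日本語の特徴を簡潔に説明してください。"))
~~~~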

---

# License

[cc-by-nc-4.0](https://creativecommons.org/licenses/by-nc/4.0/)