<div align="center">

# TinyLlama-1.1B Anonymous Pro

</div>

https://github.com/jzhang38/TinyLlama

The TinyLlama project aims to **pretrain** a **1.1B Llama model on 3 trillion tokens**.

We adopted exactly the same architecture and tokenizer as Llama 2, so TinyLlama can be dropped into many open-source projects built upon Llama. It is also compact, with only 1.1B parameters, which makes it suitable for applications with restricted computation and memory footprints.
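Because the architecture and tokenizer match Llama 2 exactly, the checkpoint loads through the standard `transformers` auto classes. Below is a minimal sketch assuming the usual Hugging Face generation API; the repo id names the base TinyLlama checkpoint as an example, so substitute the id of the model you actually want to run:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Example repo id only -- substitute the Hub id of the checkpoint you want to run.
model_id = "TinyLlama/TinyLlama-1.1B-intermediate-step-955k-token-2T"

tokenizer = AutoTokenizer.from_pretrained(model_id)  # same tokenizer as Llama 2
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # 1.1B parameters fit easily on a single GPU
    device_map="auto",
)

inputs = tokenizer("TinyLlama is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```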
#### This Model Anonymous Pro
This is the chat model finetuned on top of [TinyLlama/TinyLlama-1.1B-intermediate-step-955k-token-2T](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-955k-token-2T). **We follow [HF's Zephyr](https://huggingface.co/HuggingFaceH4/zephyr-7b-alpha)'s training recipe.** The model was initially fine-tuned on a variant of the [`UltraChat`](https://huggingface.co/datasets/stingning/ultrachat) dataset, which contains a diverse range of synthetic dialogues generated by ChatGPT. We then further aligned the model with [🤗 TRL's](https://github.com/huggingface/trl) `DPOTrainer` on the [openbmb/UltraFeedback](https://huggingface.co/datasets/openbmb/UltraFeedback) dataset, which contains 64k prompts and model completions ranked by GPT-4.
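For readers reproducing the pipeline, the DPO stage with TRL looks roughly like the sketch below. This is an illustrative outline written against the Zephyr-era `trl` (~0.7) API, not this model's actual training script: the checkpoint path, hyperparameters, and the tiny in-memory dataset are all placeholders, and real preference data (e.g. a flattened UltraFeedback split) must provide plain-text `prompt`, `chosen`, and `rejected` columns.

```python
# Illustrative DPO sketch (trl ~0.7 API); hyperparameters are assumptions,
# not this model's actual recipe.
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

sft_model_id = "path/to/your-sft-checkpoint"  # placeholder: the UltraChat-finetuned model
model = AutoModelForCausalLM.from_pretrained(sft_model_id)
tokenizer = AutoTokenizer.from_pretrained(sft_model_id)

# Tiny in-memory stand-in for the real preference data; DPOTrainer expects
# text columns named "prompt", "chosen", and "rejected".
train_dataset = Dataset.from_dict({
    "prompt": ["What is DPO?"],
    "chosen": ["DPO optimizes a policy directly on ranked preference pairs."],
    "rejected": ["I don't know."],
})

trainer = DPOTrainer(
    model,
    ref_model=None,           # TRL snapshots `model` as the frozen reference
    args=TrainingArguments(
        output_dir="tinyllama-dpo",
        per_device_train_batch_size=2,
        learning_rate=5e-7,   # DPO typically uses a very small learning rate
        num_train_epochs=1,
    ),
    beta=0.1,                 # weight of the KL penalty toward the reference model
    train_dataset=train_dataset,
    tokenizer=tokenizer,
    max_length=1024,
    max_prompt_length=512,
)
trainer.train()
```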