Update README.md

print(tokenizer.decode(outputs[0]))
```

----

# Training and finetuning

- **Extend tokenizer:** The base Mistral tokenizer does not support Persian. As an initial step, we trained a SentencePiece tokenizer on the Farsi Wikipedia corpus and then merged it into the Mistral tokenizer (a sketch of this step follows the list).
- **Pre-training:** Next, we expanded the embedding layer of the base model to match the size of the Persian tokenizer, then used the LoRA method to train the model on three distinct datasets: Wikipedia-Farsi, an Islamic book collection, and content from Khamenei.ir (see the second sketch below).
- **Instruction fine-tuning:** As the final step, we fine-tuned the model with LoRA on a translated version of the Stanford Alpaca dataset to improve its question-answering capabilities (see the third sketch below).
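
As a rough sketch of the tokenizer-extension step, the following trains a SentencePiece model and folds its pieces into the Mistral vocabulary. The corpus file `fa_wikipedia.txt`, the vocabulary size, and the output paths are illustrative assumptions; the actual merge may have operated on the SentencePiece models directly.

```python
# Sketch of the tokenizer-extension step (paths and vocab size are assumed).
import sentencepiece as spm
from transformers import AutoTokenizer

# Train a SentencePiece model on a Farsi Wikipedia dump (hypothetical file).
spm.SentencePieceTrainer.train(
    input="fa_wikipedia.txt",
    model_prefix="fa_sp",
    vocab_size=20000,          # assumed vocabulary size
    character_coverage=1.0,    # keep all Persian characters
)

# Add the new Persian pieces that Mistral's vocabulary is missing.
base = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
sp = spm.SentencePieceProcessor(model_file="fa_sp.model")
new_pieces = [sp.id_to_piece(i) for i in range(sp.get_piece_size())]
base.add_tokens([p for p in new_pieces if p not in base.get_vocab()])
base.save_pretrained("mistral-fa-tokenizer")
```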
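
A minimal sketch of the pre-training setup, assuming the extended tokenizer from the previous step: the embedding matrices are resized to the new vocabulary and the model is wrapped with LoRA adapters via PEFT. The rank, alpha, and target modules shown are placeholders, not the values used in training.

```python
# Sketch of the pre-training setup (LoRA hyperparameters are assumed).
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

tokenizer = AutoTokenizer.from_pretrained("mistral-fa-tokenizer")
model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")

# Grow the input/output embedding matrices to cover the new Persian tokens.
model.resize_token_embeddings(len(tokenizer))

lora_config = LoraConfig(
    r=16,                                  # assumed rank
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],   # assumed target projections
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
# The wrapped model is then trained on Wikipedia-Farsi, the Islamic book
# collection, and Khamenei.ir text with the usual causal-LM objective.
```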
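
The instruction fine-tuning step could then look like the sketch below, reusing `model` and `tokenizer` from the previous sketch. The file `alpaca_fa.json`, the prompt template, and the training hyperparameters are assumptions based on the standard Alpaca format.

```python
# Sketch of LoRA instruction fine-tuning on a translated Alpaca-style file.
from datasets import load_dataset
from transformers import DataCollatorForLanguageModeling, Trainer, TrainingArguments

tokenizer.pad_token = tokenizer.eos_token  # Mistral ships without a pad token

# Hypothetical JSON file with instruction/input/output records.
data = load_dataset("json", data_files="alpaca_fa.json")["train"]

def format_example(example):
    # Render one record with the standard Alpaca prompt template.
    prompt = f"### Instruction:\n{example['instruction']}\n\n"
    if example.get("input"):
        prompt += f"### Input:\n{example['input']}\n\n"
    prompt += f"### Response:\n{example['output']}"
    return tokenizer(prompt, truncation=True, max_length=1024)

tokenized = data.map(format_example, remove_columns=data.column_names)

trainer = Trainer(
    model=model,  # the LoRA-wrapped model from the previous sketch
    args=TrainingArguments(
        output_dir="mistral-fa-alpaca",
        per_device_train_batch_size=4,   # assumed hyperparameters
        num_train_epochs=3,
        learning_rate=2e-4,
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```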

The diagram below illustrates the steps described above:

<p align="center">
<picture>
<img alt="Training and finetuning pipeline" src="https://i.postimg.cc/yY4dkwvT/Stakehozlder-Map-page-0001-modified.png" width="400" height="500" style="max-width: 100%;">