Update README.md
README.md
CHANGED
@@ -12,15 +12,15 @@ Falcon-40b-chat-oasst1 is a chatbot-like model for dialogue generation. It was b
 
 - **Model Type:** Causal decoder-only
 - **Language(s) (NLP):** English (primarily)
-- **Base Model:** [Falcon-40B](https://huggingface.co/tiiuae/falcon-40b) (License: [TII Falcon LLM License](https://huggingface.co/tiiuae/falcon-40b#license)
-- **Dataset:** [OpenAssistant/oasst1](https://huggingface.co/datasets/OpenAssistant/oasst1) (License: [Apache 2.0](https://huggingface.co/datasets/OpenAssistant/oasst1/blob/main/LICENSE)
+- **Base Model:** [Falcon-40B](https://huggingface.co/tiiuae/falcon-40b) (License: [TII Falcon LLM License](https://huggingface.co/tiiuae/falcon-40b#license))
+- **Dataset:** [OpenAssistant/oasst1](https://huggingface.co/datasets/OpenAssistant/oasst1) (License: [Apache 2.0](https://huggingface.co/datasets/OpenAssistant/oasst1/blob/main/LICENSE))
 - **License:** Inherited from the above "Base Model" and "Dataset"
 
 ## Model Details
 
 - The model was fine-tuned in 4-bit precision using 🤗 `peft` adapters, `transformers`, and `bitsandbytes`.
 - Training relied on a method called "Low Rank Adapters" ([LoRA](https://arxiv.org/pdf/2106.09685.pdf)), specifically the [QLoRA](https://arxiv.org/abs/2305.14314) variant.
-- The run took approximately 10 hours and was executed on a workstation with a single A100-SXM NVIDIA GPU with 37 GB of available memory
+- The run took approximately 10 hours and was executed on a workstation with a single A100-SXM NVIDIA GPU with 37 GB of available memory.
 - See attached [Colab Notebook](https://huggingface.co/dfurman/falcon-40b-chat-oasst1/blob/main/finetune_falcon40b_oasst1_with_bnb_peft.ipynb) for the code and hyperparams used to train the model.
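For context on the "Model Details" bullets this commit touches: they describe 4-bit QLoRA fine-tuning with `transformers`, `bitsandbytes`, and `peft`, and the linked Colab notebook holds the actual code and hyperparameters. Below is a minimal sketch of that kind of setup; the adapter hyperparameters (rank, alpha, dropout) are illustrative assumptions, not the notebook's settings.

```python
# Minimal sketch of a 4-bit QLoRA setup for Falcon-40B using
# transformers + bitsandbytes + peft. Hyperparameters here are
# illustrative; see the repo's Colab notebook for the real ones.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "tiiuae/falcon-40b"

# Load the frozen base model quantized to 4-bit NF4, as in QLoRA.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,
)

# Prepare the quantized model for training and attach small
# trainable low-rank adapters on the attention projection.
model = prepare_model_for_kbit_training(model)
lora_config = LoraConfig(
    r=8,                                 # adapter rank (illustrative)
    lora_alpha=32,                       # scaling (illustrative)
    target_modules=["query_key_value"],  # Falcon's fused QKV projection
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapters are trainable
```

With NF4 quantization, the frozen Falcon-40B weights stay in 4-bit while only the small LoRA adapters train in higher precision, which is what lets a run like the one described above fit on a single A100.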