Update README.md

--- a/README.md
+++ b/README.md
@@ -5,9 +5,9 @@ pipeline_tag: text-generation
 license: apache-2.0
 ---
 
-# 🤗 Falcon-40b-
+# 🤗 Falcon-40b-openassistant-peft
 
-Falcon-40b-
+Falcon-40b-openassistant-peft is a fully open-source chatbot model for dialogue generation. It was built by fine-tuning [Falcon-40B](https://huggingface.co/tiiuae/falcon-40b) on the [OpenAssistant/oasst1](https://huggingface.co/datasets/OpenAssistant/oasst1) dataset. This repo includes only the LoRA adapters from fine-tuning with 🤗's [peft](https://github.com/huggingface/peft) package.
 
 ## Model Summary
 
@@ -17,7 +17,7 @@ Falcon-40b-chat-oasst1 is a fully open-source chatbot model for dialogue generat
 - **Dataset:** [OpenAssistant/oasst1](https://huggingface.co/datasets/OpenAssistant/oasst1) (License: [Apache 2.0](https://huggingface.co/datasets/OpenAssistant/oasst1/blob/main/LICENSE))
 - **License:** Apache 2.0, inherited from the base model and the dataset
 
-The model was fine-tuned in 4-bit precision using `peft` adapters, `transformers`, and `bitsandbytes`. Training relied on a method called "Low Rank Adapters" ([LoRA](https://arxiv.org/pdf/2106.09685.pdf)), specifically the [QLoRA](https://arxiv.org/abs/2305.14314) variant. The run took approximately 10 hours and was executed on a workstation with a single A100-SXM NVIDIA GPU with 37 GB of available memory. See attached [Colab Notebook](https://huggingface.co/dfurman/falcon-40b-
+The model was fine-tuned in 4-bit precision using `peft` adapters, `transformers`, and `bitsandbytes`. Training relied on Low-Rank Adaptation ([LoRA](https://arxiv.org/pdf/2106.09685.pdf)), specifically the [QLoRA](https://arxiv.org/abs/2305.14314) variant. The run took approximately 10 hours on a workstation with a single NVIDIA A100-SXM GPU (37 GB of available memory). See the attached [Colab Notebook](https://huggingface.co/dfurman/falcon-40b-openassistant-peft/blob/main/finetune_falcon40b_oasst1_with_bnb_peft.ipynb) for the code and hyperparameters used to train the model.
 
 ### Model Date
 
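The QLoRA recipe the paragraph above describes (4-bit NF4 quantization via `bitsandbytes` plus low-rank adapters via `peft`) boils down to two config objects. A minimal sketch follows; the `r`, `lora_alpha`, and `lora_dropout` values are illustrative assumptions, not necessarily the ones used in the linked notebook:

```python
import torch
from peft import LoraConfig
from transformers import BitsAndBytesConfig

# 4-bit NF4 quantization with double quantization, per the QLoRA recipe.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Low-rank adapters targeting Falcon's fused attention projection.
# These hyperparameter values are illustrative, not the notebook's.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["query_key_value"],
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
```

In a training script, `bnb_config` would be passed as `quantization_config=` to `AutoModelForCausalLM.from_pretrained`, and `lora_config` applied with `peft.get_peft_model` after `peft.prepare_model_for_kbit_training`.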
@@ -40,7 +40,7 @@ To prompt the chat model, use the following format:
 <bot>:"""
 ```
 
-**Falcon-40b-
+**Falcon-40b-openassistant-peft**:
 ```
 Dear Friends,
 
@@ -60,7 +60,7 @@ Daniel
 <bot>:
 ```
 
-**Falcon-40b-
+**Falcon-40b-openassistant-peft**:
 ```
 Here is a list of things to do in San Francisco:
 
@@ -123,7 +123,7 @@ import torch
 from peft import PeftModel, PeftConfig
 from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
 
-peft_model_id = "dfurman/falcon-40b-
+peft_model_id = "dfurman/falcon-40b-openassistant-peft"
 config = PeftConfig.from_pretrained(peft_model_id)
 
 bnb_config = BitsAndBytesConfig(
@@ -183,7 +183,7 @@ print(generated_text.split("<human>: ")[1].split("<bot>: ")[-1])
 
 ## Reproducibility
 
-See attached [Colab Notebook](https://huggingface.co/dfurman/falcon-40b-
+See the attached [Colab Notebook](https://huggingface.co/dfurman/falcon-40b-openassistant-peft/blob/main/finetune_falcon40b_oasst1_with_bnb_peft.ipynb) for the code and hyperparameters used to train the model.
 
 ### CUDA Info
 
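As a usage aid, the `<human>:`/`<bot>:` prompt format shown in the README's examples can be produced with a one-line helper. A sketch (the function name is illustrative, not part of the repo):

```python
def format_prompt(user_message: str) -> str:
    # Wrap a user message in the <human>/<bot> tags the model was
    # fine-tuned on; the trailing "<bot>:" cues the model to respond.
    return f"<human>: {user_message}\n<bot>:"

prompt = format_prompt("What are some fun things to do in San Francisco?")
print(prompt)
```

The resulting string is what gets tokenized and passed to `model.generate` in the README's inference snippet.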