Text Generation · PEFT · Safetensors · Eval Results
dfurman committed · Commit 729049f · 1 Parent(s): 85824ac

Update README.md

Files changed (1)
  1. README.md +5 -4
README.md CHANGED
@@ -8,9 +8,10 @@ pipeline_tag: text-generation
 
 Falcon-40b-chat-oasst1 is a chatbot-like model for dialogue generation. It was built by fine-tuning [Falcon-40B](https://huggingface.co/tiiuae/falcon-40b) on the [OpenAssistant/oasst1](https://huggingface.co/datasets/OpenAssistant/oasst1) dataset.
 - The model was fine-tuned in 4-bit precision using `peft`, `transformers`, and `bitsandbytes`.
-- The training relied on a method called "Low Rank Adapters" ([LoRA](https://arxiv.org/pdf/2106.09685.pdf)), specifically the [QLoRA](https://arxiv.org/abs/2305.14314) variant. Instead of fine-tuning the entire model you fine-tune lightweight adapters and load them inside the base model at inference.
-- Training took approximately 10 hours and was executed on a workstation with a single A100-SXM NVIDIA GPU with 37 GB of available memory (via Google Colab).
-- See attached [Notebook](https://huggingface.co/dfurman/falcon-40b-chat-oasst1/blob/main/finetune_falcon40b_oasst1_with_bnb_peft.ipynb) for the code (and hyperparams) used to train the model.
+- The training relied on a method called "Low Rank Adapters" ([LoRA](https://arxiv.org/pdf/2106.09685.pdf)), specifically the [QLoRA](https://arxiv.org/abs/2305.14314) variant.
+- Instead of fine-tuning the entire model you fine-tune lightweight adapters and load them inside the base model at inference.
+- Training took approximately 10 hours and was executed on a workstation with a single A100-SXM NVIDIA GPU, with 37 GB of available memory.
+- See attached [Colab Notebook](https://huggingface.co/dfurman/falcon-40b-chat-oasst1/blob/main/finetune_falcon40b_oasst1_with_bnb_peft.ipynb) for the code and hyperparams used to train the model.
 
 ## Model Summary
 
@@ -163,7 +164,7 @@ print('\n\n', tokenizer.decode(output_tokens[0], skip_special_tokens=True))
 
 ## Reproducibility
 
-- See attached [Notebook](https://huggingface.co/dfurman/falcon-40b-chat-oasst1/blob/main/finetune_falcon40b_oasst1_with_bnb_peft.ipynb) for the code (and hyperparams) used to train the model.
+- See attached [Colab Notebook](https://huggingface.co/dfurman/falcon-40b-chat-oasst1/blob/main/finetune_falcon40b_oasst1_with_bnb_peft.ipynb) for the code (and hyperparams) used to train the model.
 
 ### CUDA Info
 
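
For readers landing on this diff, the workflow the updated bullets describe (a frozen base model quantized to 4-bit, with a lightweight LoRA adapter loaded inside it at inference) can be sketched roughly as below. This is a minimal, illustrative example, not the code from the linked notebook: the repo ids come from this model card, while the quantization settings, prompt format, and generation parameters are assumptions.

```python
# Illustrative sketch only (not the notebook's exact code): load the base model
# in 4-bit with bitsandbytes and attach the fine-tuned LoRA adapter via peft.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "tiiuae/falcon-40b"                  # base model named in the card
adapter_id = "dfurman/falcon-40b-chat-oasst1"  # this repo's adapter weights

# 4-bit NF4 quantization in the spirit of QLoRA; exact values are assumptions.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,
)

# The lightweight adapter is loaded inside the frozen, quantized base model.
model = PeftModel.from_pretrained(base_model, adapter_id)

prompt = "<human>: What is a LoRA adapter?\n<bot>:"  # assumed prompt format
inputs = tokenizer(prompt, return_tensors="pt").to(base_model.device)
with torch.no_grad():
    output_tokens = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output_tokens[0], skip_special_tokens=True))
```

The point of the sketch is just the structure the bullets describe: the 40B base stays frozen in 4-bit precision, and only the small adapter weights from this repo are added on top at inference.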