---
datasets:
- OpenAssistant/oasst1
pipeline_tag: text-generation
license: tii-falcon-llm
language:
- en
---

# 🚀 Falcon-40b-chat-oasst1

Falcon-40b-chat-oasst1 is a chatbot-like model for dialogue generation. It was built by fine-tuning [Falcon-40B](https://huggingface.co./tiiuae/falcon-40b) on the [OpenAssistant/oasst1](https://huggingface.co./datasets/OpenAssistant/oasst1) dataset.

## Model Summary

- **Model Type:** Causal decoder-only
- **Language(s):** English
- **Base Model:** [Falcon-40B](https://huggingface.co./tiiuae/falcon-40b) (License: [TII Falcon LLM License](https://huggingface.co./tiiuae/falcon-40b#license))
- **Dataset:** [OpenAssistant/oasst1](https://huggingface.co./datasets/OpenAssistant/oasst1) (License: [Apache 2.0](https://huggingface.co./datasets/OpenAssistant/oasst1/blob/main/LICENSE))
- **License:** Inherited from "Base Model" and "Dataset"

The model was fine-tuned in 4-bit precision using 🤗 `peft` adapters, `transformers`, and `bitsandbytes`. Training relied on a method called Low-Rank Adaptation ([LoRA](https://arxiv.org/pdf/2106.09685.pdf)), specifically its [QLoRA](https://arxiv.org/abs/2305.14314) variant. The run took approximately 10 hours and was executed on a workstation with a single A100-SXM NVIDIA GPU with 37 GB of available memory. See the attached [Colab Notebook](https://huggingface.co./dfurman/falcon-40b-chat-oasst1/blob/main/finetune_falcon40b_oasst1_with_bnb_peft.ipynb) for the code and hyperparameters used to train the model.

### Model Date

May 30, 2023

## Quick Start

To prompt the chat model, use the following format:

```
<human>: [Instruction]
<bot>:
```

### Example Dialogue 1

**Prompter**:
```
"""<human>: My name is Daniel. Write a short email to my closest friends inviting them to come to my home on Friday for a dinner party, I will make the food but tell them to BYOB.
<bot>:"""
```

**Falcon-40b-chat-oasst1**:
```
Dear Friends,

I'm excited to invite you to a dinner party at my home on Friday, [date]. I will be making the food, but please bring your own beverages. I'm looking forward to seeing you and catching up!

Sincerely,
Daniel
```

### Example Dialogue 2

**Prompter**:
```
<human>: Create a list of four things to do in San Francisco.
<bot>:
```

**Falcon-40b-chat-oasst1**:
```
Here are four things to do in San Francisco:

1. Visit the Golden Gate Bridge: The Golden Gate Bridge is one of the most iconic landmarks in San Francisco and a must-see for any visitor. You can walk or bike across the bridge, or take a ferry to get a closer look.
2. Explore the Fisherman's Wharf: The Fisherman's Wharf is a popular tourist destination with a variety of restaurants, shops, and attractions. You can try some fresh seafood, take a boat tour, or visit the Aquarium of the Bay.
3. Visit the Alcatraz Island: Alcatraz Island is a former federal prison that is now a national park and a popular tourist destination. You can take a ferry to the island and explore the prison, learn about its history, and enjoy the beautiful views of the city.
4. Take a cable car ride: San Francisco's cable cars are a unique and fun way to get around the city. You can ride the cable cars up and down the city's steep hills and enjoy the views along the way.

These are just a few of the many things to do in San Francisco. There are also museums, parks, and other attractions to explore.
```

### Direct Use

This model has been finetuned on conversation trees from [OpenAssistant/oasst1](https://huggingface.co./datasets/OpenAssistant/oasst1) and should only be used on data of a similar nature.
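As an illustration of "data of a similar nature", the minimal sketch below shows how an oasst1-style exchange of prompter/assistant turns maps onto the `<human>:`/`<bot>:` template from the Quick Start section. The `format_prompt` helper is hypothetical and not part of the model's API; it simply assembles the prompt string.

```python
# Hypothetical helper (not part of the model's API): turn an oasst1-style list of
# (role, text) turns into the "<human>: ... <bot>:" template used for prompting.
def format_prompt(turns):
    """turns: list of ("prompter" | "assistant", text) pairs, ending with a prompter turn."""
    role_tags = {"prompter": "<human>", "assistant": "<bot>"}
    lines = [f"{role_tags[role]}: {text}" for role, text in turns]
    lines.append("<bot>:")  # leave the final bot turn open for the model to complete
    return "\n".join(lines)

# Single-turn example, matching Example Dialogue 2 above
print(format_prompt([("prompter", "Create a list of four things to do in San Francisco.")]))
```

The resulting string can be passed directly as the `prompt` in the inference snippet below.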
### Out-of-Scope Use

Production use without adequate assessment of risks and mitigation; any use cases which may be considered irresponsible or harmful.

## Bias, Risks, and Limitations

This model is trained mostly on English data and will not generalize appropriately to other languages. Furthermore, as it is trained on a large-scale corpus representative of the web, it will carry the stereotypes and biases commonly encountered online.

### Recommendations

We recommend that users of this model develop guardrails and take appropriate precautions for any production use.

## How to Get Started with the Model

### Setup

```python
# Install packages
!pip install -q -U bitsandbytes loralib einops
!pip install -q -U git+https://github.com/huggingface/transformers.git
!pip install -q -U git+https://github.com/huggingface/peft.git
!pip install -q -U git+https://github.com/huggingface/accelerate.git
```

### GPU Inference in 4-bit

This requires a GPU with at least 27 GB of memory.

```python
import torch
from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Load the adapter config to find the base model
peft_model_id = "dfurman/falcon-40b-chat-oasst1"
config = PeftConfig.from_pretrained(peft_model_id)

# Quantize the base model to 4-bit (NF4) for memory-efficient inference
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    config.base_model_name_or_path,
    return_dict=True,
    quantization_config=bnb_config,
    device_map={"": 0},
    trust_remote_code=True,
)

tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
tokenizer.pad_token = tokenizer.eos_token

# Attach the fine-tuned LoRA adapter
model = PeftModel.from_pretrained(model, peft_model_id)

# Run the model
prompt = """<human>: My name is Daniel. Write a short email to my closest friends inviting them to come to my home on Friday for a dinner party, I will make the food but tell them to BYOB.
<bot>:"""

batch = tokenizer(
    prompt,
    padding=True,
    truncation=True,
    return_tensors="pt",
)
batch = batch.to("cuda:0")

with torch.cuda.amp.autocast():
    output_tokens = model.generate(
        input_ids=batch.input_ids,
        max_new_tokens=200,
        temperature=0.7,
        top_p=0.7,
        num_return_sequences=1,
        pad_token_id=tokenizer.eos_token_id,
        eos_token_id=tokenizer.eos_token_id,
    )

# Inspect outputs
print("\n\n", tokenizer.decode(output_tokens[0], skip_special_tokens=True))
```

## Reproducibility

- See the attached [Colab Notebook](https://huggingface.co./dfurman/falcon-40b-chat-oasst1/blob/main/finetune_falcon40b_oasst1_with_bnb_peft.ipynb) for the code (and hyperparameters) used to train the model.

### CUDA Info

- CUDA Version: 12.0
- Hardware: 1 A100-SXM
- Max Memory: {0: "37GB"}
- Device Map: {"": 0}

### Package Versions Employed

- `torch`: 2.0.1+cu118
- `transformers`: 4.30.0.dev0
- `peft`: 0.4.0.dev0
- `accelerate`: 0.19.0
- `bitsandbytes`: 0.39.0
- `einops`: 0.6.1
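The exact library versions matter when reproducing the 4-bit setup. As a small optional sanity check (not part of the original training notebook), the snippet below prints the locally installed versions so they can be compared against the list above.

```python
# Optional environment check: compare locally installed versions against the list above.
from importlib.metadata import PackageNotFoundError, version

for pkg in ["torch", "transformers", "peft", "accelerate", "bitsandbytes", "einops"]:
    try:
        print(f"{pkg}: {version(pkg)}")
    except PackageNotFoundError:
        print(f"{pkg}: not installed")
```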