andreaskoepf committed c7a49ad (1 parent: d0945e8): Update README.md

README.md (changed):
---

# Open-Assistant Falcon 7B SFT MIX Model

This model is a fine-tuning of TII's [Falcon 7B](https://huggingface.co/tiiuae/falcon-7b) LLM.
It was trained on a mixture of OASST top-2 threads (exported on June 2, 2023), Dolly-15k and synthetic instruction datasets (see dataset configuration below).

## Model Details

- **Finetuned from:** [tiiuae/falcon-7b](https://huggingface.co/tiiuae/falcon-7b)
- **Model type:** Causal decoder-only transformer language model
- **Language:** English, German, Spanish, French (and limited capabilities in Italian, Portuguese, Polish, Dutch, Romanian, Czech, Swedish)
- **Weights & Biases:** [Training log](https://wandb.ai/open-assistant/public-sft/runs/tlevhltw) (Checkpoint: 2000 steps, ~2.9 epochs)
- **Demo:** [Continuations for 250 random prompts](https://open-assistant.github.io/oasst-model-eval/?f=https%3A%2F%2Fraw.githubusercontent.com%2FOpen-Assistant%2Foasst-model-eval%2Fmain%2Fsampling_reports%2Fchat-gpt%2F2023-04-11_gpt-3.5-turbo_lottery.json%0Ahttps%3A%2F%2Fraw.githubusercontent.com%2FOpen-Assistant%2Foasst-model-eval%2Fmain%2Fsampling_reports%2Foasst-sft%2F2023-06-05_OpenAssistant_falcon-7b-sft-mix-2000_sampling_noprefix2.json)
- **License:** Apache 2.0
- **Contact:** [Open-Assistant Discord](https://ykilcher.com/open-assistant-discord)
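The checkpoint can be used with the Hugging Face `transformers` library like any other causal decoder-only model. The sketch below is illustrative rather than official: the repository id `OpenAssistant/falcon-7b-sft-mix-2000` is inferred from the demo link above, the sampling parameters are arbitrary, and the `<|prompter|>`/`<|assistant|>` prompt string is an assumption here (see the Prompting section below for the authoritative format).

```python
# Minimal loading sketch. Assumptions: repository id "OpenAssistant/falcon-7b-sft-mix-2000",
# bfloat16 weights, and trust_remote_code for Falcon's custom modeling code.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "OpenAssistant/falcon-7b-sft-mix-2000"  # assumed repository id

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,  # Falcon shipped custom modeling code at release time
)

# Prompt format assumed; see the Prompting section below for the exact convention.
prompt = "<|prompter|>What is a meme, and what's the history behind this word?<|endoftext|><|assistant|>"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    top_p=0.9,
    temperature=0.8,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=False))
```
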
## Prompting