chrisociepa committed
Commit 057e4af
Parent: 129e3b2

Update README.md

Files changed (1)
  1. README.md +7 -0
README.md CHANGED
@@ -24,6 +24,13 @@ The Bielik-7B-Instruct-v0.1 is an instruct fine-tuned version of the [Bielik-7B-
 
 [We have prepared quantized versions of the model as well as MLX format.](#quant-and-mlx-versions)
 
+ 🎥 Demo: https://huggingface.co/spaces/speakleash/Bielik-7B-Instruct-v0.1
+
+ 🗣️ Chat Arena<span style="color:red;">*</span>: https://arena.speakleash.org.pl/
+
+ <span style="color:red;">*</span>Chat Arena is a platform for testing and comparing different AI language models, allowing users to evaluate their performance and quality.
+
+
 ## Model
 
 The [SpeakLeash](https://speakleash.org/) team is working on their own set of instructions in Polish, which is continuously being expanded and refined by annotators. A portion of these instructions, which had been manually verified and corrected, has been utilized for training purposes. Moreover, due to the limited availability of high-quality instructions in Polish, publicly accessible collections of instructions in English were used - [OpenHermes-2.5](https://huggingface.co/datasets/teknium/OpenHermes-2.5) and [orca-math-word-problems-200k](https://huggingface.co/datasets/microsoft/orca-math-word-problems-200k), which accounted for half of the instructions used in training. The instructions varied in quality, leading to a deterioration in the model's performance. To counteract this while still allowing ourselves to utilize the aforementioned datasets, several improvements were introduced: