---
license: other
license_name: bloomchat-176b-license-v1.0
license_link: LICENSE
---

# BLOOMChat V2.0

<!-- Provide a quick summary of what the model is/does. -->

BLOOMChat-v2 is a 176-billion-parameter multilingual chat model. It is finetuned from [BLOOM (176B)](https://huggingface.co/bigscience/bloom) on long-sequence multilingual data and assistant-style conversation datasets, and it supports conversation, question answering, and generative answers in multiple languages.

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

- **Developed by:** [SambaNova Systems](https://sambanova.ai/)
- **Model type:** Language Model
- **Language(s):** Multiple; see [training data from BLOOM](https://huggingface.co/bigscience/bloom#training-data)
- **License:** BLOOMChat-176B LICENSE v1.0
- **Finetuned from model:** [BigScience Group BLOOM](https://huggingface.co/bigscience/bloom)

### Basic Information

<!-- Provide the basic links for the model. -->
- **Blog Post**:
- **Discord**: [Link](https://discord.com/invite/8z2Pe7cpRv)
- **Github**: [Link](https://github.com/sambanova/bloomchat)

### Licensing

To increase accessibility and to support the open-source community, SambaNova is releasing BLOOMChat under a modified version of the Apache 2.0 license, which includes use-based restrictions from BLOOM's RAIL license. While the use-based restrictions are necessarily passed through, there are no blanket restrictions on reuse, distribution, commercialization, or adaptation. [Please review SambaNova's BLOOMChat-176B License](LICENSE).

## Uses

<details>
<summary>Click to expand</summary>

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

This model is intended for commercial and research use.

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

BLOOMChat should NOT be used for:

- Mission-critical applications
- Applications that involve the safety of others
- Making highly important decisions
- Important automated pipelines

This model is still in early development and can be prone to mistakes and hallucinations; there is still room for improvement. It is intended to provide the community with a multilingual chat LLM baseline.

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users should be made aware of the risks, biases, limitations, and restrictions of the model, which are listed at the bottom of the page.

</details>

---

## How to Get Started with the Model

<details>
<summary>Click to expand</summary>

### Loading the model with Hugging Face

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("sambanovasystems/BLOOMChat-176B-v2")
model = AutoModelForCausalLM.from_pretrained("sambanovasystems/BLOOMChat-176B-v2", device_map="auto", torch_dtype="auto")
```
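
Once the model is loaded, prompts should use the `<human>:`/`<bot>:` template the model was trained with (see the "Prompting Style Used For Training" section below). Here is a minimal generation sketch using the suggested inference parameters from this card; the prompt text is illustrative, and everything else is standard `transformers` usage:

```python
# Minimal sketch: wrap the user input in the training-time tags, then sample
# with the suggested inference parameters (temperature 0.8, top-p 0.9, etc.).
prompt = "<human>: What does it mean for the Fed to raise or lower interest rates?\n<bot>:"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    do_sample=True,
    temperature=0.8,
    repetition_penalty=1.2,
    top_p=0.9,
    max_new_tokens=512,
)
# Slice off the prompt tokens so only the bot's reply is printed.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```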

### Quick Start Inference on SambaNova's in-house Reconfigurable Dataflow Unit (RDU)

The inference code to run the model can be found in our [GitHub repo](https://github.com/sambanova/bloomchat/blob/main/rdu_quick_start/inference.py). This code requires the [SambaFlow](https://docs.sambanova.ai/developer/latest/sambaflow-intro.html) SDK to execute. If you are interested in running models on RDUs, [please feel free to get in touch](https://sambanova.ai/getstarted).

### Quick Start Inference on GPU

First, create a Python virtual environment for these packages:

```
python3 -m venv bloomchat_venv
source bloomchat_venv/bin/activate
pip install --upgrade pip
```

<!-- Please follow this section [Inference solutions for BLOOM 176B](https://github.com/huggingface/transformers-bloom-inference#bloom-inference-via-command-line) in the Huggingface Tutorial for environment set up and stop before the [BLOOM inference via command-line](https://github.com/huggingface/transformers-bloom-inference#bloom-inference-via-command-line) section. -->

```
pip install flask flask_api gunicorn pydantic accelerate "huggingface_hub>=0.9.0" "deepspeed>=0.7.3" deepspeed-mii==0.0.2
```

Then install the pinned `transformers` version:

```
pip install transformers==4.27.0
```

You will see messages like this:

```
ERROR: deepspeed-mii 0.0.2 has requirement transformers==4.21.2, but you'll have transformers 4.27.0 which is incompatible.
Installing collected packages: transformers
Found existing installation: transformers 4.21.2
Uninstalling transformers-4.21.2:
Successfully uninstalled transformers-4.21.2
Successfully installed transformers-4.27.0
```

Now clone the [huggingface/transformers-bloom-inference](https://github.com/huggingface/transformers-bloom-inference) repo:

```
git clone https://github.com/huggingface/transformers-bloom-inference.git
cd transformers-bloom-inference/
```

You then need to modify two files in this repo:

- `inference_server/models/hf_accelerate.py`
  - In our testing of this repo we used 4 x 80GB A100 GPUs and ran into memory issues, so we cap the per-device memory
- `inference_server/cli.py`
  - The model was trained with specific `<human>:` and `<bot>:` tags, so inputs must be wrapped in them
  - Trailing spaces may lead to subpar performance

Modifications for `inference_server/models/hf_accelerate.py`:

```diff
diff --git a/inference_server/models/hf_accelerate.py b/inference_server/models/hf_accelerate.py
index 9be3c3f..a8ecb1d 100644
--- a/inference_server/models/hf_accelerate.py
+++ b/inference_server/models/hf_accelerate.py
@@ -1,4 +1,5 @@
 from argparse import Namespace
+from accelerate.utils.modeling import get_max_memory
 
 import torch
 
@@ -12,6 +13,12 @@ class HFAccelerateModel(Model):
 
         kwargs = {"pretrained_model_name_or_path": args.model_name, "device_map": "auto"}
 
+        original_max_memory_dict = get_max_memory()
+
+        reduce_max_memory_dict = {device_key: int(original_max_memory_dict[device_key] * 0.85) for device_key in original_max_memory_dict}
+
+        kwargs["max_memory"] = reduce_max_memory_dict
+
         if get_world_size() > 1:
             kwargs["device_map"] = "balanced_low_0"
 
```
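
If you load the model with plain `transformers` rather than through the inference server, the same memory cap can be passed directly to `from_pretrained`. This is a sketch of the same idea, assuming `accelerate` is installed; it is not code from the repo:

```python
from accelerate.utils.modeling import get_max_memory
from transformers import AutoModelForCausalLM

# Cap each device's usable memory at 85% of what accelerate detects,
# mirroring the hf_accelerate.py patch above.
max_memory = {device: int(limit * 0.85) for device, limit in get_max_memory().items()}

model = AutoModelForCausalLM.from_pretrained(
    "sambanovasystems/BLOOMChat-176B-v2",
    device_map="auto",
    torch_dtype="auto",
    max_memory=max_memory,
)
```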

Modifications for `inference_server/cli.py`:

```diff
diff --git a/inference_server/cli.py b/inference_server/cli.py
index fc903d5..5450236 100644
--- a/inference_server/cli.py
+++ b/inference_server/cli.py
@@ -22,6 +22,9 @@ def main() -> None:
     while True:
         input_text = input("Input text: ")
 
+        input_text = input_text.strip()
+        modified_input_text = f"<human>: {input_text}\n<bot>:"
+
         if input("change generate_kwargs? [y/n] ") == "y":
             while True:
                 try:
@@ -33,7 +36,7 @@ def main() -> None:
                 print("message =", e_message)
                 continue
 
-        response = model.generate(text=[input_text], generate_kwargs=generate_kwargs)
+        response = model.generate(text=[modified_input_text], generate_kwargs=generate_kwargs)
 
         print_rank_0("Output text:", response.text[0])
         print_rank_0("Generated tokens:", response.num_generated_tokens[0])
```

And now you are good to go!

Running command for bf16, NO sampling:
```
python -m inference_server.cli --model_name sambanovasystems/BLOOMChat-176B-v2 --model_class AutoModelForCausalLM --dtype bf16 --deployment_framework hf_accelerate --generate_kwargs '{"do_sample": false, "max_new_tokens": 512}'
```

Running command for bf16, YES sampling:
```
python -m inference_server.cli --model_name sambanovasystems/BLOOMChat-176B-v2 --model_class AutoModelForCausalLM --dtype bf16 --deployment_framework hf_accelerate --generate_kwargs '{"do_sample": true, "temperature": 0.8, "repetition_penalty": 1.2, "top_p": 0.9, "max_new_tokens": 512}'
```

---

Running command for int8 (suboptimal performance, but fast inference time), NO sampling:
```
python -m inference_server.cli --model_name sambanovasystems/BLOOMChat-176B-v2 --model_class AutoModelForCausalLM --dtype int8 --deployment_framework hf_accelerate --generate_kwargs '{"do_sample": false, "max_new_tokens": 512}'
```

Running command for int8 (suboptimal performance, but fast inference time), YES sampling:
```
python -m inference_server.cli --model_name sambanovasystems/BLOOMChat-176B-v2 --model_class AutoModelForCausalLM --dtype int8 --deployment_framework hf_accelerate --generate_kwargs '{"do_sample": true, "temperature": 0.8, "repetition_penalty": 1.2, "top_p": 0.9, "max_new_tokens": 512}'
```

**DISCLAIMER:** When using int8, the results will be subpar compared to bf16, as the model is being [quantized](https://huggingface.co/blog/hf-bitsandbytes-integration#introduction-to-model-quantization).
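
For reference, loading the model in int8 outside the inference server would look roughly like the following. This is a sketch that assumes `bitsandbytes` is installed alongside `transformers`; it is not code from the repo:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("sambanovasystems/BLOOMChat-176B-v2")
# load_in_8bit quantizes the weights with bitsandbytes; expect some quality
# loss versus bf16, per the disclaimer above.
model = AutoModelForCausalLM.from_pretrained(
    "sambanovasystems/BLOOMChat-176B-v2",
    device_map="auto",
    load_in_8bit=True,
)
```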

### Suggested Inference Parameters
- Temperature: 0.8
- Repetition penalty: 1.2
- Top-p: 0.9
- Max generated tokens: 512

### Suggested Prompts To Try in GPU Tutorial
```
Input text: Write a script in which Bob accidentally breaks his dad's guitar
```

```
Input text: Create an itemized list of tasks to complete to start a clothing brand
```

```
Input text: 十七岁的风是什么颜色的?
```

</details>

---

## Some example completions for English

<details>
<summary>Click to expand</summary>

```
<human>: What does it mean for the Fed to raise or lower interest rates?
<bot>: The Federal Reserve (also known as the Fed) is the central banking system of the United States. It was created in 1913, and is an independent agency of the federal government. The Fed has a dual mandate of maximizing employment and stabilizing prices. The Fed sets interest rates, and can influence the money supply and the credit markets through its actions.

Raising interest rates means that the Fed is trying to reduce the money supply in the economy, and is trying to discourage borrowing and encourage saving. Lowering interest rates means that the Fed is trying to stimulate economic growth by encouraging borrowing and reducing the cost of credit.

The Fed raises or lowers interest rates in order to influence the economy and achieve its goals for employment and price stability. Interest rates are one of the primary tools that the Fed uses to achieve its goals.
```

```
<human>: Write an engaging and well-written property listing description for selling a house. Address of the house and some of the details are given to you. Fill in the information gap with hallucinations if needed.

Property Address: 412 Monterey Ave, Capitola, CA 95010
Parking: 3 cars + a finished single car garage
Details:
- open floorplan
- window with views to park/mountains
- Kitchen with shaker cabinetry and white Calcatta quartz counters
<bot>: This home at 412 Monterey Ave, Capitola, CA 95010 is the perfect place to call home. With its open floor plan, it provides the perfect space for entertaining guests. The kitchen is equipped with shaker cabinetry and white Calcatta quartz counters, providing the perfect space to create culinary masterpieces. The window with views to the park and mountains provide a beautiful view of the outdoors, making this home feel like a piece of paradise. The three-car garage and finished single car garage provide ample parking space, making this home a must-see for anyone looking for a new home.
```

```
<human>: What is artificial intelligence? Explain it so an 8 year old will understand.
<bot>: Artificial Intelligence (AI) is a computer program or machine that can do things that would normally require human intelligence, such as recognizing patterns, making decisions, and learning from experience. AI can be found in many places, such as self-driving cars, smartphones, and even robots. AI is used to make things easier for people, like when your smartphone can translate text into another language or when your voice-controlled speaker can play music for you.
AI works by using algorithms and big data to learn and improve over time. For example, self-driving cars use AI to analyze the environment around them and make decisions about how to drive. The car gathers data from its sensors, processes it using algorithms, and adjusts its behavior accordingly. This process is called machine learning.
AI is still a developing technology, and there are still many things that AI programs cannot do yet. However, as AI continues to evolve, it is likely that it will become more and more capable of performing tasks that were once thought to require human intelligence.
```

</details>

---

## Some example completions for Multilingual

<details>
<summary>Click to expand</summary>

#### Arabic

```
<human>: هل العمل نصف دوام فكرة جيدة لمراهقين بين عمر ١٥ و ١٩ عام
<bot>: نعم ، يمكن أن يكون العمل نصف دوام فكرة جيدة للمراهقين بين عمري 15 و 19 عامًا ، حيث يمكنهم من خلالها اكتساب خبرة العمل الأولى وتنمية مهارات العمل المهمة. بالإضافة إلى ذلك، يمكن أن يكون العمل نصف دوام فرصة جيدة للمراهقين لتعلم كيفية إدارة الوقت بشكل جيد.
ومع ذلك، من المهم أن تراعي الوظائف التي يتم شغلها خلال نصف الدوام احتياجات المراهقين وتجنب أي ضرر ممكن. بالإضافة إلى ذلك، من المهم أن يكون لدى المراهقين وقت فراغ كافٍ للاسترخاء والتعرف على أقرانهم وتنمية مهاراتهم الأخرى.
بالإضافة إلى ذلك، من المهم أن يدعم الآباء عمل المراهقين، ويشجعونهم على الاستمرار في الدراسة بنجاح، ويقدمون الدعم العاطفي لهم.
```

**DISCLAIMER**: Text ordering may be off for Arabic because the code block cannot render right-to-left text.

#### Chinese

```
<human>: 请向我推荐一部周末看的电影并说明原因。
<bot>: 我推荐的电影是疯狂动物城(Zootopia)。这部电影讲述了一个发生在动物城市里的故事。主人公是一个名叫Judy的兔子警官,她也是一个初来乍到的新人。因为初来乍到,所以她受到了其他动物同事的排挤。不过她并没有气馁,而是用自己的行动证明了自己。在解决一个一个案子,打击罪犯的同时,她也在这个过程中收获了友情和爱情。这部电影不仅有深刻的教育意义,而且还有好看的动作和搞笑的场面。所以我觉得这是一部很适合周末看的电影。
```

#### French

```
<human>: Il y a t’il un petit-déjeuner de 700-1000 calories sans oeuf mais qui inclut des protéines.
<bot>:
```

#### Hindi

```
<human>: दिल्ली से जयपुर कैसे जाये? बस से या कार से? दोनो के बारे मे विस्तार मे बताये.
<bot>: आप दिल्ली से जयपुर कार से या बस से जा सकते है, दोनों ही तरीके आसान है।
अगर आप कार से यात्रा करते है तो यह यात्रा 4 घंटे में पूरी हो जाएगी। आपको दिल्ली से जयपुर के लिए कुछ पॉपुलर मार्ग इस तरह से है -
1. नेशनल हाई-वे 2 (NH 2) - यह दिल्ली से जयपुर जाने का सबसे पॉपुलर मार्ग है और यह पूरी तरह से एक फोर-लेन मार्ग है। इस मार्ग पर आप बिना किसी रोक-टोक के दिल्ली से जयपुर तक पहुंच सकते है।
2. दिल्ली-जयपुर मार्ग (Delhi-Jaipur Highway) - यह दिल्ली से जयपुर जाने का एक और आसान मार्ग है। यह जयपुर से दिल्ली की और जाने वाले NH 2 को जयपुर में दिल्ली-जयपुर राष्ट्रीय राजमार्ग (Delhi-Jaipur National Highway) कहते है।
3. दिल्ली-जयपुर मार्ग (Delhi-Jaipur Highway) - यह NH 2 दिल्ली से जयपुर तक जाने का एक और मार्ग है। यह दिल्ली से जयपुर की और जाने वाले NH 2 से जुड़ा हुआ है और दिल्ली से जयपुर तक सीधा जाता है।
4. दिल्ली-जयपुर मार्ग (Delhi-Jaipur Highway) - यह NH 2 दिल्ली से जयपुर तक जाने का एक और मार्ग है। यह दिल्ली से जयपुर की और जाने वाले NH 2 से जुड़ा हुआ है और दिल्ली से जयपुर तक सीधा जाता है।
जयपुर जाने के लिए आप इनमें से किसी भी मार्ग का चुनाव कर सकते है और आसानी से जयपुर पहुंच सकते है।
जयपुर जाने के लिए आप सरकारी बसों का भी चुनाव कर सकते है। दिल्ली से जयपुर के लिए कई बसों की सुविधा उपलब्ध है। दिल्ली से जयपुर के लिए आप दिल्ली के सराय रोहिल्ला बस अड्डे से आसानी से बस ले सकते है। सराय रोहिल्ला से जयपुर के लिए कई बसें उपलब्ध है जो आपको जयपुर पहुंचा देगी।
जयपुर जाने के लिए आप प्राइवेट कैब का भी उपयोग कर सकते है। यह सुविधा आप ऑनलाइन भी ले सकते है। कैब से दिल्ली से जयपुर जाने में आपको लगभग 4 घंटे लगेंगे।
```

#### Spanish

```
<human>: ¿Me puedes dar algunas recomendaciones de juegos? Estoy buscando juegos de rol con un sistema de magia único, un sistema de construcción único y una buena historia. Por favor limítate a juegos que se puedan jugar en PC directamente, o por emulación.
<bot>: Claro, aquí están algunas opciones que podrían interesarte:

1.- "Spellstorm": Este juego de rol tiene un sistema de magia único que se basa en la manipulación de los elementos y la creación de tormentas de magia. El sistema de construcción es basado en habilidades y la historia es interesante, con una ambientación de fantasía oscura.

2.- "Infinity Engine": Este es un sistema de juego de rol creado por "Rol Team B", que cuenta con varios títulos de calidad. El sistema de magia es único y se basa en la manipulación de portales interdimensionales. El sistema de construcción es basado en habilidades y la historia es interesante, con una ambientación de fantasía.

3.- "Arcana Evolved": Este juego de rol tiene un sistema de magia único que se basa en la manipulación de las leyes fundamentales del universo. El sistema de construcción es basado en habilidades y atributos, y la historia es interesante, con una ambientación de ciencia ficción.

4.- "Fate": Este es un sistema de juego de rol que se enfoca en la narrativa y la toma de decisiones. El sistema de magia es inexistente, pero la historia es muy interesante, con una ambientación de fantasía y ciencia ficción.

Espero que estas opciones te sean de ayuda.
```

</details>

---

## Training Details

<details>
<summary>Click to expand</summary>

### Training Data

<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

#### Long Sequence Pretraining Data
- [mc4 3.1.0](https://huggingface.co/datasets/allenai/c4/tree/mC4_3.1.0)
- [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb)
- [StarCoder](https://huggingface.co/datasets/bigcode/starcoderdata)
- [The PILE](https://huggingface.co/datasets/EleutherAI/the_pile_deduplicated)
- [Pile of Law](https://huggingface.co/datasets/pile-of-law/pile-of-law)
- 10-Ks and 10-Qs from [EDGAR](https://www.sec.gov/edgar/search-and-access)
- arXiv papers from Economics and Quantitative Finance
- YouTube transcripts from finance-related public YouTube videos

#### Finetuning Data
- [OIG dataset from OpenChatKit](https://huggingface.co/datasets/laion/OIG)
- [Dolly 2.0](https://huggingface.co/datasets/databricks/databricks-dolly-15k)
- [Oasst1](https://huggingface.co/datasets/OpenAssistant/oasst1)

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

We trained BLOOMChat-v2 on [SambaNova DataScale systems](https://sambanova.ai/products/datascale/) with SambaNova's in-house Reconfigurable Dataflow Unit (RDU). We started from [BLOOM (176B)](https://huggingface.co/bigscience/bloom), an open-source multilingual LLM pretrained by the [BigScience group](https://huggingface.co/bigscience).
We then continued pretraining the model on an in-house mix of multilingual text (see the full breakdown above).

We instruction-tuned the resulting model for one epoch on the OIG dataset from OpenChatKit, with each data source subsampled to 100k examples, followed by sixteen epochs over the combined Dolly 2.0 and Oasst1 datasets.
All of the code used to prepare the finetuning datasets and the scripts to run finetuning and inference are open-sourced and freely available at [sambanova/bloomchat](https://github.com/sambanova/bloomchat/tree/main).

### Prompting Style Used For Training
```
<human>: {input1 that the user wants from the bot}
<bot>: {response1}</s>
<human>: {input2 that the user wants from the bot}
<bot>: {response2}</s>
```
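
For multi-turn use, each prior bot response is closed with the end-of-sequence token (`</s>`), and the final `<bot>:` is left open (no trailing space) so the model completes it. A small helper that renders a conversation into this template might look like this; it is an illustrative sketch, not a utility shipped with the model:

```python
def format_conversation(turns: list[tuple[str, str]], next_input: str, eos_token: str = "</s>") -> str:
    """Render prior (human, bot) turns plus a new human input into the training template."""
    prompt = ""
    for human, bot in turns:
        # Each completed bot turn is terminated with the end-of-sequence token.
        prompt += f"<human>: {human}\n<bot>: {bot}{eos_token}\n"
    # Leave the final <bot>: open, with no trailing space, for the model to complete.
    prompt += f"<human>: {next_input}\n<bot>:"
    return prompt

print(format_conversation([("Hi, who are you?", "I am BLOOMChat.")], "What can you do?"))
```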

### Hyperparameters

**Long-sequence Pretraining**
- Hardware: SambaNova Reconfigurable Dataflow Unit (RDU)
- Optimizer: AdamW
- Grad accumulation: 1
- Steps: 1620
- Global Batch size: 2048
- Batch tokens: 2048 * 8192 = 16,777,216 tokens
- Learning Rate: 6e-6
- Learning Rate Scheduler: Flat
- Warmup Steps: 0
- Weight decay: 0.1

**Instruction-tuned Training on OIG**

- Hardware: SambaNova Reconfigurable Dataflow Unit (RDU)
- Optimizer: AdamW
- Grad accumulation: 1
- Epochs: 1
- Global Batch size: 128
- Batch tokens: 128 * 8192 = 1,048,576 tokens
- Learning Rate: 6e-6
- Learning Rate Scheduler: Cosine Schedule with Warmup
- Warmup Steps: 0
- End Learning Ratio: 0.1
- Weight decay: 0.1

**Instruction-tuned Training on Dolly 2.0 and Oasst1**

- Hardware: SambaNova Reconfigurable Dataflow Unit (RDU)
- Optimizer: AdamW
- Grad accumulation: 1
- Epochs: 16
- Global Batch size: 128
- Batch tokens: 128 * 8192 = 1,048,576 tokens
- Learning Rate: 6e-6
- Learning Rate Scheduler: Cosine Schedule with Warmup
- Warmup Steps: 0
- End Learning Ratio: 0.1
- Weight decay: 0.1

</details>

---

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

Like all LLMs, BLOOMChat has certain limitations:
- Hallucination: BLOOMChat may sometimes generate responses that contain plausible-sounding but factually incorrect or irrelevant information.
- Code Switching: The model might unintentionally switch between languages or dialects within a single response, affecting the coherence and understandability of the output.
- Repetition: BLOOMChat may produce repetitive phrases or sentences, leading to less engaging and informative responses.
- Coding and Math: The model's performance in generating accurate code or solving complex mathematical problems may be limited.
- Toxicity: BLOOMChat may inadvertently generate responses containing inappropriate or harmful content.

## Acknowledgments

We would like to extend our gratitude to [Together](https://www.together.xyz/) for their contributions to BLOOMChat-v1, without which BLOOMChat-v2 would not have been possible.

We are grateful to the various researchers and open-source projects that have contributed to the development of BLOOMChat-v2. We thank [BigScience](https://bigscience.huggingface.co/) for providing the [BLOOM](https://huggingface.co/bigscience/bloom) model, which served as the base for BLOOMChat-v2.
For our long-sequence pretraining data: we thank [Common Crawl](https://commoncrawl.org/), [Google Research](https://research.google/), and the [Allen Institute for AI](https://allenai.org/) for their contributions in making mc4 3.1.0 (available on Hugging Face [here](https://huggingface.co/datasets/allenai/c4/tree/mC4_3.1.0)) possible. We thank the [Technology Innovation Institute](https://www.tii.ae/) for [Falcon RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb).
We thank [BigCode](https://huggingface.co/bigcode) for [StarCoderData](https://huggingface.co/datasets/bigcode/starcoderdata), [EleutherAI](https://www.eleuther.ai/) for [The PILE](https://huggingface.co/datasets/EleutherAI/the_pile_deduplicated), the authors of [Pile of Law](https://huggingface.co/datasets/pile-of-law/pile-of-law) for their dataset, and the authors of [sec-edgar](https://github.com/sec-edgar/sec-edgar) for their convenient Python library for downloading SEC filings.

For our finetuning data, we thank [LAION](https://laion.ai/) for their [OIG dataset](https://huggingface.co/datasets/laion/OIG) and the OpenAssistant Conversations Dataset ([OASST1](https://huggingface.co/datasets/OpenAssistant/oasst1)), and we thank [Databricks](https://www.databricks.com/) for providing [Dolly 2.0](https://huggingface.co/datasets/databricks/databricks-dolly-15k).

We appreciate [lm-eval-harness](https://github.com/EleutherAI/lm-evaluation-harness) and [BigScience](https://bigscience.huggingface.co/) for their essential benchmarking contributions, which were very helpful in evaluating BLOOMChat's performance. We also appreciate the inspiration from the wave of recent open-source chat models, including [OpenAssistant-30B](https://huggingface.co/OpenAssistant/oasst-sft-7-llama-30b-xor), [LLaMA-Adapter-V2-65B](https://github.com/ZrrSkywalker/LLaMA-Adapter/tree/main/llama_adapter_v2_chat65b), [Vicuna-13b](https://huggingface.co/lmsys/vicuna-13b-delta-v0), [Koala-13b](https://huggingface.co/TheBloke/koala-13B-HF), [OASST-Pythia-12b](https://huggingface.co/OpenAssistant/oasst-sft-1-pythia-12b), [Alpaca-13b](https://huggingface.co/anon8231489123/gpt4-x-alpaca-13b-native-4bit-128g), [ChatGLM-6b](https://github.com/THUDM/ChatGLM-6B), [FastChat-T5-3b](https://huggingface.co/lmsys/fastchat-t5-3b-v1.0), [Dolly-v2-12b](https://huggingface.co/databricks/dolly-v2-12b), [LLaMA-13b](https://ai.facebook.com/blog/large-language-model-llama-meta-ai/), [StableLM-Tuned-Alpha-7b](https://huggingface.co/stabilityai/stablelm-tuned-alpha-7b), [RedPajama-INCITE-Chat-7B-v0.1](https://huggingface.co/togethercomputer/RedPajama-INCITE-Chat-7B-v0.1), [RedPajama-INCITE-Chat-3B-v1](https://huggingface.co/togethercomputer/RedPajama-INCITE-Base-7B-v0.1), [MPT-7B-Chat](https://huggingface.co/mosaicml/mpt-7b-chat), and others. We look forward to witnessing the continued growth and success of open-source chat-based models.

We highly appreciate the hard work and dedication of these researchers and organizations towards the advancement of the open-source community. Their contributions were invaluable in the development of BLOOMChat, and we hope that our model can contribute to further advancements in the field.

## Cite BLOOMChat
```
@software{bloomchat-v2,
  title = {{BLOOMChat-v2: an Open Multilingual Chat LLM for Long Sequences}},
  author = {SambaNova Systems},
  url = {https://huggingface.co/sambanovasystems/BLOOMChat-176B-v2},
  month = {2},
  year = {2024},
  version = {2.0},
}
```