pankajmathur committed
Commit 5f990de · 1 Parent(s): 5f3d251

Update README.md

README.md CHANGED
@@ -62,9 +62,7 @@ Here are the results on metrics used by [HuggingFaceH4 Open LLM Leaderboard](htt

<br>

-
-Here is the prompt format
-
+### Prompt Format

```
### System:
@@ -77,6 +75,23 @@ Tell me about Orcas.

```

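To make the format concrete, here is a small Python sketch of how a prompt in this style could be assembled. The lines between `### System:` and the closing fence are not visible in this diff, so the `### User:` / `### Assistant:` markers and the system message below are assumptions based on the visible header and the "Tell me about Orcas." context line, not the author's verbatim template.

```python
# Sketch only: the exact template is partly hidden in this diff, so the
# "### User:" / "### Assistant:" markers and the system message are assumptions.
def build_prompt(system_message: str, user_message: str) -> str:
    return (
        f"### System:\n{system_message}\n\n"
        f"### User:\n{user_message}\n\n"
        f"### Assistant:\n"
    )

prompt = build_prompt(
    "You are a helpful assistant.",  # placeholder system message
    "Tell me about Orcas.",          # user query from the README's example
)
print(prompt)
```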
#### OobaBooga Instructions:

This model requires up to 45GB of GPU VRAM in 4-bit, so it can be loaded directly on a single RTX 6000/L40/A40/A100/H100 GPU or on two RTX 4090/L4/A10/RTX 3090/RTX A5000 GPUs.
If you have access to a machine with 45GB of GPU VRAM and have installed the [OobaBooga Web UI](https://github.com/oobabooga/text-generation-webui) on it, you can download this model by pointing the Web UI's "Model" tab at this HF repo and selecting the **load-in-4bit** option.

![model_load_screenshot](https://huggingface.co/pankajmathur/model_101/resolve/main/oobabooga_model_load_screenshot.png)

After that, go to the Default tab of the OobaBooga Web UI, **copy-paste the above prompt format into the Input box**, and enjoy!

![default_input_screenshot](https://huggingface.co/pankajmathur/model_101/resolve/main/default_input_screenshot.png)
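
For reference, the same load-in-4bit idea can also be expressed outside the Web UI. The sketch below is a rough equivalent using `transformers` with bitsandbytes 4-bit quantization; the repo id is inferred from the screenshot URLs above, and the dtype/device settings are assumptions rather than the author's tested configuration.

```python
# Rough equivalent of the Web UI's load-in-4bit option, using transformers +
# bitsandbytes. Settings here are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "pankajmathur/model_101"  # repo id inferred from the screenshot URLs above

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # mirrors the Web UI's load-in-4bit checkbox
    bnb_4bit_compute_dtype=torch.bfloat16,  # assumption; use float16 if bf16 is unavailable
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",  # spread layers across the available GPU(s)
)
```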

<br>

#### Code Instructions:

Below is a code example showing how to use this model:

```python