---
inference: false
license: cc-by-nc-sa-4.0
language:
- de
- en
library_name: transformers
pipeline_tag: text-generation
---

# Orca Mini v2 German 7b GGML

These files are GGML-format model files for [Orca Mini v2 German 7b](https://huggingface.co./jphme/orca_mini_v2_ger_7b). For full details about the model, please see the original repository.


GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and with libraries and UIs that support this format, such as:
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
* [KoboldCpp](https://github.com/LostRuins/koboldcpp)
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui)
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python)
* [ctransformers](https://github.com/marella/ctransformers)


## Prompt template:

```
### System:
You are an AI assistant that follows instruction extremely well. Help as much as you can.

### User:
prompt

### Response:
```
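
For instance, with [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) (one of the libraries listed above), the template can be assembled programmatically. This is only a minimal sketch: the file name matches the `q4_0` file under "Provided files" below, and the sampling parameters are illustrative.

```
from llama_cpp import Llama

# Load the GGML file (name taken from the "Provided files" table below).
llm = Llama(model_path="orca-mini-v2-ger-7b.ggmlv3.q4_0.bin", n_ctx=2048)

SYSTEM = ("You are an AI assistant that follows instruction extremely well. "
          "Help as much as you can.")

def build_prompt(user_prompt: str) -> str:
    # Assemble the prompt exactly as the template above specifies.
    return f"### System:\n{SYSTEM}\n\n### User:\n{user_prompt}\n\n### Response:\n"

output = llm(build_prompt("Write a story about llamas"),
             max_tokens=256, temperature=0.7, repeat_penalty=1.1)
print(output["choices"][0]["text"])
```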

## Compatibility

### `q4_0`

So far, I have only quantized a `q4_0` version for my own use. Please let me know if there is demand for other quantizations.
These files should be compatible with any UIs, tools, and libraries released since late May.

## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| orca-mini-v2-ger-7b.ggmlv3.q4_0.bin | q4_0 | 4 | 3.83 GB | ~6.3 GB | Original llama.cpp quant method, 4-bit. |

**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
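
With llama-cpp-python, for example, offloading is controlled by the `n_gpu_layers` parameter. A sketch, assuming a build compiled with GPU support:

```
from llama_cpp import Llama

# Offload 32 layers to the GPU, mirroring `-ngl 32` below; on a
# CPU-only build, omit n_gpu_layers to keep everything in system RAM.
llm = Llama(model_path="orca-mini-v2-ger-7b.ggmlv3.q4_0.bin",
            n_ctx=2048, n_gpu_layers=32)
```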

## How to run in `llama.cpp`

I use the following command line; adjust for your tastes and needs:

```
./main -t 10 -ngl 32 -m orca-mini-v2-ger-7b.ggmlv3.q4_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### System:\nYou are a story writing assistant who writes very long, detailed and interesting stories\n\n### User:\nWrite a story about llamas\n\n### Response:\n"
```
If you're able to use full GPU offloading, you should use `-t 1` to get the best performance.

If you're not able to fully offload to the GPU, use more threads: change `-t 10` to the number of physical CPU cores you have, or a lower number if that gives better performance.

Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.

If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`.
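
The same kind of chat loop can also be scripted with llama-cpp-python. The sketch below keeps the conversation in the documented prompt format and is only illustrative; it does not handle context overflow, and `stop` cuts generation at the next user turn:

```
from llama_cpp import Llama

llm = Llama(model_path="orca-mini-v2-ger-7b.ggmlv3.q4_0.bin", n_ctx=2048)

SYSTEM = ("You are an AI assistant that follows instruction extremely well. "
          "Help as much as you can.")
history = f"### System:\n{SYSTEM}\n\n"

while True:
    user = input("You: ")
    history += f"### User:\n{user}\n\n### Response:\n"
    out = llm(history, max_tokens=256, temperature=0.7,
              repeat_penalty=1.1, stop=["### User:"])
    reply = out["choices"][0]["text"].strip()
    print(reply)
    # Append the reply so later turns see the full conversation
    # (no truncation: very long chats will eventually exceed n_ctx).
    history += reply + "\n\n"
```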

## How to run in `text-generation-webui`

Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).

## Thanks

Special thanks to [Pankaj Mathur](https://huggingface.co./psmathur) for the great Orca Mini base model and [TheBloke](https://huggingface.co./TheBloke) for his great work quantizing billions of models (and for his template for this README).