---
base_model: google/gemma-7b
language:
- en
pipeline_tag: text-generation
license: other
model_type: gemma
library_name: transformers
inference: false
---
![image/webp](https://cdn-uploads.huggingface.co/production/uploads/65aa2d4b356bf23b4a4da247/NQAvp6NRHlNILyWWFlrA7.webp)
## Google Gemma 7B
- **Model creator:** [Google](https://huggingface.co/google)
- **Original model:** [gemma-7b](https://huggingface.co/google/gemma-7b)
- [**Terms of use**](https://www.kaggle.com/models/google/gemma/license/consent)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Google's Gemma 7B](https://huggingface.co/google/gemma-7b).

## Original model
- **Developed by:** [Google](https://huggingface.co/google)

### Description
Gemma is a family of lightweight, state-of-the-art open models from Google,
built from the same research and technology used to create the Gemini models.
They are text-to-text, decoder-only large language models, available in English,
with open weights, pre-trained variants, and instruction-tuned variants. Gemma
models are well-suited for a variety of text generation tasks, including
question answering, summarization, and reasoning. Their relatively small size
makes it possible to deploy them in environments with limited resources such as
a laptop, desktop, or your own cloud infrastructure, democratizing access to
state-of-the-art AI models and helping foster innovation for everyone.

## Quantization types
| quantization method | bits | size    | description                            | recommended |
|---------------------|------|---------|----------------------------------------|-------------|
| Q2_K                | 2    | 3.09 GB | very small, very high quality loss     | ❌          |
| Q3_K_S              | 3    | 3.68 GB | very small, high quality loss          | ❌          |
| Q3_K_L              | 3    | 4.4 GB  | small, substantial quality loss        | ❌          |
| Q4_0                | 4    | 4.81 GB | legacy; small, very high quality loss  | ❌          |
| Q4_K_S              | 4    | 4.84 GB | medium, balanced quality               | ✅          |
| Q4_K_M              | 4    | 5.13 GB | medium, balanced quality               | ✅          |
| Q5_0                | 5    | 5.88 GB | legacy; medium, balanced quality       | ❌          |
| Q5_K_S              | 5    | 5.88 GB | large, low quality loss                | ✅          |
| Q5_K_M              | 5    | 6.04 GB | large, very low quality loss           | ✅          |
| Q6_K                | 6    | 7.01 GB | very large, extremely low quality loss | ❌          |
| Q8_0                | 8    | 9.08 GB | very large, extremely low quality loss | ❌          |
| FP16                | 16   | 17.1 GB | enormous, negligible quality loss      | ❌          |

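
To fetch just one of the quantizations listed above, the `huggingface_hub` library can download a single file from the repo. The snippet below is only a sketch: the repository ID and the exact `.gguf` filename are placeholders, so check this repo's file listing for the real names.

```python
# Sketch: download a single quantization from the table above with huggingface_hub.
# NOTE: repo_id and filename are placeholders -- use the names shown in this
# repo's "Files and versions" tab.
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="<this-repo-id>",         # placeholder, e.g. "user/gemma-7b-GGUF"
    filename="gemma-7b.Q4_K_M.gguf",  # placeholder: pick a file matching a row above
)
print("Downloaded to:", model_path)
```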
## Usage
You can use this model with the latest builds of **LM Studio** and **llama.cpp**.
If you're new to the world of _large language models_, I recommend starting with **LM Studio**.
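
If you'd rather call llama.cpp from Python, here is a minimal sketch using the `llama-cpp-python` bindings. The model path is a placeholder for whichever quantization you downloaded, and the settings are just reasonable starting points, not tuned values.

```python
# Minimal sketch: run a GGUF build of Gemma 7B through llama-cpp-python.
# Install first with: pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama(
    model_path="./gemma-7b.Q4_K_M.gguf",  # placeholder: path to the file you downloaded
    n_ctx=4096,                           # context window; reduce if you run out of RAM
    n_gpu_layers=0,                       # raise this to offload layers when built with GPU support
)

# Plain text completion -- this repo holds the base model, not the instruction-tuned one.
result = llm("The three primary colors are", max_tokens=64, temperature=0.7)
print(result["choices"][0]["text"])
```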
<!-- description end -->