---
base_model: Deci/DeciLM-7B
inference: false
language:
- en
license: apache-2.0
model-index:
- name: DeciLM-7B
  results: []
model_creator: Deci
model_name: DeciLM-7B
model_type: deci
prompt_template: |
  <|im_start|>system
  {system_message}<|im_end|>
  <|im_start|>user
  {prompt}<|im_end|>
  <|im_start|>assistant
quantized_by: Inferless
tags:
- finetune
- vllm
- GPTQ
- Deci
pipeline_tag: text-generation
---
<!-- markdownlint-disable MD041 -->

<!-- header start -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://pbs.twimg.com/profile_banners/1633782755669708804/1678359514/1500x500" alt="Inferless" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
    <div style="display: flex; flex-direction: column; align-items: flex-start;">
        <p style="margin-top: 0.5em; margin-bottom: 0em;">Serverless GPUs to scale your machine learning inference without the hassle of managing servers. Deploy complicated and custom models with ease.</p>
    </div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;"><a href="https://0ooatrmbp25.typeform.com/to/nzuhQtba"><b>Join Private Beta</b></a></p></div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">Follow <a href="https://tutorials.inferless.com/deploy-deci-7b-using-inferless">this tutorial</a> to quickly deploy <b>DeciLM-7B</b> using Inferless.</p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->

# DeciLM-7B - GPTQ
- Model creator: [Deci](https://huggingface.co/Deci)
- Original model: [DeciLM-7B](https://huggingface.co/Deci/DeciLM-7B)

<!-- description start -->
## Description

This repo contains GPTQ model files for [Deci's DeciLM-7B](https://huggingface.co/Deci/DeciLM-7B).

### About GPTQ

GPTQ is a post-training quantization method that shrinks model size and accelerates inference by quantizing the weights against a calibration dataset, minimizing the mean squared error introduced by quantization in a single post-training pass. The result is both lower memory usage and faster inference.

It is supported by:

- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoGPTQ
- [vLLM](https://github.com/vllm-project/vllm) - version 0.2.6 or later, for GPTQ support
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers (see the sketch below)
- [AutoGPTQ](https://github.com/AutoGPTQ/AutoGPTQ) - for use from Python code
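
As a minimal sketch, the Transformers route looks like this (assuming `auto-gptq` and `optimum` are installed; `trust_remote_code=True` is required because DeciLM ships custom modeling code):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Inferless/deciLM-7B-GPTQ"

# Transformers picks up the GPTQ quantization_config stored in the checkpoint
# and loads the quantized weights through auto-gptq/optimum.
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",       # place the quantized weights on the available GPU(s)
    trust_remote_code=True,  # DeciLM uses custom modeling code
)

inputs = tokenizer("What is GPTQ?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```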

<!-- description end -->

## Provided files, and GPTQ parameters

Models are released as sharded safetensors files.

| Branch | Bits | GS (Group Size) | GPTQ Dataset | Seq Len | Size |
| ------ | ---- | --------------- | ------------ | ------- | ---- |
| [main](https://huggingface.co/Inferless/deciLM-7B-GPTQ/tree/main) | 4 | 128 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 5.96 GB |
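
To pull the files for a given branch programmatically, here is a sketch using `huggingface_hub` (the `local_dir` name is an arbitrary choice):

```python
from huggingface_hub import snapshot_download

# Download the 4-bit GPTQ shards listed in the table above.
snapshot_download(
    repo_id="Inferless/deciLM-7B-GPTQ",
    revision="main",             # branch from the table
    local_dir="deciLM-7B-GPTQ",  # hypothetical local target directory
)
```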
78
+
79
+ <!-- README_AWQ.md-provided-files end -->
80
+
81
+ <!-- README_AWQ.md-text-generation-webui start -->
82
+
83
+ <!-- How to use start -->
84
+ ## How to use
85
+ You will need the following software packages and python libraries:
86
+ ```json
87
+ build:
88
+ cuda_version: "12.1.1"
89
+ system_packages:
90
+ - "libssl-dev"
91
+ python_packages:
92
+ - "torch==2.1.2"
93
+ - "vllm==0.2.6"
94
+ - "transformers==4.36.2"
95
+ - "accelerate==0.25.0"
96
+ ```

Here is the code for <b>app.py</b>:
```python
from vllm import LLM, SamplingParams

class InferlessPythonModel:
    def initialize(self):
        # Runs once at start-up: load the GPTQ-quantized model into vLLM.
        self.sampling_params = SamplingParams(temperature=0.7, top_p=0.95, max_tokens=256)
        self.llm = LLM(model="Inferless/deciLM-7B-GPTQ", quantization="gptq", dtype="float16")

    def infer(self, inputs):
        prompts = inputs["prompt"]
        result = self.llm.generate(prompts, self.sampling_params)
        # Collect the generated text and token IDs for each prompt.
        result_output = [[output.outputs[0].text, output.outputs[0].token_ids] for output in result]

        return {'generated_result': result_output[0]}

    def finalize(self):
        pass
```
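
For a quick local sanity check, the handler can be exercised directly (requires a CUDA GPU); the prompt below follows the ChatML template from the `prompt_template` field above, with an arbitrary example system message:

```python
model = InferlessPythonModel()
model.initialize()

prompt = (
    "<|im_start|>system\n"
    "You are a helpful assistant.<|im_end|>\n"  # example system message
    "<|im_start|>user\n"
    "Explain GPTQ in one sentence.<|im_end|>\n"
    "<|im_start|>assistant\n"
)
# generated_result is [text, token_ids]; print the text.
print(model.infer({"prompt": prompt})["generated_result"][0])
```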