---
base_model: Deci/DeciLM-7B
inference: false
language:
- en
license: apache-2.0
model-index:
- name: DeciLM-7B
  results: []
model_creator: Deci
model_name: DeciLM-7B
model_type: deci
prompt_template: |
  <|im_start|>system
  {system_message}<|im_end|>
  <|im_start|>user
  {prompt}<|im_end|>
  <|im_start|>assistant
quantized_by: Inferless
tags:
- finetune
- vllm
- GPTQ
- Deci
pipeline_tag: text-generation
---
Inferless

Serverless GPUs to scale your machine learning inference without the hassle of managing servers. Deploy complicated and custom models with ease.

Join Private Beta

Go through this tutorial to quickly deploy DeciLM-7B using Inferless.


# DeciLM-7B - GPTQ

- Model creator: [Deci](https://huggingface.co./Deci)
- Original model: [DeciLM-7B](https://huggingface.co./Deci/DeciLM-7B)

## Description

This repo contains GPTQ model files for [Deci's DeciLM-7B](https://huggingface.co./Deci/DeciLM-7B).

### About GPTQ

GPTQ is a method that compresses the model and accelerates inference by quantizing weights against a calibration dataset, aiming to minimize the mean squared error introduced in a single post-training quantization step. GPTQ achieves both memory efficiency and faster inference.

It is supported by:

- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoGPTQ
- [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later for support for all model types
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
- [Transformers](https://huggingface.co./docs/transformers) - version 4.35.0 or later, from any code or client that supports Transformers
- [AutoGPTQ](https://github.com/AutoGPTQ/AutoGPTQ) - for use from Python code

## Shared files and GPTQ parameters

Models are released as sharded safetensors files.

| Branch | Bits | GS | GPTQ Dataset | Seq Len | Size |
| ------ | ---- | -- | ------------ | ------- | ---- |
| [main](https://huggingface.co./Inferless/deciLM-7B-GPTQ/tree/main) | 4 | 128 | [VMware Open Instruct](https://huggingface.co./datasets/VMware/open-instruct/viewer/) | 4096 | 5.96 GB |

## How to use

You will need the following software packages and Python libraries:

```yaml
build:
  cuda_version: "12.1.1"
  system_packages:
    - "libssl-dev"
  python_packages:
    - "torch==2.1.2"
    - "vllm==0.2.6"
    - "transformers==4.36.2"
    - "accelerate==0.25.0"
```

Here is the code for `app.py`:

```python
from vllm import LLM, SamplingParams

class InferlessPythonModel:
    def initialize(self):
        # Sampling settings and the vLLM engine, loading the GPTQ weights in fp16
        self.sampling_params = SamplingParams(temperature=0.7, top_p=0.95, max_tokens=256)
        self.llm = LLM(model="Inferless/deciLM-7B-GPTQ", quantization="gptq", dtype="float16")

    def infer(self, inputs):
        prompts = inputs["prompt"]
        result = self.llm.generate(prompts, self.sampling_params)
        # Collect the generated text and token IDs for each completed request
        result_output = [
            [output.outputs[0].text, output.outputs[0].token_ids]
            for output in result
        ]
        return {"generated_result": result_output[0]}

    def finalize(self):
        pass
```
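To sanity-check the handler before deploying, a short local driver like the sketch below can help. This is an illustrative assumption, not part of the Inferless workflow: the `app` module name mirrors the file above, the single-prompt input matches the `infer` signature, and in production Inferless manages the model lifecycle itself.

```python
# Hypothetical local smoke test for the handler above; Inferless normally
# instantiates InferlessPythonModel itself, so this is only for sanity checking.
from app import InferlessPythonModel

model = InferlessPythonModel()
model.initialize()  # loads the GPTQ weights into vLLM (needs a CUDA GPU)
response = model.infer({"prompt": "Explain GPTQ quantization in one sentence."})
print(response["generated_result"][0])  # the generated text
print(response["generated_result"][1])  # the corresponding token IDs
model.finalize()
```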
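Outside of Inferless, the same checkpoint can also be queried directly with Transformers, since version 4.35.0 and later load GPTQ files natively (with the `optimum` and `auto-gptq` packages installed). The sketch below is a minimal example under those assumptions; the system and user messages are placeholders, and the prompt follows the ChatML template declared in this card's metadata.

```python
# Minimal sketch: load the GPTQ checkpoint with plain Transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Inferless/deciLM-7B-GPTQ"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",       # place the quantized weights on the available GPU
    trust_remote_code=True,  # DeciLM ships a custom model class
)

# Build a prompt using the ChatML template from the card metadata
prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nWhat is DeciLM-7B?<|im_end|>\n"
    "<|im_start|>assistant\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```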