---
license: apache-2.0
---
# QuantLM 1.5B 3-bit
QuantLM 1.5B (3-bit), unpacked to FP16 format so it is compatible with standard FP16 GEMMs. After unpacking, QuantLM has the same architecture as LLaMA.
```python
import torch
import transformers

model_name = "SpectraSuite/QuantLM_1.5B_3bit_Unpacked"
# Adjust the temperature, repetition penalty, top_k, top_p and other sampling parameters to your needs.
pipeline = transformers.pipeline("text-generation", model=model_name, model_kwargs={"torch_dtype": torch.float16}, device_map="auto")
# These are base (pretrained) LLMs that are not instruction- or chat-tuned; you may need to adjust your prompt accordingly.
pipeline("Once upon a time")
```
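As a minimal sketch of how the sampling parameters mentioned above can be tuned, generation kwargs can be passed directly to the pipeline call; the values below are illustrative placeholders, not recommended settings.
```python
# Example only: adjust these sampling parameters to your use case.
outputs = pipeline(
    "Once upon a time",
    max_new_tokens=128,
    do_sample=True,
    temperature=0.7,
    top_k=50,
    top_p=0.9,
    repetition_penalty=1.1,
)
print(outputs[0]["generated_text"])
```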
* License: Apache 2.0
* We use our GitHub repo for communication, including queries about this HF repo. Feel free to open an issue at https://github.com/NolanoOrg/SpectraSuite