Quantized with the CUDA branch of https://github.com/oobabooga/GPTQ-for-LLaMa, using the following command:

CUDA_VISIBLE_DEVICES=0 python llama.py /root/llava-13b-v1-1 c4 --wbits 4 --true-sequential --groupsize 128 --save_safetensors llava-13b-v1-1-4bit-128g.safetensors
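The `--wbits 4 --groupsize 128` flags request 4-bit weights with one quantization scale shared by each group of 128 values. As a rough, self-contained illustration of what group-wise quantization means (plain round-to-nearest here, not the error-compensating GPTQ algorithm the command above actually runs):

```python
# Toy sketch of 4-bit group-wise quantization. This is round-to-nearest
# only; real GPTQ adjusts remaining weights to compensate for rounding
# error, which this sketch does not attempt.

def quantize_group(weights, wbits=4):
    """Quantize one group of floats to wbits using a single shared scale."""
    levels = 2 ** wbits - 1                 # 15 representable steps for 4-bit
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / levels or 1.0       # one scale (and zero-point) per group
    q = [round((w - lo) / scale) for w in weights]   # integers in [0, levels]
    deq = [lo + scale * v for v in q]                # dequantized approximation
    return q, deq

def quantize(weights, wbits=4, groupsize=128):
    """Split a weight row into groups of `groupsize`, each with its own scale."""
    out = []
    for i in range(0, len(weights), groupsize):
        _, deq = quantize_group(weights[i:i + groupsize], wbits)
        out.extend(deq)
    return out
```

Smaller group sizes track the local weight range more closely (lower error) at the cost of storing more scales; 128 is a common middle ground, hence the `128g` in the output filename.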
license: other