meta-llama/Llama-2-7b-chat-hf - W4A16 Compression

This model was compressed using llmcompressor.

Compression Configuration

  • Base Model: meta-llama/Llama-2-7b-chat-hf
  • Compression Scheme: W4A16
  • Dataset: HuggingFaceH4/ultrachat_200k
  • Dataset Split: train_sft
  • Number of Samples: 512
  • Preprocessor: chat
  • Maximum Sequence Length: 4096
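
For reference, a compression run matching the configuration above could look roughly like the following. This is a minimal sketch modeled on the public llmcompressor W4A16 GPTQ one-shot examples, not the exact script used to produce this checkpoint; the calibration preprocessing, sample count, and sequence length simply mirror the values listed above.

```python
# Sketch of a W4A16 one-shot GPTQ compression run with llmcompressor.
# This mirrors the configuration listed in the card; it is NOT the exact
# recipe used to produce this checkpoint.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

from llmcompressor.modifiers.quantization import GPTQModifier
from llmcompressor.transformers import oneshot  # newer releases also expose `from llmcompressor import oneshot`

MODEL_ID = "meta-llama/Llama-2-7b-chat-hf"
NUM_CALIBRATION_SAMPLES = 512
MAX_SEQUENCE_LENGTH = 4096

model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto", torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

# Calibration data: ultrachat_200k (train_sft split), rendered with the
# model's chat template (the "chat" preprocessor listed above).
ds = load_dataset("HuggingFaceH4/ultrachat_200k", split="train_sft")
ds = ds.shuffle(seed=42).select(range(NUM_CALIBRATION_SAMPLES))

def preprocess(example):
    return {"text": tokenizer.apply_chat_template(example["messages"], tokenize=False)}

ds = ds.map(preprocess)

# W4A16: 4-bit weights, 16-bit activations; lm_head left unquantized.
recipe = GPTQModifier(targets="Linear", scheme="W4A16", ignore=["lm_head"])

oneshot(
    model=model,
    dataset=ds,
    recipe=recipe,
    max_seq_length=MAX_SEQUENCE_LENGTH,
    num_calibration_samples=NUM_CALIBRATION_SAMPLES,
)

model.save_pretrained("Llama-2-7b-chat-hf_W4A16", save_compressed=True)
tokenizer.save_pretrained("Llama-2-7b-chat-hf_W4A16")
```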

Sample Output

Prompt:

<s>[INST] Who is Alan Turing? [/INST]

Output:

<s><s> [INST] Who is Alan Turing? [/INST]  Alan Turing (1912-1954) was a British mathematician, computer scientist, logician, and cryptographer who made significant contributions to the fields of computer science, artificial intelligence, and cryptography.

Turing was born in London, England, and grew up in a family of intellectuals. He was educated at Cambridge University, where he studied mathematics and logic, and later worked at the University of Manchester, where he developed the concept of the universal Turing machine, a theoretical model for a computer.

During World War II, Turing worked at Blet
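
The prompt above uses the Llama-2 chat format (`<s>[INST] ... [/INST]`). A minimal sketch of loading this checkpoint and regenerating the sample with transformers is shown below; the generation settings are illustrative and not taken from the card, and loading the compressed weights assumes a transformers version with compressed-tensors support (vLLM can also serve such W4A16 checkpoints directly).

```python
# Sketch: load the compressed checkpoint and reproduce the sample prompt.
# max_new_tokens and other generation settings are illustrative only.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "espressor/meta-llama.Llama-2-7b-chat-hf_W4A16"  # this repository

model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto", torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

# The tokenizer's chat template produces the <s>[INST] ... [/INST] format.
messages = [{"role": "user", "content": "Who is Alan Turing?"}]
inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0]))
```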

Evaluation

