meta-llama/Llama-2-7b-chat-hf - W8A8_int8 Compression

This model was compressed with llmcompressor; a sketch of a comparable compression run is shown after the configuration list below.

Compression Configuration

  • Base Model: meta-llama/Llama-2-7b-chat-hf
  • Compression Scheme: W8A8_int8
  • Dataset: HuggingFaceH4/ultrachat_200k
  • Dataset Split: train_sft
  • Number of Samples: 512
  • Preprocessor: chat
  • Maximum Sequence Length: 4096
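
This card does not include the compression recipe itself. As a rough illustration only, a W8A8 int8 oneshot run with llmcompressor matching the configuration above could look like the sketch below; the SmoothQuant + GPTQ recipe, the shuffle seed, and the output directory are assumptions drawn from llmcompressor's published int8 examples, not details taken from this repository.

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

from llmcompressor.modifiers.quantization import GPTQModifier
from llmcompressor.modifiers.smoothquant import SmoothQuantModifier
from llmcompressor.transformers import oneshot

MODEL_ID = "meta-llama/Llama-2-7b-chat-hf"
NUM_CALIBRATION_SAMPLES = 512
MAX_SEQUENCE_LENGTH = 4096

model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto", torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

# Calibration set: 512 samples from ultrachat_200k (train_sft split),
# rendered through the model's chat template (the "chat" preprocessor).
ds = load_dataset("HuggingFaceH4/ultrachat_200k", split="train_sft")
ds = ds.shuffle(seed=42).select(range(NUM_CALIBRATION_SAMPLES))

def preprocess(example):
    return {"text": tokenizer.apply_chat_template(example["messages"], tokenize=False)}

def tokenize(sample):
    return tokenizer(
        sample["text"],
        padding=False,
        max_length=MAX_SEQUENCE_LENGTH,
        truncation=True,
        add_special_tokens=False,
    )

ds = ds.map(preprocess)
ds = ds.map(tokenize, remove_columns=ds.column_names)

# Assumed recipe: SmoothQuant to shift activation outliers into the weights,
# then GPTQ with the W8A8 scheme (int8 weights, int8 activations).
recipe = [
    SmoothQuantModifier(smoothing_strength=0.8),
    GPTQModifier(targets="Linear", scheme="W8A8", ignore=["lm_head"]),
]

oneshot(
    model=model,
    dataset=ds,
    recipe=recipe,
    max_seq_length=MAX_SEQUENCE_LENGTH,
    num_calibration_samples=NUM_CALIBRATION_SAMPLES,
)

# Hypothetical output directory; save in the compressed-tensors format.
model.save_pretrained("Llama-2-7b-chat-hf_W8A8_int8", save_compressed=True)
tokenizer.save_pretrained("Llama-2-7b-chat-hf_W8A8_int8")
```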

Sample Output

Prompt:

<s>[INST] Who is Alan Turing? [/INST]

Output:

<s><s> [INST] Who is Alan Turing? [/INST]  Alan Turing (1912-1954) was a British mathematician, computer scientist, and codebreaker who made significant contributions to the fields of computer science, artificial intelligence, and cryptography.

Turing was born in London, England, and grew up in a family of intellectuals. He was educated at Cambridge University, where he studied mathematics and logic, and later worked at the University of Manchester, where he made important contributions to the field of computer science.

During World War II, Turing worked at Bletchley Park, a top-secret government facility
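
The compressed checkpoint can be served with an engine that reads the compressed-tensors format, such as vLLM. The snippet below is a minimal sketch for reproducing the sample above using this card's repository id; the sampling settings are assumptions, not values taken from the card, and the doubled <s> in the output is consistent with the tokenizer prepending a BOS token to a prompt that already contains one.

```python
from vllm import LLM, SamplingParams

# Repository id as listed on this card.
llm = LLM(model="espressor/meta-llama.Llama-2-7b-chat-hf_W8A8_int8")

# Llama-2 chat prompt format, as in the sample above.
prompt = "<s>[INST] Who is Alan Turing? [/INST]"

# Greedy decoding and the token budget are assumed, not card settings.
params = SamplingParams(temperature=0.0, max_tokens=128)

outputs = llm.generate([prompt], params)
print(outputs[0].outputs[0].text)
```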

Evaluation
