meta-llama/Llama-2-70b-chat-hf - W8A8_FP8 Compression
This is a version of meta-llama/Llama-2-70b-chat-hf compressed with llmcompressor.
Compression Configuration
- Base Model: meta-llama/Llama-2-70b-chat-hf
- Compression Scheme: W8A8_FP8
- Dataset: HuggingFaceH4/ultrachat_200k
- Dataset Split: train_sft
- Number of Samples: 512
- Preprocessor: chat
- Maximum Sequence Length: 4096
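For reference, a run with this configuration would look roughly like the sketch below. This is a minimal reconstruction, not the exact script used: the scheme name ("FP8", assumed here to correspond to W8A8_FP8), module paths, and argument names should be checked against your installed llmcompressor version.

```python
# Sketch of a W8A8_FP8 oneshot compression run with llmcompressor.
# Assumption: a recent llmcompressor release where scheme="FP8" corresponds
# to the W8A8_FP8 format; verify names against your installed version.
from datasets import load_dataset
from transformers import AutoTokenizer

from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor.transformers import oneshot

MODEL_ID = "meta-llama/Llama-2-70b-chat-hf"
NUM_SAMPLES = 512
MAX_SEQ_LEN = 4096

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

# Calibration data: ultrachat_200k / train_sft, rendered through the model's
# chat template (the "chat" preprocessor listed in the configuration above).
ds = load_dataset("HuggingFaceH4/ultrachat_200k", split="train_sft")
ds = ds.shuffle(seed=42).select(range(NUM_SAMPLES))
ds = ds.map(
    lambda ex: {"text": tokenizer.apply_chat_template(ex["messages"], tokenize=False)}
)

# Quantize Linear weights and activations to FP8, keeping lm_head in full precision.
recipe = QuantizationModifier(targets="Linear", scheme="FP8", ignore=["lm_head"])

oneshot(
    model=MODEL_ID,
    dataset=ds,
    recipe=recipe,
    max_seq_length=MAX_SEQ_LEN,
    num_calibration_samples=NUM_SAMPLES,
    output_dir="Llama-2-70b-chat-hf_W8A8_FP8",
)
```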
Sample Output
A prompt/output example could not be generated when this card was built: the environment reported "No CUDA GPUs are available", so no sample completion is shown.
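To produce a sample yourself, the checkpoint can be served with an FP8-capable runtime such as vLLM. The snippet below is a sketch, assuming the repo id listed under the model tree, a GPU with FP8 support, and a vLLM version that can load compressed-tensors checkpoints.

```python
# Sketch: generating a sample completion with vLLM.
# Assumptions: FP8-capable GPU; vLLM with compressed-tensors support;
# repo id taken from the model tree at the bottom of this card.
from vllm import LLM, SamplingParams

llm = LLM(model="espressor/meta-llama.Llama-2-70b-chat-hf_W8A8_FP8")
params = SamplingParams(temperature=0.7, max_tokens=256)

# Llama-2-chat expects the [INST] ... [/INST] prompt format.
prompt = "[INST] What does FP8 weight and activation quantization trade off? [/INST]"
outputs = llm.generate([prompt], params)
print(outputs[0].outputs[0].text)
```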
Evaluation
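No benchmark results are reported on this card yet. One way to produce some is lm-evaluation-harness; the call below is a sketch that assumes lm_eval (>= 0.4) is installed with its vLLM backend, and the task choice is illustrative only.

```python
# Sketch: scoring the compressed checkpoint with lm-evaluation-harness.
# Assumptions: lm_eval's vllm backend is available; gsm8k is an example
# task, not a result reported by the model authors.
import lm_eval

results = lm_eval.simple_evaluate(
    model="vllm",
    model_args="pretrained=espressor/meta-llama.Llama-2-70b-chat-hf_W8A8_FP8",
    tasks=["gsm8k"],
)
print(results["results"])
```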
Model repository: espressor/meta-llama.Llama-2-70b-chat-hf_W8A8_FP8
Base model: meta-llama/Llama-2-70b-chat-hf