XelotX/Meta-Llama-3.1-70B-Instruct-XelotX-_Quants
Tags: GGUF · conversational · Inference Endpoints
1 contributor · 19 commits
Latest commit: 97e20bd (verified, 4 months ago) by XelotX: "Upload Meta-Llama-3.1-70B-Instruct-XelotX-Q4_0-00003-of-00003.gguf with huggingface_hub"
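
The shards listed below were uploaded with huggingface_hub, and the same library can fetch a single quantization without downloading the whole repository. A minimal sketch, assuming the repo id shown on this page and that you want the Q4_K_M shards (the glob pattern is an assumption; swap it for Q2_K, Q4_0, Q6_K, Q8_0, or BF16 as needed):

```python
from huggingface_hub import snapshot_download

# Download only the Q4_K_M shards (three files, roughly 42 GB combined)
# into ./models. repo_id comes from this page; allow_patterns is an
# assumption about which quantization you want.
local_dir = snapshot_download(
    repo_id="XelotX/Meta-Llama-3.1-70B-Instruct-XelotX-_Quants",
    allow_patterns=["*Q4_K_M*.gguf"],
    local_dir="models",
)
print(local_dir)
```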
Every GGUF file is stored via Git LFS, was uploaded in its own commit named "Upload <filename> with huggingface_hub", is flagged Safe by the Hub's file scanner, and was last updated 4 months ago. The .gitattributes file was last touched by the latest upload commit. Each quantization is split into three shards (-00001-of-00003 through -00003-of-00003).

| File | Size | Storage |
|------|------|---------|
| .gitattributes | 3.25 kB | – |
| Meta-Llama-3.1-70B-Instruct-XelotX-BF16-00001-of-00003.gguf | 47.8 GB | LFS |
| Meta-Llama-3.1-70B-Instruct-XelotX-BF16-00002-of-00003.gguf | 47.9 GB | LFS |
| Meta-Llama-3.1-70B-Instruct-XelotX-BF16-00003-of-00003.gguf | 45.4 GB | LFS |
| Meta-Llama-3.1-70B-Instruct-XelotX-Q2_K-00001-of-00003.gguf | 8.74 GB | LFS |
| Meta-Llama-3.1-70B-Instruct-XelotX-Q2_K-00002-of-00003.gguf | 8.81 GB | LFS |
| Meta-Llama-3.1-70B-Instruct-XelotX-Q2_K-00003-of-00003.gguf | 8.83 GB | LFS |
| Meta-Llama-3.1-70B-Instruct-XelotX-Q4_0-00001-of-00003.gguf | 13.5 GB | LFS |
| Meta-Llama-3.1-70B-Instruct-XelotX-Q4_0-00002-of-00003.gguf | 13.5 GB | LFS |
| Meta-Llama-3.1-70B-Instruct-XelotX-Q4_0-00003-of-00003.gguf | 13 GB | LFS |
| Meta-Llama-3.1-70B-Instruct-XelotX-Q4_K_M-00001-of-00003.gguf | 14.4 GB | LFS |
| Meta-Llama-3.1-70B-Instruct-XelotX-Q4_K_M-00002-of-00003.gguf | 14.1 GB | LFS |
| Meta-Llama-3.1-70B-Instruct-XelotX-Q4_K_M-00003-of-00003.gguf | 14 GB | LFS |
| Meta-Llama-3.1-70B-Instruct-XelotX-Q6_K-00001-of-00003.gguf | 19.6 GB | LFS |
| Meta-Llama-3.1-70B-Instruct-XelotX-Q6_K-00002-of-00003.gguf | 19.7 GB | LFS |
| Meta-Llama-3.1-70B-Instruct-XelotX-Q6_K-00003-of-00003.gguf | 18.6 GB | LFS |
| Meta-Llama-3.1-70B-Instruct-XelotX-Q8_0-00001-of-00003.gguf | 25.4 GB | LFS |
| Meta-Llama-3.1-70B-Instruct-XelotX-Q8_0-00002-of-00003.gguf | 25.5 GB | LFS |
| Meta-Llama-3.1-70B-Instruct-XelotX-Q8_0-00003-of-00003.gguf | 24.1 GB | LFS |
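
Because each quantization follows llama.cpp's split-GGUF naming (-0000N-of-00003), pointing a loader at the first shard should be enough for it to pick up the remaining parts, provided the shards were produced with llama.cpp's gguf-split tool. A minimal sketch with llama-cpp-python; the local path, context size, and sampling parameters are assumptions, not taken from this repository:

```python
from llama_cpp import Llama

# Load the Q4_K_M split model by its first shard; llama.cpp resolves the
# -00002- and -00003- parts automatically when the shards share a prefix.
llm = Llama(
    model_path="models/Meta-Llama-3.1-70B-Instruct-XelotX-Q4_K_M-00001-of-00003.gguf",
    n_ctx=8192,        # context window; adjust to fit your RAM/VRAM
    n_gpu_layers=-1,   # offload all layers to GPU if one is available
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize GGUF in one sentence."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```

If you prefer the CLI, llama.cpp's llama-cli binary accepts the same first-shard path through its -m flag.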