Hugging Face
LoneStriker/miquella-120b-3.0bpw-h6-exl2 · 10 likes
Tags: Text Generation · Transformers · Safetensors · llama · mergekit · Merge · text-generation-inference · Inference Endpoints
Community: 1 discussion
#1: Will any 120B model currently fit on a single 24 GB VRAM card through any app I can run on a PC (e.g. an RTX 4090)?
15 · opened 9 months ago by clevnumb
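A rough weights-only estimate hints at the answer to the question above. This sketch assumes the full 120B parameter count and the 3.0 bits-per-weight of this exl2 quant, and deliberately ignores KV cache and activation memory, which would only add to the total:

```python
def weights_vram_gb(n_params: float, bits_per_weight: float) -> float:
    """Rough weights-only VRAM estimate in GB (using 1 GB = 1e9 bytes)."""
    return n_params * bits_per_weight / 8 / 1e9

# 120B parameters at 3.0 bpw (the quantization of this repo):
print(weights_vram_gb(120e9, 3.0))  # -> 45.0, i.e. ~45 GB, well over 24 GB
```

Even before accounting for context, a 3.0 bpw 120B quant needs roughly 45 GB for the weights alone, so it will not fit on a single 24 GB card; it would need either a much lower bpw, multiple GPUs, or CPU/GPU layer offloading.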