
Quantizing mpt-7b-storywriter

#54
by prashantkanuru - opened

I am not able to load the model with bitsandbytes `load_in_8bit=True`. Can anyone suggest a fix?
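A common cause is an outdated `transformers`/`bitsandbytes`/`accelerate` stack, or passing `load_in_8bit` without `device_map`. Below is a minimal sketch of 8-bit loading via `BitsAndBytesConfig`; the model ID is taken from this repo, but the exact version requirements are assumptions, and running it needs a CUDA GPU plus `pip install bitsandbytes accelerate`:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "mosaicml/mpt-7b-storywriter"

# Pass quantization via BitsAndBytesConfig instead of the bare
# load_in_8bit kwarg (the config object is the supported path in
# recent transformers releases).
bnb_config = BitsAndBytesConfig(load_in_8bit=True)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",       # lets accelerate place quantized weights on the GPU
    trust_remote_code=True,  # MPT ships custom modeling code
    torch_dtype=torch.float16,
)
```

If this still fails, the error message usually points at the missing piece: `accelerate` not installed, a bitsandbytes build without CUDA support, or a transformers version that predates 8-bit support for custom-code (`trust_remote_code`) architectures.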
