---
library_name: transformers
license: llama2
---

Converted version of [CodeLlama-70b](https://huggingface.co./meta-llama/CodeLlama-70b-hf) to 4-bit precision using bitsandbytes. For more information about the model, refer to the original model's page.

## Impact on performance

The figure below compares the performance of a set of models against the RAM they require. The quantized models achieve performance comparable to their full-precision counterparts while using significantly less RAM.

![constellation](https://i.postimg.cc/MZ9SzdCG/constellation.png)