---
license: apache-2.0
---

# AWS Neuron optimum model cache

This repository contains cached compilation artifacts for the most popular models on the Hugging Face Hub.

## Inference

### LLM models

For a list of the supported models and configurations, please refer to the inference cache [configuration files](https://huggingface.co./aws-neuron/optimum-neuron-cache/tree/main/inference-cache-config).
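
As an illustrative sketch only (the model id and flag values below are assumptions, not taken from this card), exporting a model whose configuration matches a cached entry lets `optimum-neuron` reuse the precompiled artifacts from this repository instead of recompiling locally:

```shell
# Hypothetical export command; requires optimum-neuron and Neuron hardware.
# If the (model, batch_size, sequence_length, num_cores) combination is
# present in the cache, precompiled artifacts are fetched from this repo.
optimum-cli export neuron \
  --model meta-llama/Llama-2-7b-chat-hf \
  --batch_size 1 \
  --sequence_length 2048 \
  --num_cores 2 \
  llama_neuron/
```

Whether a given combination is cached can be checked against the configuration files linked below.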