---
license: apache-2.0
---

# Aura-llama

![image/webp](https://cdn-uploads.huggingface.co/production/uploads/64545af5ec40bbbd01242ca6/QYpWMEXTe0_X3A7HyeBm0.webp)

Now that the cute anime girl has your attention:

Aura-llama uses depth up-scaling (DUS), the LLM-scaling methodology presented in the SOLAR paper, which combines architectural modification with continued pretraining. Using the SOLAR paper as a base, I integrated Llama-3 weights into the upscaled layers, and I plan to continue training the model in the future.

Aura-llama is a merge of the following models, creating a base model to work from:
* [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co./meta-llama/Meta-Llama-3-8B-Instruct)
* [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co./meta-llama/Meta-Llama-3-8B-Instruct)

## Merged Evals (Has Not Been Finetuned)

Aura-llama
* Avg: ?
* ARC: ?
* HellaSwag: ?
* MMLU: ?
* T-QA: ?
* Winogrande: ?
* GSM8K: ?

## 🧩 Configuration

The model is built with a passthrough merge: two overlapping layer slices of the same donor model are stacked, duplicating a band of middle layers to deepen the network in the DUS style.

```yaml
slices:
  - sources:
      - model: meta-llama/Meta-Llama-3-8B-Instruct
        layer_range: [0, 23]
  - sources:
      - model: meta-llama/Meta-Llama-3-8B-Instruct
        layer_range: [7, 31]
merge_method: passthrough
dtype: bfloat16
```
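
The configuration above is in [mergekit](https://github.com/arcee-ai/mergekit)'s slice syntax, so the merge itself can be reproduced with mergekit's `mergekit-yaml` entry point. To try the resulting model, it loads like any other Llama-3 checkpoint with 🤗 Transformers. Below is a minimal sketch, not an official snippet: the repo id `your-namespace/Aura-llama` is a placeholder assumption, and it presumes the merge keeps Llama-3's chat template.

```python
# Minimal usage sketch. Assumptions: "your-namespace/Aura-llama" is a
# placeholder repo id, and the merged model keeps the Llama-3 chat template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-namespace/Aura-llama"  # placeholder; substitute the real repo

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the dtype used in the merge config
    device_map="auto",
)

# Llama-3-Instruct expects its chat template rather than raw text prompts.
messages = [{"role": "user", "content": "Introduce yourself in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

Note that the merged stack has not been finetuned, so outputs may be degraded until the planned continued training is done.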