---
license: apache-2.0
---

# Aura-llama Data Card

Aura-llama

Aura-llama image

Now that the cute anime girl has your attention.

Aura-llama uses depth up-scaling (DUS), the methodology presented in the SOLAR paper for scaling LLMs, which combines an architectural modification (duplicating and stacking a model's transformer layers) with continued pretraining. Using the SOLAR paper as a base, I integrated Llama-3 weights into the upscaled layers, and I plan to continue training the model in the future.
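For illustration, here is a minimal sketch of what the depth up-scaling step does with the Llama-3 weights: it stacks two overlapping slices of the original decoder layers into one deeper model. This is not the exact code used to build this release (the merge itself is produced with mergekit, per the configuration below); it assumes the Hugging Face transformers library and mergekit's half-open `layer_range` convention.

```python
# Minimal DUS sketch, assuming half-open layer ranges:
# [0, 23] -> layers 0-22, [7, 31] -> layers 7-30.
import copy

import torch.nn as nn
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B-Instruct", torch_dtype="bfloat16"
)

layers = model.model.layers                              # original decoder layers
front = [layers[i] for i in range(0, 23)]                # first slice: layers 0-22
back = [copy.deepcopy(layers[i]) for i in range(7, 31)]  # second slice: copies of layers 7-30

# Stack the two slices into a single deeper model.
model.model.layers = nn.ModuleList(front + back)
model.config.num_hidden_layers = len(model.model.layers)

# Re-index attention layers so each one gets its own KV-cache slot
# (assumes the layer_idx attribute used by recent transformers versions).
for idx, layer in enumerate(model.model.layers):
    layer.self_attn.layer_idx = idx
```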

Aura-llama is a passthrough merge of two overlapping slices of meta-llama/Meta-Llama-3-8B-Instruct (see the configuration below), creating a base model to work from:

Merged Evals (the merged model has not been finetuned):

Aura-llama

## 🧩 Configuration


```yaml
slices:
  - sources:
      - model: meta-llama/Meta-Llama-3-8B-Instruct
        layer_range: [0, 23]
  - sources:
      - model: meta-llama/Meta-Llama-3-8B-Instruct
        layer_range: [7, 31]
merge_method: passthrough
dtype: bfloat16
```
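To reproduce the merge, the block above can be saved to a file and passed to mergekit's `mergekit-yaml` tool, e.g. `mergekit-yaml config.yaml ./merged-model` (the output directory name here is just an example).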