Steelskull committed
Commit e85c9f6
1 Parent(s): d0710b3

Update README.md

Files changed (1)
  1. README.md +52 -36
README.md CHANGED
@@ -1,39 +1,55 @@
  ---
  license: apache-2.0
  ---
- # Aura-llama
-
- ![image/webp](https://cdn-uploads.huggingface.co/production/uploads/64545af5ec40bbbd01242ca6/QYpWMEXTe0_X3A7HyeBm0.webp)
- Now that the cute anime girl has your attention.
-
- Aura-llama is using the methodology presented by SOLAR for scaling LLMs called depth up-scaling (DUS), which encompasses architectural modifications with continued pretraining.
- Using the solar paper as a base, I integrated Llama-3 weights into the upscaled layers, and In the future plan to continue training the model.
-
- Aura-llama is a merge of the following models to create a base model to work from:
- * [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)
- * [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)
-
- ## Merged Evals: (Has Not Been Finetuned)
- Aura-llama
- * Avg: ?
- * ARC: ?
- * HellaSwag: ?
- * MMLU: ?
- * T-QA: ?
- * Winogrande: ?
- * GSM8K: ?
-
- ## 🧩 Configuration
-
- ```
- slices:
-   - sources:
-       - model: meta-llama/Meta-Llama-3-8B-Instruct
-         layer_range: [0, 23]
-   - sources:
-       - model: meta-llama/Meta-Llama-3-8B-Instruct
-         layer_range: [7, 31]
- merge_method: passthrough
- dtype: bfloat16
- ```
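For a concrete picture of what the passthrough configuration above does, the sketch below restacks two overlapping slices of Llama-3's decoder layers using plain transformers. It is an illustration only, not the pipeline used to build Aura-llama (the model was produced with mergekit); the slice bounds are copied from the config and treated here as Python-style half-open ranges, and the output path is a placeholder.

```
# Illustrative depth up-scaling (DUS) by layer passthrough -- NOT the actual
# merge pipeline. Assumes layer_range bounds behave like Python slice indices.
import copy

import torch
import torch.nn as nn
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B-Instruct", torch_dtype=torch.bfloat16
)

# Copy the two overlapping layer ranges from the config and stack them.
lower = [copy.deepcopy(layer) for layer in model.model.layers[0:23]]
upper = [copy.deepcopy(layer) for layer in model.model.layers[7:31]]
model.model.layers = nn.ModuleList(lower + upper)
model.config.num_hidden_layers = len(model.model.layers)

# Re-number attention layer indices so the KV cache still lines up
# (attribute name assumed from recent transformers versions).
for idx, layer in enumerate(model.model.layers):
    layer.self_attn.layer_idx = idx

model.save_pretrained("./aura-llama-dus-sketch")  # hypothetical output path
```

Stacking the [0, 23] and [7, 31] ranges back to back yields a deeper model whose middle layers appear twice, which is the core idea of depth up-scaling.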
  ---
  license: apache-2.0
  ---
+ <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <meta name="viewport" content="width=device-width, initial-scale=1.0">
+ <title>Aura-llama Data Card</title>
+ <link href="https://fonts.googleapis.com/css2?family=Quicksand:wght@400;500;600&display=swap" rel="stylesheet">
+ <style> body { font-family: 'Quicksand', sans-serif; background: linear-gradient(135deg, #2E3440 0%, #1A202C 100%); color: #D8DEE9; margin: 0; padding: 0; font-size: 16px; }
+ .container { width: 80%; max-width: 800px; margin: 20px auto; background-color: rgba(255, 255, 255, 0.02); padding: 20px; border-radius: 12px; box-shadow: 0 4px 10px rgba(0, 0, 0, 0.2); backdrop-filter: blur(10px); border: 1px solid rgba(255, 255, 255, 0.1); }
+ .header h1 { font-size: 28px; color: #ECEFF4; margin: 0 0 20px 0; text-shadow: 2px 2px 4px rgba(0, 0, 0, 0.3); }
+ .update-section { margin-top: 30px; } .update-section h2 { font-size: 24px; color: #88C0D0; }
+ .update-section p { font-size: 16px; line-height: 1.6; color: #ECEFF4; }
+ .info img { width: 100%; border-radius: 10px; margin-bottom: 15px; }
+ a { color: #88C0D0; text-decoration: none; }
+ a:hover { color: #A3BE8C; }
+ pre { background-color: rgba(255, 255, 255, 0.05); padding: 10px; border-radius: 5px; overflow-x: auto; }
+ code { font-family: 'Courier New', monospace; color: #A3BE8C; } </style> </head> <body> <div class="container">
+ <div class="header">
+ <h1>Aura-llama</h1> </div> <div class="info">
+ <img src="https://cdn-uploads.huggingface.co/production/uploads/64545af5ec40bbbd01242ca6/QYpWMEXTe0_X3A7HyeBm0.webp" alt="Aura-llama image">
+ <p>Now that the cute anime girl has your attention.</p>
+ <p>Aura-llama uses the depth up-scaling (DUS) methodology presented by SOLAR for scaling LLMs, which combines architectural modification with continued pretraining. Using the SOLAR paper as a base, I integrated Llama-3 weights into the upscaled layers and plan to continue training the model in the future.</p>
+ <p>Aura-llama is a merge of the following models to create a base model to work from:</p>
+ <ul>
+ <li><a href="https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct">meta-llama/Meta-Llama-3-8B-Instruct</a></li>
+ <li><a href="https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct">meta-llama/Meta-Llama-3-8B-Instruct</a></li>
+ </ul>
+ </div>
+ <div class="update-section">
+ <h2>Merged Evals (Has Not Been Finetuned):</h2>
+ <p>Aura-llama</p>
+ <ul>
+ <li>Avg: ?</li>
+ <li>ARC: ?</li>
+ <li>HellaSwag: ?</li>
+ <li>MMLU: ?</li>
+ <li>T-QA: ?</li>
+ <li>Winogrande: ?</li>
+ <li>GSM8K: ?</li>
+ </ul>
+ </div>
+ <div class="update-section">
+ <h2>🧩 Configuration</h2>
+ <pre><code>
+ slices:
+   - sources:
+       - model: meta-llama/Meta-Llama-3-8B-Instruct
+         layer_range: [0, 23]
+   - sources:
+       - model: meta-llama/Meta-Llama-3-8B-Instruct
+         layer_range: [7, 31]
+ merge_method: passthrough
+ dtype: bfloat16
+ </code></pre>
+ </div>
+ </div>
+ </body>
+ </html>
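To reproduce a merge from a config like the one above, mergekit can also be driven from Python. The snippet below is a sketch following mergekit's documented Python usage; option names and defaults can differ between versions, and aura-llama.yml and ./aura-llama-base are placeholder paths.

```
# Rough sketch of running the passthrough merge config with mergekit's Python
# API (based on mergekit's documented usage; exact options may vary by version).
import yaml
import torch

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

CONFIG_YML = "aura-llama.yml"      # placeholder: the YAML shown above
OUTPUT_PATH = "./aura-llama-base"  # placeholder output directory

# Parse the YAML config into mergekit's configuration object.
with open(CONFIG_YML, "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

# Run the passthrough merge and write the merged weights to OUTPUT_PATH.
run_merge(
    merge_config,
    out_path=OUTPUT_PATH,
    options=MergeOptions(
        cuda=torch.cuda.is_available(),
        copy_tokenizer=True,
        lazy_unpickle=False,
        low_cpu_memory=False,
    ),
)
```

mergekit also ships a mergekit-yaml command-line entry point that performs the same merge from a shell, if you prefer not to write any Python.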