Update README.md
</a>

<!-- HuggingFace -->
<a href="https://huggingface.co/ElectricAlexis/NotaGen">
  <img src="https://img.shields.io/badge/NotaGen_Weights-HuggingFace-%23FFD21F?logo=huggingface&logoColor=white" alt="Weights">
</a>
We provide pre-trained weights of different scales:

| Models | Parameters | Patch-level Decoder Layers | Character-level Decoder Layers | Hidden Size | Patch Length (Context Length) |
| ---- | ---- | ---- | ---- | ---- | ---- |
| [NotaGen-small](https://huggingface.co/ElectricAlexis/NotaGen/blob/main/weights_notagen_pretrain_p_size_16_p_length_2048_p_layers_12_c_layers_3_h_size_768_lr_0.0002_batch_8.pth) | 110M | 12 | 3 | 768 | 2048 |
| [NotaGen-medium](https://huggingface.co/ElectricAlexis/NotaGen/blob/main/weights_notagen_pretrain_p_size_16_p_length_2048_p_layers_16_c_layers_3_h_size_1024_lr_0.0001_batch_4.pth) | 244M | 16 | 3 | 1024 | 2048 |
| [NotaGen-large](https://huggingface.co/ElectricAlexis/NotaGen/blob/main/weights_notagen_pretrain_p_size_16_p_length_1024_p_layers_20_c_layers_6_h_size_1280_lr_0.0001_batch_4.pth) | 516M | 20 | 6 | 1280 | 1024 |
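To fetch one of these checkpoints programmatically, a sketch along the following lines should work. It is a minimal illustration, not taken from this repo's docs: the repo id and filename come from the table links above, while the loading step is an assumption (the repo's own inference scripts define the supported path).

```python
# Minimal sketch (an assumption, not from the repo's docs): download a
# checkpoint listed above via huggingface_hub and open it with PyTorch.
from huggingface_hub import hf_hub_download
import torch

# Repo id and filename are taken from the table links above.
ckpt_path = hf_hub_download(
    repo_id="ElectricAlexis/NotaGen",
    filename=(
        "weights_notagen_pretrain_p_size_16_p_length_2048"
        "_p_layers_12_c_layers_3_h_size_768_lr_0.0002_batch_8.pth"
    ),
)

# Inspect on CPU first; recent PyTorch defaults to weights_only=True, which is
# fine for plain state dicts (pass weights_only=False only for trusted files).
state = torch.load(ckpt_path, map_location="cpu")
print(type(state))
```

`hf_hub_download` caches the file locally and returns its path; how the loaded state dict maps onto a model is defined by the repo's own code.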
### Fine-tuning

We fine-tuned NotaGen-large on a corpus of approximately 9k classical pieces. You can download the weights [here](https://huggingface.co/ElectricAlexis/NotaGen/blob/main/weights_notagen_pretrain-finetune_p_size_16_p_length_1024_p_layers_c_layers_6_20_h_size_1280_lr_1e-05_batch_1.pth).
### Reinforcement-Learning

After pre-training and fine-tuning, we optimized NotaGen-large with 3 iterations of CLaMP-DPO. You can download the weights [here](https://huggingface.co/ElectricAlexis/NotaGen/blob/main/weights_notagen_pretrain-finetune-RL3_beta_0.1_lambda_10_p_size_16_p_length_1024_p_layers_20_c_layers_6_h_size_1280_lr_1e-06_batch_1.pth).
### 🌟 NotaGen-X

Inspired by DeepSeek-R1, we further optimized the training procedures of NotaGen and released a better version, [NotaGen-X](https://huggingface.co/ElectricAlexis/NotaGen/blob/main/weights_notagenx_p_size_16_p_length_1024_p_layers_20_h_size_1280.pth). Compared to the version in the paper, NotaGen-X incorporates the following improvements:

- We introduced a post-training stage between pre-training and fine-tuning, refining the model with a classical-style subset of the pre-training dataset.
- We removed the key augmentation in the fine-tuning stage, making the instrument range of the generated compositions more reasonable.
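As the links above show, each checkpoint filename encodes its training configuration (`p_size`, `p_length`, `p_layers`, `c_layers`, `h_size`, `lr`, `batch`). The helper below is purely illustrative and not part of the repo; it only recovers those fields from a filename string that follows this naming pattern.

```python
# Illustrative helper (not part of the NotaGen repo): parse the hyperparameters
# encoded in a checkpoint filename, e.g. patch size (p_size), context length
# (p_length), decoder depths (p_layers / c_layers), hidden size (h_size),
# learning rate (lr), and batch size (batch).
import re

def parse_notagen_filename(name: str) -> dict:
    fields = {}
    for key in ("p_size", "p_length", "p_layers", "c_layers", "h_size", "lr", "batch"):
        # Capture the value between the key and the next "_" or the ".pth" suffix.
        m = re.search(rf"{key}_([^_]+?)(?:_|\.pth$|$)", name)
        if m:
            value = m.group(1)
            fields[key] = float(value) if any(c in value for c in ".e") else int(value)
    return fields

print(parse_notagen_filename(
    "weights_notagen_pretrain_p_size_16_p_length_1024_p_layers_20"
    "_c_layers_6_h_size_1280_lr_0.0001_batch_4.pth"
))
# {'p_size': 16, 'p_length': 1024, 'p_layers': 20, 'c_layers': 6,
#  'h_size': 1280, 'lr': 0.0001, 'batch': 4}
```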