Files changed (1)

README.md (+9, −30)

@@ -1,16 +1,8 @@
 ---
-inference: true
+inference: false
 tags:
 - musicgen
 license: cc-by-nc-4.0
-pipeline_tag: text-to-audio
-widget:
-- text: "a funky house with 80s hip hop vibes"
-  example_title: "Prompt 1"
-- text: "a chill song with influences from lofi, chillstep and downtempo"
-  example_title: "Prompt 2"
-- text: "a catchy beat for a podcast intro"
-  example_title: "Prompt 3"
 ---
 
 # MusicGen - Small - 300M
@@ -54,31 +46,18 @@ Try out MusicGen yourself!
 
 You can run MusicGen locally with the 🤗 Transformers library from version 4.31.0 onwards.
 
-1. First install the 🤗 [Transformers library](https://github.com/huggingface/transformers) and scipy:
+1. First install the 🤗 [Transformers library](https://github.com/huggingface/transformers) from main:
 
 ```
-pip install --upgrade pip
-pip install --upgrade transformers scipy
+pip install git+https://github.com/huggingface/transformers.git
 ```
 
-2. Run inference via the `Text-to-Audio` (TTA) pipeline. You can infer the MusicGen model via the TTA pipeline in just a few lines of code!
+2. Run the following Python code to generate text-conditional audio samples:
 
-```python
-from transformers import pipeline
-import scipy
-
-synthesiser = pipeline("text-to-audio", "facebook/musicgen-small")
-
-music = synthesiser("lo-fi music with a soothing melody", forward_params={"do_sample": True})
-
-scipy.io.wavfile.write("musicgen_out.wav", rate=music["sampling_rate"], data=music["audio"])
-```
-
-3. Run inference via the Transformers modelling code. You can use the processor + generate code to convert text into a mono 32 kHz audio waveform for more fine-grained control.
-
-```python
+```py
 from transformers import AutoProcessor, MusicgenForConditionalGeneration
 
+
 processor = AutoProcessor.from_pretrained("facebook/musicgen-small")
 model = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-small")
 
@@ -93,7 +72,7 @@ audio_values = model.generate(**inputs, max_new_tokens=256)
 
 3. Listen to the audio samples either in an ipynb notebook:
 
-```python
+```py
 from IPython.display import Audio
 
 sampling_rate = model.config.audio_encoder.sampling_rate
@@ -102,7 +81,7 @@ Audio(audio_values[0].numpy(), rate=sampling_rate)
 
 Or save them as a `.wav` file using a third-party library, e.g. `scipy`:
 
-```python
+```py
 import scipy
 
 sampling_rate = model.config.audio_encoder.sampling_rate
@@ -122,7 +101,7 @@ pip install git+https://github.com/facebookresearch/audiocraft.git
 
 2. Make sure to have [`ffmpeg`](https://ffmpeg.org/download.html) installed:
 ```
-apt-get install ffmpeg
+apt get install ffmpeg
 ```
 
 3. Run the following Python code: