# Usage
This first code snippet creates the model and downloads its pretrained weights:

```
import torch
import torch.nn as nn
from transformers import WavLMPreTrainedModel

class EmbeddingsModel(WavLMPreTrainedModel):
    # ... (the layer definitions and the beginning of the forward pass are elided
    # in this view; only the final statistics pooling and projection are shown)
        x_stats = torch.cat((base_out.mean(dim=1),v.pow(0.5)),dim=1).unsqueeze(dim=2)
        return self.top_layers(x_stats)

nt_extractor = EmbeddingsModel.from_pretrained("ggmbr/wnt")
nt_extractor.eval()
```

You may have noticed that the model produces normalized vectors as embeddings.
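That property is easy to verify; here is a quick sanity check (a sketch, not part of the original card; the random tensor simply stands in for one second of 16 kHz audio):

```
with torch.no_grad():
    dummy = torch.randn(1, 16000)          # stand-in for 1 s of audio at 16 kHz
    emb = nt_extractor(dummy)
    print(torch.linalg.norm(emb, dim=1))   # expected to be ~1.0 for a normalized embedding
```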

Next, we define a function that extracts the non-timbral embedding from an audio signal. In this tutorial version, the audio file is expected to be sampled at 16 kHz.

```
import torchaudio

MAX_SIZE = 320000  # max number of audio samples (20 s at 16 kHz)
device = "cuda" if torch.cuda.is_available() else "cpu"  # inference device; the model must live on the same device
nt_extractor.to(device)

def compute_embedding(fnm, model):
    sig, sr = torchaudio.load(fnm)
    assert sr == 16000, "please convert your audio file to a sampling rate of 16 kHz"
    sig = sig.mean(dim=0).to(device)  # downmix to mono and move to the inference device
    if sig.shape[0] > MAX_SIZE:
        print(f"truncating long signal {fnm}")
        sig = sig[:MAX_SIZE]
    embd = model(sig.unsqueeze(dim=0))  # add a batch dimension
    return embd.clone().detach()
```
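If your recordings use a different sampling rate, they can be resampled with torchaudio before calling this function; a minimal sketch (the file name is only an example, not from the original card):

```
import torchaudio

sig, sr = torchaudio.load("my_audio.wav")  # example path
if sr != 16000:
    # resample to the 16 kHz rate expected by the model
    sig = torchaudio.transforms.Resample(orig_freq=sr, new_freq=16000)(sig)
```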

And finally, we can compute two embeddings from two different files and compare them with a cosine similarity:

```
wav1 = "/data/AUDIO/speakerid/corpus/voxceleb1_2019/test/wav/id10270/x6uYqmx31kE/00001.wav"
wav2 = "/data/AUDIO/speakerid/corpus/voxceleb1_2019/test/wav/id10270/8jEAjG6SegY/00008.wav"

e1 = compute_embedding(wav1, nt_extractor)
e2 = compute_embedding(wav2, nt_extractor)
sim = float(torch.matmul(e1, e2.t()))  # embeddings are L2-normalized, so this dot product is their cosine similarity
```
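Because both embeddings are unit-normalized, the same value can be obtained with PyTorch's built-in cosine similarity, which makes a convenient cross-check (a sketch; the threshold value comes from the Evaluations section below):

```
import torch.nn.functional as F

sim_check = float(F.cosine_similarity(e1, e2))  # should match `sim` above
same_speaker = sim >= 0.467                     # decision threshold reported in the Evaluations section
```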

# Evaluations
Although the model is not directly designed for this use case, it can be evaluated on a standard automatic speaker verification (ASV) task. Applied to the [VoxCeleb1-clean test set](https://www.robots.ox.ac.uk/~vgg/data/voxceleb/meta/veri_test2.txt), it leads to an equal error rate (EER) of **10.681%** (with a decision threshold of **0.467**). This value can be interpreted as a measure of how well speakers can be identified from non-timbral cues alone. A discussion of this interpretation, together with other experiments showing correlations between these embeddings and non-timbral voice attributes, can be found in the paper mentioned above.
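For readers who want to reproduce this kind of figure on their own trial lists, here is a minimal sketch of how an EER and its threshold can be derived from trial scores and labels using scikit-learn's ROC utilities (the arrays are placeholders, not the actual VoxCeleb results, and this is not the evaluation code used for the paper):

```
import numpy as np
from sklearn.metrics import roc_curve

scores = np.array([0.62, 0.41, 0.75, 0.30])  # placeholder trial scores (cosine similarities)
labels = np.array([1, 0, 1, 0])              # 1 = same speaker, 0 = different speakers

fpr, tpr, thresholds = roc_curve(labels, scores)
fnr = 1 - tpr
idx = np.nanargmin(np.abs(fnr - fpr))        # operating point where FAR ~ FRR
eer = (fpr[idx] + fnr[idx]) / 2
print(f"EER = {eer:.3%} at threshold {thresholds[idx]:.3f}")
```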

# Limitations
The fine-tuning data used to produce this model (VoxCeleb, VCTK) are mostly in English, which may affect performance on other languages.