JSWOOK committed on
Commit 46c5858 · verified · 1 Parent(s): cf725ad

Training in progress, epoch 1

README.md ADDED
@@ -0,0 +1,65 @@
+ ---
+ library_name: transformers
+ tags:
+ - generated_from_trainer
+ model-index:
+ - name: pyannote_fine_tuning
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # pyannote_fine_tuning
+
+ This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.1283
+ - Model Preparation Time: 0.0036
+ - Der: 0.0490
+ - False Alarm: 0.0309
+ - Missed Detection: 0.0091
+ - Confusion: 0.0090
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 0.001
+ - train_batch_size: 32
+ - eval_batch_size: 32
+ - seed: 42
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: cosine
+ - num_epochs: 5
+
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Model Preparation Time | Der | False Alarm | Missed Detection | Confusion |
+ |:-------------:|:-----:|:----:|:---------------:|:----------------------:|:------:|:-----------:|:----------------:|:---------:|
+ | No log | 1.0 | 21 | 0.1258 | 0.0036 | 0.0485 | 0.0287 | 0.0105 | 0.0093 |
+ | 0.228 | 2.0 | 42 | 0.1327 | 0.0036 | 0.0509 | 0.0300 | 0.0098 | 0.0112 |
+ | 0.1873 | 3.0 | 63 | 0.1280 | 0.0036 | 0.0496 | 0.0307 | 0.0092 | 0.0097 |
+ | 0.166 | 4.0 | 84 | 0.1280 | 0.0036 | 0.0487 | 0.0307 | 0.0091 | 0.0090 |
+ | 0.152 | 5.0 | 105 | 0.1283 | 0.0036 | 0.0490 | 0.0309 | 0.0091 | 0.0090 |
+
+
+ ### Framework versions
+
+ - Transformers 4.44.2
+ - Pytorch 2.5.0+cu121
+ - Datasets 3.1.0
+ - Tokenizers 0.19.1
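
As a rough illustration, the hyperparameters listed in the model card above map onto a `transformers` `TrainingArguments` configuration roughly like the sketch below. This is an assumption-laden reconstruction, not the training script from this commit: the `output_dir` is a placeholder and the model/dataset objects that would be passed to a `Trainer` are not part of this repository.

```python
from transformers import TrainingArguments

# Hedged sketch: mirrors the hyperparameters reported in the model card.
# "pyannote_fine_tuning" as output_dir is a placeholder; the actual
# training script and dataset are not included in this commit.
training_args = TrainingArguments(
    output_dir="pyannote_fine_tuning",
    learning_rate=1e-3,                 # learning_rate: 0.001
    per_device_train_batch_size=32,     # train_batch_size: 32
    per_device_eval_batch_size=32,      # eval_batch_size: 32
    seed=42,
    adam_beta1=0.9,                     # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,                  # epsilon=1e-08
    lr_scheduler_type="cosine",
    num_train_epochs=5,
)
```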
config.json ADDED
@@ -0,0 +1,18 @@
+ {
+   "architectures": [
+     "SegmentationModel"
+   ],
+   "chunk_duration": 10.0,
+   "max_speakers_per_chunk": 3,
+   "max_speakers_per_frame": 2,
+   "min_duration": null,
+   "model_type": "pyannet",
+   "sample_rate": 16000,
+   "torch_dtype": "float32",
+   "transformers_version": "4.44.2",
+   "warm_up": [
+     0.0,
+     0.0
+   ],
+   "weigh_by_cardinality": false
+ }
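
The config above describes a pyannote-style segmentation model (10 s chunks at 16 kHz, at most 3 speakers per chunk and 2 per frame). Assuming the checkpoint stays compatible with pyannote.audio's model loading, using it might look like the sketch below; the repository id is inferred from the commit author and model name, and the token string is a placeholder.

```python
from pyannote.audio import Model

# Hedged sketch: assumes the fine-tuned weights in this repository can be
# loaded through pyannote.audio. "JSWOOK/pyannote_fine_tuning" is inferred
# from the commit author and model name; "hf_token" is a placeholder.
model = Model.from_pretrained(
    "JSWOOK/pyannote_fine_tuning",
    use_auth_token="hf_token",
)

# The loaded model produces frame-level speaker activity scores that can be
# plugged into a diarization pipeline as its segmentation component.
```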
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:dda3e4c42bd47099ea195dc8c627cd32cc9c4247e4ab266fdd30401434e2f283
+ size 5899124
runs/Nov04_06-28-34_1e334afff52f/events.out.tfevents.1730701808.1e334afff52f.5532.0 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:46809a0d6de91b0d71dc3b163f49ad5759404f6a637c28fe4d82e337675991b2
+ size 563
training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2454d7f1c178661c3fc40d2a7654a56492941ff19b13585a98dbc118f34ae16a
+ size 5240