---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
base_model: openai/whisper-large-v3
model-index:
- name: whisper_final
  results: []
---


# ꡬ음μž₯μ•  ν™˜μžλ₯Ό μœ„ν•œ μŒμ„±μΈμ‹ λͺ¨λΈ

## ν”„λ‘œμ νŠΈ 정보
  μž¬λ‹¨λ²•μΈ λ―Έλž˜μ™€ μ†Œν”„νŠΈμ›¨μ–΄μ™€ ν•¨κ»˜ν•˜λŠ” 제 3νšŒμ•„μ΄λ””μ–΄ 곡λͺ¨μ „

## ν”„λ‘œμ νŠΈ λͺ…
  "ꡬ음μž₯μ•  μŒμ„± 데이터λ₯Ό ν™œμš©ν•œ κ³ λ Ή ν™˜μžμ˜ μ˜μ‚¬μ†Œν†΅ κ°œμ„ λ°©μ•ˆ"
  
## λͺ¨λΈ μ„€λͺ…
- **openai/whisper-large-v3**에 λŒ€ν•œ νŒŒμΈνŠœλ‹ λͺ¨λΈ
- λ³Έ λͺ¨λΈμ€ "ꡬ음μž₯μ•  μŒμ„± 데이터λ₯Ό ν™œμš©ν•œ κ³ λ Ή ν™˜μžμ˜ μ˜μ‚¬μ†Œν†΅ κ°œμ„ λ°©μ•ˆ" ν”„λ‘œμ νŠΈμ˜ ꡬ음μž₯μ• ν™˜μžλ“€μ— λŒ€ν•œ ν•œκ΅­μ–΄ μŒμ„±μΈμ‹ λͺ¨λΈμž„. OpenAI의 Whisper λͺ¨λΈμ„ νŒŒμΈνŠœλ‹ ν•˜μ—¬ ꡬ음μž₯μ• μ˜ μŒμ„±μ  νŠΉμ„±μ„ λ°˜μ˜ν•œ λͺ¨λΈμ„ κ΅¬μΆ•ν•˜μ˜€μŒ.
- 였λ₯Έμͺ½ "Inference API"λ₯Ό 톡해 μŒμ„±μΈμ‹ λͺ¨λΈμ„ ν…ŒμŠ€νŠΈ ν•΄λ³Ό 수 μžˆμŠ΅λ‹ˆλ‹€.

## ν•™μŠ΅ λͺ¨λΈ
  - **Paper**: Radford, A., Kim, J. W., Xu, T., Brockman, G., McLeavey, C., & Sutskever, I. (2023, July). Robust speech recognition via large-scale weak supervision. In International Conference on Machine Learning (pp. 28492-28518). PMLR.
  - **URL**: https://proceedings.mlr.press/v202/radford23a.html

## ν•™μŠ΅ 데이터

  - **AIHub** "Dysarthric Speech Data" (구음장애 음성 데이터, Korean)
  - **URL**: https://aihub.or.kr/aihubdata/data/view.do?currMenu=115&topMenu=100&aihubDataSe=data&dataSetSn=608
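
Whisper expects 16 kHz audio converted to log-Mel features, so the downloaded AIHub recordings must be preprocessed before fine-tuning. Below is a hedged sketch following the standard Whisper fine-tuning recipe; the `audio` and `sentence` field names are assumptions about how the dataset is loaded, not details from this card.

```python
# Feature-extraction sketch for fine-tuning (standard Whisper recipe).
# ASSUMPTION: each example holds an "audio" dict (16 kHz mono array) and a
# "sentence" transcript field; adapt the names to the actual dataset schema.
from transformers import WhisperProcessor

processor = WhisperProcessor.from_pretrained(
    "openai/whisper-large-v3", language="korean", task="transcribe"
)

def prepare(example):
    audio = example["audio"]
    # Compute log-Mel input features from the raw waveform.
    example["input_features"] = processor(
        audio["array"], sampling_rate=audio["sampling_rate"]
    ).input_features[0]
    # Encode the reference transcript as label token IDs.
    example["labels"] = processor.tokenizer(example["sentence"]).input_ids
    return example
```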

### ν•™μŠ΅ νŒŒλΌλ―Έν„°

- **learning_rate**: 5e-07
- **train_batch_size**: 8
- **eval_batch_size**: 8
- **seed**: 42
- **optimizer**: Adam with betas=(0.9,0.999) and epsilon=1e-08
- **lr_scheduler_type**: linear
- **lr_scheduler_warmup_steps**: 10
- **mixed_precision_training**: Native AMP
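
For reproducibility, here is a sketch of how the settings above map onto `transformers.Seq2SeqTrainingArguments`. Values not listed above (`output_dir`, epoch count, evaluation cadence) are assumptions inferred from this card, not confirmed settings.

```python
# Hedged sketch mapping the hyperparameters above to Seq2SeqTrainingArguments.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="whisper_final",      # assumed from the model name
    learning_rate=5e-7,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",      # Adam betas/epsilon above are the defaults
    warmup_steps=10,
    fp16=True,                       # "Native AMP" mixed precision
    evaluation_strategy="steps",     # assumed: the results table evaluates every 10 steps
    eval_steps=10,
    num_train_epochs=1,              # assumed: the results table ends at epoch 1.0
    predict_with_generate=True,      # required to compute WER during evaluation
)
```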

### ν•™μŠ΅ κ²°κ³Ό

| Training Loss | Epoch | Step | Validation Loss | WER (%) |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 4.2932        | 0.09  | 10   | 4.6306          | 16.0442 |
| 4.2744        | 0.18  | 20   | 4.1942          | 16.2348 |
| 3.7418        | 0.27  | 30   | 3.7625          | 15.5107 |
| 3.2037        | 0.36  | 40   | 3.5635          | 14.6723 |
| 3.4714        | 0.45  | 50   | 3.4383          | 14.3674 |
| 2.8962        | 0.55  | 60   | 3.3494          | 14.1768 |
| 2.7958        | 0.64  | 70   | 3.2752          | 18.2927 |
| 2.8691        | 0.73  | 80   | 3.2208          | 19.5884 |
| 2.8693        | 0.82  | 90   | 3.1857          | 20.6174 |
| 2.9474        | 0.91  | 100  | 3.1644          | 20.6555 |
| 3.1712        | 1.0   | 110  | 3.1551          | 20.6174 |
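
The WER values above appear to follow the usual 100 × WER percentage convention. A minimal sketch of the metric computation with the `evaluate` library, consistent with this card's `wer` metric tag (the example strings are placeholders):

```python
# Minimal WER computation sketch with the `evaluate` library (values in percent).
import evaluate

wer_metric = evaluate.load("wer")

predictions = ["transcribed hypothesis text"]  # placeholder model outputs
references = ["ground truth reference text"]   # placeholder transcripts

wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}")
```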


### Framework versions

- Transformers 4.38.0.dev0
- PyTorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1