sorryhyun committed on
Commit 04a6a48 • 1 Parent(s): 148fa52

Update README.md

Files changed (1)
  1. README.md +29 -105
README.md CHANGED
@@ -6,51 +6,21 @@ tags:
  - feature-extraction
  - sentence-similarity
  - transformers
-
  ---
 
- # {MODEL_NAME}
-
- This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search.
-
- <!--- Describe your model here -->
-
- ## Usage (Sentence-Transformers)
-
- Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
-
- ```
- pip install -U sentence-transformers
- ```
-
- Then you can use the model like this:
-
- ```python
- from sentence_transformers import SentenceTransformer
- sentences = ["This is an example sentence", "Each sentence is converted"]
-
- model = SentenceTransformer('{MODEL_NAME}')
- embeddings = model.encode(sentences)
- print(embeddings)
- ```
-
-
 
  ## Usage (HuggingFace Transformers)
- Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
 
  ```python
  from transformers import AutoTokenizer, AutoModel
  import torch
 
-
- #Mean Pooling - Take attention mask into account for correct averaging
- def mean_pooling(model_output, attention_mask):
-     token_embeddings = model_output[0] #First element of model_output contains all token embeddings
-     input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
-     return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
-
-
  # Sentences we want sentence embeddings for
  sentences = ['This is an example sentence', 'Each sentence is converted']
 
@@ -58,87 +28,41 @@ sentences = ['This is an example sentence', 'Each sentence is converted']
  tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
  model = AutoModel.from_pretrained('{MODEL_NAME}')
 
- # Tokenize sentences
  encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
 
- # Compute token embeddings
  with torch.no_grad():
      model_output = model(**encoded_input)
 
- # Perform pooling. In this case, mean pooling.
- sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
-
- print("Sentence embeddings:")
- print(sentence_embeddings)
  ```
 
 
-
  ## Evaluation Results
 
- <!--- Describe how your model was evaluated -->
 
- For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
 
 
- ## Training
- The model was trained with the parameters:
 
- **DataLoader**:
-
- `torch.utils.data.dataloader.DataLoader` of length 503 with parameters:
- ```
- {'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
- ```
-
- **Loss**:
-
- `sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
- ```
- {'scale': 20.0, 'similarity_fct': 'cos_sim'}
- ```
-
- **DataLoader**:
-
- `torch.utils.data.dataloader.DataLoader` of length 730 with parameters:
- ```
- {'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
- ```
-
- **Loss**:
-
- `sentence_transformers.losses.AnglELoss.AnglELoss` with parameters:
- ```
- {'scale': 20.0, 'similarity_fct': 'pairwise_angle_sim'}
- ```
-
- Parameters of the fit()-Method:
- ```
- {
-     "epochs": 5,
-     "evaluation_steps": 0,
-     "evaluator": "CustomizedESEv.customizedEmbeddingSimilarityEvaluator",
-     "max_grad_norm": 1,
-     "optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
-     "optimizer_params": {
-         "lr": 2e-05
-     },
-     "scheduler": "WarmupLinear",
-     "steps_per_epoch": null,
-     "warmup_steps": 50,
-     "weight_decay": 0.01
- }
- ```
-
-
- ## Full Model Architecture
- ```
- SentenceTransformer(
-   (0): Transformer({'max_seq_length': 64, 'do_lower_case': False}) with Transformer model: RobertaModel
-   (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
- )
- ```
-
- ## Citing & Authors
 
- <!--- Describe where people can find more information -->
 
  - feature-extraction
  - sentence-similarity
  - transformers
+ license: cc-by-sa-4.0
+ datasets:
+ - klue
+ language:
+ - ko
  ---
 
+ ๋ณธ ๋ชจ๋ธ์€ multi-task loss (MultipleNegativeLoss -> AnglELoss) ๋กœ ํ•™์Šต๋˜์—ˆ์Šต๋‹ˆ๋‹ค.
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
17
 
18
  ## Usage (HuggingFace Transformers)
 
  ```python
  from transformers import AutoTokenizer, AutoModel
  import torch
 
  # Sentences we want sentence embeddings for
  sentences = ['This is an example sentence', 'Each sentence is converted']
 
  tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
  model = AutoModel.from_pretrained('{MODEL_NAME}')
 
  encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
 
+ # Mean pooling (attention-mask aware) is used for the sentence representation
  with torch.no_grad():
      model_output = model(**encoded_input)
+ token_embeddings = model_output[0]  # first element: all token embeddings
+ attention_mask = encoded_input["attention_mask"]
+ input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).to(token_embeddings.dtype)
+ summed = torch.sum(token_embeddings * input_mask_expanded, 1)
+ sum_mask = torch.clamp(input_mask_expanded.sum(1), min=1e-9)
+ sentence_embeddings = summed / sum_mask  # mean pooling
  ```
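The pooled embeddings can then be compared directly, for example with cosine similarity for sentence-similarity use cases. A minimal follow-up sketch, assuming the `sentence_embeddings` tensor produced by the snippet above:

```python
# Compare the two mean-pooled sentence embeddings with cosine similarity.
import torch.nn.functional as F

similarity = F.cosine_similarity(sentence_embeddings[0], sentence_embeddings[1], dim=0)
print(f"cosine similarity: {similarity.item():.4f}")
```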
 
  ## Evaluation Results
 
+ | Organization | Backbone Model | KlueSTS average | KorSTS average |
+ | -------- | ------- | ------- | ------- |
+ | team-lucid | DeBERTa-base | 54.15 | 29.72 |
+ | monologg | Electra-base | 66.97 | 29.72 |
+ | LMkor | Electra-base | 70.98 | 43.09 |
+ | deliciouscat | DeBERTa-base | - | 67.65 |
+ | BM-K | Roberta-base | 82.93 | **85.77** |
+ | Klue | Roberta-large | **86.71** | 71.70 |
+ | Klue (Hyperparameter searched) | Roberta-large | 86.21 | 75.54 |
 
60
+ ๊ธฐ์กด ํ•œ๊ตญ์–ด ๋ฌธ์žฅ ์ž„๋ฒ ๋”ฉ ๋ชจ๋ธ์€ mnli, snli ๋“ฑ ์˜์–ด ๋ฐ์ดํ„ฐ์…‹์„ ๊ธฐ๊ณ„๋ฒˆ์—ญํ•˜์—ฌ ํ•™์Šต๋œ ์ ์„ ์ฐธ๊ณ ์‚ผ์•„ Klue ๋ฐ์ดํ„ฐ์…‹์œผ๋กœ ๋Œ€์‹  ํ•™์Šตํ•ด ๋ณด์•˜์Šต๋‹ˆ๋‹ค.
61
 
62
+ ๊ทธ ๊ฒฐ๊ณผ, Klue-Roberta-large ๋ชจ๋ธ ๊ธฐ๋ฐ˜์œผ๋กœ ํ•™์Šตํ–ˆ์„ ๊ฒฝ์šฐ KlueSTS ๋ฐ KorSTS ํ…Œ์ŠคํŠธ์…‹์— ๋ชจ๋‘์— ๋Œ€ํ•ด ์ค€์ˆ˜ํ•œ ์„ฑ๋Šฅ์„ ๋ณด์—ฌ, ์ข€ ๋” elaborateํ•œ representation์„ ํ˜•์„ฑํ•˜๋Š” ๊ฒƒ์œผ๋กœ ์‚ฌ๋ฃŒํ–ˆ์Šต๋‹ˆ๋‹ค.
63
 
64
+ ๋‹ค๋งŒ ํ‰๊ฐ€ ์ˆ˜์น˜๋Š” ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ ์„ธํŒ…, ์‹œ๋“œ ๋„˜๋ฒ„ ๋“ฑ์œผ๋กœ ํฌ๊ฒŒ ๋‹ฌ๋ผ์งˆ ์ˆ˜ ์žˆ์œผ๋ฏ€๋กœ ์ฐธ๊ณ ํ•˜์‹œ๊ธธ ๋ฐ”๋ž๋‹ˆ๋‹ค.
 
65
 
66
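STS scores like the ones above are generally reported as the correlation between the model's cosine similarities and the gold labels. Below is a minimal sketch of that kind of check, not the exact evaluation pipeline used here: it assumes the checkpoint can still be loaded through sentence-transformers (as the previous card's architecture section indicates) and uses hypothetical placeholder pairs instead of the real KLUE STS / KorSTS test files.

```python
# Illustrative STS-style check: Spearman correlation between predicted
# cosine similarities and gold scores over (sentence1, sentence2, score) pairs.
from scipy.stats import spearmanr
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('{MODEL_NAME}')  # assumes a sentence-transformers config in the repo

# Placeholder pairs; replace with the KLUE STS / KorSTS test split.
pairs = [
    ("This is an example sentence", "Each sentence is converted", 1.0),
    ("A man is playing a guitar", "Someone plays an instrument", 4.2),
    ("The weather is cold today", "He bought a new car", 0.3),
]

emb1 = model.encode([s1 for s1, _, _ in pairs], convert_to_tensor=True)
emb2 = model.encode([s2 for _, s2, _ in pairs], convert_to_tensor=True)
pred = util.cos_sim(emb1, emb2).diagonal().cpu().numpy()
gold = [score for _, _, score in pairs]

print("Spearman correlation:", spearmanr(pred, gold).correlation)
```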
+ ## Training
+ The model was trained with MultipleNegativesRankingLoss followed by AnglELoss.
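For illustration, a two-stage run of this kind can be put together with sentence-transformers roughly as sketched below. This is not the exact training script: the data loading is a placeholder, while the backbone (`klue/roberta-large`) and the hyperparameters (batch size 16, 5 epochs, lr 2e-5, 50 warmup steps, WarmupLinear scheduler, weight decay 0.01, max grad norm 1) follow what this card and the previous card report.

```python
# Sketch of a two-stage sentence-transformers run:
# stage 1 trains with in-batch negatives (MultipleNegativesRankingLoss),
# stage 2 fine-tunes on scored sentence pairs with AnglELoss.
from sentence_transformers import SentenceTransformer, InputExample, losses
from torch.utils.data import DataLoader

model = SentenceTransformer("klue/roberta-large")  # assumed backbone

# Placeholder data; swap in the real KLUE-derived pairs.
pair_examples = [
    InputExample(texts=["an anchor sentence", "a paraphrase of it"]),
    InputExample(texts=["another anchor", "its paraphrase"]),
]
sts_examples = [
    InputExample(texts=["sentence one", "sentence two"], label=0.8),   # score scaled to [0, 1]
    InputExample(texts=["sentence three", "sentence four"], label=0.2),
]

pair_loader = DataLoader(pair_examples, shuffle=True, batch_size=16)
sts_loader = DataLoader(sts_examples, shuffle=True, batch_size=16)

stages = [
    (pair_loader, losses.MultipleNegativesRankingLoss(model, scale=20.0)),
    (sts_loader, losses.AnglELoss(model, scale=20.0)),
]

for loader, loss in stages:
    model.fit(
        train_objectives=[(loader, loss)],
        epochs=5,
        warmup_steps=50,
        scheduler="WarmupLinear",
        optimizer_params={"lr": 2e-5},
        weight_decay=0.01,
        max_grad_norm=1,
    )

model.save("klue-roberta-large-sts")  # output path is arbitrary
```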